[Bug 1515278] Re: [SRU] rabbit queues should expire when unused
@niedbalski FYI, tempest results from wily-liberty-proposed and trusty-liberty staging are consistent with wily-liberty (distro) and trusty-liberty (uca updates), i.e. no new failures. Now that it has been promoted to trusty-liberty-proposed, we will need to re-test there. There is typically a 1-wk bake period for proposed cloud archive pockets.

-- 
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to python-oslo.messaging in Ubuntu.
https://bugs.launchpad.net/bugs/1515278

Title:
  [SRU] rabbit queues should expire when unused

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1515278/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
[Bug 1546445] Re: support vhost user without specifying vhostforce
FYI, following additional regression tests, today we promoted qemu 2.2+dfsg-5expubuntu9.7~cloud2 from kilo-proposed to kilo-updates in the Ubuntu Cloud Archive.

** Changed in: cloud-archive/kilo
   Status: Fix Committed => Fix Released

https://bugs.launchpad.net/bugs/1546445
[Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..
FYI, following additional regression tests, today we promoted neutron 2014.1.5-0ubuntu4~cloud0 from proposed to icehouse-updates in the Ubuntu Cloud Archive.

https://bugs.launchpad.net/bugs/1393391
[Bug 1474030] Re: amulet _get_proc_start_time has a race which causes service restart checks to fail
** Branch unlinked: lp:~1chb1n/charms/trusty/keystone/next-amulet-mitaka-1601
** Changed in: glance (Ubuntu)
   Status: In Progress => Confirmed
** Changed in: heat (Ubuntu)
   Status: In Progress => Confirmed
** Branch linked: lp:~1chb1n/charms/trusty/neutron-openvswitch/next-amulet-mitaka-1601
** Changed in: cinder (Juju Charms Collection)
   Milestone: None => 16.01
** No longer affects: neutron-openvswitch (Ubuntu)
** No longer affects: heat (Ubuntu)
** No longer affects: glance (Ubuntu)
** Also affects: heat (Juju Charms Collection)
   Importance: Undecided
   Status: New
** Changed in: heat (Juju Charms Collection)
   Status: New => Confirmed
** Changed in: heat (Juju Charms Collection)
   Milestone: None => 16.01
** Also affects: glance (Juju Charms Collection)
   Importance: Undecided
   Status: New
** Changed in: glance (Juju Charms Collection)
   Status: New => Confirmed
** Changed in: glance (Juju Charms Collection)
   Milestone: None => 16.01
** Also affects: neutron-openvswitch (Juju Charms Collection)
   Importance: Undecided
   Status: New
** Changed in: neutron-openvswitch (Juju Charms Collection)
   Status: New => In Progress
** Changed in: neutron-openvswitch (Juju Charms Collection)
   Milestone: None => 16.01
** Changed in: neutron-openvswitch (Juju Charms Collection)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)

https://bugs.launchpad.net/bugs/1474030
[Bug 1474030] Re: amulet _get_proc_start_time has a race which causes service restart checks to fail
** Also affects: cinder (Juju Charms Collection)
   Importance: Undecided
   Status: New
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New
** Also affects: heat (Ubuntu)
   Importance: Undecided
   Status: New
** Also affects: neutron-openvswitch (Ubuntu)
   Importance: Undecided
   Status: New
** Changed in: neutron-openvswitch (Ubuntu)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)
** Changed in: heat (Ubuntu)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)
** Changed in: glance (Ubuntu)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)
** Changed in: cinder (Juju Charms Collection)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)
** Changed in: cinder (Juju Charms Collection)
   Status: New => In Progress
** Changed in: glance (Ubuntu)
   Status: New => In Progress
** Changed in: heat (Ubuntu)
   Status: New => In Progress
** Changed in: neutron-openvswitch (Ubuntu)
   Status: New => In Progress
** Branch linked: lp:~1chb1n/charms/trusty/keystone/next-amulet-mitaka-1601

https://bugs.launchpad.net/bugs/1474030
[Bug 1488453] Re: Package postinst always fail on first install when using systemd
With Vivid, openhpid fails to install and start @ 2.14.1-1.3ubuntu2. This is blocking hacluster from installing on Vivid. I believe this is due to this directive in the /etc/openhpi/openhpi.conf file:

  ###
  ## OpenHPI will not be useful unless it is configured for your system. Once
  ## you have modified this file, remove or comment the following line to allow
  ## the OpenHPI daemon to run. This line causes the daemon to exit immediately.
  OPENHPI_UNCONFIGURED = "YES"

When I comment out that line, dpkg completes and the daemon starts successfully, i.e.:

  #OPENHPI_UNCONFIGURED = "YES"

I've attached the original openhpi.conf file as placed on disk by the package, and here is some additional detail: http://paste.ubuntu.com/14053827/

To reproduce:
1. Launch a Vivid instance.
2. apt-get install openhpid
3. The service will fail to start and dpkg configure will fail, causing apt to exit nonzero.
4. sudo service openhpid status   # confirm fail
5. Remove OPENHPI_UNCONFIGURED = "YES" from /etc/openhpi/openhpi.conf
6. sudo service openhpid start
7. sudo service openhpid status   # confirm success

** Attachment added: "openhpi.conf"
   https://bugs.launchpad.net/ubuntu/+source/openhpi/+bug/1488453/+attachment/4535835/+files/openhpi.conf

https://bugs.launchpad.net/bugs/1488453
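The manual edit in the steps above can also be scripted. A minimal sketch, run against a scratch copy of the file (the real file is /etc/openhpi/openhpi.conf and would need root to edit; GNU sed is assumed, as on Ubuntu). This illustrates the workaround only, not the packaged fix:

```shell
# Work on a scratch copy standing in for /etc/openhpi/openhpi.conf.
conf=/tmp/openhpi.conf
printf '%s\n' 'OPENHPI_UNCONFIGURED = "YES"' > "$conf"

# Comment out the directive that makes the daemon exit immediately,
# as described in the report.
sed -i 's/^OPENHPI_UNCONFIGURED/#OPENHPI_UNCONFIGURED/' "$conf"

cat "$conf"   # -> #OPENHPI_UNCONFIGURED = "YES"
```

After applying the same edit to the real config, `sudo service openhpid start` succeeds per the steps above.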
[Bug 1488453] Re: Package postinst always fail on first install when using systemd
Checked again with 2.14.1-1.3ubuntu2.1 from vivid-proposed. While I've not done any functional testing against openhpid itself, apt and dpkg now exit cleanly, whereas they did not @ ubuntu2.

ubuntu@juju-beis1-machine-3:/etc/apt/preferences.d$ apt-cache policy openhpid
openhpid:
  Installed: (none)
  Candidate: 2.14.1-1.3ubuntu2
  Version table:
     2.14.1-1.3ubuntu2.1 0
        400 http://archive.ubuntu.com/ubuntu/ vivid-proposed/main amd64 Packages
        100 /var/lib/dpkg/status
     2.14.1-1.3ubuntu2 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages

ubuntu@juju-beis1-machine-3:/etc/apt/preferences.d$ sudo apt-get install openhpid/vivid-proposed
Reading package lists... Done
Building dependency tree
Reading state information... Done
Selected version '2.14.1-1.3ubuntu2.1' (Ubuntu:15.04/vivid-proposed [amd64]) for 'openhpid'
The following NEW packages will be installed:
  openhpid
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/103 kB of archives.
After this operation, 457 kB of additional disk space will be used.
Selecting previously unselected package openhpid.
(Reading database ... 68000 files and directories currently installed.)
Preparing to unpack .../openhpid_2.14.1-1.3ubuntu2.1_amd64.deb ...
Unpacking openhpid (2.14.1-1.3ubuntu2.1) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for systemd (219-7ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up openhpid (2.14.1-1.3ubuntu2.1) ...
ubuntu@juju-beis1-machine-3:/etc/apt/preferences.d$ echo $?
0

https://bugs.launchpad.net/bugs/1488453
[Bug 1488453] Re: Package postinst always fail on first install when using systemd
** Tags removed: verification-needed
** Tags added: verification-done

https://bugs.launchpad.net/bugs/1488453
[Bug 1519527] Re: juju 1.25.1: lxc units all have the same IP address - changed to claim_sticky_ip_address
Here is a non-OpenStack generic reproducer to re-confirm.

Generic reproducer bundle:
http://paste.ubuntu.com/13576737/

PASS: MAAS 1.9b2 + Juju 1.25.0, lxc units get unique IPs:
http://paste.ubuntu.com/13576758/

FAIL: MAAS 1.9b2 + Juju 1.25.1, lxc units all have the same IP:
http://paste.ubuntu.com/13578689/

Reproduction steps: deploy the bundle, once with [MAAS 1.9b2 + Juju 1.25.0] and once with [MAAS 1.9b2 + Juju 1.25.1]:
`juju bootstrap && juju-deployer -v -c ubuntu18lxc.yaml -d vivid`

https://bugs.launchpad.net/bugs/1519527
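A quick way to check which case a deployment landed in is to test the per-unit addresses for duplicates. A sketch, with a sample address list standing in for IPs one would pull out of `juju status` output (the values and file path here are illustrative, not taken from the pastebins):

```shell
# Sample per-unit address list; in the failing case every lxc unit
# reports the same IP, so the list collapses to one repeated value.
printf '%s\n' 10.0.3.1 10.0.3.1 10.0.3.1 > /tmp/lxc-unit-ips.txt

# `uniq -d` prints each address that appears more than once;
# it prints nothing when all unit IPs are unique (the PASS case).
dups=$(sort /tmp/lxc-unit-ips.txt | uniq -d)
if [ -n "$dups" ]; then
  echo "FAIL: duplicate unit IPs: $dups"
else
  echo "PASS: unique unit IPs"
fi
```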
[Bug 1519527] Re: juju 1.25.1: lxc units all have the same IP address - changed to claim_sticky_ip_address
** Description changed:

- With the proposed Juju 1.25.1, lxc units all possess the same IP address:
+ With MAAS 1.9b2 + proposed Juju 1.25.1, lxc units all possess the same IP address:
  http://paste.ubuntu.com/13499208/.
- With stable Juju 1.25.0, lxc units get unique IP addresses as expected:
+ With MAAS 1.9b2 + stable Juju 1.25.0, lxc units get unique IP addresses as expected:
  http://paste.ubuntu.com/13500012/.

  This can be reproduced with any workload deployed via MAAS 1.9RC2. The issue is not specific to OpenStack.

  Originally observed as: I've run 5 bare metal deploy tests with Juju proposed 1.25.1, and all 5 have had one or more lxc units go into a "workload-state: error" + "agent-state: lost" condition. The same bundle has a passing test history with Juju 1.25.0. Lab is MAAS 1.9RC2 ("dellstack").

https://bugs.launchpad.net/bugs/1519527
[Bug 1519527] Re: juju 1.25.1: lxc units all have the same IP address - changed to claim_sticky_ip_address
@andreserl b2. About to upgrade to latest RC in our lab.

https://bugs.launchpad.net/bugs/1519527
[Bug 1519527] Re: juju 1.25.1: lxc units all have the same IP address - changed to claim_sticky_ip_address
@andreserl RC2. Please disregard my mentions of B2. Indeed I am using RC2.

** Description changed:

- With MAAS 1.9b2 + proposed Juju 1.25.1, lxc units all possess the same IP address:
+ With MAAS 1.9rc2 + proposed Juju 1.25.1, lxc units all possess the same IP address:
  http://paste.ubuntu.com/13499208/.
- With MAAS 1.9b2 + stable Juju 1.25.0, lxc units get unique IP addresses as expected:
+ With MAAS 1.9rc2 + stable Juju 1.25.0, lxc units get unique IP addresses as expected:
  http://paste.ubuntu.com/13500012/.

  This can be reproduced with any workload deployed via MAAS 1.9RC2. The issue is not specific to OpenStack.

  Originally observed as: I've run 5 bare metal deploy tests with Juju proposed 1.25.1, and all 5 have had one or more lxc units go into a "workload-state: error" + "agent-state: lost" condition. The same bundle has a passing test history with Juju 1.25.0. Lab is MAAS 1.9RC2 ("dellstack").

https://bugs.launchpad.net/bugs/1519527
[Bug 1519527] Re: MAAS 1.9b2+ with juju 1.25.1: lxc units all have the same IP address
Given that with Juju 1.25.0 + MAAS 1.9b2 the container IPs were sane and unique, I think Juju should still track and block on this bug as a regression, with shared interest in releasing a functional MAAS 1.9.x + Juju 1.25.x combo. Whether that means waiting for MAAS to be fix-released on a new beta, or deferring 1.25.1, I think we should not release 1.25.1 until a confirmed combo is ready and consumable via ppa.

I know that if 1.25.1 releases alone as scheduled, it *WILL* break my test automation for charm testing on bare metal, as we have already upgraded to MAAS 1.9b2 so that we can exercise network spaces features. I would be willing to bet that others will be similarly blocked.

I've re-added Juju to the affected list. Thank you all.

** Also affects: juju-core (Ubuntu)
   Importance: Undecided
   Status: New
** Also affects: juju-core
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1519527
[Bug 1488453] Re: Package postinst always fail on first install when using systemd
** Tags added: openstack uosci

https://bugs.launchpad.net/bugs/1488453
[Bug 1479661] Re: hacluster install hook fails on Vivid and Wily (pacemaker /var/lib/heartbeat home dir ownership issue)
@racb yep, that was my original take on this, but somehow I apparently convinced myself otherwise. Either way, hacluster won't install on V/W. ;-)

https://bugs.launchpad.net/bugs/1479661
[Bug 1479661] Re: hacluster install hook fails on Vivid and Wily (openhpid init script error)
# Wily
ubuntu@1ea-wily214622:~$ sudo apt-get install corosync pacemaker python-netaddr ipmitool
...
Setting up libesmtp6 (1.0.6-4) ...
Setting up libheartbeat2 (1:3.0.5+hg12629-1.2) ...
Setting up liblrmd1 (1.1.12-0ubuntu2) ...
Setting up libpe-status4 (1.1.12-0ubuntu2) ...
Setting up libpengine4 (1.1.12-0ubuntu2) ...
Setting up libtransitioner2 (1.1.12-0ubuntu2) ...
Setting up openhpid (2.14.1-1.3ubuntu2) ...
Job for openhpid.service failed because a configured resource limit was exceeded. See "systemctl status openhpid.service" and "journalctl -xe" for details.
invoke-rc.d: initscript openhpid, action "start" failed.
dpkg: error processing package openhpid (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up openipmi (2.0.18-0ubuntu8) ...
Setting up resource-agents (1:3.9.3+git20121009-3.1) ...
Setting up python-lxml (3.4.4-1) ...
Setting up python-yaml (3.11-2build1) ...
Setting up crmsh (1.2.6+git+e77add-1.3ubuntu2) ...
Setting up pacemaker-cli-utils (1.1.12-0ubuntu2) ...
Setting up python-bs4 (4.3.2-2ubuntu3) ...
Setting up python-chardet (2.3.0-1build1) ...
Setting up python-html5lib (0.999-3build1) ...
Setting up python-netaddr (0.7.15-1) ...
Setting up libnss3-nssdb (2:3.19.2-1ubuntu1) ...
Setting up libnss3:amd64 (2:3.19.2-1ubuntu1) ...
Setting up libtotem-pg5 (2.3.4-0ubuntu1) ...
Setting up corosync (2.3.4-0ubuntu1) ...
Processing triggers for dbus (1.8.12-1ubuntu5) ...
Setting up pacemaker (1.1.12-0ubuntu2) ...
Adding group `haclient' (GID 119) ...
Done.
Warning: The home dir /var/lib/heartbeat you specified already exists.
Adding system user `hacluster' (UID 112) ...
Adding new user `hacluster' (UID 112) with group `haclient' ...
The home directory `/var/lib/heartbeat' already exists. Not copying from `/etc/skel'.
adduser: Warning: The home directory `/var/lib/heartbeat' does not belong to the user you are currently creating.
Processing triggers for libc-bin (2.21-0ubuntu4) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (224-1ubuntu3) ...
Errors were encountered while processing:
 openhpid
E: Sub-process /usr/bin/dpkg returned an error code (1)

** Summary changed:

- hacluster install hook fails on vivid (openhpid init script error)
+ hacluster install hook fails on Vivid and Wily (openhpid init script error)

** Description changed:

- openhpid in vivid may need love
+ pacemaker and/or openhpid in Vivid and Wily may need love
+
  # Vivid: http://paste.ubuntu.com/11964818/
  2015-07-30 07:27:14 INFO install Setting up openhpid (2.14.1-1.3ubuntu2) ...
  2015-07-30 07:27:15 INFO install Job for openhpid.service failed. See "systemctl status openhpid.service" and "journalctl -xe" for details.
  2015-07-30 07:27:15 INFO install invoke-rc.d: initscript openhpid, action "start" failed.
  2015-07-30 07:27:15 INFO install dpkg: error processing package openhpid (--configure):
  2015-07-30 07:27:15 INFO install  subprocess installed post-installation script returned error exit status 1
  2015-07-30 07:27:15 INFO install Errors were encountered while processing:
  2015-07-30 07:27:15 INFO install  openhpid
  2015-07-30 07:27:15 INFO install E: Sub-process /usr/bin/dpkg returned an error code (1)
  2015-07-30 07:27:15 INFO install Traceback (most recent call last):
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 405, in <module>
  2015-07-30 07:27:15 INFO install     hooks.execute(sys.argv)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/core/hookenv.py", line 557, in execute
  2015-07-30 07:27:15 INFO install     self._hooks[hook_name]()
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 87, in install
  2015-07-30 07:27:15 INFO install     apt_install(filter_installed_packages(PACKAGES), fatal=True)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py", line 183, in apt_install
  2015-07-30 07:27:15 INFO install     _run_apt_command(cmd, fatal)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py", line 428, in _run_apt_command
  2015-07-30 07:27:15 INFO install     result = subprocess.check_call(cmd, env=env)
  2015-07-30 07:27:15 INFO install   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
  2015-07-30 07:27:15 INFO install     raise CalledProcessError(retcode, cmd)
  2015-07-30 07:27:15 INFO install subprocess.CalledProcessError: Command '['apt-get', '--assume-yes', '--option=Dpkg::Options::=--force-confold', 'install', 'corosync', 'pacemaker', 'ipmitool', 'libnagios-plugin-perl']' returned non-zero exit status 100
  2015-07-30 07:27:15 INFO juju.worker.uniter.context context.go:543 handling reboot
  2015-07-30 07:27:15 ERROR juju.worker.uniter.operation runhook.go:103
[Bug 1479661] Re: hacluster install hook fails on Vivid and Wily (openhpid init script error)
# Vivid
ubuntu@1ea-vivid214622:~$ sudo apt-get install corosync pacemaker python-netaddr ipmitool
...
Setting up libpe-rules2 (1.1.12-0ubuntu2) ...
Setting up libcib4 (1.1.12-0ubuntu2) ...
Setting up libstonithd2 (1.1.12-0ubuntu2) ...
Setting up libcrmcluster4 (1.1.12-0ubuntu2) ...
Setting up libcrmservice1 (1.1.12-0ubuntu2) ...
Setting up libesmtp6 (1.0.6-4) ...
Setting up libheartbeat2 (1:3.0.5+hg12629-1.2) ...
Setting up liblrmd1 (1.1.12-0ubuntu2) ...
Setting up libpe-status4 (1.1.12-0ubuntu2) ...
Setting up libpengine4 (1.1.12-0ubuntu2) ...
Setting up libtransitioner2 (1.1.12-0ubuntu2) ...
Setting up openhpid (2.14.1-1.3ubuntu2) ...
Job for openhpid.service failed. See "systemctl status openhpid.service" and "journalctl -xe" for details.
invoke-rc.d: initscript openhpid, action "start" failed.
dpkg: error processing package openhpid (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up openipmi (2.0.18-0ubuntu8) ...
Setting up resource-agents (1:3.9.3+git20121009-3.1) ...
Setting up python-lxml (3.4.2-1) ...
Setting up python-yaml (3.11-2) ...
Setting up crmsh (1.2.6+git+e77add-1.3ubuntu2) ...
Setting up pacemaker-cli-utils (1.1.12-0ubuntu2) ...
Setting up python-bs4 (4.3.2-2ubuntu2) ...
Setting up python-html5lib (0.999-3) ...
Setting up python-netaddr (0.7.12-2) ...
Setting up libnss3-nssdb (2:3.19.2-0ubuntu15.04.1) ...
Setting up libnss3:amd64 (2:3.19.2-0ubuntu15.04.1) ...
Setting up libtotem-pg5 (2.3.4-0ubuntu1) ...
Setting up corosync (2.3.4-0ubuntu1) ...
Processing triggers for dbus (1.8.12-1ubuntu5) ...
Setting up pacemaker (1.1.12-0ubuntu2) ...
Adding group `haclient' (GID 119) ...
Done.
Warning: The home dir /var/lib/heartbeat you specified already exists.
Adding system user `hacluster' (UID 111) ...
Adding new user `hacluster' (UID 111) with group `haclient' ...
The home directory `/var/lib/heartbeat' already exists. Not copying from `/etc/skel'.
adduser: Warning: The home directory `/var/lib/heartbeat' does not belong to the user you are currently creating.
Processing triggers for libc-bin (2.21-0ubuntu4) ...
Processing triggers for systemd (219-7ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
Errors were encountered while processing:
 openhpid
E: Sub-process /usr/bin/dpkg returned an error code (1)
ubuntu@1ea-vivid214622:~$

** Description changed:

- pacemaker and/or openhpid in Vivid and Wily may need love
+ pacemaker and/or openhpid in Vivid and Wily may need love.
- # Vivid:
+ This occurs when manually installing on fresh Wily and Vivid instances:
+ ...
+ Setting up pacemaker (1.1.12-0ubuntu2) ...
+ Adding group `haclient' (GID 119) ...
+ Done.
+ Warning: The home dir /var/lib/heartbeat you specified already exists.
+ Adding system user `hacluster' (UID 111) ...
+ Adding new user `hacluster' (UID 111) with group `haclient' ...
+ The home directory `/var/lib/heartbeat' already exists. Not copying from `/etc/skel'.
+ adduser: Warning: The home directory `/var/lib/heartbeat' does not belong to the user you are currently creating.
+
+ # Observation from the charm perspective:
  http://paste.ubuntu.com/11964818/
  2015-07-30 07:27:14 INFO install Setting up openhpid (2.14.1-1.3ubuntu2) ...
  2015-07-30 07:27:15 INFO install Job for openhpid.service failed. See "systemctl status openhpid.service" and "journalctl -xe" for details.
  2015-07-30 07:27:15 INFO install invoke-rc.d: initscript openhpid, action "start" failed.
  2015-07-30 07:27:15 INFO install dpkg: error processing package openhpid (--configure):
  2015-07-30 07:27:15 INFO install  subprocess installed post-installation script returned error exit status 1
  2015-07-30 07:27:15 INFO install Errors were encountered while processing:
  2015-07-30 07:27:15 INFO install  openhpid
  2015-07-30 07:27:15 INFO install E: Sub-process /usr/bin/dpkg returned an error code (1)
  2015-07-30 07:27:15 INFO install Traceback (most recent call last):
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 405, in <module>
  2015-07-30 07:27:15 INFO install     hooks.execute(sys.argv)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/core/hookenv.py", line 557, in execute
  2015-07-30 07:27:15 INFO install     self._hooks[hook_name]()
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/install", line 87, in install
  2015-07-30 07:27:15 INFO install     apt_install(filter_installed_packages(PACKAGES), fatal=True)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py", line 183, in apt_install
  2015-07-30 07:27:15 INFO install     _run_apt_command(cmd, fatal)
  2015-07-30 07:27:15 INFO install   File "/var/lib/juju/agents/unit-hacluster-2/charm/hooks/charmhelpers/fetch/__init__.py", line 428, in _run_apt_command
  2015-07-30 07:27:15 INFO install     result =
[Bug 1431013] Re: Resource type AWS::RDS::DBInstance errors
This doesn't appear to be purely a charm issue. When I apt-get install heat-api on a fresh trusty instance, I do not have an /etc/heat/templates/ dir. Suspect this is a packaging issue, as the package source does contain the awol items:
http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/heat/trusty/files/head:/etc/heat/templates/

** Also affects: heat (Ubuntu)
   Importance: Undecided
   Status: New
** Changed in: heat (Juju Charms Collection)
   Status: Confirmed => Invalid

https://bugs.launchpad.net/bugs/1431013
[Bug 1480677] [NEW] oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors'
Public bug reported:

Starting with nova-cloud-controller/next rev179 (which was a c-h sync), functional tests for Trusty-Juno began to fail. New instances are in an ERROR state. Messages are timing out.

NOTE: Trusty-Icehouse and Precise-Icehouse tests are passing as of n-c-c/next rev181.

# amulet test - new instances are in ERROR status:
...
01:09:19.296 2015-08-01 03:19:59,429 create_instance DEBUG: instance status: ERROR
01:09:19.296 2015-08-01 03:19:59,429 create_instance ERROR: instance creation timed out
01:09:19.296 Instance create failed
01:09:19.297
01:09:19.297 juju-test.conductor.016-basic-trusty-juno DEBUG : Got exit code: 1
01:09:19.297 juju-test.conductor.016-basic-trusty-juno RESULT : FAIL

# n-c-c's nova-conductor log:
...
2015-08-01 03:15:46.121 23994 TRACE oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors'
...

# nova-compute's nova-compute.log:
...
2015-08-01 03:16:46.052 24453 TRACE nova.openstack.common.threadgroup MessagingTimeout: Timed out waiting for a reply to message ID 06f90e0970814f29a1db29b2a7555e67
...

For full traces and other info:
http://paste.ubuntu.com/11987079/

# FYI, last known good @ n-c-c/next r178:
http://paste.ubuntu.com/11987089/

** Affects: nova-cloud-controller (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Affects: nova-compute (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Affects: rabbitmq-server (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Tags: amulet openstack uosci

** Description changed:

- Starting with nova-cloud-controller/next rev179, functional tests for
- Trusty-Juno began to fail. New instances are in an ERROR state.
- Messages are timing out.
+ Starting with nova-cloud-controller/next rev179 (which was a c-h sync),
+ functional tests for Trusty-Juno began to fail. New instances are in an
+ ERROR state. Messages are timing out.

  NOTE: Trusty-Icehouse and Precise-Icehouse tests are passing as of n-c-c/next rev181.

- # Last known good
- # n-c-c/next r178 (before c-h sync)
- http://10.245.162.77:8080/view/Dashboards/view/Amulet/job/charm_amulet_test/5412/consoleFull

  # amulet test - new instances are in ERROR status:
  ...
  01:09:19.296 2015-08-01 03:19:59,429 create_instance DEBUG: instance status: ERROR
  01:09:19.296 2015-08-01 03:19:59,429 create_instance ERROR: instance creation timed out
  01:09:19.296 Instance create failed
- 01:09:19.297
+ 01:09:19.297
  01:09:19.297 juju-test.conductor.016-basic-trusty-juno DEBUG : Got exit code: 1
  01:09:19.297 juju-test.conductor.016-basic-trusty-juno RESULT : FAIL

  # n-c-c's nova-conductor log:
  ...
  2015-08-01 03:15:46.121 23994 TRACE oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors'
  ...

  # nova-compute's nova-compute.log:
  ...
  2015-08-01 03:16:46.052 24453 TRACE nova.openstack.common.threadgroup MessagingTimeout: Timed out waiting for a reply to message ID 06f90e0970814f29a1db29b2a7555e67
  ...

- See pastebin for full traces other info:
+ For full traces and other info:
  http://paste.ubuntu.com/11987079/
+
+ # FYI, last known good @ n-c-c/next r178:
+ http://paste.ubuntu.com/11987089/

** Also affects: nova-compute (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: rabbitmq-server (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: rabbitmq-server (Ubuntu)

** Also affects: rabbitmq-server (Juju Charms Collection)
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1480677

Title:
  oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors'

To manage notifications about this bug go to:
https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1480677/+subscriptions

--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
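The nova-conductor trace above is an attribute lookup failing on a driver object whose library version does not provide the expected attribute. A minimal sketch of that failure mode and a version-tolerant guard — the `Connection` class here is a hypothetical stand-in, not the real oslo.messaging driver object:

```python
# Hedged sketch of the failure mode in the nova-conductor trace above.
# "Connection" is a stand-in class; in the real bug, mismatched
# charm-helpers / oslo.messaging versions exposed an object without
# the connection_errors attribute.

class Connection:
    """Driver object from a mismatched library version."""
    pass

conn = Connection()

try:
    errors = conn.connection_errors  # raises, as seen in the log
except AttributeError as exc:
    print(exc)  # 'Connection' object has no attribute 'connection_errors'

# A version-tolerant caller can fall back to an empty tuple instead:
errors = getattr(conn, "connection_errors", ())
```

The real fix is aligning the charm-helpers sync with the installed oslo.messaging; the `getattr` fallback only illustrates why the direct attribute access is fragile across versions.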
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
  [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): DONE
  [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): DONE
  [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): DONE
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: INPROGRESS
  [1chb1n] Update UOSCI for neutron-gateway charm name change: DONE
  [1chb1n] Update mojo specs for neutron-gateway charm name change: INPROGRESS
- Re-validate, enhance I:J upgrade testing: TODO
- Leader-election testing: TODO
  Work items for ubuntu-15.07:
- [1chb1n] Add basic percona-cluster charm amulet tests: TODO
+ [1chb1n] Add basic percona-cluster charm amulet tests: INPROGRESS
  [1chb1n] Add basic cinder-ceph charm amulet tests: DONE
  [1chb1n] Add basic hacluster charm amulet tests: TODO
+ [1chb1n] Re-validate, enhance upgrade testing: INPROGRESS
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: INPROGRESS
- Automate J:K upgrade testing (mojo spec): TODO
- Automate K:L upgrade testing (mojo spec): TODO
- Actions testing: TODO
+ Automate J:K upgrade testing (mojo spec): DONE
+ Automate K:L upgrade testing (mojo spec): DONE
  [1chb1n] Set up CI to deploy from source and run tempest: DONE
  Automate PowerNV OpenStack testing for Ubuntu 14.04: TODO
  [1chb1n] Prep uosci for Wily and Liberty: DONE
  [1chb1n] Prep openstack-mojo-specs for Wily and Liberty: DONE
  Work items for ubuntu-15.08:
+ Leader-election test automation: TODO
+ Juju actions test automation, dependent on amulet WIP features: TODO
+ Juju status test automation, dependent on amulet WIP features: TODO
  [1chb1n] remove utopic amulet target from os-charms: TODO
- [1chb1n] remove utopic mappings from uosci control file: TODO
+ [1chb1n] remove utopic mappings from uosci control file: DONE
  [1chb1n] remove utopic image sync from serverstack and dellstack labs: TODO
  UOSCI integration with charms hosted on github: TODO
  [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
  [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO

--
OpenStack QA for 15.10
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-w-openstack-qa
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
  [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): DONE
  [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): DONE
  [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): DONE
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: INPROGRESS
  [1chb1n] Update UOSCI for neutron-gateway charm name change: DONE
  [1chb1n] Update mojo specs for neutron-gateway charm name change: INPROGRESS
  Re-validate, enhance I:J upgrade testing: TODO
  Leader-election testing: TODO
  Work items for ubuntu-15.07:
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: DONE
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: INPROGRESS
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
  Actions testing: TODO
  [1chb1n] Set up CI to deploy from source and run tempest: DONE
  Automate PowerNV OpenStack testing for Ubuntu 14.04: TODO
  [1chb1n] Prep uosci for Wily and Liberty: DONE
  [1chb1n] Prep openstack-mojo-specs for Wily and Liberty: DONE
  Work items for ubuntu-15.08:
+ [1chb1n] remove utopic amulet target from os-charms: TODO
+ [1chb1n] remove utopic mappings from uosci control file: TODO
+ [1chb1n] remove utopic image sync from serverstack and dellstack labs: TODO
  UOSCI integration with charms hosted on github: TODO
  [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
  [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
  [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): DONE
  [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): DONE
  [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): DONE
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: INPROGRESS
  [1chb1n] Update UOSCI for neutron-gateway charm name change: DONE
  [1chb1n] Update mojo specs for neutron-gateway charm name change: INPROGRESS
  Re-validate, enhance I:J upgrade testing: TODO
  Leader-election testing: TODO
  Work items for ubuntu-15.07:
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: DONE
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: INPROGRESS
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
  Actions testing: TODO
  [1chb1n] Set up CI to deploy from source and run tempest: DONE
  Automate PowerNV OpenStack testing for Ubuntu 14.04: TODO
+ [1chb1n] Prep uosci for Wily and Liberty: DONE
+ [1chb1n] Prep openstack-mojo-specs for Wily and Liberty: DONE
  Work items for ubuntu-15.08:
  UOSCI integration with charms hosted on github: TODO
  [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
  [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
- [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): TODO
- [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): TODO
- [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): TODO
- [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
- [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
- [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
- [1chb1n] Update UOSCI for neutron-gateway charm name change: TODO
- [1chb1n] Update mojo specs for neutron-gateway charm name change: TODO
+ [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): DONE
+ [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): DONE
+ [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): DONE
+ [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: INPROGRESS
+ [1chb1n] Update UOSCI for neutron-gateway charm name change: DONE
+ [1chb1n] Update mojo specs for neutron-gateway charm name change: INPROGRESS
  Re-validate, enhance I:J upgrade testing: TODO
  Leader-election testing: TODO
  Work items for ubuntu-15.07:
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
- [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
+ [1chb1n] Add basic cinder-ceph charm amulet tests: DONE
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
- [1chb1n] Complete V/W amulet test coverage enablement: TODO
- UOSCI integration with charms hosted on github: TODO
+ [1chb1n] Complete V/W amulet test coverage enablement: INPROGRESS
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
  Actions testing: TODO
- Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
+ [1chb1n] Set up CI to deploy from source and run tempest: DONE
  Automate PowerNV OpenStack testing for Ubuntu 14.04: TODO
+
+ Work items for ubuntu-15.08:
+ UOSCI integration with charms hosted on github: TODO
+ [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
+ [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
  [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): TODO
  [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): TODO
  [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): TODO
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
  [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
  [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
+ [1chb1n] Update UOSCI for neutron-gateway charm name change: TODO
+ [1chb1n] Update mojo specs for neutron-gateway charm name change: TODO
  Re-validate, enhance I:J upgrade testing: TODO
  Leader-election testing: TODO
  Work items for ubuntu-15.07:
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: TODO
  UOSCI integration with charms hosted on github: TODO
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
  Actions testing: TODO
  Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
  Automate PowerNV OpenStack testing for Ubuntu 14.04: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.06:
  [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): TODO
  [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): TODO
  [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): TODO
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
+ [1chb1n] ipv6 uosci/serverstack environment enablement: TODO
+ [1chb1n] ipv6 keystone amulet test as PoC/validation: TODO
  Re-validate, enhance I:J upgrade testing: TODO
  Leader-election testing: TODO
  Work items for ubuntu-15.07:
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: TODO
  UOSCI integration with charms hosted on github: TODO
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
  Actions testing: TODO
  Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Definition Status: New => Drafting
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.07:
  [1chb1n] Add basic heat charm amulet tests: TODO
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
  [1chb1n] Add basic ceilometer-agent charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
  [1chb1n] Add basic hacluster charm amulet tests: TODO
  [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: TODO
  [1chb1n] Add W amulet test coverage and corresponding charm-helper pieces: TODO
+ UOSCI integration with charms hosted on github: TODO
+ Automate J:K upgrade testing (mojo spec): TODO
+ Automate K:L upgrade testing (mojo spec): TODO
+ Re-validate, enhance I:J upgrade testing: TODO
+ Leader-election testing: TODO
+ Actions testing: TODO
  Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

+ Work items for ubuntu-15.06:
+ [1chb1n] Add V/W charm-helper pieces for amulet testing (bug 1461535): TODO
+ [1chb1n] Add basic heat charm amulet tests (in conjunction with V/W enablement): TODO
+ [1chb1n] Add basic ceilometer-agent charm amulet tests (ahead of deploy-from-source dev): TODO
+ [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
+ Re-validate, enhance I:J upgrade testing: TODO
+ Leader-election testing: TODO
+
+ Work items for ubuntu-15.07:
- [1chb1n] Add basic heat charm amulet tests: TODO
  [1chb1n] Add basic percona-cluster charm amulet tests: TODO
- [1chb1n] Add basic ceilometer-agent charm amulet tests: TODO
  [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
  [1chb1n] Add basic hacluster charm amulet tests: TODO
- [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
  [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
  [1chb1n] Complete V/W amulet test coverage enablement: TODO
- [1chb1n] Add W amulet test coverage and corresponding charm-helper pieces: TODO
  UOSCI integration with charms hosted on github: TODO
  Automate J:K upgrade testing (mojo spec): TODO
  Automate K:L upgrade testing (mojo spec): TODO
- Re-validate, enhance I:J upgrade testing: TODO
- Leader-election testing: TODO
  Actions testing: TODO
  Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Whiteboard changed:

[USER STORIES]

- Charlie wants to use the OpenStack charms to run CI in a multi-node scalable environment against every upstream commit to master. Charlie needs to set up CI in house to deploy from source and run tempest or other tests on every commit.

+ Johnny is an OpenStack charm developer who plans to propose and land changes to the OpenStack charms in alignment with product strategy, new toolchain features, bug fixes, and related work items. He seeks confidence that his changes do not cause regression of functionality, as well as confirmation of the new features or fixes. He will add unit and functional test coverage to represent his contributions as he seeks timely feedback through automated unit, lint, integrated and functional testing of each proposed change.

+ Charlene is an OpenStack package maintainer who is responsible for building and releasing new Ubuntu packages in cadence with upstream releases. Prior to pushing packages into the archives, she seeks confidence in her proposed packages: that they are functional and free of regression. She also seeks confirmation of bug resolution via automated deployment testing.

+ Emmanuel is an OpenStack Administrator who wants to upgrade from Juno to Kilo. He needs confidence that the upgrade path is well-tested.

+ Bob is a Cloud Consultant who wants to have common customer topologies exercised against OpenStack charm development. Bob provides cloud configurations, expressed as juju-deployer bundles, so that they may be included in automated testing.

+ Shawn is a Juju Core developer who works on a team that provides new features in proposed and development versions of Juju, to ultimately be released into Juju stable. He seeks confidence that the new features do not cause regression of functionality, and that the new features function as intended with regard to the OpenStack charms.

+ Charlie wants to use the OpenStack charms to run CI in a multi-node scalable environment against every upstream commit to master. Charlie needs to set up CI in house to deploy from source and run tempest or other tests on every commit.

[ASSUMPTIONS]
[RISKS]
[IN SCOPE]
[OUT OF SCOPE]
[USER ACCEPTANCE]
[RELEASE NOTE/BLOG]
[Blueprint servercloud-w-openstack-qa] OpenStack QA for 15.10
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-15.07:
+ [1chb1n] Add basic heat charm amulet tests: TODO
+ [1chb1n] Add basic percona-cluster charm amulet tests: TODO
+ [1chb1n] Add basic ceilometer-agent charm amulet tests: TODO
+ [1chb1n] Add basic cinder-ceph charm amulet tests: TODO
+ [1chb1n] Add basic hacluster charm amulet tests: TODO
+ [1chb1n] Re-work existing percona-cluster charm amulet tests per series/release: TODO
+ [1chb1n] Re-work existing mongodb charm amulet tests per series/release: TODO
+ [1chb1n] Complete V/W amulet test coverage enablement: TODO
+ [1chb1n] Add W amulet test coverage and corresponding charm-helper pieces: TODO
  Setup CI to deploy from source and run tempest or other tests on every upstream commit to master: TODO
[Bug 1446507] Re: Could not load AWSTemplateFormatVersion.2010-09-09: testscenarios>=0.4
I also hit this issue with Trusty-Kilo heat deployments. Installing python-testscenarios and python-testresources on the heat unit resolved it as a workaround, but is probably not the right/permanent fix.

** Project changed: cloud-archive => heat (Juju Charms Collection)

** Also affects: heat (Ubuntu)
   Importance: Undecided
   Status: New

** Tags added: openstack uosci

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to heat in Ubuntu.
https://bugs.launchpad.net/bugs/1446507

Title:
  Could not load AWSTemplateFormatVersion.2010-09-09: testscenarios>=0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heat/+bug/1446507/+subscriptions
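A quick check for the workaround packages named in the comment above (package names as given there; `dpkg-query` is assumed available, i.e. a Debian/Ubuntu unit — the script degrades gracefully elsewhere):

```python
import shutil
import subprocess

# Workaround packages from the comment above (heat bug 1446507).
PKGS = ["python-testscenarios", "python-testresources"]

results = {}
if shutil.which("dpkg-query") is None:
    print("dpkg-query not available; not a Debian/Ubuntu system")
else:
    for pkg in PKGS:
        # dpkg-query --show exits non-zero when the package is not installed
        rc = subprocess.run(["dpkg-query", "--show", pkg],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        results[pkg] = (rc == 0)
        status = "installed" if rc == 0 else \
            "missing (workaround: sudo apt-get install %s)" % pkg
        print("%s: %s" % (pkg, status))
```

The proper fix would be for the heat package to declare these as dependencies; the check above only automates the manual workaround from the comment.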
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
** Changed in: cinder (Juju Charms Collection)
   Assignee: (unassigned) => Ryan Beisner (1chb1n)

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1443542] Re: curtin race on vivid when /dev/sda1 doesn't exist
** Tags added: openstack uosci

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to curtin in Ubuntu.
https://bugs.launchpad.net/bugs/1443542

Title:
  curtin race on vivid when /dev/sda1 doesn't exist

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1443542/+subscriptions
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
ubuntu@juju-osci-sv07-machine-1:~$ tail /etc/apt/sources.list.d/*
==> /etc/apt/sources.list.d/cloud-archive.list <==
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main

==> /etc/apt/sources.list.d/cloud_config_sources.list <==
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/cloud-tools main

ubuntu@juju-osci-sv07-machine-1:~$ dpkg -l | egrep 'qemu|rbd|ceph'
ii  librbd1     0.41-1ubuntu2.1               RADOS block device client library
ii  qemu-utils  2.0.0+dfsg-2ubuntu1.6~cloud0  QEMU utilities

ubuntu@juju-osci-sv07-machine-1:~$ apt-cache policy qemu-utils
qemu-utils:
  Installed: 2.0.0+dfsg-2ubuntu1.6~cloud0
  Candidate: 2.0.0+dfsg-2ubuntu1.6~cloud0
  Version table:
 *** 2.0.0+dfsg-2ubuntu1.6~cloud0 0
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/icehouse/main amd64 Packages
        100 /var/lib/dpkg/status
     1.0+noroms-0ubuntu14.21 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu/ precise-security/main amd64 Packages
     1.0+noroms-0ubuntu13 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

ubuntu@juju-osci-sv07-machine-1:~$ apt-cache policy librbd1
librbd1:
  Installed: 0.41-1ubuntu2.1
  Candidate: 0.80.5-0ubuntu0.14.04.1~cloud0
  Version table:
     0.80.5-0ubuntu0.14.04.1~cloud0 0
        500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/icehouse/main amd64 Packages
 *** 0.41-1ubuntu2.1 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     0.41-1ubuntu2 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
** Branch linked: lp:~1chb1n/charms/trusty/cinder/backport-vol-from-img-lp1440948

** Branch linked: lp:~1chb1n/charms/trusty/cinder/amulet-fix-vol-from-img-lp1440948

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
** Description changed:

- On Precise:
+ On Trusty:

  $ qemu-img --help
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

  $ qemu-img
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

  $ dpkg-query --show qemu-utils
  qemu-utils 2.0.0+dfsg-2ubuntu1.6~cloud0

  Workaround:
  $ sudo apt-get install librbd1

  Then qemu-img works.

  cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
  juju-test.conductor.14-basic-precise-icehouse DEBUG : vol create failed - from glance img: id:f3fcd8fb-4ecd-46a7-bde1-0d83ac0166c8 stat:error boot:false
  next: revno 82
  stable: revno 73

** Description changed:

- On Trusty:
+ On Precise or Trusty:

  $ qemu-img --help
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

  $ qemu-img
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

  $ dpkg-query --show qemu-utils
  qemu-utils 2.0.0+dfsg-2ubuntu1.6~cloud0

  Workaround:
  $ sudo apt-get install librbd1

  Then qemu-img works.

  cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
  juju-test.conductor.14-basic-precise-icehouse DEBUG : vol create failed - from glance img: id:f3fcd8fb-4ecd-46a7-bde1-0d83ac0166c8 stat:error boot:false
  next: revno 82
  stable: revno 73

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
** Branch unlinked: lp:~1chb1n/charms/trusty/cinder/volume-from-image

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1440948] Re: cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
Even with qemu-img installed, the test fails. This appears to be due to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=680307

Please see this demonstration of the busted qemu-img on Precise:
http://paste.ubuntu.com/10764770/

** Bug watch added: Debian Bug tracker #680307
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=680307

** Summary changed:

- cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
+ qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

** Also affects: qemu-kvm (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

+ On Precise:
+
+ ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ qemu-img --help
+ qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
+
+ ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ qemu-img
+ qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
+
+ Workaround:
+ ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ sudo apt-get install librbd1
+
+ Then qemu-img works.
+
  cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
  juju-test.conductor.14-basic-precise-icehouse DEBUG : vol create failed - from glance img: id:f3fcd8fb-4ecd-46a7-bde1-0d83ac0166c8 stat:error boot:false
  next: revno 82
  stable: revno 73

** Description changed:

  On Precise:

- ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ qemu-img --help
+ $ qemu-img --help
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

- ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ qemu-img
+ $ qemu-img
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

+ $ dpkg-query --show qemu-utils
+ qemu-utils 2.0.0+dfsg-2ubuntu1.6~cloud0
+
  Workaround:
- ubuntu@juju-osci-sv07-machine-1:/var/log/cinder$ sudo apt-get install librbd1
+ $ sudo apt-get install librbd1

  Then qemu-img works.

  cinder (next and stable) charm amulet tests fail with: vol create failed - from glance img
  juju-test.conductor.14-basic-precise-icehouse DEBUG : vol create failed - from glance img: id:f3fcd8fb-4ecd-46a7-bde1-0d83ac0166c8 stat:error boot:false
  next: revno 82
  stable: revno 73

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
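The underlying failure is a dynamic-linker symbol lookup: the old librbd1 predates `rbd_aio_discard`, so qemu-img (built against the newer cloud-archive librbd) aborts at load time. A hedged sketch of the same probe from Python — library and symbol names per the bug report; the check degrades gracefully when librbd is not installed:

```python
import ctypes
import ctypes.util

# Probe for the symbol qemu-img fails to resolve in this bug.
libname = ctypes.util.find_library("rbd")  # None if librbd is not installed
if libname is None:
    print("librbd not installed on this machine")
else:
    lib = ctypes.CDLL(libname)
    if hasattr(lib, "rbd_aio_discard"):
        print("rbd_aio_discard present; qemu-img should resolve it")
    else:
        print("rbd_aio_discard missing; librbd1 too old "
              "(workaround per the bug: install the newer librbd1)")
```

This mirrors what the loader does implicitly when qemu-img starts, which is why the error appears on any invocation, even `qemu-img --help`.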
[Bug 1440948] Re: qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard
FYI, a potentially-related precise rbd libvirt issue:
https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1427660

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/1440948

Title:
  qemu-img: symbol lookup error: qemu-img: undefined symbol: rbd_aio_discard

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1440948/+subscriptions
[Bug 1432596] Re: openstack-dashboard-ubuntu-theme fails to install
It looks like we still have package issues @ 2015.1~b2-0ubuntu5~cloud0.

trusty-kilo-proposed:

ubuntu@juju-machine-0-lxc-7:/var/log/juju$ dpkg-query --show openstack-dash*
openstack-dashboard                1:2015.1~b2-0ubuntu5~cloud0
openstack-dashboard-ubuntu-theme   1:2015.1~b2-0ubuntu5~cloud0

2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 Traceback (most recent call last):
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "manage.py", line 25, in <module>
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     execute_from_command_line(sys.argv)
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     utility.execute()
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 377, in execute
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     self.fetch_command(subcommand).run_from_argv(self.argv)
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 238, in fetch_command
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     klass = load_command_class(app_name, subcommand)
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 41, in load_command_class
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     module = import_module('%s.management.commands.%s' % (app_name, name))
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     __import__(name)
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/compressor/management/commands/compress.py", line 28, in <module>
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     from compressor.cache import get_offline_hexdigest, write_offline_manifest
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   File "/usr/lib/python2.7/dist-packages/compressor/cache.py", line 8, in <module>
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40     from django.utils import simplejson
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 ImportError: cannot import name simplejson
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 dpkg: error processing package openstack-dashboard (--configure):
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40  subprocess installed post-installation script returned error exit status 1
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 dpkg: dependency problems prevent configuration of openstack-dashboard-ubuntu-theme:
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40  openstack-dashboard-ubuntu-theme depends on openstack-dashboard (= 1:2015.1~b2-0ubuntu5~cloud0); however:
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40   Package openstack-dashboard is not configured yet.
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 dpkg: error processing package openstack-dashboard-ubuntu-theme (--configure):
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40  dependency problems - leaving unconfigured
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
2015-03-24 21:32:39 INFO unit.openstack-dashboard/0.install logger.go:40 No apport report written because the error message indicates its a followup error from a previous failure.
2015-03-24 21:32:40 INFO unit.openstack-dashboard/0.install logger.go:40 Processing triggers for ureadahead (0.100.0-16) ...
2015-03-24 21:32:41 INFO unit.openstack-dashboard/0.install logger.go:40 Processing triggers for ufw (0.34~rc-0ubuntu2) ...
2015-03-24 21:32:41 INFO unit.openstack-dashboard/0.install logger.go:40 Errors were encountered while processing:
2015-03-24 21:32:41 INFO unit.openstack-dashboard/0.install logger.go:40  openstack-dashboard
2015-03-24 21:32:41 INFO unit.openstack-dashboard/0.install logger.go:40  openstack-dashboard-ubuntu-theme
2015-03-24 21:32:42 INFO unit.openstack-dashboard/0.install logger.go:40 E: Sub-process /usr/bin/dpkg returned an error code (1)

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1432596

Title:
  openstack-dashboard-ubuntu-theme fails to install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1432596/+subscriptions
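As an aside, the ImportError at the heart of this log comes from `django.utils.simplejson` having been removed in Django 1.5 while the packaged django_compressor still imports it. Compatibility code typically handles this with a guarded import; the sketch below shows that general pattern only, not the actual django_compressor fix:

```python
# Guarded-import sketch of the compatibility pattern implicated above.
# django.utils.simplejson was removed in Django 1.5; code that still
# imports it unconditionally raises exactly the ImportError in this log.
try:
    from django.utils import simplejson as json  # Django < 1.5 only
except ImportError:
    import json  # stdlib json is the drop-in replacement


def roundtrip(payload):
    """Serialize and re-parse a payload with whichever module loaded."""
    return json.loads(json.dumps(payload))
```

On a system without old Django, the `except` branch runs and the stdlib module is used transparently.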
[Bug 1432596] Re: openstack-dashboard-ubuntu-theme fails to install
FWIW, I just ran into this on ppc64el trusty-kilo-proposed testing. But -- it looks like I have but to wait.

** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1432596

Title:
  openstack-dashboard-ubuntu-theme fails to install

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1432596/+subscriptions
[Bug 1416854] Re: Fail to install rabbitmq-server
** Branch linked: lp:~jjo/charms/trusty/rabbitmq-server/use-rabbitmqctl-q-for-list-cmds

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1416854

Title:
  Fail to install rabbitmq-server

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1416854/+subscriptions
[Bug 1417211] Re: ERROR No module named backends.sql
** Changed in: keystone (Ubuntu)
   Status: New => Invalid

** Changed in: keystone (Juju Charms Collection)
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1417211

Title:
  ERROR No module named backends.sql

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1417211/+subscriptions
[Bug 802117] Re: juju ssh/scp commands cause spurious key errors
FWIW - We saw this too in our automated OpenStack charm testing (UOSCI). Our workaround is to overwrite known_hosts with our base known_hosts file on every build, on every jenkins slave. A bit of a hack, but it does the trick.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to juju in Ubuntu.
https://bugs.launchpad.net/bugs/802117

Title:
  juju ssh/scp commands cause spurious key errors

To manage notifications about this bug go to:
https://bugs.launchpad.net/amulet/+bug/802117/+subscriptions
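The reset step described above is easy to automate; here is a hedged sketch of the idea (the paths and the `known_hosts.base` filename convention are assumptions for illustration, not taken from the actual UOSCI scripts):

```python
# Sketch of the known_hosts reset described above: before each build,
# overwrite the jenkins slave's live known_hosts with a curated baseline
# so stale juju host keys cannot trigger spurious key errors.
# The ".base" filename convention here is hypothetical.
import os
import shutil


def reset_known_hosts(ssh_dir):
    base = os.path.join(ssh_dir, "known_hosts.base")  # curated baseline
    live = os.path.join(ssh_dir, "known_hosts")
    shutil.copyfile(base, live)   # clobber whatever juju ssh/scp added
    os.chmod(live, 0o600)         # ssh insists on private permissions
    return live
```

Run once at the start of every build; any host keys juju accumulated during the previous run are discarded.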
[Bug 1416854] Re: Fail to install rabbitmq-server
See prior bug https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1417205

** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1416854

Title:
  Fail to install rabbitmq-server

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1416854/+subscriptions
[Bug 1418187] Re: _get_host_numa_topology assumes numa cell has memory
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1418187

Title:
  _get_host_numa_topology assumes numa cell has memory

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1418187/+subscriptions
[Bug 1417211] Re: ERROR No module named backends.sql
** Branch linked: lp:~1chb1n/charms/trusty/keystone/kilo-support

** Branch linked: lp:~openstack-charmers/charms/trusty/keystone/next

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1417211

Title:
  ERROR No module named backends.sql

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1417211/+subscriptions
[Bug 1417205] Re: cannot access '/var/lib/rabbitmq': No such file or directory
FYI juju deployment used amd64 images; my manual test inadvertently used the i386 image. I re-confirmed manual apt pkg install with an amd64 vivid image.

Adding system user `rabbitmq' (UID 111) ...
Adding new user `rabbitmq' (UID 111) with group `rabbitmq' ...
Not creating home directory `/var/lib/rabbitmq'.
 * Starting message broker rabbitmq-server    ...done.
Processing triggers for libc-bin (2.19-13ubuntu3) ...
Processing triggers for ureadahead (0.100.0-17) ...

ubuntu@vivid181213:~$ apt-cache policy rabbitmq-server
rabbitmq-server:
  Installed: 3.4.3-2
  Candidate: 3.4.3-2
  Version table:
 *** 3.4.3-2 0
        500 http://nova.clouds.archive.ubuntu.com/ubuntu/ vivid/main amd64 Packages
        100 /var/lib/dpkg/status
ubuntu@vivid181213:~$

** Also affects: rabbitmq-server (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1417205

Title:
  cannot access '/var/lib/rabbitmq': No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1417205/+subscriptions
[Bug 1417211] [NEW] ERROR No module named backends.sql
Public bug reported:

For Trusty-Kilo OpenStack deployments, the config-changed hook fails. Keystone.log shows:

ERROR No module named backends.sql

See attached for traceback.

** Affects: keystone (Ubuntu)
   Importance: Undecided
       Status: New

** Affects: keystone (Juju Charms Collection)
   Importance: Undecided
       Status: New

** Tags: openstack uosci

** Attachment added: trusty-kilo-keystone.txt
   https://bugs.launchpad.net/bugs/1417211/+attachment/4310633/+files/trusty-kilo-keystone.txt

** Also affects: keystone (Ubuntu)
   Importance: Undecided
       Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1417211

Title:
  ERROR No module named backends.sql

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1417211/+subscriptions
[Bug 1410155] Re: Missing /etc/rabbitmq
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to rabbitmq-server in Ubuntu.
https://bugs.launchpad.net/bugs/1410155

Title:
  Missing /etc/rabbitmq

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/1410155/+subscriptions
[Bug 1408972] Re: openvswitch: failed to flow_del (No such file or directory)
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1408972

Title:
  openvswitch: failed to flow_del (No such file or directory)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1408972/+subscriptions
[Bug 1336555] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1336555

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1336555/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
Update, fyi:

Nova booted 110 instances. 16 had no net. Deleted the instances.
Nova booted 110 more instances. 17 had no net. Deleted the instances.
Consistent with the ~15% no-net rate we saw last time around.

Then deleted the neutron nets and subnets and re-added them. After that:

Nova booted 110 instances. All had network. Deleted the instances.
Nova booted 110 instances. All had network. Deleted the instances.

Turned our CI engine back on (which will use this undercloud to instantiate a few hundred short-lived instances per day to test other code such as juju charms). I predict recurrence in about 20K instances, short of a solid lead on a fix.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
After revisiting the unit logs from successful runs and failed runs, this appears to be behind the fail (wants a reboot):

Reading state information... Done
2015-01-06 19:56:06 INFO mon-relation-changed Creating new GPT entries.
2015-01-06 19:56:06 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
2015-01-06 19:56:06 INFO mon-relation-changed The new table will be used at the next reboot.
2015-01-06 19:56:06 INFO mon-relation-changed GPT data structures destroyed! You may now partition the disk using fdisk or
2015-01-06 19:56:06 INFO mon-relation-changed other utilities.
2015-01-06 19:56:06 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
2015-01-06 19:56:06 INFO mon-relation-changed The new table will be used at the next reboot.
2015-01-06 19:56:06 INFO mon-relation-changed The operation has completed successfully.
2015-01-06 19:56:07 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
2015-01-06 19:56:07 INFO mon-relation-changed The new table will be used at the next reboot.
2015-01-06 19:56:07 INFO mon-relation-changed The operation has completed successfully.
2015-01-06 19:56:09 INFO mon-relation-changed Warning: The kernel is still using the old partition table.
2015-01-06 19:56:09 INFO mon-relation-changed The new table will be used at the next reboot.
2015-01-06 19:56:09 INFO mon-relation-changed The operation has completed successfully.
2015-01-06 19:56:09 INFO mon-relation-changed mkfs.xfs: cannot open /dev/vdb1: Device or resource busy
2015-01-06 19:56:09 INFO mon-relation-changed ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdb1']' returned non-zero exit status 1
2015-01-06 19:56:09 INFO worker.uniter.jujuc server.go:102 running hook tool juju-log [-l ERROR Unable to initialize device: /dev/vdb]
2015-01-06 19:56:09 DEBUG worker.uniter.jujuc server.go:103 hook context id ceph/1:mon-relation-changed:3229199905852646069; dir /var/lib/juju/agents/unit-ceph-1/charm
2015-01-06 19:56:09 ERROR juju-log mon:1: Unable to initialize device: /dev/vdb
2015-01-06 19:56:09 INFO mon-relation-changed Traceback (most recent call last):
2015-01-06 19:56:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-1/charm/hooks/mon-relation-changed", line 339, in <module>
2015-01-06 19:56:09 INFO mon-relation-changed     hooks.execute(sys.argv)
2015-01-06 19:56:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-1/charm/hooks/charmhelpers/core/hookenv.py", line 528, in execute
2015-01-06 19:56:09 INFO mon-relation-changed     self._hooks[hook_name]()
2015-01-06 19:56:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-1/charm/hooks/mon-relation-changed", line 205, in mon_relation
2015-01-06 19:56:09 INFO mon-relation-changed     reformat_osd(), config('ignore-device-errors'))
2015-01-06 19:56:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-1/charm/hooks/ceph.py", line 327, in osdize
2015-01-06 19:56:09 INFO mon-relation-changed     osdize_dev(dev, osd_format, osd_journal, reformat_osd, ignore_errors)
2015-01-06 19:56:09 INFO mon-relation-changed   File "/var/lib/juju/agents/unit-ceph-1/charm/hooks/ceph.py", line 375, in osdize_dev
2015-01-06 19:56:09 INFO mon-relation-changed     raise e
2015-01-06 19:56:09 INFO mon-relation-changed subprocess.CalledProcessError: Command '['ceph-disk-prepare', '--fs-type', u'xfs', '--zap-disk', u'/dev/vdb']' returned non-zero exit status 1
2015-01-06 19:56:09 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
Please disregard my ceph comment on this bug. Wrong tab.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
And disregard the GPT comment. Same story. Geez. I'm going EOD!

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
Oh, fyi, it is all 3 ceph units:

  ceph:
    charm: local:utopic/ceph-105
    exposed: false
    relations:
      mon:
      - ceph
    units:
      ceph/0:
        agent-state: error
        agent-state-info: 'hook failed: mon-relation-changed'
        agent-version: 1.20.14
        machine: 2
        public-address: 172.20.108.60
      ceph/1:
        agent-state: error
        agent-state-info: 'hook failed: mon-relation-changed'
        agent-version: 1.20.14
        machine: 3
        public-address: 172.20.108.61
      ceph/2:
        agent-state: error
        agent-state-info: 'hook failed: mon-relation-changed'
        agent-version: 1.20.14
        machine: 4
        public-address: 172.20.108.62

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
So from a user symptom / impact standpoint, when new instances are nova booted, they are able to send a DHCP DISCOVER packet through the corresponding bridge, but return DHCP OFFER traffic never reaches the new instance. In all cases that I have seen, the neutron net, subnet, and port statuses all report A-OK via cli queries. In some cases, inspecting the new underlying bridge with brctl results in error(s), but not always.

## Symptomatic info re: bridge:

$ sudo brctl show qvob744fc12-71
bridge name	bridge id	STP enabled	interfaces
qvob744fc12-71	can't get info Operation not supported

$ sudo brctl showmacs qvob744fc12-71
read of forward table failed: Operation not supported

## Symptomatic info from nova console-log:

 * Starting configure network device  [ OK ]
cloud-init-nonet[13.52]: waiting 120 seconds for network device
cloud-init-nonet[133.52]: gave up waiting for a network device.
Cloud-init v. 0.7.5 running 'init' at Tue, 06 Jan 2015 16:03:14 +. Up 133.72 seconds.
ci-info: ++++++++++++++++++++Net device info+++++++++++++++++++++
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |         .         |
ci-info: |  eth0  | True |     .     |     .     | fa:16:3e:f9:18:4f |
ci-info: +--------+------+-----------+-----------+-------------------+
ci-info: !!!Route info failed

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
We're seeing ovs crashes on a private deployment. They seem to surface only after being up for some time (~1 month) and after creating/deleting a lot of instances (~20K) over that time period, but all with a sane system resource load.

$ apt-cache policy openvswitch-common
openvswitch-common:
  Installed: 2.0.2-0ubuntu0.14.04.1
  Candidate: 2.0.2-0ubuntu0.14.04.1
  Version table:
 *** 2.0.2-0ubuntu0.14.04.1 0
        500 http://archive.ubuntu.com//ubuntu/ trusty-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.1+git20140120-0ubuntu2 0
        500 http://archive.ubuntu.com//ubuntu/ trusty/main amd64 Packages

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
** Tags added: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1352570] Re: ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()
For the life of me, I couldn't get ubuntu-bug or apport to automagically add the .crash and stack trace to this bug. Here it is, though, via attachment (~9 MB).

** Attachment added: _usr_sbin_ovs-vswitchd.0.crash
   https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+attachment/4293053/+files/_usr_sbin_ovs-vswitchd.0.crash

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openvswitch in Ubuntu.
https://bugs.launchpad.net/bugs/1352570

Title:
  ovs-vswitchd crashed with SIGSEGV in nl_attr_get_size()

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1352570/+subscriptions
[Bug 1403114] [NEW] [SRU] icehouse package needs python-six 1.5.2-1 dependency
Public bug reported:

The keystone icehouse package needs a dependency on python-six 1.5.2-1, which is in the cloud archive.

2014-12-16 04:05:17 INFO install Setting up keystone (1:2014.1.3-0ubuntu2~cloud0) ...
2014-12-16 04:05:17 INFO install Traceback (most recent call last):
2014-12-16 04:05:17 INFO install   File "/usr/bin/keystone-manage", line 37, in <module>
2014-12-16 04:05:17 INFO install     from keystone import cli
2014-12-16 04:05:17 INFO install   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 23, in <module>
2014-12-16 04:05:17 INFO install     from keystone.common import sql
2014-12-16 04:05:17 INFO install   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/__init__.py", line 17, in <module>
2014-12-16 04:05:17 INFO install     from keystone.common.sql.core import *
2014-12-16 04:05:17 INFO install   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 35, in <module>
2014-12-16 04:05:17 INFO install     from keystone.openstack.common.db.sqlalchemy import models
2014-12-16 04:05:17 INFO install   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/models.py", line 32, in <module>
2014-12-16 04:05:17 INFO install     class ModelBase(six.Iterator):
2014-12-16 04:05:17 INFO install AttributeError: 'module' object has no attribute 'Iterator'
2014-12-16 04:05:17 INFO install dpkg: error processing keystone (--configure):
2014-12-16 04:05:17 INFO install  subprocess installed post-installation script returned error exit status 1

** Affects: keystone (Ubuntu)
   Importance: Undecided
       Status: New

** Tags: openstack uosci

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1403114

Title:
  [SRU] icehouse package needs python-six 1.5.2-1 dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1403114/+subscriptions
[Bug 1403114] Re: [SRU] icehouse package needs python-six 1.5.2-1 dependency
ubuntu@juju-beis1-machine-8:~$ dpkg -l | egrep 'keystone|six'
iF  keystone               1:2014.1.3-0ubuntu2~cloud0   OpenStack identity service - Daemons
ii  python-keystone        1:2014.1.3-0ubuntu2~cloud0   OpenStack identity service - Python library
ii  python-keystoneclient  1:0.7.1-ubuntu1~cloud0       Client library for OpenStack Identity API
ii  python-six             1.1.0-2                      Python 2 and 3 compatibility library (Python 2 interface)
ubuntu@juju-beis1-machine-8:~$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 12.04.5 LTS
Release:	12.04
Codename:	precise

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1403114

Title:
  [SRU] icehouse package needs python-six 1.5.2-1 dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1403114/+subscriptions
[Bug 1403114] Re: [SRU] icehouse package needs python-six 1.5.2-1 dependency
When installing keystone in a fresh environment using the cloud archive, it installs cleanly. But if an older version of python-six is already installed, such as one pulled in earlier by another package's dependency, the older 1.1.0-2 version of six remains.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1403114

Title:
  [SRU] icehouse package needs python-six 1.5.2-1 dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1403114/+subscriptions
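The behavior described above is why the SRU asks for a versioned dependency: an unversioned `Depends: python-six` is satisfied by the stale 1.1.0-2, so apt never upgrades it. The sketch below only illustrates the version comparison involved; `upstream_tuple` is a hypothetical helper that compares upstream version components and ignores Debian revisions and epochs for brevity:

```python
# Why an unversioned Depends is not enough: python-six 1.1.0-2 satisfies
# "Depends: python-six", but keystone icehouse actually needs >= 1.5.2.
# A versioned dependency would force apt to pull the newer six from the
# cloud archive. This comparison is a simplified illustration only.
def upstream_tuple(version):
    # "1.1.0-2" -> (1, 1, 0): drop the Debian revision, split on dots.
    upstream = version.split("-")[0]
    return tuple(int(part) for part in upstream.split("."))


installed = "1.1.0-2"   # the stale six noted above
required = "1.5.2"      # what keystone icehouse needs
needs_upgrade = upstream_tuple(installed) < upstream_tuple(required)
```

Real packaging tools use the full Debian version-comparison algorithm (`dpkg --compare-versions`), which this deliberately does not reproduce.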
[Bug 1403114] Re: [SRU] icehouse package needs python-six 1.5.2-1 dependency
This issue affects precise OpenStack charm deployments.

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to keystone in Ubuntu.
https://bugs.launchpad.net/bugs/1403114

Title:
  [SRU] icehouse package needs python-six 1.5.2-1 dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1403114/+subscriptions
[Bug 1382632] Re: Insecure key file permissions
** Tags added: openstack

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to horizon in Ubuntu.
https://bugs.launchpad.net/bugs/1382632

Title:
  Insecure key file permissions

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1382632/+subscriptions
[Blueprint servercloud-u-openstack-charms] OpenStack Charm work for Utopic
Blueprint changed by Ryan Beisner:

Work items changed:

  Work items for ubuntu-14.05:
  [gnuoy] charm-helpers unit testing: DONE

  Work items for ubuntu-14.06:
  [mikemc] Simplestreams image sync charm: DONE
  [niedbalski] swift-storage block device persistence through reboots: DONE
  [gnuoy] Split neutron API from nova-cloud-controller: DONE
  [gnuoy] New neutron-openvswitch subordinate charm: DONE
  [corey.bryant] amulet testing approach for openstack charms: DONE
  [james-page] network reference architecture for openstack charms: DONE
  [james-page] nova-compute-vmware charm: DONE
  [james-page] cinder-vmware charm: DONE
  [zulcss] nova-compute-power charm: DONE

  Work items for ubuntu-14.07:
  [corey.bryant] amulet tests - keystone: DONE
  [corey.bryant] amulet tests - quantum-gateway: DONE
  [1chb1n] amulet tests - glance: DONE
  [corey.bryant] amulet tests - nova-compute: DONE
  [corey.bryant] amulet tests - nova-cloud-controller: DONE
  [corey.bryant] amulet tests - swift-proxy, swift-storage: DONE
  [james-page] Multiple network support across openstack charms: DONE
  Backport haproxy 1.5.x to trusty: DONE

  Work items for ubuntu-14.09:
  [james-page] HTTPS support with network-split configurations: DONE
  [james-page] hacluster charm updates to support reconfiguration: DONE
  amulet tests - cinder: DONE
  [corey.bryant] amulet tests - ceph-*: DONE
  [james-page] Updates to neutron charms for hyper-v integration: DONE
  [xianghui] Add IPv6 support to the charms: DONE
  Enable haproxy backport for 14.04 (supporting IPv6 backends + TLS): DONE

  Work items:
  [james-page] nvp-transport-node - nsx-transport-node rename: INPROGRESS
  worker configuration - cinder, glance, keystone, neutron-api: INPROGRESS
  juno release review across openstack charms: INPROGRESS
  Add support to mysql charm for network-splits: TODO
  Add support to heat charm for network-splits: POSTPONED
  Add support to mongodb charm for network-splits: TODO
  [james-page] Charm developer documentation: POSTPONED
  [james-page] Charm template for charm-tools: POSTPONED
  swift-proxy unit testing: POSTPONED
  nova-compute unit testing: POSTPONED
  nova-cloud-controller unit testing: POSTPONED
  HA cluster in-depth monitoring: POSTPONED
  [hopem] Ephemeral ceph backend for nova-compute: INPROGRESS
  [gnuoy] Spice/VNC support in nova charms: DONE
  Nagios nrpe subordinate support for OpenStack charms: POSTPONED (stretch)
  MS-SQLServer as a backend for OpenStack: POSTPONED
  [corey.bryant] keystone deploy from git: INPROGRESS
+ make openstack-charm-testing a project: TODO

-- 
OpenStack Charm work for Utopic
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-u-openstack-charms
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
Today's commits on n-api and ncc triggered deploy tests. P, T and U all deploy cleanly with -next branches:

ceilometer: 52
ceilometer-agent: 41
ceph: 82
cinder: 45
glance: 62
keystone: 79
mongodb: 52
mysql: 126
neutron-api: 45
neutron-gateway: 67
neutron-openvswitch: 32
nova-cloud-controller: 104
nova-compute: 80
openstack-dashboard: 37
rabbitmq-server: 59
swift-proxy: 60
swift-storage-z1: 43
swift-storage-z2: 43
swift-storage-z3: 43

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to neutron in Ubuntu.
https://bugs.launchpad.net/bugs/1372893

Title: Neutron has an empty database after deploying juno on utopic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1372893/+subscriptions
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
Also just so you know, unit test on n-api is failing:

Starting tests...
...F..F
==
FAIL: test_register_configs (unit_tests.test_neutron_api_utils.TestNeutronAPIUtils)
--
Traceback (most recent call last):
  File "/var/lib/jenkins/checkout/neutron-api/unit_tests/test_neutron_api_utils.py", line 134, in test_register_configs
    self.assertItemsEqual(_regconfs.configs, confs)
AssertionError: Element counts were not equal:
First has 1, Second has 0: '/etc/apache2/sites-available/openstack_https_frontend.conf'
First has 0, Second has 1: '/etc/apache2/sites-available/openstack_https_frontend'
==
FAIL: test_restart_map (unit_tests.test_neutron_api_utils.TestNeutronAPIUtils)
--
Traceback (most recent call last):
  File "/var/lib/jenkins/checkout/neutron-api/unit_tests/test_neutron_api_utils.py", line 115, in test_restart_map
    self.assertItemsEqual(_restart_map, expect)
AssertionError: Element counts were not equal:
First has 1, Second has 0: '/etc/apache2/sites-available/openstack_https_frontend.conf'
First has 0, Second has 1: '/etc/apache2/sites-available/openstack_https_frontend'

Name                       Stmts   Miss  Cover   Missing
--------------------------------------------------------
hooks/neutron_api_context     61      4    93%   27-29, 67
hooks/neutron_api_hooks      167      5    97%   159-160, 233, 349-350
hooks/neutron_api_utils       88      0   100%
--------------------------------------------------------
TOTAL                        316      9    97%

Ran 51 tests in 2.185s
FAILED (failures=2)
make: *** [test] Error 1
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
Using these revs from -next, Trusty and Utopic deploy cleanly: neutron-api: 44 nova-cloud-controller: 103

** Attachment added: unit-nova-cloud-controller-0.log.trusty
   https://bugs.launchpad.net/charms/+source/nova-cloud-controler/+bug/1372893/+attachment/4220680/+files/unit-nova-cloud-controller-0.log.trusty
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
And Precise deploy fails with the same error jjo reports: hook failed: shared-db-relation-changed

** Attachment added: unit-nova-cloud-controller-0.log.precise
   https://bugs.launchpad.net/charms/+source/nova-cloud-controler/+bug/1372893/+attachment/4220681/+files/unit-nova-cloud-controller-0.log.precise
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
FYI/FWIW, there was an error in my deploy test logic for gnuoy's MP. I deploy tested it against Juno, which passed. Once it merged, our CI deploy tests, which run -next against P, T and U, started to fail. Naturally, it'd be *much* better to detect that at the MP level. Adjusting logic.
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
Nova sec operations are now OK after deploying with ncc and neutron-api. http://paste.ubuntu.com/8459292/

FYI, the deployer bundle used: http://paste.ubuntu.com/8459289/

lint checks pass for:
https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-fix-1372893
https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-1372893
revnos 103 and 40, respectively.

unit test pass for:
https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-fix-1372893
revno 104

unit test fails for:
https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/next-fix-1372893
revno 41
unit test results: http://paste.ubuntu.com/8459538/

Retesting deployment with 104 + 41.
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
** Tags removed: osci
** Tags added: uosci
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
fyi, same bundle deploys ok with 104+41, and nova-secgroup cmds succeed.

## revnos deployed:
ceilometer: 52
ceilometer-agent: 41
ceph: 82
cinder: 45
glance: 62
keystone: 79
mongodb: 52
mysql: 126
neutron-api: 41
neutron-gateway: 65
neutron-openvswitch: 31
nova-cloud-controller: 104
nova-compute: 80
openstack-dashboard: 37
rabbitmq-server: 59
swift-proxy: 60
swift-storage-z1: 43
swift-storage-z2: 43
swift-storage-z3: 43

## nova secgroup cmd output
+ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+ nova secgroup-add-rule default udp 53 53 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| udp         | 53        | 53      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+ nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+ nova secgroup-add-rule default tcp 443 443 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+ nova secgroup-add-rule default tcp 3128 3128 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 3128      | 3128    | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[Bug 1372893] Re: Neutron has an empty database after deploying juno on utopic
** Also affects: neutron-api (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Also affects: nova-cloud-controler (Juju Charms Collection)
   Importance: Undecided
   Status: New

** No longer affects: neutron-api (Ubuntu)
[Bug 1355877] Re: celery log files are not rotated
+1, we had a filled disk on one box due to this.
[Bug 1313550] Re: ping does not work as a normal user on trusty tarball cloud images.
Is this a dup of 1302192? https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1302192
[Bug 1313550] Re: ping does not work as a normal user on trusty tarball cloud images.
FYI may also want to see comment 5 from previous/related bug 1302192; attributes were OK in the cloud image at that point. https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1302192/comments/5
[Bug 1308756] Re: VNC graphics issues on guest VMs after server host upgrade from 12.04 to 14.04
Thank you for the info. Can you also collect and post more info about the host using these commands?

dpkg -l | egrep 'qemu|kvm|libvirt'
lsb_release -a
free -m
uname -a
virsh capabilities
qemu-system-x86_64 -machine help
[Bug 1308756] Re: VNC graphics issues on guest VMs after server host upgrade from 12.04 to 14.04
Add'l input from my environment: I recently upgraded one of my two local hosts from Precise to Trusty; it hosts a dozen or so guests, including Ubuntu Precise, Trusty, CentOS, and a Windows guest. I had to adjust the machine types in the XML definitions after the upgrade. Other than that, these guests have all been stable and happy. Granted, I generally don't use VNC/VGA to access them, so our scenarios are a bit different in that respect. As a quick check, I just tested using VNC (Vinagre client) from my Precise desktop to the Trusty-upgraded server that hosts about half of my VMs. The beloved Windows 8 GUI, LXDE and Openbox displayed flawlessly for me. Display and responsiveness on these was normal. On the Trusty guests with XFCE, Gnome, and Gnome Classic, there were some odd video artifacts and screen-drawing issues over the qemu VNC head. I expect this is due to things like GPU acceleration and other fancy GUI features, but that may require some more research. Next, I can hop on a Win 7 laptop and test VNC performance to a couple of those Trusty-hosted VM heads.
[Bug 1308570] [NEW] Sites not served after Precise - Trusty upgrade - new apache2 documentroot path
Public bug reported: An existing site is no longer online following a Precise to Trusty upgrade when the site is in the default DocumentRoot. This is because DocumentRoot on Trusty is now /var/www/html whereas it was /var/www for Precise. There is a simple work-around for simple deployments: move the contents of /var/www to /var/www/html. More complex scenarios may require additional reconfiguration, such as anything else referencing these as absolute paths.

** Affects: apache2 (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: trusty
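The simple work-around described above can be sketched roughly as follows. This demonstrates the move in a scratch directory standing in for /var/www (the site file name is a made-up example), since doing it on a live server would need root and a quick Apache config review afterwards:

```shell
# Demonstration of the work-around in a scratch directory (a stand-in
# for /var/www); index.html is a hypothetical pre-upgrade site file.
www=$(mktemp -d)
mkdir -p "$www/html"                          # the new Trusty DocumentRoot
echo '<h1>hello</h1>' > "$www/index.html"     # pre-upgrade site content
# Move everything except the new html/ directory itself into it:
find "$www" -mindepth 1 -maxdepth 1 ! -name html -exec mv {} "$www/html/" \;
ls "$www/html"
```

On a real host the same `find ... -exec mv` pattern avoids the trap of `mv /var/www/* /var/www/html/`, which would try to move html/ into itself.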
[Bug 1291321] Re: migration fails between 12.04 Precise and 14.04 Trusty
@Doug I saw the same in a 12.04 to 14.04 upgrade, and also had to edit the machine type. Other than that one edit, all of my VMs worked after the Trusty upgrade. Take note that this triggers Windows VMs having to re-activate due to the changed 'hardware.' But I think this bug is more about a live migration scenario between two virtual machine hosts, with the source on 12.04 and the destination running 14.04.
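For reference, the machine-type edit mentioned above amounts to a one-line change in the domain XML (normally done with `virsh edit <domain>`). A sketch using sed on a copy of the XML; the old and new machine-type strings here are examples only, so check what your qemu actually supports with `qemu-system-x86_64 -machine help`:

```shell
# Example edit of a libvirt domain XML machine type. The type names
# 'pc-1.0' and 'pc-i440fx-trusty' are illustrative; verify against
# 'qemu-system-x86_64 -machine help' on the target host.
xml=$(mktemp)
cat > "$xml" <<'EOF'
<type arch='x86_64' machine='pc-1.0'>hvm</type>
EOF
sed -i "s/machine='pc-1.0'/machine='pc-i440fx-trusty'/" "$xml"
grep "machine=" "$xml"
```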
[Bug 1308756] Re: VNC graphics issues on guest VMs after server host upgrade from 12.04 to 14.04
Thank you for filing this bug. For further clarification: are you connecting to a vncserver service running within the guest VM? Or are you connecting to the qemu VNC head on the server which hosts the virtual machines? Also, on which host and environment are you seeing the 'low graphics mode' dialog box?
[Bug 1306646] Re: dnsmasq provides recursive answers to the Internet by default
Curiosity fueled a couple of tests on this. In checking 2 common scenarios, at least one use case confirms. Aside from this confirmation, a bigger-picture question could be: in principle, how is 53 being open and interactive by default any different than 80, 22, or 137-139 being open and interactive by default, when dnsmasq is not installed by default? If a user chooses to add a service, whether that is ssh, samba, apache, dnsmasq, or others, in what scenarios are we to protect the user against him/herself? One could argue that all of those protocols are subject to abuse. In other words - this could be a slippery slope. Having said that little devil's advocate bit, I am *all for* making sure our default behavior is to not have an open recursive DNS server.

Here's what I found:
[test0]: Trusty default server install + Virtual Machine Host package selection (ok)
[test1]: Trusty default server install + install dnsmasq (CONFIRMS open recursive DNS condition)

# [test0] # Trusty default server install + Virtual Machine Host package selection

* This method does not result in an open recursive DNS server.
* The default ip interface layout follows; eth0 is connected and has obtained an address via dhcp; libvirt has created the virbr0 interface, and dnsmasq is listening only on the virbr0 interface (192.168.122.1).

rbeisner@isotest0:~$ sudo ip addr | grep gl
    inet 10.4.5.132/24 brd 10.4.5.255 scope global eth0
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

rbeisner@isotest0:~$ sudo netstat -taupn | egrep ':22|:53'
tcp   0  0 192.168.122.1:53   0.0.0.0:*   LISTEN   1148/dnsmasq
tcp   0  0 0.0.0.0:22         0.0.0.0:*   LISTEN   852/sshd
tcp6  0  0 :::22              :::*        LISTEN   852/sshd
udp   0  0 192.168.122.1:53   0.0.0.0:*            1148/dnsmasq

* The default iptables firewall rules for this use case follow; destination ports 53 tcp/udp are explicitly allowed in on the virbr0 interface. DNS ports are not disallowed anywhere, and there isn't a default drop or reject rule in the INPUT chain. But because dnsmasq is only bound to the virbr0 interface, it should not be accessible on any other interface, even if all iptables rules are flushed.

rbeisner@isotest0:~$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 19526 packets, 29M bytes)
 pkts bytes target  prot opt in      out     source            destination
    0     0 ACCEPT  udp  --  virbr0  *       0.0.0.0/0         0.0.0.0/0         udp dpt:53
    0     0 ACCEPT  tcp  --  virbr0  *       0.0.0.0/0         0.0.0.0/0         tcp dpt:53
    0     0 ACCEPT  udp  --  virbr0  *       0.0.0.0/0         0.0.0.0/0         udp dpt:67
    0     0 ACCEPT  tcp  --  virbr0  *       0.0.0.0/0         0.0.0.0/0         tcp dpt:67

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in      out     source            destination
    0     0 ACCEPT  all  --  *       virbr0  0.0.0.0/0         192.168.122.0/24  ctstate RELATED,ESTABLISHED
    0     0 ACCEPT  all  --  virbr0  *       192.168.122.0/24  0.0.0.0/0
    0     0 ACCEPT  all  --  virbr0  virbr0  0.0.0.0/0         0.0.0.0/0
    0     0 REJECT  all  --  *       virbr0  0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable
    0     0 REJECT  all  --  virbr0  *       0.0.0.0/0         0.0.0.0/0         reject-with icmp-port-unreachable

Chain OUTPUT (policy ACCEPT 10169 packets, 592K bytes)
 pkts bytes target  prot opt in      out     source            destination
    0     0 ACCEPT  udp  --  *       virbr0  0.0.0.0/0         0.0.0.0/0         udp dpt:68

* Flush iptables, all traffic allowed:

rbeisner@isotest0:~$ sudo iptables -F
rbeisner@isotest0:~$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 39 packets, 2712 bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 20 packets, 1792 bytes)
 pkts bytes target prot opt in out source destination

* Port scans from a neighboring node confirm that tcp and udp 53 are closed on the world-facing interface:

rbeisner@bcu:~$ sudo nmap -sU -p 53 10.4.5.132 | grep 53
53/udp closed domain
rbeisner@bcu:~$ sudo nmap -sT -p 53 10.4.5.132 | grep 53
53/tcp closed domain
rbeisner@bcu:~$ sudo nmap -sT -p 22 10.4.5.132 | grep 22
22/tcp open ssh
...

# [test1] # Trusty default server install + install dnsmasq (CONFIRMS open recursive DNS condition)

* CONFIRMS the default condition to be an open recursive DNS server /!\.
* DNS query from a neighboring host
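For reference, one mitigation sketch for the confirmed [test1] condition is a dnsmasq drop-in that binds only to named interfaces, so the daemon stops answering on world-facing NICs. The options used are standard dnsmasq settings; the snippet writes to a temp file purely for illustration, whereas a real deployment would place it under /etc/dnsmasq.d/ and restart the service:

```shell
# Sketch of a local-only dnsmasq drop-in. Written to a temp file here;
# a real deployment would place it under /etc/dnsmasq.d/ and then
# restart dnsmasq (both steps need root).
conf=$(mktemp)
cat > "$conf" <<'EOF'
# answer queries only on loopback
interface=lo
# bind listening sockets explicitly to the listed interfaces
bind-interfaces
# never forward unqualified (plain) names upstream
domain-needed
# never forward reverse lookups for private address ranges
bogus-priv
EOF
grep -v '^#' "$conf"
```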
[Bug 1306646] Re: dnsmasq provides recursive answers to the Internet by default
Yep, I'm with ya Robie.
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
FYI - related bug regarding missing dependencies in a no-network scenario: https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1172566
[Bug 1172566] Re: MAAS Server ISO install fails when network is disconnected
Issue confirmed on Trusty daily ISO.

** Tags added: trusty
[Bug 1172566] Re: MAAS Server ISO install fails when network is disconnected
More detail, see also attached tarball:

Apr 14 17:26:05 in-target: dpkg: error processing package maas-region-controller-min (--configure):
Apr 14 17:26:05 in-target:  subprocess installed post-installation script returned error exit status 255
Apr 14 17:26:05 in-target: dpkg: dependency problems prevent configuration of maas-region-controller:
Apr 14 17:26:05 in-target:  maas-region-controller depends on maas-region-controller-min (= 1.5+bzr2236-0ubuntu1); however:
Apr 14 17:26:05 in-target:   Package maas-region-controller-min is not configured yet.
Apr 14 17:26:05 in-target:
Apr 14 17:26:05 in-target: dpkg: error processing package maas-region-controller (--configure):
Apr 14 17:26:05 in-target:  dependency problems - leaving unconfigured
Apr 14 17:26:05 in-target: dpkg: dependency problems prevent configuration of maas:
Apr 14 17:26:05 in-target:  maas depends on maas-region-controller; however:
Apr 14 17:26:05 in-target:   Package maas-region-controller is not configured yet.
Apr 14 17:26:05 in-target:
Apr 14 17:26:05 in-target: dpkg: error processing package maas (--configure):
Apr 14 17:26:05 in-target:  dependency problems - leaving unconfigured
Apr 14 17:26:05 in-target: dpkg: dependency problems prevent configuration of maas-dns:
Apr 14 17:26:05 in-target:  maas-dns depends on maas-region-controller-min (= 1.5+bzr2236-0ubuntu1); however:
Apr 14 17:26:05 in-target:   Package maas-region-controller-min is not configured yet.
Apr 14 17:26:05 in-target:
Apr 14 17:26:05 in-target: dpkg: error processing package maas-dns (--configure):
Apr 14 17:26:05 in-target:  dependency problems - leaving unconfigured
Apr 14 17:26:05 in-target: Errors were encountered while processing:
Apr 14 17:26:05 in-target:  maas-region-controller-min
Apr 14 17:26:05 in-target:  maas-region-controller
Apr 14 17:26:05 in-target:  maas
Apr 14 17:26:05 in-target:  maas-dns
Apr 14 17:26:06 main-menu[220]: WARNING **: Configuring 'pkgsel' failed with error code 100
Apr 14 17:26:06 main-menu[220]: WARNING **: Menu item 'pkgsel' failed.

** Attachment added: install.tgz
   https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1172566/+attachment/4083633/+files/install.tgz
[Bug 1172566] Re: MAAS Server ISO install fails when network is disconnected
Thanks, Scott. Further confirmation: In previous no-network MAAS tests, the environment had no default gateway set. So, the missing dependency claim is debunked. This is not a dependency, nor an ISO issue. Dependencies are indeed met on the Trusty ISO. I confirmed this by successful installation of MAAS from ISO on a machine with a fully configured ip interface, but with no internet access. The 'no-network MAAS install failure' issue that we're seeing is specifically caused by having no default gateway set.
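The condition identified above can be checked up front with something like the following; the helper name is made up, and it simply tests a routing-table dump for a default route:

```shell
# Hypothetical pre-install check for the failure condition above:
# a configured interface but no default gateway in the routing table.
has_default_gw() { printf '%s\n' "$1" | grep -q '^default'; }

routes="$(ip route 2>/dev/null)"
if ! has_default_gw "$routes"; then
    echo "no default gateway set - MAAS ISO install is expected to fail here"
fi
```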
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Take note: the missing dependency claim is debunked. Dependencies are indeed met on the Trusty ISO. I confirmed this by successful installation of MAAS from ISO on a machine with a fully configured ip interface, but with no internet access. There is however, a separate MAAS installation problem when no default gateway is set. Updated bug 1172566.
[Bug 1306646] Re: dnsmasq provides recursive answers to the Internet by default
I think you're right, and I think that is indeed the place to do it. I've seen other people, on similar topics, debate 'the line.' This very discussion will be valuable in this change because it shows that we are assessing that aspect. Thanks again for filing this.
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Confirmed fix in a new install using Trusty iso 2014-apr-11 ... MAAS front end is functional on first boot.
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Fix confirmed on Trusty server ISO 2014-APR-10. MAAS front end is functional after installation via 'Create a new MAAS on this server' method. Thanks, Andres!
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Dave - I'll re-test today with no internet. But the good news is that, using the 20140410 ISO, you can now get a working MAAS front end. James - will do on 20140411, again with no internet.
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Installing a MAAS controller from the ISO with no internet connection fails due to unmet dependencies of the maas, maas-dns and maas-region-controller packages; pkgsel bails out with error code 100. If we expect users to be able to deploy a MAAS controller on an island with no internet connection, then this is probably a separate bug. If we expect users to have internet access in order to resolve dependencies from the repositories during install, then I would not consider this a bug. What is our use-case expectation for MAAS regarding connectivity during install?
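The dependency failure above only occurs when the archive is unreachable, so a quick connectivity probe answers the 'island install' question up front. A minimal sketch, not part of MAAS or the installer (the mirror hostname, port, and timeout are assumptions), that reports whether the default Ubuntu archive can be reached:

```shell
# Hypothetical pre-flight check before an ISO-based MAAS controller install:
# maas, maas-dns and maas-region-controller pull further dependencies from
# the repositories, so probe the default archive first. Uses bash's /dev/tcp
# redirection so no curl/wget is required; timeout(1) bounds the wait when
# the network is unreachable.
check_archive_reachable() {
    if timeout 5 bash -c 'exec 3<>/dev/tcp/archive.ubuntu.com/80' 2>/dev/null; then
        echo "online"
    else
        echo "offline"
    fi
}

status=$(check_archive_reachable)
echo "archive: $status"
```

On an island deployment this prints "archive: offline", matching the failure mode described above; "archive: online" suggests the install should be able to resolve its dependencies.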
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Re-confirmed fix in a new install using Trusty iso 2014-apr-10. MAAS front end is functional on first boot.
[Bug 1298559] Re: Internal Server Error after installing MAAS from Trusty daily ISO
Confirmed 'internal server error' issue exists with Trusty 2014-APR-09 daily ISO, when using the 'Create a new MAAS on this server' method.
[Bug 1257186] Re: memory leakage messages
Issue exists in Trusty server ISO 2014-APR-09. Workaround confirmed, but I've not explored other potential effects of removing this package: sudo apt-get remove libpam-smbpass ** Tags added: trusty -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to samba in Ubuntu. https://bugs.launchpad.net/bugs/1257186 Title: memory leakage messages To manage notifications about this bug go to: https://bugs.launchpad.net/samba/+bug/1257186/+subscriptions
[Bug 1302772] Re: update of maas-cluster-controller on trusty dumps traceback and crashes
Confirmed to resolve the MAAS upgrade crash on an existing Trusty MAAS environment. Beware, however: after rev 2230 was confirmed in the repo, I was still stuck in a loop with apt/dpkg trying to install rev 2227. I couldn't complete an apt-get update (it halted, advising that dpkg --configure -a must first be run), but dpkg --configure -a tried to install rev 2227, bringing me back to the initial crash symptom. This broke the cycle and upgraded to 2230:

$ sudo dpkg --clear-selections
$ sudo apt-get update -y
$ sudo apt-get upgrade

Thank you all! -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to maas in Ubuntu. https://bugs.launchpad.net/bugs/1302772 Title: update of maas-cluster-controller on trusty dumps traceback and crashes To manage notifications about this bug go to: https://bugs.launchpad.net/maas/+bug/1302772/+subscriptions
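For anyone unfamiliar with the first command in that sequence: dpkg keeps a per-package "requested state" (the selections), and dpkg --clear-selections resets every non-essential package's requested state to deinstall, which is what discards the stale rev 2227 install request. A read-only sketch (assumes a Debian/Ubuntu host with dpkg present; falls back to a message elsewhere) showing the selection table that command operates on:

```shell
# Inspect the dpkg selection table without modifying it. Each line pairs a
# package with its requested state (install / hold / deinstall). The
# destructive counterpart used in the recovery above, dpkg --clear-selections,
# resets every non-essential package's requested state to deinstall.
if command -v dpkg >/dev/null 2>&1; then
    selections=$(dpkg --get-selections | head -n 5)
else
    selections="dpkg not available on this host"
fi
echo "$selections"
```

Running dpkg --get-selections before and after a --clear-selections makes it easy to see exactly which pending requests (such as the broken rev 2227 one) were dropped.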