[Bug 1505473] [NEW] pollen does not start on boot

2015-10-12 Thread Paul Collins
Public bug reported: ubuntu@keeton:~$ lsb_release -rc Release: 14.04 Codename: trusty ubuntu@keeton:~$ dpkg-query -W pollen pollen 4.11-0ubuntu1 ubuntu@keeton:~$ _ pollen does not start on boot, due to an error in the upstart config: ubuntu@keeton:~$ grep start
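For context, whether an upstart job runs at boot is controlled by its "start on" stanza. A minimal, hypothetical job file (illustrative only; this is not pollen's actual configuration, which the report says is broken) looks like:

```
# /etc/init/example.conf -- hypothetical upstart job, not pollen's
description "example daemon"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/sbin/exampled
```

A typo in the "start on" condition is enough to leave a job unstarted at boot while `start example` by hand still works.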

[Bug 1367547] [NEW] qemu-img convert -O raw is broken in trusty

2014-09-09 Thread Paul Collins
Public bug reported: qemu-img convert -O raw yields a file of the correct length with no content, e.g.: $ qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img trusty-server-cloudimg-amd64-disk1.raw $ ls -l trusty-* -rw-rw-r-- 1 paul paul 255590912 Sep 10 09:14
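The symptom described (a raw file of the right length but no content) can be checked for with plain coreutils. A sketch, using a made-up filename and a fabricated all-zero file to stand in for the broken output; nothing here is taken from the bug report:

```shell
# Check whether an image file is the right size but all zeros,
# the symptom reported for qemu-img convert.  'disk.raw' is a
# stand-in name; we fabricate one here to demonstrate the check.
head -c 1048576 /dev/zero > disk.raw      # simulate the empty output
size=$(stat -c %s disk.raw)               # GNU stat
if head -c "$size" /dev/zero | cmp -s - disk.raw; then
    echo "disk.raw: $size bytes, all zeros (symptom present)"
else
    echo "disk.raw: has non-zero content"
fi
```

Comparing against an equal-length stream of zeros distinguishes "empty but correctly sized" from a genuinely converted image.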

[Bug 1367547] Re: qemu-img convert -O raw is broken in trusty

2014-09-09 Thread Paul Collins
I can't reproduce this in a freshly created trusty VM, so this may be something strange with my machine. Marking Invalid for now. ** Changed in: qemu (Ubuntu) Status: New => Invalid ** Changed in: qemu (Ubuntu Trusty) Status: New => Invalid -- You received this bug notification

[Bug 1353269] [NEW] celery worker crashes on startup when python-librabbitmq is used

2014-08-05 Thread Paul Collins
Public bug reported: On a vanilla trusty install with rabbitmq-server and celeryd installed, celery worker crashes as follows: $ dpkg-query -W python-librabbitmq rabbitmq-server python-amqp librabbitmq1 celeryd celeryd 3.1.6-1ubuntu1 librabbitmq1 0.4.1-1 python-amqp 1.3.3-1ubuntu1

[Bug 1028268] Re: Bareword dns domain makes facter return incorrect info

2013-12-17 Thread Paul Collins
** Also affects: facter (Ubuntu Precise) Importance: Undecided Status: New ** Changed in: facter (Ubuntu Precise) Status: New => Confirmed ** Changed in: facter (Ubuntu Precise) Importance: Undecided => Medium -- You received this bug notification because you are a member of

[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2013-01-24 Thread Paul Collins
** Also affects: puppet (Ubuntu Precise) Importance: Undecided Status: New ** Changed in: puppet (Ubuntu Precise) Status: New => Confirmed ** Changed in: puppet (Ubuntu Precise) Importance: Undecided => High -- You received this bug notification because you are a member of

[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2013-01-24 Thread Paul Collins
** Description changed: Hi, This is related to https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where upstream has removed process_name.rb in 2.7.11 but it is still packaged and provided by puppet-common. [ This plugin frequently causes puppet to hang and requires

[Bug 1104691] [NEW] migrating from nova-volume is non-obvious

2013-01-24 Thread Paul Collins
Public bug reported: Today we upgraded an Openstack cloud from essex to folsom to grizzly. Switching from nova-volume to cinder was somewhat non-trivial. I'm not sure how much help is reasonable to expect from the Ubuntu packaging, but it seems that perhaps some of the steps involved

[Bug 1098320] Re: ceph: default crush rule does not suit multi-OSD deployments

2013-01-22 Thread Paul Collins
From my point of view, probably not very. We (= Canonical IS) are running 12.04 LTS plus packages from the Ubuntu Cloud Archive. I don't believe we'll do many more folsom+argonaut deployments before grizzly+bobtail arrives, and in any case it's sufficiently well documented internally that it's not

[Bug 1098320] Re: ceph: default crush rule does not suit multi-OSD deployments

2013-01-21 Thread Paul Collins
This has been fixed on upstream's master branch by commit c236a51a8040508ee893e4c64b206e40f9459a62 and cherry-picked to the bobtail branch as 6008b1d8e4587d5a3aea60684b1d871401496942. The change does not seem to have been applied to argonaut. -- You received this bug notification because you

[Bug 1098314] [NEW] pg_num inappropriately low on new pools

2013-01-10 Thread Paul Collins
Public bug reported: Version: 0.48.2-0ubuntu2~cloud0 On a Ceph cluster with 18 OSDs, new object pools are being created with a pg_num of 8. Upstream recommends that there be more like 100 or so PGs per OSD: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242 I've worked around
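A back-of-the-envelope pg_num calculation following the guidance cited in the bug (roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two). The OSD count matches the cluster described; the replica count is an example, not from the report:

```shell
# Rough pg_num sizing: ~100 PGs per OSD, divided by replicas,
# rounded up to the next power of two.  replicas=2 is an assumed
# value, not taken from the bug report.
osds=18
replicas=2
target=$(( osds * 100 / replicas ))
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "suggested pg_num: $pg_num"   # prints: suggested pg_num: 1024
```

Compare 1024 against the default of 8 the bug complains about; the gap explains the poor data distribution.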

[Bug 1098320] [NEW] ceph: default crush rule does not suit multi-OSD deployments

2013-01-10 Thread Paul Collins
Public bug reported: Version: 0.48.2-0ubuntu2~cloud0 Our Ceph deployments typically involve multiple OSDs per host with no disk redundancy. However, the default crush rule appears to distribute by OSD, not by host, which I believe will not prevent replicas from landing on the same host. I've
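A CRUSH rule that keeps replicas on distinct hosts selects leaves under buckets of type host rather than choosing OSDs directly. A sketch of such a rule (the name and ruleset number are illustrative, not from the fix referenced later in this bug):

```
rule replicate-by-host {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```

The key line is "chooseleaf firstn 0 type host": each replica lands under a different host bucket, so losing one machine cannot take out every copy.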

[Bug 1065883] Re: ceph rbd username and secret should be configured in nova-compute, not passed from nova-volume/cinder

2012-12-18 Thread Paul Collins
Is there an essex variant of this patch available? -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1065883 Title: ceph rbd username and secret should be configured in nova-compute, not

[Bug 1091939] [NEW] nova-network applies too liberal a SNAT rule

2012-12-18 Thread Paul Collins
Public bug reported: Version: 2012.1.3+stable-20120827-4d2a4afe-0ubuntu1 We recently set up a new Nova cluster on precise + essex with Juju and MaaS, and ran into a problem where instances could not communicate with the swift-proxy node on the MaaS network. This turned out to be due to
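The usual shape of a fix for an over-broad SNAT rule is to exempt traffic that stays on the local network before the SNAT target fires. A hypothetical illustration with example addresses; these are not the actual rules or networks from this cluster:

```
# Skip SNAT for traffic staying on the local/MaaS network
# (all addresses below are examples).
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.0/16 -j ACCEPT
# Only then SNAT everything else leaving the fixed range.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j SNAT --to-source 192.0.2.10
```

Without the ACCEPT rule, packets from instances to same-network hosts (such as a swift-proxy node) get their source rewritten, and replies go astray.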

[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-08-08 Thread Paul Collins
Upstream has addressed this problem by ensuring that mkcephfs always creates keyrings so that cephx can easily be enabled later. http://mid.gmane.org/alpine.deb.2.00.1208081405110.3...@cobra.newdream.net https://github.com/ceph/ceph/commit/96b1a496cdfda34a5efdb6686becf0d2e7e3a1c0 -- You

[Bug 1032405] [NEW] RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
Public bug reported: I've been doing a little work with Nova and Ceph. As part of this work I've been testing snapshots. I've discovered that RBDDriver does not implement create_volume_from_snapshot(). Attempts to create volumes from snapshots instead fall through to VolumeDriver's LVM-based
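At the RADOS level, creating a volume from a snapshot amounts to copying the snapshot's contents into a new image. A hypothetical command-line sketch of the same operation; this is not the attached patch, and the pool, volume, and snapshot names are placeholders:

```
# Copy a snapshot into a new RBD image (all names are examples).
rbd cp mypool/volume-1@snap-1 mypool/volume-2
```

An in-driver implementation would do the equivalent via librbd instead of shelling out, but the data flow is the same.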

[Bug 1032405] Re: RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
** Patch added: implement RBDDriver.create_volume_from_snapshot() https://bugs.launchpad.net/bugs/1032405/+attachment/3246430/+files/rbd-implement-create-volume-from-snapshot.patch -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to

[Bug 1032405] Re: RBDDriver does not support volume creation from snapshots

2012-08-02 Thread Paul Collins
** Also affects: nova (Ubuntu Precise) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/1032405 Title: RBDDriver does not support volume

[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-26 Thread Paul Collins
I took a look at this last night and wrote a patch, which seems to work on my test cluster. http://article.gmane.org/gmane.comp.file-systems.ceph.devel/8170 -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to ceph in Ubuntu.

[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-24 Thread Paul Collins
Hi James, Not until very recently — I've just posted my report to ceph-devel. Sorry for the delay! Paul -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to ceph in Ubuntu. https://bugs.launchpad.net/bugs/1026402 Title: mon cluster (no

[Bug 1028718] [NEW] nova volumes are inappropriately clingy for ceph

2012-07-24 Thread Paul Collins
Public bug reported: I've been doing a little work with nova-volume and ceph/RBD. In order to gain some fault-tolerance, I plan to run a nova-volume on each compute node. However, a problem arises, because a given nova-volume host only wants to deal with requests for volumes that it created.

[Bug 1026402] Re: mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-19 Thread Paul Collins
Huh, interesting. Your log has this line 2012-07-19 10:16:10.235911 7f9e20d22780 1 mon.a@-1(probing) e1 copying mon. key from old db to external keyring and I wonder what it means. Maybe it's plucking a key from a previous cephx-enabled install from an undisclosed location? Anyway, it

[Bug 1026402] [NEW] mon cluster (no cephx) fails to start unless empty keyring files are created

2012-07-18 Thread Paul Collins
Public bug reported: I'm running a 3-node test cluster on 12.04, without cephx authentication. I started out running 0.47.2 (an impatiently-smashed-together backport based on the upstream sources) and then upgraded to 0.48-1ubuntu1 (the packages from quantal rebuilt on precise). So my situation
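The workaround named in the bug title can be sketched as follows. A real cluster would touch keyrings under the mon data directories (on Ubuntu, /var/lib/ceph/mon/ceph-$id, an assumption about the default layout); a local directory is used here so the sketch is safe to run:

```shell
# Sketch of the workaround: create empty keyring files so the mons
# will start without cephx.  'demo-mon' stands in for the real mon
# data path (/var/lib/ceph/mon on Ubuntu, by assumption).
mon_data=demo-mon
for id in a b c; do
    mkdir -p "$mon_data/ceph-$id"
    touch "$mon_data/ceph-$id/keyring"    # an empty file is enough
done
ls "$mon_data"/ceph-*/keyring
```

The upstream fix referenced earlier in this thread made mkcephfs create the keyrings itself, which removes the need for this by hand.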

[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-07-12 Thread Paul Collins
Apologies for the delay in replying. We recently completed our migration to Keystone, and now nova volume-list works as expected. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/996233

[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-06-01 Thread Paul Collins
This Openstack installation is using deprecated auth, though, not keystone. The following flags are in nova.conf: --use_deprecated_auth --auth_strategy=deprecated I've only used Ubuntu packages on this machine — no devstack, no pip, no setup.py. -- You received this bug notification because

[Bug 996233] Re: nova and python-novaclient disagree on volumes API URLs

2012-05-31 Thread Paul Collins
Looking at William's trace, I see some differences with the traces I get. Not posting a full one in the first place was foolish of me. Here it is now. $ nova --debug volume-list connect: (XXX.XXX.XXX.XXX, 8774) send: 'GET /v1.1 HTTP/1.1\r\nHost: XXX.XXX.XXX.XXX:8774\r\nx-auth-project-id:

[Bug 1001088] [NEW] iSCSI targets are not restored following a reboot

2012-05-17 Thread Paul Collins
Public bug reported: When using nova-volume (with the flag --iscsi_helper=tgtadm, which is the default value in Ubuntu) if the host is rebooted, the iSCSI targets are not recreated. This means that the compute hosts are unable to reëstablish their iSCSI sessions, and volumes that were attached
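One common mitigation (an assumption on my part, not something proposed in the report) is to dump the running tgtd target definitions to a file that tgt re-reads at boot:

```
# Persist the currently defined iSCSI targets so tgtd can recreate
# them after a reboot (the output filename is an example).
tgt-admin --dump > /etc/tgt/conf.d/nova-volume.conf
```

This only preserves targets that exist at dump time; a proper fix would have nova-volume re-export its volumes on startup.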

[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2012-05-10 Thread Paul Collins
** Description changed: Hi, This is related to https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where upstream has removed process_name.rb in 2.7.11 but it is still packaged and provided by puppet-common. + + [ This plugin frequently causes puppet to hang and requires

[Bug 996233] [NEW] nova and python-novaclient disagree on volumes API URLs

2012-05-07 Thread Paul Collins
Public bug reported: I noticed the following (Ubuntu 12.04 LTS on the Nova cluster, Ubuntu 12.04 LTS on my machine): $ nova volume-list ERROR: n/a (HTTP 404) Based on the output of nova --debug volume-list, it looks like python-novaclient is expecting to be able to do GET

[Bug 995719] Re: process_name.rb removed in 2.7.11 but still provided by puppet-common

2012-05-06 Thread Paul Collins
Looks like debian/patches/debian-changes is adding the file back. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to puppet in Ubuntu. https://bugs.launchpad.net/bugs/995719 Title: process_name.rb removed in 2.7.11 but still provided

[Bug 955510] [NEW] failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
Public bug reported: Version: 2012.1~e4~20120217.12709-0ubuntu1 I attempted to attach an iSCSI volume to one of my instances. This failed because I specified /dev/vdb as the device, which was in use. Any further attempts to attach the volume then also failed. When I inspected nova-compute.log,
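Cleaning up a stale session by hand looks like the following iscsiadm invocation; the IQN and portal below are placeholders, not values from the attached log:

```
# List open sessions, then log the stale one out
# (IQN and portal are placeholders).
iscsiadm -m session
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0000000a \
         -p 192.0.2.20:3260 --logout
```

Until the stale session is logged out, nova-compute's subsequent login attempts for the same target keep failing.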

[Bug 955510] Re: failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
** Attachment added: failed-attach-stale-session.log https://bugs.launchpad.net/bugs/955510/+attachment/2872500/+files/failed-attach-stale-session.log -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu.

[Bug 955510] Re: failed attach leaves stale iSCSI session on compute host

2012-03-14 Thread Paul Collins
I can no longer reproduce the problem with 2012.1~rc1~20120309.13261-0ubuntu1, so I reckon this is indeed fixed. -- You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to nova in Ubuntu. https://bugs.launchpad.net/bugs/955510 Title: failed

[Bug 954692] [NEW] cannot detach volume from terminated instance

2012-03-13 Thread Paul Collins
Public bug reported: Version: 2012.1~e4~20120217.12709-0ubuntu1 I attached a volume to an instance via iSCSI, shut down the instance, and then attempted to detach the volume. The result is the following in nova-compute.log, and the volume remains in-use. I also tried euca-detach-volume