Public bug reported:
ubuntu@keeton:~$ lsb_release -rc
Release:	14.04
Codename: trusty
ubuntu@keeton:~$ dpkg-query -W pollen
pollen 4.11-0ubuntu1
ubuntu@keeton:~$ _
pollen does not start on boot, due to an error in the upstart config:
ubuntu@keeton:~$ grep start /etc/init/pollen.con
I can't reproduce this in a freshly created trusty VM, so this may be
something strange with my machine. Marking Invalid for now.
** Changed in: qemu (Ubuntu)
Status: New => Invalid
** Changed in: qemu (Ubuntu Trusty)
Status: New => Invalid
--
You received this bug notification be
Public bug reported:
qemu-img convert -O raw yields a file of the correct length with no
content, e.g.:
$ qemu-img convert -O raw trusty-server-cloudimg-amd64-disk1.img
trusty-server-cloudimg-amd64-disk1.raw
$ ls -l trusty-*
-rw-rw-r-- 1 paul paul 255590912 Sep 10 09:14
trusty-server-clou
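One way to confirm that a converted image really has the right length but no content is to compare it byte-for-byte against /dev/zero. A sketch of the check (here `out.raw` is a zero-filled stand-in created for the demonstration, not the actual converted image):

```shell
# Stand-in for the broken conversion output: a file that is all zero bytes.
dd if=/dev/zero of=out.raw bs=1M count=1 2>/dev/null

# A raw image of "correct length with no content" compares equal to
# /dev/zero over its whole length.
size=$(stat -c%s out.raw)
if cmp -s -n "$size" out.raw /dev/zero; then
    echo "out.raw contains no data"
fi
```

Run against the real converted image, a silent exit from cmp means every byte is zero.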
Public bug reported:
On a vanilla trusty install with rabbitmq-server and celeryd installed,
"celery worker" crashes as follows:
$ dpkg-query -W python-librabbitmq rabbitmq-server python-amqp librabbitmq1
celeryd
celeryd 3.1.6-1ubuntu1
librabbitmq1	0.4.1-1
python-amqp 1.3.3-1ubuntu1
pytho
** Also affects: facter (Ubuntu Precise)
Importance: Undecided
Status: New
** Changed in: facter (Ubuntu Precise)
Status: New => Confirmed
** Changed in: facter (Ubuntu Precise)
Importance: Undecided => Medium
--
You received this bug notification because you are a member of
Public bug reported:
Today we upgraded an Openstack cloud from essex to folsom to grizzly.
Switching from nova-volume to cinder was somewhat non-trivial. I'm not
sure how much help is reasonable to expect from the Ubuntu packaging,
but it seems that perhaps some of the steps involved
(http://wiki
** Description changed:
Hi,
This is related to
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where
upstream has removed process_name.rb in 2.7.11 but it is still packaged
and provided by puppet-common.
[ This plugin frequently causes puppet to hang and requires man
** Also affects: puppet (Ubuntu Precise)
Importance: Undecided
Status: New
** Changed in: puppet (Ubuntu Precise)
Status: New => Confirmed
** Changed in: puppet (Ubuntu Precise)
Importance: Undecided => High
--
You received this bug notification because you are a member of U
From my point of view, probably not very. We (= Canonical IS) are
running 12.04 LTS plus packages from the Ubuntu Cloud Archive. I don't
believe we'll do many more folsom+argonaut deployments before
grizzly+bobtail arrives, and in any case it's sufficiently well
documented internally that it's not
This has been fixed on upstream's master branch by commit
c236a51a8040508ee893e4c64b206e40f9459a62 and cherry-picked to the
bobtail branch as 6008b1d8e4587d5a3aea60684b1d871401496942. The change
does not seem to have been applied to argonaut.
--
You received this bug notification because you are
Public bug reported:
Version: 0.48.2-0ubuntu2~cloud0
Our Ceph deployments typically involve multiple OSDs per host with no
disk redundancy. However, the default crush rules appear to distribute
by OSD, not by host, which I believe will not prevent replicas from
landing on the same host.
I've bee
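For reference, the usual way to keep replicas off the same host is a rule whose chooseleaf step selects buckets of type host rather than osd. A sketch of such a crushmap rule, using 0.48-era syntax with illustrative names and numbers:

```
rule replicated_by_host {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
```

This only helps if the crushmap's hierarchy actually groups the OSDs under host buckets; a flat map gives CRUSH nothing to choose between.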
Public bug reported:
Version: 0.48.2-0ubuntu2~cloud0
On a Ceph cluster with 18 OSDs, new object pools are being created with
a pg_num of 8. Upstream recommends on the order of 100 PGs per OSD:
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/10242
I've worked around th
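As a rough illustration of that guidance (assuming 2x replication; the rounding up to a power of two is a common convention, not a hard requirement):

```shell
# ~100 PGs per OSD, divided across the replicas, rounded up to a power of two.
osds=18; pgs_per_osd=100; replicas=2
target=$(( osds * pgs_per_osd / replicas ))   # 900 for this cluster

pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"   # 1024
```

The resulting value would then be passed when creating the pool, since pg_num could not be raised on an existing pool in this release.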
Public bug reported:
Version: 2012.1.3+stable-20120827-4d2a4afe-0ubuntu1
We recently set up a new Nova cluster on precise + essex with Juju and
MaaS, and ran into a problem where instances could not communicate with
the swift-proxy node on the MaaS network. This turned out to be due to
nova-netw
Is there an essex variant of this patch available?
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1065883
Title:
ceph rbd username and secret should be configured in nova-compute, not
Upstream has addressed this problem by ensuring that mkcephfs always
creates keyrings so that cephx can easily be enabled later.
http://mid.gmane.org/alpine.deb.2.00.1208081405110.3...@cobra.newdream.net
https://github.com/ceph/ceph/commit/96b1a496cdfda34a5efdb6686becf0d2e7e3a1c0
--
You receive
** Also affects: nova (Ubuntu Precise)
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1032405
Title:
RBDDriver does not support volume creat
** Patch added: "implement RBDDriver.create_volume_from_snapshot()"
https://bugs.launchpad.net/bugs/1032405/+attachment/3246430/+files/rbd-implement-create-volume-from-snapshot.patch
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to n
Public bug reported:
I've been doing a little work with Nova and Ceph. As part of this work
I've been testing snapshots. I've discovered that RBDDriver does not
implement create_volume_from_snapshot(). Attempts to create volumes from
snapshots instead fall through to VolumeDriver's LVM-based
imple
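For background, implementing create_volume_from_snapshot() for RBD essentially amounts to copying the snapshot into a fresh image with rbd cp. A hypothetical sketch of the command involved (pool, volume, and snapshot names are made up, and the command is printed here rather than executed; the actual change is in the attached patch):

```shell
# Illustrative names only — not taken from the attached patch.
pool="rbd"; src="volume-0001"; snap="snap-0001"; dst="volume-0002"

# The driver method would shell out to something along these lines.
echo "rbd cp --pool $pool --snap $snap $src $dst"
```

A full copy is needed because pre-layering RBD has no copy-on-write cloning, so creating a volume from a snapshot costs a complete read and write of the data.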
I took a look at this last night and wrote a patch, which seems to work
on my test cluster.
http://article.gmane.org/gmane.comp.file-systems.ceph.devel/8170
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.l
Public bug reported:
I've been doing a little work with nova-volume and ceph/RBD. In order
to gain some fault-tolerance, I plan to run a nova-volume on each
compute node. However, a problem arises because a given nova-volume
host only wants to deal with requests for volumes that it created.
Th
Hi James,
Not until very recently — I've just posted my report to ceph-devel.
Sorry for the delay!
Paul
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1026402
Title:
mon cluster (no
Huh, interesting. Your log has this line
2012-07-19 10:16:10.235911 7f9e20d22780 1 mon.a@-1(probing) e1 copying
mon. key from old db to external keyring
and I wonder what it means. Maybe it's plucking a key from a previous
cephx-enabled install from an undisclosed location?
Anyway, it certain
Public bug reported:
I'm running a 3-node test cluster on 12.04, without cephx
authentication. I started out running 0.47.2 (an impatiently-smashed-
together backport based on the upstream sources) and then upgraded to
0.48-1ubuntu1 (the packages from quantal rebuilt on precise). So my
situation m
Apologies for the delay in replying. We recently completed our migration
to Keystone, and now "nova volume-list" works as expected.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/996233
This Openstack installation is using deprecated auth, though, not
keystone. The following flags are in nova.conf:
--use_deprecated_auth
--auth_strategy=deprecated
I've only used Ubuntu packages on this machine — no devstack, no pip, no
setup.py.
--
You received this bug notification because you
Looking at William's trace, I see some differences with the traces I
get. Not posting a full one in the first place was foolish of me. Here
it is now.
$ nova --debug volume-list
connect: (XXX.XXX.XXX.XXX, 8774)
send: 'GET /v1.1 HTTP/1.1\r\nHost: XXX.XXX.XXX.XXX:8774\r\nx-auth-project-id:
pjdc_pr
Public bug reported:
When using nova-volume (with the flag "--iscsi_helper=tgtadm", which is
the default value in Ubuntu) if the host is rebooted, the iSCSI targets
are not recreated. This means that the compute hosts are unable to
reëstablish their iSCSI sessions, and volumes that were attached
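Until there is a proper fix, the targets can in principle be rebuilt after reboot by replaying a tgtadm sequence for each volume. A sketch (target ID, IQN, and backing device are illustrative, and the commands are printed here rather than executed):

```shell
tid=1
iqn="iqn.2010-10.org.openstack:volume-00000001"
dev="/dev/nova-volumes/volume-00000001"

# Print (rather than run) the tgtadm calls that would recreate one target:
# define the target, attach its LUN, then allow initiators to bind.
replay=$(cat <<EOF
tgtadm --lld iscsi --op new --mode target --tid $tid --targetname $iqn
tgtadm --lld iscsi --op new --mode logicalunit --tid $tid --lun 1 --backing-store $dev
tgtadm --lld iscsi --op bind --mode target --tid $tid -I ALL
EOF
)
echo "$replay"
```

Each attached volume needs its own target ID, so a real recovery script would iterate over the volume group and the iscsi_targets table.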
** Description changed:
Hi,
This is related to
https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/959597 where
upstream has removed process_name.rb in 2.7.11 but it is still packaged
and provided by puppet-common.
+
+ [ This plugin frequently causes puppet to hang and requires man
Public bug reported:
I noticed the following (Ubuntu 12.04 LTS on the Nova cluster, Ubuntu
12.04 LTS on my machine):
$ nova volume-list
ERROR: n/a (HTTP 404)
Based on the output of "nova --debug volume-list", it looks like python-
novaclient is expecting to be able to do "GET
/v1.1/pjdc_project/
Looks like debian/patches/debian-changes is adding the file back.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to puppet in Ubuntu.
https://bugs.launchpad.net/bugs/995719
Title:
process_name.rb removed in 2.7.11 but still provided by
I can no longer reproduce the problem with
2012.1~rc1~20120309.13261-0ubuntu1, so I reckon this is indeed fixed.
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/955510
Title:
failed att
** Attachment added: "failed-attach-stale-session.log"
https://bugs.launchpad.net/bugs/955510/+attachment/2872500/+files/failed-attach-stale-session.log
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.la
Public bug reported:
Version: 2012.1~e4~20120217.12709-0ubuntu1
I attempted to attach an iSCSI volume to one of my instances. This
failed because I specified /dev/vdb as the device, which was in use.
Any further attempts to attach the volume then also failed. When I
inspected nova-compute.log,
Public bug reported:
Version: 2012.1~e4~20120217.12709-0ubuntu1
I attached a volume to an instance via iSCSI, shut down the instance,
and then attempted to detach the volume. The result is the following in
nova-compute.log, and the volume remains "in-use". I also tried "euca-
detach-volume --fo