This bug is caused by the logging context being enabled in glance-api.conf and
glance-registry.conf; user_id and project_id should not appear there. So I
think this is specific to devstack and will move the bug to devstack.
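For reference, the context fields come from oslo logging's context format string, configurable in glance-api.conf; a minimal sketch, assuming the stock option name (the value shown is illustrative, not the exact default):

```ini
[DEFAULT]
# This format string injects the request context (user/project identity)
# into every log line via %(user_identity)s; trimming it removes those fields.
logging_context_format_string = %(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(message)s
```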
** Project changed: glance => devstack
--
** Changed in: ossa
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1409142
Title:
[OSSA 2015-005] Websocket Hijacking
The corresponding bug for Fuel has been moved to
https://bugs.launchpad.net/mos/+bug/1431983
** No longer affects: fuel
--
Public bug reported:
Currently, notifications are only sent for successful CRUD events; however,
we should also send notifications when an operation fails.
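The idea can be sketched as a wrapper that emits an event on both outcomes. Everything below is a hypothetical stand-in for illustration, not keystone's actual notification API:

```python
NOTIFICATIONS = []  # collected (resource_type, operation, outcome) events


def notify_event(resource_type, operation, outcome):
    """Stand-in for keystone's notification emitter (hypothetical helper)."""
    NOTIFICATIONS.append((resource_type, operation, outcome))


def notified(resource_type, operation):
    """Decorator: emit a notification whether the operation succeeds or fails."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception:
                # Failure path: emit the event, then re-raise for the caller.
                notify_event(resource_type, operation, 'failure')
                raise
            notify_event(resource_type, operation, 'success')
            return result
        return inner
    return wrap


@notified('user', 'created')
def create_user(name):
    # Toy CRUD operation standing in for a keystone manager method.
    if not name:
        raise ValueError('name required')
    return {'name': name}
```

A failed `create_user('')` call would then leave a `('user', 'created', 'failure')` event behind instead of going unrecorded.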
** Affects: keystone
Importance: Medium
Assignee: Steve Martinelli (stevemar)
Status: In Progress
Public bug reported:
I'm seeing the following exception on creating a load balancer with a listener:
https://gist.github.com/fnaval/64e91dd864030b7bff71
This is running on neutron-lbaas against hash:
28b75a656be2f27807aa3d10a12b361534f84ad9
** Affects: neutron
Importance: Undecided
** Project changed: nova => juju
** Project changed: juju => juju-core
--
https://bugs.launchpad.net/bugs/1431685
Title:
juju nova-compute charm not enabling
I stumbled across this accidentally today. I think the problem is
occurring when the _token_ times out, not when the session times out. I
re-created the problem on both Chrome + FF.
** Changed in: horizon
Status: Invalid => Confirmed
--
The submitted patch does not fix the issue; it only changed the Uptime
label.
Reopening this bug.
There is an outstanding patch already:
https://review.openstack.org/93630
** Changed in: horizon
Milestone: kilo-2 => kilo-3
** Changed in: horizon
Status: Fix Released => Confirmed
So this is actually working. There was some miscommunication that the
agent needed to be started manually. However, in the newest version of
neutron-lbaas, the agent starts up automatically; having two agents
running was causing multiple failures.
** Changed in: neutron
Status: New =>
** Changed in: nova
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1423165
Title:
https: client can cause nova/cinder to leak sockets for
Public bug reported:
alex@hp-pc:~/code/devstack$ nova tenant-network-create net2 10.0.0.0/24
ERROR (ClientException): Create networks failed (HTTP 503) (Request-ID:
req-e1cf8f25-309c-49a8-b460-b56172ac68ce)
The error in the logs is as follows:
2015-03-14 12:25:45.225 TRACE
Public bug reported:
As the admin user, create a new network called 'net1':
alex@hp-pc:~/code/devstack$ source ./openrc admin admin
alex@hp-pc:~/code/devstack$ nova network-list
+----+-------+------+
| ID | Label | Cidr
Public bug reported:
http://logs.openstack.org/78/163978/1/check/check-tempest-dsvm-neutron-
full/792a4e4/logs/screen-q-svc.txt.gz?level=TRACE#_2015-03-13_16_07_36_406
2015-03-13 16:07:36.406 ERROR oslo_messaging.rpc.dispatcher
[req-d42b66e6-5ee7-4e08-b59d-318aebfe92d7 None None] Exception
Nova stable/juno is still affected by this issue, since the fix is not
available there currently due to the version cap on python-glanceclient.
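For context, such a cap is a bounded line in stable/juno's requirements.txt; the bounds below are illustrative, not the actual juno values:

```
python-glanceclient>=0.14.0,<0.15.0
```

Until that cap is raised, the fixed client code cannot be installed on stable/juno.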
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: cinder
Importance: Undecided
Status: New
--
Public bug reported:
nova.cfg
nova-compute:
    openstack-origin: cloud:trusty-juno
    enable-resize: true
    enable-live-migration: true
    migration-auth-type: none
    sysctl: '{ kernel.pid_max : 4194303 }'
    libvirt-image-backend: rbd
libvirtd.conf
#listen_tcp = 1
#auth_tcp = sasl
Public bug reported:
I find that AggregateCoreFilter returns an incorrect value; the analysis
is below:
class AggregateCoreFilter(BaseCoreFilter):
def _get_cpu_allocation_ratio(self, host_state, filter_properties):
# TODO(uni): DB query in filter is a performance hit, especially for
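To make the analysis concrete: the core filter's pass/fail check boils down to comparing used plus requested vCPUs against the host's vCPU count scaled by the CPU allocation ratio, so reading the wrong ratio from the aggregate metadata flips the result. A simplified sketch (paraphrased logic, not nova's exact code):

```python
def host_passes(vcpus_total, vcpus_used, requested_vcpus, cpu_allocation_ratio):
    """Simplified BaseCoreFilter check: can the host take the request?"""
    # The host may be oversubscribed up to total * ratio virtual CPUs.
    limit = vcpus_total * cpu_allocation_ratio
    return vcpus_used + requested_vcpus <= limit
```

With 8 physical vCPUs and a ratio of 2.0 the limit is 16, so a host with 14 vCPUs used still passes a 2-vCPU request, while one with 15 used does not.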
Hi Anand,
Earlier I was getting the same error after pulling the latest glance
code. I have since done a fresh installation using devstack, and the
error is no longer reproducible.
** Changed in: glance
Status: New => Invalid
--
Public bug reported:
keystone/notifications.py:
from pycadf import cadftaxonomy as taxonomy
So keystone depends on pycadf, but pycadf is not included in
requirements.txt.
How to reproduce:
When I run ./stack.sh from devstack, I get an error:
2015-03-13 08:22:27.216 | File
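The fix would presumably be a one-line addition to keystone's requirements.txt (the version bound here is illustrative):

```
pycadf>=0.8.0
```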
I think this has been solved by https://review.openstack.org/#/c/145757/
(Bug report - https://bugs.launchpad.net/horizon/+bug/1308189)
** Changed in: horizon
Status: In Progress => Fix Released
--
Public bug reported:
The CSS for table rows contains 'vertical-align: top' causing the cell
data to align strangely. There is also an additional top border on the
tables.
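A possible fix would be to override those rules; a minimal sketch, assuming a generic datatable selector (both the selector and the exact rules are assumptions, not Horizon's actual stylesheet):

```css
/* Illustrative override; the selector is an assumption. */
.datatable td {
  vertical-align: middle; /* instead of 'top' */
  border-top: none;       /* drop the extra top border */
}
```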
** Affects: horizon
Importance: Undecided
Assignee: Rob Cresswell (robcresswell)
Status: New
** Tags:
Public bug reported:
When running devstack, I get the following traceback whenever listing
glance images from the CLI or Horizon. The traceback occurs in both
glance-api and glance-registry.
== glance image-list
Trace in g-api:
Logged from file policy.py, line 296
Traceback
e162-45ce-98b0-54d9563bbb1c] VolumeNotCreated: Volume
abc781af-0960-4a65-87d2-a5cb15ce7273 did not finish being created even
after we waited 250 seconds or 61 attempts.
This line indicates that volume creation failed; nova did everything it could.
You need to check whether something is wrong in
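The wait loop behind this error message can be sketched as follows (the helper, its parameters, and the status callback are stand-ins for illustration, not nova's actual code):

```python
import time


class VolumeNotCreated(Exception):
    pass


def wait_for_volume(get_status, max_attempts=61, interval=4.1):
    """Poll get_status() until the volume is 'available', or give up.

    Roughly mirrors the reported behaviour: 61 attempts spread over
    about 250 seconds before raising VolumeNotCreated.
    """
    for attempt in range(1, max_attempts + 1):
        if get_status() == 'available':
            return attempt
        time.sleep(interval)
    raise VolumeNotCreated(
        'volume did not finish being created after %d attempts' % max_attempts)
```

When every attempt sees the volume still in 'creating' or 'error', the exception above is raised, which is what surfaced in the traceback.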
This is by design; see Ports, Subnets, and Firewalls. Those Name columns
are populated by Name or ID. If we are to change Networks, then we
should change the behaviour across Horizon, and that needs discussion.
Please bring it up in either IRC or at the weekly meeting.
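The Name-or-ID population mentioned above amounts to a simple fallback (illustrative helper, not Horizon's actual code):

```python
def display_name(resource):
    """Show the resource's name when set, otherwise fall back to its ID."""
    return resource.get('name') or resource['id']
```

An unnamed port therefore shows its UUID in the Name column.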
Public bug reported:
The network status value is not translatable in the networks table.
** Affects: horizon
Importance: Undecided
Assignee: Masco Kaliyamoorthy (masco)
Status: New
** Changed in: horizon
Assignee: (unassigned) => Masco Kaliyamoorthy (masco)
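Making the status translatable could look like the Django-style sketch below (the `_` stub stands in for `django.utils.translation.ugettext_lazy` so the snippet is self-contained, and the choices table is illustrative):

```python
def _(msg):
    # Stand-in for django.utils.translation.ugettext_lazy; in Horizon the
    # real translation function would be imported instead.
    return msg


# Map raw API status values to translatable display strings (illustrative).
STATUS_DISPLAY_CHOICES = {
    "ACTIVE": _("Active"),
    "DOWN": _("Down"),
    "ERROR": _("Error"),
}


def display_status(raw_status):
    """Return the translatable label, falling back to the raw value."""
    return STATUS_DISPLAY_CHOICES.get(raw_status, raw_status)
```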
--
Public bug reported:
http://logs.openstack.org/19/155319/13/check/check-tempest-dsvm-full-
ceph/4a14a01/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-03-10_17_19_46_145
2015-03-10 17:19:46.145 ERROR oslo_messaging.rpc.dispatcher
[req-72819513-908b-4210-a4c7-7f5d9ff7fd22
Public bug reported:
[DEFAULT]
admin_token = ADMIN
curl -k -H X-Auth-Token:ADMIN http://localhost:35357/v3/auth/tokens |
python -mjson.tool
http://paste.openstack.org/show/192079/
rev 55d940c70be405e6dcf48eaa4aed0c2d766aadeb
** Affects: keystone
Importance: Undecided
Status: New
** Changed in: glance
Status: Invalid => New
--
https://bugs.launchpad.net/bugs/1431784
Title:
Traceback in glanceapi and glance registry in devstack
Status in
We're still hitting this on master:
Public bug reported:
The following command worked in Icehouse. It no longer works in Juno.
neutron net-create --tenant-id 7f41e236d56c4e9fa074a9185528cad2
--provider:network_type=flat --provider:physical_network=default
--router:external=True GATEWAY_NET
It returns this error:
neutron
Public bug reported:
I deployed OpenStack with Icehouse RC1 and booted 100 VMs on my nodes. After my
testing, I tried to delete my VMs at the same time. All of my VMs' statuses
changed to 'deleting', but the VMs could not be deleted. When I checked my
OpenStack deployment, the rabbitmq-server had crashed. Then I
Public bug reported:
Hi,
I am a new user of OpenStack services with Trove. The installation was
successful, but I cannot do anything like create an instance or a
database. When I list the nova services, there is no nova-compute node.
I get this:
Binary Host
Reviewed: https://review.openstack.org/162773
Committed:
https://git.openstack.org/cgit/openstack/tempest/commit/?id=118cd39c61996785f21acfb1afecba5f0d3e7fb9
Submitter: Jenkins
Branch: master
commit 118cd39c61996785f21acfb1afecba5f0d3e7fb9
Author: Adam Gandelman ad...@ubuntu.com
Date: Mon
This is still an issue and, from what I can tell, a specific change wasn't
merged against this bug, so I'm re-opening it since I couldn't find it via LP
search before (it was Fix Committed):
http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-
Public bug reported:
For some reason, some compute nodes are missing OVS flows for various
tenants (not all of them), resulting in VM isolation (no DHCP/metadata
on boot). A particular tenant A might have issues with node B, whilst
tenant B might have problems with node A and not B. All of the