[Yahoo-eng-team] [Bug 1337700] Re: nova v3 behavior difference from v2/v1.1 when quota not enough

2014-07-03 Thread taget
This is an invalid bug; it should be a python-novaclient issue.

After pulling the latest python-novaclient code, it seems this has been
fixed by a recent update.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337700

Title:
  nova v3 behavior difference from v2/v1.1 when quota not enough

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The v3 API reports "Invalid user / password (HTTP 401)" when the quota is
  exceeded, which is not correct.

  
  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova  boot  --flavor 1 --image 
18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4c8d7094-3754-4f95-b131-c822215aff12)
  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 3 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Invalid user / password (HTTP 401)

  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 2 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4f2a57aa-47b4-4c62-95d2-7304d165cf7d)

  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 
1.1 boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-0cdcc33e-3fc9-4e06-9bca-ac5d5416103e)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337716] [NEW] cannot create an image by Glance V2 API, specifying parameter "id".

2014-07-03 Thread Maho Koshiya
Public bug reported:

Issue:

I cannot create an image with the Glance V2 API when specifying the "id" parameter.
This differs from the specification:
http://docs.openstack.org/api/openstack-image-service/2.0/content/create-image.html

Expected:

"id" is allocated to image, and this image is created.

Detail:

* Execution result

$ curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/json' -H 'X-Auth-Token: 94dda5e987c74378a9ee455b1c45acac' 
http://127.0.0.1:9292/v2/images 
-d'{"id":"01de147c-b906-45c4-9b84-ce7a62f9ca54"}'
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 0
Date: Fri, 27 Jun 2014 02:23:27 GMT
Connection: close

* Log

glance-api.log

2014-06-27 10:31:14.439 13359 DEBUG routes.middleware 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] Match dict: {'action': u'create', 
'controller': } __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
2014-06-27 10:31:14.539 13359 INFO glance.wsgi.server 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, in 
handle_one_response
result = self.application(self.environ, start_response)
...
  File "/opt/stack/glance/glance/domain/proxy.py", line 114, in new_image
return self.helper.proxy(self.base.new_image(**kwargs))
  File "/opt/stack/glance/glance/domain/__init__.py", line 73, in new_image
self._check_unexpected(other_args) 
  File "/opt/stack/glance/glance/domain/__init__.py", line 60, in 
_check_unexpected
raise TypeError(msg % kwargs.keys())
TypeError: new_image() got unexpected keywords ['id']  
2014-06-27 10:31:14.543 13359 INFO glance.wsgi.server 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] 127.0.0.1 - - [27/Jun/2014 10:31:14] 
"POST /v2/images HTTP/1.1" 500 139 0.132006

In addition, "glance.api.v2.images:ImagesController.create()" should catch 
TypeError raised by a parameter specification error.
Return code "500" is not good in this case.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1337716

Title:
  cannot create an image by Glance V2 API, specifying parameter "id".

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Issue:

  I cannot create an image with the Glance V2 API when specifying the "id"
  parameter. This differs from the specification:
  
http://docs.openstack.org/api/openstack-image-service/2.0/content/create-image.html

  Expected:

  "id" is allocated to image, and this image is created.

  Detail:

  * Execution result

  $ curl -i -X POST -H 'User-Agent: python-glanceclient' -H 'Content-Type: 
application/json' -H 'X-Auth-Token: 94dda5e987c74378a9ee455b1c45acac' 
http://127.0.0.1:9292/v2/images 
-d'{"id":"01de147c-b906-45c4-9b84-ce7a62f9ca54"}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Fri, 27 Jun 2014 02:23:27 GMT
  Connection: close

  * Log

  glance-api.log

  2014-06-27 10:31:14.439 13359 DEBUG routes.middleware 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] Match dict: {'action': u'create', 
'controller': } __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2014-06-27 10:31:14.539 13359 INFO glance.wsgi.server 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 389, 
in handle_one_response
  result = self.application(self.environ, start_response)
  ...
File "/opt/stack/glance/glance/domain/proxy.py", line 114, in new_image
  return self.helper.proxy(self.base.new_image(**kwargs))
File "/opt/stack/glance/glance/domain/__init__.py", line 73, in new_image
  self._check_unexpected(other_args) 
File "/opt/stack/glance/glance/domain/__init__.py", line 60, in 
_check_unexpected
  raise TypeError(msg % kwargs.keys())
  TypeError: new_image() got unexpected keywords ['id']  
  2014-06-27 10:31:14.543 13359 INFO glance.wsgi.server 
[f4b43959-b161-4cb4-a606-c06609d58f8c f208aaea60654eaf9e85724e38e10ecb 
2a6bebe5929e46ff85d599f92731c44b - - -] 127.0.0.1 - - [27/Jun/2014 10:31:14] 
"POST /v2/images HTTP/1.1" 500 139 0.132006

  In addition, "glance.api.v2.images:ImagesController.create()" should catch the
  TypeError raised by an invalid parameter and return an appropriate client
  error; a 500 return code is not acceptable in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/

[Yahoo-eng-team] [Bug 1337717] [NEW] L2-population fanout-cast leads to performance and scalability issue

2014-07-03 Thread Chaoyi Huang
Public bug reported:

https://github.com/osrg/quantum/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc.py

def _notification_fanout(self, context, method, fdb_entries):
    self.fanout_cast(context,
                     self.make_msg(method, fdb_entries=fdb_entries),
                     topic=self.topic_l2pop_update)

fanout_cast publishes the message to every L2 agent listening on the
"l2population" topic.

If there are 1000 agents (a small cloud) all listening on the
"l2population" topic, adding one new port produces 1000 sub-messages.
RabbitMQ can generally handle on the order of 10k messages per second, so
the fanout_cast method causes serious performance problems, makes the
neutron service hard to scale, and keeps the achievable concurrency of VM
port requests very low.

No matter how many ports a subnet has, the cost is proportional to the
number of L2 agents listening on the topic.

The way to solve this performance and scalability issue is to have each L2
agent listen on a topic tied to the network, for example one keyed by the
network UUID. When a port becomes active, only the agents hosting VMs on
the same network should receive the L2-pop message (see the sketch after
this paragraph). This is the partial mesh that was the original design
goal, but it is not implemented yet.
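
A sketch of that per-network approach (illustrative only, not the actual ml2
l2pop driver code) could look like the following: instead of fanout-casting,
cast each network's fdb entries on a topic derived from the network UUID, so
only the agents subscribed to that network's topic receive the update.

def _notification_per_network(self, context, method, fdb_entries):
    # fdb_entries is keyed by network_id in the l2pop driver
    for network_id, entries in fdb_entries.items():
        self.cast(context,
                  self.make_msg(method, fdb_entries={network_id: entries}),
                  topic='%s.%s' % (self.topic_l2pop_update, network_id))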

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2 l2-pop

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337717

Title:
  L2-population fanout-cast leads to performance and scalability issue

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
https://github.com/osrg/quantum/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc.py

  def _notification_fanout(self, context, method, fdb_entries):
      self.fanout_cast(context,
                       self.make_msg(method, fdb_entries=fdb_entries),
                       topic=self.topic_l2pop_update)

  fanout_cast publishes the message to every L2 agent listening on the
  "l2population" topic.

  If there are 1000 agents (a small cloud) all listening on the
  "l2population" topic, adding one new port produces 1000 sub-messages.
  RabbitMQ can generally handle on the order of 10k messages per second, so
  the fanout_cast method causes serious performance problems, makes the
  neutron service hard to scale, and keeps the achievable concurrency of VM
  port requests very low.

  No matter how many ports a subnet has, the cost is proportional to the
  number of L2 agents listening on the topic.

  The way to solve this performance and scalability issue is to have each
  L2 agent listen on a topic tied to the network, for example one keyed by
  the network UUID. When a port becomes active, only the agents hosting VMs
  on the same network should receive the L2-pop message. This is the
  partial mesh that was the original design goal, but it is not implemented
  yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337700] [NEW] nova v3 behavior difference from v2/v1.1 when quota not enough

2014-07-03 Thread taget
Public bug reported:

The v3 API reports "Invalid user / password (HTTP 401)" when the quota is
exceeded, which is not correct.


taget@taget-ThinkCentre-M58p:~/code/devstack$ nova  boot  --flavor 1 --image 
18bb38c6-4100-48df-a726-a41c8b03b919 test1  
ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4c8d7094-3754-4f95-b131-c822215aff12)
taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 3 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
ERROR: Invalid user / password (HTTP 401)

taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 2 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4f2a57aa-47b4-4c62-95d2-7304d165cf7d)

taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 1.1 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-0cdcc33e-3fc9-4e06-9bca-ac5d5416103e)

** Affects: nova
 Importance: Undecided
 Assignee: taget (taget)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => taget (taget)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337700

Title:
  nova v3 behavior difference from v2/v1.1 when quota not enough

Status in OpenStack Compute (Nova):
  New

Bug description:
  The v3 API reports "Invalid user / password (HTTP 401)" when the quota is
  exceeded, which is not correct.

  
  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova  boot  --flavor 1 --image 
18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4c8d7094-3754-4f95-b131-c822215aff12)
  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 3 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Invalid user / password (HTTP 401)

  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 2 
boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-4f2a57aa-47b4-4c62-95d2-7304d165cf7d)

  taget@taget-ThinkCentre-M58p:~/code/devstack$ nova --os-compute-api-version 
1.1 boot  --flavor 1 --image 18bb38c6-4100-48df-a726-a41c8b03b919 test1  
  ERROR: Quota exceeded for ram: Requested 512, but already used 0 of 511 ram 
(HTTP 413) (Request-ID: req-0cdcc33e-3fc9-4e06-9bca-ac5d5416103e)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337698] [NEW] q-dhcp agent fails to start on Ubuntu 12.04

2014-07-03 Thread Hemanth Ravi
Public bug reported:

The fix for bug https://bugs.launchpad.net/neutron/+bug/1212401 breaks the
q-dhcp agent on Ubuntu 12.04, because "dnsmasq --version" can only be run
as root there and neutron/agent/linux/utils.py create_process() is not
invoked with root_helper. The fix for 1212401 was made in
https://review.openstack.org/#/c/96976/

This is breaking the third-party plugin CI.
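
A minimal sketch of the workaround described above (not the merged fix;
check_dnsmasq_version is an illustrative name), assuming
neutron.agent.linux.utils.execute accepts a root_helper argument:

from neutron.agent.linux import utils

def check_dnsmasq_version(root_helper):
    # run the probe through the configured root_helper so it also works on
    # hosts such as Ubuntu 12.04 where "dnsmasq --version" requires root
    out = utils.execute(['dnsmasq', '--version'], root_helper=root_helper)
    # output starts with something like "Dnsmasq version 2.59  Copyright ..."
    return out.split()[2]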

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337698

Title:
  q-dhcp agent fails to start on Ubuntu 12.04

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The fix for bug https://bugs.launchpad.net/neutron/+bug/1212401 breaks the
  q-dhcp agent on Ubuntu 12.04, because "dnsmasq --version" can only be run
  as root there and neutron/agent/linux/utils.py create_process() is not
  invoked with root_helper. The fix for 1212401 was made in
  https://review.openstack.org/#/c/96976/

  This is breaking the third-party plugin CI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337695] [NEW] "get console output" v3 API allows to fetch selecting range of output

2014-07-03 Thread matt
Public bug reported:

In our application we need to call this method several times, and each call
retrieves the entire output.
It would be nice if a selected range of the output could be returned instead.
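
For reference, a hedged python-novaclient sketch of what is available today
versus what this report asks for (credentials are placeholders; the "length"
parameter returns only the tail of the log, while an arbitrary offset/range is
the missing piece):

from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone:5000/v2.0')
server = nova.servers.find(name='test1')

tail = server.get_console_output(length=50)   # last 50 lines -- supported today
# desired: something like server.get_console_output(offset=1000, length=50)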

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

-  "get console output" v3 API allows to fetch a range of output
+ "get console output" v3 API allows to fetch selecting range of output

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337695

Title:
  "get console output" v3 API allows to fetch selecting range of output

Status in OpenStack Compute (Nova):
  New

Bug description:
  In our application we need to call this method several times, and each call
  retrieves the entire output.
  It would be nice if a selected range of the output could be returned instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278028] Re: VMware: update the default 'task_poll_interval' time

2014-07-03 Thread Stephen Gordon
** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1278028

Title:
  VMware: update the default 'task_poll_interval' time

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Manuals:
  In Progress

Bug description:
  https://review.openstack.org/70079

  Dear documentation bug triager. This bug was created here because we
  did not know how to map the project name "openstack/nova" to a
  launchpad project name. This indicates that the notify_impact config
  needs tweaks. You can ask the OpenStack infra team (#openstack-infra
  on freenode) for help if you need to.

  commit 73c87a280e77e03d228d34ab4781ca2e3b02e40e
  Author: Gary Kotton 
  Date:   Thu Jan 30 01:44:10 2014 -0800

  VMware: update the default 'task_poll_interval' time
  
  The original means that each operation against the backend takes at
  least 5 seconds. The default is updated to 0.5 seconds.
  
  DocImpact
  Updated default value for task_poll_interval from 5 seconds to
  0.5 seconds
  
  Change-Id: I867b913f52b67fa9d655f58a2e316b8fd1624426
  Closes-bug: #1274439

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1278028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337690] [NEW] Mysql/Eventlet deadlock when creating a port

2014-07-03 Thread Magesh GV
Public bug reported:

Another instance of the MySQL/eventlet deadlock, this time occurring while
trying to bind the network to a DHCP agent during port create:

2014-07-02 22:17:43.322 31258 ERROR neutron.api.v2.resource [-] create failed
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in 
resource
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 407, in create
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource obj)})
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 386, in notify
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource notifier_method)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 268, in 
_send_dhcp_notification
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self._dhcp_agent_notifier.notify(context, data, methodname)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py",
 line 153, in notify
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self._notification(context, methodname, data, network_id)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py",
 line 71, in _notification
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource chosen_agents = 
plugin.schedule_network(adminContext, network)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py", line 207, 
in schedule_network
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource self, context, 
created_network)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py", 
line 83, in schedule
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self._schedule_bind_network(context, agent, network['id'])
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 447, in 
__exit__
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource self.rollback()
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in 
__exit__
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 444, in 
__exit__
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource self.commit()
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 354, in 
commit
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self._prepare_impl()
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 334, in 
_prepare_impl
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self.session.flush()
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 524, in _wrap
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 718, in flush
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource return 
super(Session, self).flush(*args, **kwargs)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in 
flush
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
self._flush(objects)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in 
_flush
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in 
__exit__
2014-07-02 22:17:43.322 31258 TRACE neutron.api.v2.resource 
compat.reraise
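
The underlying eventlet behaviour can be shown with a minimal, self-contained
sketch (illustrative only, not neutron code): a blocking C-level call such as a
MySQLdb query never yields to the eventlet hub, so every other greenthread in
the process, including the one that would commit and release the lock, is
frozen until it returns.

import time
import eventlet

def heartbeat():
    # would normally tick every 0.1s; it cannot run while another
    # greenthread is stuck inside a non-yielding call
    for _ in range(5):
        print("heartbeat %s" % time.strftime("%H:%M:%S"))
        eventlet.sleep(0.1)

def blocking_db_call():
    # time.sleep is not monkey-patched here, so it blocks the whole OS
    # thread, just like a long query inside the MySQLdb C extension
    time.sleep(1)
    print("blocking call finished")

hb = eventlet.spawn(heartbeat)
eventlet.spawn(blocking_db_call)
hb.wait()   # the heartbeat stalls for the full second of the "query"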

[Yahoo-eng-team] [Bug 1337677] [NEW] Error when starting metering agent

2014-07-03 Thread Fei Long Wang
Public bug reported:

There is an error when starting the metering agent with the latest devstack
code (see below). As a result, the metering agent won't get the router info
it needs for metering.

2014-07-03 22:48:26.320 ERROR neutron.openstack.common.loopingcall 
[req-3de9a0d0-a5a5-4e2a-8524-e2d10fcf4e93 None None] in fixed duration looping 
call
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall Traceback 
(most recent call last):
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/openstack/common/loopingcall.py", line 76, in _inner
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/service.py", line 296, in periodic_tasks
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/manager.py", line 45, in periodic_tasks
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.run_periodic_tasks(context, raise_on_error=raise_on_error)
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/openstack/common/periodic_task.py", line 160, in 
run_periodic_tasks
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall last_run 
= self._periodic_last_run[task_name]
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
AttributeError: 'MeteringAgentWithStateReport' object has no attribute 
'_periodic_last_run'
2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall
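
The traceback suggests _periodic_last_run was never initialised. As a purely
illustrative sketch (the class names are borrowed, the bodies are not the real
oslo/neutron code), this is the usual way such an AttributeError arises: the
base class sets up the bookkeeping dict in its constructor, and a subclass that
overrides __init__ without calling the parent constructor leaves the attribute
undefined until the first periodic run blows up.

class PeriodicTasks(object):
    def __init__(self):
        self._periodic_last_run = {}

    def run_periodic_tasks(self, context):
        # AttributeError if the subclass skipped PeriodicTasks.__init__()
        return self._periodic_last_run.get('report_state')


class MeteringAgentWithStateReport(PeriodicTasks):
    def __init__(self, host):
        # missing: super(MeteringAgentWithStateReport, self).__init__()
        self.host = host


agent = MeteringAgentWithStateReport('node-1')
agent.run_periodic_tasks(context=None)   # raises the AttributeError seen above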

** Affects: neutron
 Importance: Undecided
 Assignee: Fei Long Wang (flwang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fei Long Wang (flwang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337677

Title:
  Error when starting metering agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is an error when starting the metering agent with the latest
  devstack code (see below). As a result, the metering agent won't get the
  router info it needs for metering.

  2014-07-03 22:48:26.320 ERROR neutron.openstack.common.loopingcall 
[req-3de9a0d0-a5a5-4e2a-8524-e2d10fcf4e93 None None] in fixed duration looping 
call
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall Traceback 
(most recent call last):
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/openstack/common/loopingcall.py", line 76, in _inner
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/service.py", line 296, in periodic_tasks
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/manager.py", line 45, in periodic_tasks
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
self.run_periodic_tasks(context, raise_on_error=raise_on_error)
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall   File 
"/opt/stack/neutron/neutron/openstack/common/periodic_task.py", line 160, in 
run_periodic_tasks
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
last_run = self._periodic_last_run[task_name]
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall 
AttributeError: 'MeteringAgentWithStateReport' object has no attribute 
'_periodic_last_run'
  2014-07-03 22:48:26.320 TRACE neutron.openstack.common.loopingcall

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337666] [NEW] databases tabs errors are being swallowed silently

2014-07-03 Thread Matthew D. Wood
Public bug reported:

Almost all errors on databases-tabs are swallowed silently.  That's a
bad thing.

** Affects: horizon
 Importance: Undecided
 Assignee: Matthew D. Wood (woodm1979)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Matthew D. Wood (woodm1979)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337666

Title:
  databases tabs errors are being swallowed silently

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Almost all errors on databases-tabs are swallowed silently.  That's a
  bad thing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1337666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337627] [NEW] Launch Stack improve form field labels

2014-07-03 Thread Cindy Lu
Public bug reported:

See image.

Field labels should not contain underscores; e.g., "image_id" should be
rendered as "Image ID".

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "pag.png"
   https://bugs.launchpad.net/bugs/1337627/+attachment/4145094/+files/pag.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337627

Title:
  Launch Stack improve form field labels

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  See image.

  Field labels should not contain underscores; e.g., "image_id" should be
  rendered as "Image ID".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1337627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337624] [NEW] Launch Stack error message: "None"

2014-07-03 Thread Cindy Lu
Public bug reported:

After I complete "Select Template" and press 'Next' in the Launch Stack
modal, I come to the form.  If I enter an invalid image id and press
Launch, it gives me: Error: None.

(The error in the console says: Recoverable error: The image () could not be found.)

Please see the attached image.

We could check that the image exists or, at the very least, say
something like "Unable to launch stack" rather than just "None."

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "pag.png"
   https://bugs.launchpad.net/bugs/1337624/+attachment/4145091/+files/pag.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337624

Title:
  Launch Stack error message: "None"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After I complete "Select Template" and press 'Next' in the Launch
  Stack modal, I come to the form.  If I enter an invalid image id and
  press Launch, it gives me: Error: None.

  (The error in the console says: Recoverable error: The image () could not be found.)

  Please see the attached image.

  We could check that the image exists or, at the very least, say
  something like "Unable to launch stack" rather than just "None."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1337624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232348] Re: VMware: vmdk converted via qemu-img may not boot as SCSI disk

2014-07-03 Thread Tracy Jones
Based on previous comments, I am closing this as Won't Fix.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1232348

Title:
  VMware: vmdk converted via qemu-img may not boot as SCSI disk

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When converting disk images from other formats to vmdk using qemu-img,
  the vmdk disk produced is always a monolithic sparse disk with an
  adapter type of "ide" and disk geometry to match. Depending on the
  partitioning scheme of the guest OS, such a disk, even after being
  converted to a format compatible for ESX use, will still often be
  unbootable if attached to the virtual SCSI controller instead of the
  IDE controller.

  This behavior currently leads to a hard requirement that the
  vmware_adaptertype=ide image property be set when the vmdk is uploaded
  to glance. Failure to set this property, a step often overlooked by
  the user, will often lead to guest boot failure that is hard to track
  down. In some cases, vmdk disks uploaded to glance even lacks the DDB
  metadata that indicates the adaptertype of the disk, which complicates
  serviceability further.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1232348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337603] [NEW] css incorrect path to icons glyphicons-halflings.png

2014-07-03 Thread Cindy Lu
Public bug reported:

With the recent migration from LESS to SCSS, it seems that the URL path to
the glyphicons is broken when I use an icon directly.

Not sure if this will be fixed by the Bootstrap upgrade?

https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/bootstrap/scss/bootstrap/_sprites.scss#L23

background-image: asset-url("glyphicons-halflings.png", image); ==> "page
not found" error at

/static/dashboard/scss/glyphicons-halflings.png, when it should have been
/static/bootstrap/img/glyphicons-halflings.png

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337603

Title:
  css incorrect path to icons glyphicons-halflings.png

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  With the recent migration from LESS to SCSS, it seems that the URL path
  to the glyphicons is broken when I use an icon directly.

  Not sure if this will be fixed by the Bootstrap upgrade?

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/bootstrap/scss/bootstrap/_sprites.scss#L23

  background-image: asset-url("glyphicons-halflings.png", image); ==>
  "page not found" error at

  /static/dashboard/scss/glyphicons-halflings.png, when it should have been
  /static/bootstrap/img/glyphicons-halflings.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1337603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337172] Re: Topology for the stacks created from a nested template, template with ResourceGroup resource and provider resource in the template doesn't show the actual topology

2014-07-03 Thread Steve Baker
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337172

Title:
  Topology for the stacks created from a nested template, template with
  ResourceGroup resource and provider resource in the template doesn't
  show the actual topology

Status in Orchestration API (Heat):
  Triaged
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Create stacks as mentioned below

  1. Nested template
  template 1 has server, volume and volume_attachment resources.
  template 2 just has one resource of type "template 1". That is, template 2 is 
a nested template.
  Create a stack using template 2.

  This kind of template can be found at 
  
https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example2_server_with_volume_nested

  2. provider resource in the template
template 1 has server, volume and volume_attachment resources.
template 2 defines the provider resource as follows 

  resource_registry:
    My::Server::WithVolume: template 1

template 3 has only one resource of type "My::Server::WithVolume" (as 
defined in resource_registry)

Create a stack using the template 3

  This kind of template can be found at
  
https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example4_provider_environment

  3. ResourceGroup resource in the template
template 1 has server, volume and volume_attachment resources.
template 2 has resource group resource of type: OS::Heat::ResourceGroup and 
points to the template 1 and define count > 1

Create a stack using template 2

  This kind of template can be found at
  
https://github.com/hardys/demo_templates/tree/master/juno_summit_intro_to_heat/example3_server_with_volume_group

  After the stacks are created, navigate to view the topology of the stacks.
  In the topology of each stack, we can see only the single resource that is
  defined in the template used to create the stack.

  This is literally correct, but it does not convey the actual topology.

  Find the screen shot of the topologies attached to this bug.

  
  For a nested template, shouldn't the resources from the referenced
  template also be displayed in the topology?

  Same case for provider resource in the template and ResourceGroup
  resource in the template

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1337172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334368] Re: HEAD and GET inconsistencies in Keystone

2014-07-03 Thread Morgan Fainberg
Added to Tempest as it requires tempest changes to complete.

** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Status: New => In Progress

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334368

Title:
  HEAD and GET inconsistencies in Keystone

Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  In Progress
Status in Tempest:
  In Progress

Bug description:
  While trying to convert Keystone to gate/check under mod_wsgi, it was
  noticed that occasionally a few HEAD calls were returning HTTP 200
  where under eventlet they consistently return HTTP 204.

  This is an inconsistency within Keystone. Per the RFC, HEAD should be
  identical to GET except that no body is returned. Apache + mod_wsgi in
  some cases converts a HEAD request into a GET request to the back-end
  WSGI application, to avoid issues where the headers cannot be built to be
  sent as part of the response (this can occur when no content is returned
  from the WSGI app).

  This suggests that Keystone should never implement dedicated HEAD request
  methods; HEAD should simply call the controller's GET handler, and the
  WSGI layer should then strip the response body (see the sketch below).

  This will help simplify Keystone's code as well as make the API responses
  more consistent.
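
  A minimal WSGI sketch of that approach (illustrative only, not Keystone's
  actual code): serve HEAD by invoking the GET path and discarding the body,
  so the status and headers always match GET.

  def head_via_get(app):
      def middleware(environ, start_response):
          is_head = environ.get('REQUEST_METHOD') == 'HEAD'
          if is_head:
              environ['REQUEST_METHOD'] = 'GET'
          body = app(environ, start_response)
          if is_head:
              # status and headers were already emitted via start_response;
              # only the body is suppressed for HEAD
              return []
          return body
      return middleware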

  Example Error in Gate:

  2014-06-25 05:20:37.820 | 
tempest.api.identity.admin.v3.test_trusts.TrustsV3TestJSON.test_trust_expire[gate,smoke]
  2014-06-25 05:20:37.820 | 

  2014-06-25 05:20:37.820 | 
  2014-06-25 05:20:37.820 | Captured traceback:
  2014-06-25 05:20:37.820 | ~~~
  2014-06-25 05:20:37.820 | Traceback (most recent call last):
  2014-06-25 05:20:37.820 |   File 
"tempest/api/identity/admin/v3/test_trusts.py", line 241, in test_trust_expire
  2014-06-25 05:20:37.820 | self.check_trust_roles()
  2014-06-25 05:20:37.820 |   File 
"tempest/api/identity/admin/v3/test_trusts.py", line 173, in check_trust_roles
  2014-06-25 05:20:37.821 | self.assertEqual('204', resp['status'])
  2014-06-25 05:20:37.821 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-06-25 05:20:37.821 | self.assertThat(observed, matcher, message)
  2014-06-25 05:20:37.821 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-06-25 05:20:37.821 | raise mismatch_error
  2014-06-25 05:20:37.821 | MismatchError: '204' != '200'

  
  This is likely going to require changes to Keystone, Keystoneclient, Tempest, 
and possibly services that consume data from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336258] Re: Section 'links' misplaced in OS-FEDERATION identity API

2014-07-03 Thread Dolph Mathews
Addressed by https://review.openstack.org/#/c/103888/

** Project changed: keystone => openstack-api-site

** Changed in: openstack-api-site
   Status: New => Confirmed

** Changed in: openstack-api-site
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1336258

Title:
  Section 'links' misplaced in OS-FEDERATION identity API

Status in OpenStack API documentation site:
  In Progress

Bug description:
  It was discovered that section 'links' is misplaced in  HTTP responses
  in mapping's  examples in OD-FEDERATION Identity  API docs:
  https://github.com/openstack/identity-api/blob/master/v3/src/markdown
  /identity-api-v3-os-federation-ext.md

  For instance Response to Mapping's PUT operations is depicted as:

  Status: 201 Created

  {
  "links": {
  "self": "http://identity:35357/v3/OS-FEDERATION/mappings/ACME";
  },
  "mapping": {
  "id": "ACME",
  "rules": [
  {
  "local": [
  {
  "group": {
  "id": "0cd5e9"
  }
  }
  ],
  "remote": [
  {
  "type": "orgPersonType",
  "not_any_of": [
  "Contractor",
  "Guest"
  ]
  }
  ]
  }
  ]
  }
  }

  
  whereas 'links' section should be inside 'mappings' section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1336258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199536] Re: Move dict test matchers into oslo

2014-07-03 Thread Doug Hellmann
The change was submitted to oslotest in
https://review.openstack.org/#/c/74861/ and in that review I said:

I discussed this change with lifeless and he indicated that he would be
very happy to have something like this (without committing to accepting
this exact patch after only a quick review). Please submit a pull
request to testtools (https://github.com/testing-cabal/testtools) and
when it's committed we can bump the version of testtools OpenStack uses.

** Changed in: oslo
   Status: In Progress => Won't Fix

** Also affects: testtools
   Importance: Undecided
   Status: New

** Summary changed:

- Move dict test matchers into oslo
+ Move dict test matchers into testtools

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199536

Title:
  Move dict test matchers into testtools

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Triaged
Status in OpenStack Identity (Keystone):
  Triaged
Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  Won't Fix
Status in testtools:
  New
Status in Openstack Database (Trove):
  Triaged

Bug description:
  Move classes DictKeysMismatch, DictMismatch and DictMatches from
  glanceclient/tests/matchers.py into oslo-incubator
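
  For context, a minimal sketch of what a testtools-style dict matcher of
  this kind looks like (illustrative; not the exact glanceclient/oslo code).
  A matcher returns None on success or a Mismatch describing the difference.

  from testtools import matchers

  class DictMatches(object):
      def __init__(self, expected):
          self.expected = expected

      def __str__(self):
          return 'DictMatches(%r)' % (self.expected,)

      def match(self, actual):
          if set(self.expected) != set(actual):
              return matchers.Mismatch(
                  'Keys differ: %s != %s'
                  % (sorted(self.expected), sorted(actual)))
          for key, value in self.expected.items():
              if actual[key] != value:
                  return matchers.Mismatch(
                      'Value mismatch for %r: %r != %r'
                      % (key, value, actual[key]))
          return None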

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1199536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337472] [NEW] UnboundLocalError: local variable 'domain' referenced before assignment

2014-07-03 Thread Clark Boylan
Public bug reported:

This unbound local error happens when running the grenade test that does
not upgrade nova-cpu. In this test grenade upgrades all services except
n-cpu and then runs some tempest tests. Could this be a backward
compatibility issue?

In any case, domain is an unbound local variable in
_create_domain_and_network(), and it appears to cause the failure of
instance creation.
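
As a generic illustration of the failure mode (not the nova code itself), an
UnboundLocalError like this appears when a local is assigned on only one branch
and then used unconditionally:

def _create_domain_and_network(xml, power_on=True):
    if power_on:
        domain = 'instance-00000001'   # assigned only on this branch
    # ... nothing assigns "domain" when power_on is False ...
    return domain                      # UnboundLocalError on that path

_create_domain_and_network('<domain/>', power_on=False)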

Logs for the node that failed can be seen starting at
http://logs.openstack.org/48/104448/5/check/check-grenade-dsvm-partial-
ncpu/6b93538/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-07-03_17_15_26_819

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337472

Title:
  UnboundLocalError: local variable 'domain' referenced before
  assignment

Status in OpenStack Compute (Nova):
  New

Bug description:
  This unbound local error happens when running the grenade test that
  does not upgrade nova-cpu. In this test grenade upgrades all services
  except n-cpu and then runs some tempest tests. Could this be a backward
  compatibility issue?

  In any case, domain is an unbound local variable in
  _create_domain_and_network(), and it appears to cause the failure of
  instance creation.

  Logs for the node that failed can be seen starting at
  http://logs.openstack.org/48/104448/5/check/check-grenade-dsvm-
  partial-
  ncpu/6b93538/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-07-03_17_15_26_819

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337473] [NEW] horizon gets the default quota for floating ips for new projects from the what the current user has

2014-07-03 Thread Matt Fischer
Public bug reported:

After some investigation it appears that when you log in to Horizon and
go to create a new project, the default neutron floating IP quota value is
taken from the current quota of the logged-in user's project. This was
quite confusing.

This will likely not be fixable until this feature lands in neutron:
https://bugs.launchpad.net/neutron/+bug/1204956. Horizon has no way to
query the defaults, so the current behaviour is probably a reasonable
workaround until then.

Steps:

Log in as a user whose default tenant has a quota of 50 floating IPs.
Go to create project; note the pre-filled value is 50.
Change the tenant's quota to "123".
Log out.
Log in again.
Go to create project; note the pre-filled value is 123.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337473

Title:
  horizon gets the default quota for floating ips for new projects from
  the what the current user has

Status in OpenStack Compute (Nova):
  New

Bug description:
  After some investigation it appears that when you log in to Horizon and
  go to create a new project, the default neutron floating IP quota value
  is taken from the current quota of the logged-in user's project. This was
  quite confusing.

  This will likely not be fixable until this feature lands in neutron:
  https://bugs.launchpad.net/neutron/+bug/1204956. Horizon has no way to
  query the defaults, so the current behaviour is probably a reasonable
  workaround until then.

  Steps:

  Log in as a user whose default tenant has a quota of 50 floating IPs.
  Go to create project; note the pre-filled value is 50.
  Change the tenant's quota to "123".
  Log out.
  Log in again.
  Go to create project; note the pre-filled value is 123.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337447] [NEW] "AttributeError: 'module' object has no attribute 'InstanceInfoCache'\n"

2014-07-03 Thread Thang Pham
Public bug reported:

In the latest devstack (pulled on 7/3/14), you cannot access an
instance's noVNC console on the Horizon dashboard after launching the
instance.  If you look at the logs, you will find:

INFO nova.console.websocketproxy [req-0851007a-adbb-
4aff-b712-0108702efa58 None None] handler exception: 'module' object has
no attribute 'InstanceInfoCache'#012Traceback (most recent call last):

  File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
incoming.message))

  File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

  File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File "/opt/stack/nova/nova/consoleauth/manager.py", line 128, in check_token
if self._validate_token(context, token):

  File "/opt/stack/nova/nova/consoleauth/manager.py", line 114, in 
_validate_token
instance = objects.Instance.get_by_uuid(context, instance_uuid)

  File "/opt/stack/nova/nova/objects/base.py", line 153, in wrapper
result = fn(cls, context, *args, **kwargs)

  File "/opt/stack/nova/nova/objects/instance.py", line 312, in get_by_uuid
return cls._from_db_object(context, cls(), db_inst,

  File "/opt/stack/nova/nova/objects/instance.py", line 288, in _from_db_object
# passed to us by a backlevel service, things will break

AttributeError: 'module' object has no attribute 'InstanceInfoCache'

This was noted in the following forum post:
https://ask.openstack.org/en/question/33966/vnc-in-the-dashbaord-says-
failed-to-connect-to-server-code-1006-the-set-up-is-by-devstack-on-
ubuntu-1204-with-kvm/


The problem is in line 228:
instance.info_cache = objects.InstanceInfoCache(context)

It should be corrected to: 
from nova.objects import instance_info_cache
...
instance.info_cache = instance_info_cache.InstanceInfoCache(context)

** Affects: nova
 Importance: Undecided
 Assignee: Thang Pham (thang-pham)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Thang Pham (thang-pham)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337447

Title:
  "AttributeError: 'module' object has no attribute
  'InstanceInfoCache'\n"

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the latest devstack (pulled on 7/3/14), you cannot access an
  instance's noVNC console on the Horizon dashboard after launching the
  instance.  If you look at the logs, you will find:

  INFO nova.console.websocketproxy [req-0851007a-adbb-
  4aff-b712-0108702efa58 None None] handler exception: 'module' object
  has no attribute 'InstanceInfoCache'#012Traceback (most recent call
  last):

File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  incoming.message))

File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

File "/opt/stack/oslo.messaging/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

File "/opt/stack/nova/nova/consoleauth/manager.py", line 128, in check_token
  if self._validate_token(context, token):

File "/opt/stack/nova/nova/consoleauth/manager.py", line 114, in 
_validate_token
  instance = objects.Instance.get_by_uuid(context, instance_uuid)

File "/opt/stack/nova/nova/objects/base.py", line 153, in wrapper
  result = fn(cls, context, *args, **kwargs)

File "/opt/stack/nova/nova/objects/instance.py", line 312, in get_by_uuid
  return cls._from_db_object(context, cls(), db_inst,

File "/opt/stack/nova/nova/objects/instance.py", line 288, in 
_from_db_object
  # passed to us by a backlevel service, things will break

  AttributeError: 'module' object has no attribute 'InstanceInfoCache'

  This was noted in the following forum post:
  https://ask.openstack.org/en/question/33966/vnc-in-the-dashbaord-says-
  failed-to-connect-to-server-code-1006-the-set-up-is-by-devstack-on-
  ubuntu-1204-with-kvm/

  
  The problem is in line 228:
  instance.info_cache = objects.InstanceInfoCache(context)

  It should be corrected to: 
  from nova.objects import instance_info_cache
  ...
  instance.info_cache = instance_info_cache.InstanceInfoCache(context)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297512] Re: libvirt domain launch "Cannot find 'pm-is-supported' in path" error

2014-07-03 Thread Tracy Jones
** Changed in: nova
   Status: In Progress => New

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297512

Title:
  libvirt domain launch "Cannot find 'pm-is-supported' in path" error

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/29/82829/1/check/check-tempest-dsvm-
  full/67c8984/logs/screen-n-cpu.txt.gz

  2014-03-25 14:27:02.318 ERROR nova.virt.libvirt.driver 
[req-b485d6b2-cf63-468f-ae09-0c6ba31274db SecurityGroupsTestJSON-1020006517 
SecurityGroupsTestJSON-1157433096] An error occurred while trying to launch a 
defined domain with xml: 
[libvirt domain XML elided -- the markup was stripped in the mailing list
archive. Recoverable details: domain instance-0032, uuid
c3baa6cf-9127-4a69-af9a-58dbba0824cc, memory 65536, 1 vCPU, type hvm,
booting the kernel/ramdisk under
/opt/stack/data/nova/instances/c3baa6cf-9127-4a69-af9a-58dbba0824cc/ with
"root=/dev/vda console=tty0 console=ttyS0", emulator
/usr/bin/qemu-system-x86_64.]

  Then:

  2014-03-25 14:27:05.135 ERROR nova.compute.manager 
[req-b485d6b2-cf63-468f-ae09-0c6ba31274db SecurityGroupsTestJSON-1020006517 
SecurityGroupsTestJSON-1157433096] [instance: 
c3baa6cf-9127-4a69-af9a-58dbba0824cc] Cannot reboot instance: internal error 
Process exited while reading console log output: char device redirected to 
/dev/pts/2
  qemu-system-x86_64: -drive 
file=/opt/stack/data/nova/instances/c3baa6cf-9127-4a69-af9a-58dbba0824cc/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none:
 could not open disk image 
/opt/stack/data/nova/instances/c3baa6cf-9127-4a69-af9a-58dbba0824cc/disk: 
Invalid argument

  Looking in the libvirtd log:
  
http://logs.openstack.org/29/82829/1/check/check-tempest-dsvm-full/67c8984/logs/libvirtd.txt.gz
  2014-03-25 14:54:56.792+: 12413: warning : qemuCapsInit:856 : Failed to 
get host power management capabilities
  2014-03-25 14:54:56.936+: 10184: error : virExecWithHook:327 : Cannot 
find 'pm-is-supported' in path: No such file or directory
  2014-03-25 14:54:56.936+: 10184: warning : qemuCapsInit:856 : Failed to 
get host power management capabilities
  2014-03-25 14:54:57.004+: 12413: error : virExecWithHook:327 : Cannot 
find 'pm-is-supported' in path: No such file or directory
  2014-03-25 14:54:57.004+: 12413: warning : qemuCapsInit:856 : Failed to 
get host power management capabilities

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1297512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329546] Re: Upon rebuild instances might never get to Active state

2014-07-03 Thread Attila Fazekas
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1329546

Title:
  Upon rebuild instances might never get to Active state

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  VMware mine sweeper for Neutron (*) recently showed a 100% failure
  rate on tempest.api.compute.v3.servers.test_server_actions

  Logs for two instances of these failures are available at [1] and [2]
  The failure manifested as an instance unable to go active after a rebuild.
  A bit of instrumentation and log analysis revealed no obvious error on the 
neutron side - and also that the instance was actually in "running" state even 
though its task state was "rebuilding/spawning".

  N-API logs [3] revealed that the instance spawn was timing out on a
  missed notification from neutron regarding VIF plug - however the same
  log showed such notification was received [4]

  It turns out that, after rebuild, the instance network cache still had
  'active': False for the instance's VIF, even though the status of the
  corresponding port was 'ACTIVE'. This happened because after the
  network-vif-plugged event was received, nothing triggered a refresh of
  the instance network info. For this reason, the VM, after a rebuild,
  kept waiting for an event that was never going to be sent by neutron.

  While this manifested only on mine sweeper, it appears to be a nova bug - it 
shows up on VMware minesweeper only because of the way the plugin synchronizes 
with the backend when reporting the operational status of a port.
  A simple solution for this problem would be to reload the instance network 
info cache when network-vif-plugged events are received by nova; a sketch of 
that idea follows the links below. (But as the reporter knows nothing about 
nova this might be a very bad idea as well.)

  [1] http://208.91.1.172/logs/neutron/98278/2/413209/testr_results.html
  [2] http://208.91.1.172/logs/neutron/73234/34/413213/testr_results.html
  [3] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=WARNING#_2014-06-06_01_46_36_219
  [4] 
http://208.91.1.172/logs/neutron/73234/34/413213/logs/screen-n-cpu.txt.gz?level=DEBUG#_2014-06-06_01_41_31_767

  (*) runs libvirt/KVM + NSX
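
  A rough sketch of the refresh idea proposed above (illustrative only, NOT
  actual Nova code; the helper name and the event/object attributes used here
  are assumptions):

      def refresh_cache_on_vif_plugged(context, instance, event, network_api):
          # On "network-vif-plugged", re-query Neutron instead of trusting the
          # stale cache entry that still says 'active': False for the VIF.
          if event.get('name') != 'network-vif-plugged':
              return
          nw_info = network_api.get_instance_nw_info(context, instance)
          instance.info_cache.network_info = nw_info
          instance.info_cache.save()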

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1329546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337275] Re: fail to launch more than 12 instances after updating quota with 'PortLimitExceeded: Maximum number of ports exceeded'

2014-07-03 Thread Rossella Sblendido
You should modify the port quota and use something > 100 if you plan to launch 
100 instances. Some ports are created automatically by Neutron and are included 
in the quota (like dhcp ports for example). 
You can modify the port quota in Horizon, or from the command line, specifying 
in the credentials OS_USERNAME=admin_user and OS_TENANT_NAME set to the tenant 
that will create the VMs:

neutron quota-update --port 120
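
As a rough, illustrative sizing aid (the per-deployment overhead varies), the
quota has to cover the automatically created ports as well:

    # Back-of-the-envelope estimate; "service_ports" stands in for DHCP,
    # router and other ports Neutron creates on its own.
    def estimate_port_quota(instances, nics_per_instance=1, service_ports=20):
        return instances * nics_per_instance + service_ports

    print(estimate_port_quota(100))  # -> 120, matching the command above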

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337275

Title:
  fail to launch more than 12 instances after updating quota with
  'PortLimitExceeded: Maximum number of ports exceeded'

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I installed openstack with packstack as AIO + 3 computes. 
  Trying to run 100 instances, we fail to launch more than 12 with 
'PortLimitExceeded: Maximum number of ports exceeded' ERROR. 

  to reproduce - launch 100 instances at once after changing admin
  tenant project default quota.

  attaching the answer file + logs but here is the ERROR from nova-
  compute.log

  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] Traceback (most recent call last):
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in 
_build_instance
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] set_access_ip=set_access_ip)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in 
decorated_function
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return function(self, context, *args, 
**kwargs)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] block_device_info)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2253, in 
spawn
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] admin_pass=admin_password)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2704, in 
_create_image
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] instance, network_info, admin_pass, 
files, suffix)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2522, in 
_inject_data
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] net = 
netutils.get_injected_network_template(network_info)
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 71, in 
get_injected_network_template
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] if not (network_info and template):
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 420, in __len__
  2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return self._sync_wrapper(fn, *args, 
**kwargs)
  2014-07-03 12:52:17.061 3045 TRACE n

[Yahoo-eng-team] [Bug 1337367] [NEW] The add method of swift.py has a problem. When a large image is uploading and the glance-api is restarted, then we cannot delete the image content that has been

2014-07-03 Thread Hua Wang
Public bug reported:

1. upload a large image, for example 50G
2. kill glance-api when image status:saving
3. restart glance-api
4. delete image

the image content that has been uploaded cannot be deleted. I think the add 
method of glance/swift/BaseStore should put the object manifest onto swift 
first, before uploading the content, when we upload a large image in chunks.
 manifest = "%s/%s-" % (location.container, location.obj)
 headers = {'ETag': hashlib.md5("").hexdigest(), 'X-Object-Manifest': manifest}
connection.put_object(location.container, location.obj,  None, headers=headers)
the code above should be placed before the code that uploads the image chunks.
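
A compact sketch of the suggested ordering (illustrative only; the chunking
and naming details of the real store are omitted):

    import hashlib

    def add_chunked(connection, location, image_iter):
        # Register the object manifest BEFORE streaming any chunks, so a
        # later DELETE can always find and remove the segments, even if
        # glance-api is killed half-way through the upload.
        manifest = "%s/%s-" % (location.container, location.obj)
        headers = {'ETag': hashlib.md5("").hexdigest(),
                   'X-Object-Manifest': manifest}
        connection.put_object(location.container, location.obj, None,
                              headers=headers)
        # ...only then upload the individual chunks, as the store already does.
        for i, chunk in enumerate(image_iter):
            connection.put_object(location.container,
                                  "%s-%05d" % (location.obj, i), chunk)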

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  1. upload a large image, for example 50G
  2. kill glance-api when image status:saving
  3. restart glance-api
  4. delete image
  
- the image content that have been uploaded can not be deleted. I think the add 
method should put the object manifest onto swift first, before we upload the 
content when we upload a large image in chunks. 
-  manifest = "%s/%s-" % (location.container, location.obj)
-  headers = {'ETag': hashlib.md5("").hexdigest(), 'X-Object-Manifest': 
manifest}
+ the image content that have been uploaded can not be deleted. I think the add 
method of glance/swift/BaseStore should put the object manifest onto swift 
first, before we upload the content when we upload a large image in chunks.
+  manifest = "%s/%s-" % (location.container, location.obj)
+  headers = {'ETag': hashlib.md5("").hexdigest(), 'X-Object-Manifest': 
manifest}
  connection.put_object(location.container, location.obj,  None, 
headers=headers)
  the code above shoud put before the code we upload the image chunks.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1337367

Title:
  The add method of swift.py has a problem. When a large image is
  uploading and the glance-api is restarted, then we cannot delete the
  image content that has been uploaded in swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. upload a large image, for example 50G
  2. kill glance-api when image status:saving
  3. restart glance-api
  4. delete image

  the image content that has been uploaded cannot be deleted. I think the add 
method of glance/swift/BaseStore should put the object manifest onto swift 
first, before uploading the content, when we upload a large image in chunks.
   manifest = "%s/%s-" % (location.container, location.obj)
   headers = {'ETag': hashlib.md5("").hexdigest(), 'X-Object-Manifest': 
manifest}
  connection.put_object(location.container, location.obj,  None, 
headers=headers)
  the code above should be placed before the code that uploads the image chunks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1337367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337359] [NEW] The io_ops_filter is not working while instance is rebuilding.

2014-07-03 Thread XiaoDui Huang
Public bug reported:

I am trying to limit the number of concurrent I/O heavy operations on a host.
I set these properties in nova.conf:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,IoOpsFilter
max_io_ops_per_host=2

But I can still schedule an instance onto a host on which two instances
are rebuilding.

The task state of rebuilding instances is
REBUILD_SPAWNING="rebuild_spawning", but the io_workload property in stats.py
is:

@property
def io_workload(self):
    """Calculate an I/O based load by counting I/O heavy operations."""

    def _get(state, state_type):
        key = "num_%s_%s" % (state_type, state)
        return self.get(key, 0)

    num_builds = _get(vm_states.BUILDING, "vm")
    num_migrations = _get(task_states.RESIZE_MIGRATING, "task")
    num_rebuilds = _get(task_states.REBUILDING, "task")
    num_resizes = _get(task_states.RESIZE_PREP, "task")
    num_snapshots = _get(task_states.IMAGE_SNAPSHOT, "task")
    num_backups = _get(task_states.IMAGE_BACKUP, "task")

    return (num_builds + num_rebuilds + num_resizes + num_migrations +
            num_snapshots + num_backups)


The set of I/O heavy operations does not include the "rebuild_spawning" task state.
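
A sketch of the kind of change this implies (not a merged patch; it only adds
one counter to the property quoted above, assuming task_states.REBUILD_SPAWNING
exists as this report indicates):

    num_rebuild_spawns = _get(task_states.REBUILD_SPAWNING, "task")

    return (num_builds + num_rebuilds + num_rebuild_spawns + num_resizes +
            num_migrations + num_snapshots + num_backups)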

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: io ioopsfilter workload

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337359

Title:
  The io_ops_filter is not working while instance is rebuilding.

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am trying to limit the number of concurrent I/O heavy operations on a
  host. I set these properties in nova.conf:

  
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,IoOpsFilter
  max_io_ops_per_host=2

  But I can still schedule an instance onto a host on which two
  instances are rebuilding.

  The task state of rebuilding instances is
  REBUILD_SPAWNING="rebuild_spawning", but the io_workload property in stats.py
  is:

  @property
  def io_workload(self):
      """Calculate an I/O based load by counting I/O heavy operations."""

      def _get(state, state_type):
          key = "num_%s_%s" % (state_type, state)
          return self.get(key, 0)

      num_builds = _get(vm_states.BUILDING, "vm")
      num_migrations = _get(task_states.RESIZE_MIGRATING, "task")
      num_rebuilds = _get(task_states.REBUILDING, "task")
      num_resizes = _get(task_states.RESIZE_PREP, "task")
      num_snapshots = _get(task_states.IMAGE_SNAPSHOT, "task")
      num_backups = _get(task_states.IMAGE_BACKUP, "task")

      return (num_builds + num_rebuilds + num_resizes + num_migrations +
              num_snapshots + num_backups)

  
  The set of I/O heavy operations does not include the "rebuild_spawning" task state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-07-03 Thread Vladimir Kuklin
** Also affects: mos
   Importance: Undecided
   Status: New

** Changed in: mos
   Importance: Undecided => High

** Changed in: mos
Milestone: None => 5.1

** Changed in: mos
 Assignee: (unassigned) => MOS Oslo (mos-oslo)

** Changed in: mos
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Fix Released
Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  Confirmed
Status in Orchestration API (Heat):
  Confirmed
Status in Mirantis OpenStack:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives (see the sketch after this list).

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, rabbit server will regularly send messages
  to each connections with the expectation of a response.

  3. Other?
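
  As a concrete illustration of option 1 (plain Python socket options, not the
  actual kombu/oslo.messaging hook; the TCP_KEEP* constants are Linux-specific):

      import socket

      def enable_tcp_keepalive(sock, idle=60, interval=10, count=5):
          # Ask the kernel to probe idle connections so a stateful firewall
          # between the client and RabbitMQ does not silently drop them.
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
          sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
          sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
          sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)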

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298472] Re: SSHTimeout in tempest scenario tests using nova-network

2014-07-03 Thread Sean Dague
As the current indicators trend towards being related to iptables rules,
I think we're able to cross cinder off the list of possible causes.

** Changed in: nova
   Status: New => Confirmed

** Changed in: cinder
   Status: New => Invalid

** Summary changed:

- SSHTimeout in tempest scenario tests using nova-network
+ SSHTimeout in tempest trying to verify that computes are actually functioning

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298472

Title:
  SSHTimeout in tempest trying to verify that computes are actually
  functioning

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Fix Committed

Bug description:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  failed at least once with the following traceback when trying to
  connect via SSH:

  Traceback (most recent call last):
File "tempest/scenario/test_volume_boot_pattern.py", line 156, in 
test_volume_boot_pattern
  ssh_client = self._ssh_to_server(instance_from_snapshot, keypair)
File "tempest/scenario/test_volume_boot_pattern.py", line 100, in 
_ssh_to_server
  private_key=keypair.private_key)
File "tempest/scenario/manager.py", line 466, in get_remote_client
  return RemoteClient(ip, username, pkey=private_key)
File "tempest/common/utils/linux/remote_client.py", line 47, in __init__
  if not self.ssh_client.test_connection_auth():
File "tempest/common/ssh.py", line 149, in test_connection_auth
  connection = self._get_ssh_connection()
File "tempest/common/ssh.py", line 65, in _get_ssh_connection
  timeout=self.channel_timeout, pkey=self.pkey)
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, 
in connect
  retry_on_signal(lambda: sock.connect(addr))
File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 279, 
in retry_on_signal
  return function()
File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, 
in 
  retry_on_signal(lambda: sock.connect(addr))
File "/usr/lib/python2.7/socket.py", line 224, in meth
  return getattr(self._sock,name)(*args)
File 
"/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 
52, in signal_handler
  raise TimeoutException()
  TimeoutException

  Logs can be found at: 
http://logs.openstack.org/86/82786/1/gate/gate-tempest-dsvm-neutron-pg/1eaadd0/
  The review that triggered the issue is: 
https://review.openstack.org/#/c/82786/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1298472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337304] [NEW] Extract common code from get_fc_wwpns and get_fc_wwnns

2014-07-03 Thread ling-yun
Public bug reported:

Since get_fc_wwpns and get_fc_wwnns contain almost the same code, extract the
common code from these two functions and add a flag to identify which piece of
information to collect.
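
A sketch of the shape such a refactor could take (illustrative only; the real
functions read HBA details from the host, and the field names used here are
assumptions):

    def _get_fc_hba_field(hbas, field):
        # One implementation, parameterised by the field to collect.
        return [hba[field].replace('0x', '') for hba in hbas if hba.get(field)]

    def get_fc_wwpns(hbas):
        return _get_fc_hba_field(hbas, 'port_name')

    def get_fc_wwnns(hbas):
        return _get_fc_hba_field(hbas, 'node_name')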

** Affects: nova
 Importance: Undecided
 Assignee: ling-yun (zengyunling)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => ling-yun (zengyunling)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337304

Title:
  Extract common code from get_fc_wwpns and get_fc_wwnns

Status in OpenStack Compute (Nova):
  New

Bug description:
  Since get_fc_wwpns and get_fc_wwnns contain almost the same code, extract
  the common code from these two functions and add a flag to identify which
  piece of information to collect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337275] [NEW] fail to launch more than 12 instances after updating quota with 'PortLimitExceeded: Maximum number of ports exceeded'

2014-07-03 Thread Dafna Ron
Public bug reported:

I installed openstack with packstack as AIO + 3 computes. 
Trying to run 100 instances, we fail to launch more than 12 with 
'PortLimitExceeded: Maximum number of ports exceeded' ERROR. 

to reproduce - launch 100 instances at once after changing admin tenant
project default quota.

attaching the answer file + logs but here is the ERROR from nova-
compute.log

2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] Traceback (most recent call last):
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in 
_build_instance
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] set_access_ip=set_access_ip)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in 
decorated_function
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return function(self, context, *args, 
**kwargs)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] six.reraise(self.type_, self.value, 
self.tb)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] block_device_info)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2253, in 
spawn
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] admin_pass=admin_password)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2704, in 
_create_image
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] instance, network_info, admin_pass, 
files, suffix)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2522, in 
_inject_data
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] net = 
netutils.get_injected_network_template(network_info)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/virt/netutils.py", line 71, in 
get_injected_network_template
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] if not (network_info and template):
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 420, in __len__
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] return self._sync_wrapper(fn, *args, 
**kwargs)
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 407, in 
_sync_wrapper
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] self.wait()
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/nova/network/model.py", line 439, in wait
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8] self[:] = self._gt.wait()
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3dcc-80a3-42ae-b2a1-8f4b852ce7f8]   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 168, in wait
2014-07-03 12:52:17.061 3045 TRACE nova.compute.manager [instance: 
bc4a3d

[Yahoo-eng-team] [Bug 1337278] [NEW] Client-side checks for flavor requirements not working correctly for RAM

2014-07-03 Thread Julie Pichon
Public bug reported:

The RAM checks always pass because we're comparing GB against MB - the
image minimum RAM is returned in GB while the flavour minimum RAM is
returned in MB.

Steps to reproduce:
1. Create a new image and set its minimum requirement to 8GB disk, 8GB RAM.
2. Try to launch an instance and select the new image

Actual result:
3. Minimum flavour is set to m1.small even though 

Expected result:
3. Flavour should be at least m1.large (using the devstack default flavours), 
since it offers 80GB disk/8,192 MB RAM; m1.small should be disabled.


If you add some debug statements around the related code, you'll see we're 
comparing 8 against 512 / 2048 / etc.
https://github.com/openstack/horizon/blob/759e497b0d/horizon/static/horizon/js/horizon.quota.js#L99
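
The fix amounts to comparing like units. A sketch of the intended check, shown
in Python rather than the JavaScript linked above, assuming the image minimum
RAM really is reported in GB as described:

    MB_PER_GB = 1024

    def flavor_satisfies_image(flavor_ram_mb, image_min_ram_gb):
        # Convert the image requirement to MB before comparing, instead of
        # comparing 8 (GB) against 512 / 2048 / ... (MB).
        return flavor_ram_mb >= image_min_ram_gb * MB_PER_GB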

** Affects: horizon
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337278

Title:
  Client-side checks for flavor requirements not working correctly for
  RAM

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The RAM checks always pass because we're comparing GB against MB - the
  image minimum RAM is returned in GB while the flavour minimum RAM is
  returned in MB.

  Steps to reproduce:
  1. Create a new image and set its minimum requirement to 8GB disk, 8GB RAM.
  2. Try to launch an instance and select the new image

  Actual result:
  3. Minimum flavour is set to m1.small even though 

  Expected result:
  3. Flavour should be at least m1.large (using the devstack default flavours), 
since it offers 80GB disk/8,192 MB RAM; m1.small should be disabled.

  
  If you add some debug statements around the related code, you'll see we're 
comparing 8 against 512 / 2048 / etc.
  
https://github.com/openstack/horizon/blob/759e497b0d/horizon/static/horizon/js/horizon.quota.js#L99

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1337278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337265] [NEW] HTTP 500 when `nova list --name` contains invalid regexp

2014-07-03 Thread Jaroslav Henner
Public bug reported:

# nova list --name \*
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-e399bee0-2491-4e4a-9197-944b19c86075)
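
One way the API could avoid the 500 (a minimal sketch, not the actual Nova
handler) is to validate the filter and turn re.error into a 400:

    import re
    import webob.exc

    def validate_name_filter(pattern):
        # Reject an invalid regular expression such as '*' with 400 Bad
        # Request instead of letting re.error surface as an HTTP 500.
        try:
            re.compile(pattern)
        except re.error:
            raise webob.exc.HTTPBadRequest(
                explanation="Invalid regular expression in 'name' filter")
        return pattern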

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "log"
   https://bugs.launchpad.net/bugs/1337265/+attachment/4144541/+files/log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337265

Title:
  HTTP 500 when `nova list --name` contains invalid regexp

Status in OpenStack Compute (Nova):
  New

Bug description:
  # nova list --name \*
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-e399bee0-2491-4e4a-9197-944b19c86075)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337264] [NEW] live migaraions fails if used libvirt_cpu_mode=host-passthrough option

2014-07-03 Thread Michael Kazakov
Public bug reported:

Live migrations fail with a libvirt error in nova-compute.log:

ERROR nova.virt.libvirt.driver [-] [instance:
d8234ed4-1c7b-4683-afc6-0f481f91c6e4] Live Migration failure: internal
error: cannot load AppArmor profile 'libvirt-
d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

libvirtd.log:

warning : qemuDomainObjTaint:1628 : Domain id=6 name='instance-0154' 
uuid=d8234ed4-1c7b-4683-afc6-0f481f91c6e4 is tainted: host-cpu
error : virNetClientProgramDispatchError:175 : internal error: cannot load 
AppArmor profile 'libvirt-d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

libvirt-bin 1.2.2-0ubuntu13.1 
nova-compute1:2014.1+git201406232336~trusty-0ubuntu1  
Host CPU model Intel(R) Xeon(R) CPU E5-2695 v2

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- live migaraions fails if used libvirt_cpu_mode=host-passthrough optin
+ live migaraions fails if used libvirt_cpu_mode=host-passthrough option

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337264

Title:
  live migaraions fails if used libvirt_cpu_mode=host-passthrough option

Status in OpenStack Compute (Nova):
  New

Bug description:
  Live migrations fail with a libvirt error in nova-compute.log:

  ERROR nova.virt.libvirt.driver [-] [instance:
  d8234ed4-1c7b-4683-afc6-0f481f91c6e4] Live Migration failure: internal
  error: cannot load AppArmor profile 'libvirt-
  d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

  libvirtd.log:

  warning : qemuDomainObjTaint:1628 : Domain id=6 name='instance-0154' 
uuid=d8234ed4-1c7b-4683-afc6-0f481f91c6e4 is tainted: host-cpu
  error : virNetClientProgramDispatchError:175 : internal error: cannot load 
AppArmor profile 'libvirt-d8234ed4-1c7b-4683-afc6-0f481f91c6e4'

  libvirt-bin 1.2.2-0ubuntu13.1 
  nova-compute1:2014.1+git201406232336~trusty-0ubuntu1  
  Host CPU model Intel(R) Xeon(R) CPU E5-2695 v2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337267] [NEW] Lock wait timeout inserting floating ip

2014-07-03 Thread Eugene Nikanorov
Public bug reported:

Traceback:

 TRACE neutron.api.v2.resource Traceback (most recent call last):
 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
 TRACE neutron.api.v2.resource result = method(request=request, **args)
 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 447, in create
 TRACE neutron.api.v2.resource obj = obj_creator(request.context, **kwargs)
 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/services/l3_router/l3_router_plugin.py", line 
96, in create_floatingip
 TRACE neutron.api.v2.resource 
initial_status=q_const.FLOATINGIP_STATUS_DOWN)
 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 773, in create_floatingip
 TRACE neutron.api.v2.resource context.session.add(floatingip_db)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 447, in 
__exit__
 TRACE neutron.api.v2.resource self.rollback()
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in 
__exit__
 TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 444, in 
__exit__
 TRACE neutron.api.v2.resource self.commit()
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 354, in 
commit
 TRACE neutron.api.v2.resource self._prepare_impl()
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 334, in 
_prepare_impl
 TRACE neutron.api.v2.resource self.session.flush()
 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
444, in _wrap
 TRACE neutron.api.v2.resource return f(self, *args, **kwargs)
 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
728, in flush
 TRACE neutron.api.v2.resource return super(Session, self).flush(*args, 
**kwargs)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, in 
flush
 TRACE neutron.api.v2.resource self._flush(objects)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, in 
_flush
 TRACE neutron.api.v2.resource transaction.rollback(_capture_exception=True)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 58, in 
__exit__
 TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1900, in 
_flush
 TRACE neutron.api.v2.resource flush_context.execute()
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
 TRACE neutron.api.v2.resource rec.execute(self)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 525, in 
execute
 TRACE neutron.api.v2.resource uow
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 64, in 
save_obj
 TRACE neutron.api.v2.resource table, insert)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 541, in 
_emit_insert_statements
 TRACE neutron.api.v2.resource execute(statement, multiparams)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
 TRACE neutron.api.v2.resource params)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
 TRACE neutron.api.v2.resource compiled_sql, distilled_params
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
 TRACE neutron.api.v2.resource context)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in 
_handle_dbapi_exception
 TRACE neutron.api.v2.resource exc_info
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in 
raise_from_cause
 TRACE neutron.api.v2.resource reraise(type(exception), exception, 
tb=exc_tb)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in 
_execute_context
 TRACE neutron.api.v2.resource context)
 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 324, in 
do_execute
 TRACE neutron.api.v2.re

[Yahoo-eng-team] [Bug 1337263] [NEW] Multi-provider extension does not support XML

2014-07-03 Thread MARIN-FRISONROCHE Julien
Public bug reported:

The multi-provider extension does not support network creation using an XML
body because the segments markup is not correctly parsed.


Use case:

#$ curl -i http://192.168.128.11:9696/v2.0/networks -X POST  -H "X-Auth-
Token:  blah" -d @xml_input -H 'Content-Type: application/xml'

HTTP/1.1 500 Internal Server Error
Content-Type: application/xml; charset=UTF-8
Content-Length: 290
X-Openstack-Request-Id: req-e4234a24-7ca3-4327-892c-802acd50cba3
Date: Thu, 03 Jul 2014 10:04:33 GMT


http://openstack.org/quantum/api/v2.0"; 
xmlns:quantum="http://openstack.org/quantum/api/v2.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>Request Failed: internal 
server error while processing your request.


xml_input

http://openstack.org/quantum/api/v2.0"; 
xmlns:provider="http://docs.openstack.org/ext/provider/api/v1.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
  aa
  
  
  1002
  gre
  

  local

  


** Affects: neutron
 Importance: Undecided
 Assignee: MARIN-FRISONROCHE Julien (julien-marinfrisonroche)
 Status: In Progress


** Tags: api

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => MARIN-FRISONROCHE Julien 
(julien-marinfrisonroche)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337263

Title:
  Multi-provider extension does not support XML

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The multi-provider extension does not support network creation using an XML
  body because the segments markup is not correctly parsed.


  Use case:

  #$ curl -i http://192.168.128.11:9696/v2.0/networks -X POST  -H "X
  -Auth-Token:  blah" -d @xml_input -H 'Content-Type: application/xml'

  HTTP/1.1 500 Internal Server Error
  Content-Type: application/xml; charset=UTF-8
  Content-Length: 290
  X-Openstack-Request-Id: req-e4234a24-7ca3-4327-892c-802acd50cba3
  Date: Thu, 03 Jul 2014 10:04:33 GMT

  
  http://openstack.org/quantum/api/v2.0"; 
xmlns:quantum="http://openstack.org/quantum/api/v2.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>Request Failed: internal 
server error while processing your request.


  xml_input
  
  http://openstack.org/quantum/api/v2.0"; 
xmlns:provider="http://docs.openstack.org/ext/provider/api/v1.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";>
aa


1002
gre

  
local
  

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-07-03 Thread Artem Panchenko
Fix released for 5.0.1, verified on iso # 88

** Changed in: fuel/5.0.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Fuel for OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack:
  In Progress
Status in Mirantis OpenStack 5.0.x series:
  Fix Committed
Status in Mirantis OpenStack 5.1.x series:
  In Progress
Status in Mirantis OpenStack 6.0.x series:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337258] [NEW] fwaas: Admin should not be able to share tenant's firewall

2014-07-03 Thread Rajkumar
Public bug reported:

Admin should not be able to update/create the shared attribute of a tenant's 
firewall, since sharing it would affect the traffic of other tenants. 
Also, a member tenant is not able to update the firewall with shared set to 
true, yet is still able to update it with false; hence the three issues below.
 I have seen this issue in neutron version 2014.2.dev543.g8bdc649

1.Admin able to create firewall with shared option with tenant id
 
root@overcloud-controller0-eq56cfdcitoq:~# fwc p1 --name f1 --tenant-id 
9bc0f43fcefe46ceb1124d714467a788 --shared
Created a new firewall:
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| description|  |
| firewall_policy_id | e16342f9-7fd9-45c1-845f-f13f0dffc0dd |
| id | 44a7cf82-8606-45d9-be45-a1f5253ce6f4 |
| name   | f1   |
| status | PENDING_CREATE   |
| tenant_id  | 9bc0f43fcefe46ceb1124d714467a788 |
++--+
 
2. admin is able to update shared attribute for that tenant firewall
 
root@overcloud-controller0-eq56cfdcitoq:~# fwu f1 --shared true
Updated firewall: f1
root@overcloud-controller0-eq56cfdcitoq:~# fwu f1 --shared false
Updated firewall: f1
 
root@overcloud-controller0-eq56cfdcitoq:~# fws f1
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| description|  |
| firewall_policy_id | e16342f9-7fd9-45c1-845f-f13f0dffc0dd |
| id | 44a7cf82-8606-45d9-be45-a1f5253ce6f4 |
| name   | f1   |
| status | ACTIVE   |
| tenant_id  | 9bc0f43fcefe46ceb1124d714467a788 |
++--+
root@overcloud-controller0-eq56cfdcitoq:~# ktl
+--+-+-+
|id|   name  | enabled |
+--+-+-+
| 50b6196e426544638128f4b76ad24938 |  admin  |   True  |
| 4ea2a3dff61142a08b231c971c075bdf | service |   True  |
| 9bc0f43fcefe46ceb1124d714467a788 | tenant1 |   True  |
| 5db163fb680c4030a406a4ccaa259ce4 | tenant2 |   True  |
+--+-+-+
 
3. From the tenant as well, the firewall can be updated with shared=false. (reference 
bug: 1323322)
It should throw an error like "Unrecognized attribute(s) 'shared'" instead of 
"resource not found".
 
root@overcloud-controller0-eq56cfdcitoq:~# fwu f1 --shared true
The resource could not be found.
root@overcloud-controller0-eq56cfdcitoq:~# fwu f1 --shared false
Updated firewall: f1

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337258

Title:
  fwaas:  Admin should not be able to share tenant's firewall

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Admin should not be able to update/create the shared attribute of a tenant's 
firewall, since sharing it would affect the traffic of other tenants. 
Also, a member tenant is not able to update the firewall with shared set to 
true, yet is still able to update it with false; hence the three issues below.
   I have seen this issue in neutron version 2014.2.dev543.g8bdc649

  1.Admin able to create firewall with shared option with tenant id
   
  root@overcloud-controller0-eq56cfdcitoq:~# fwc p1 --name f1 --tenant-id 
9bc0f43fcefe46ceb1124d714467a788 --shared
  Created a new firewall:
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | e16342f9-7fd9-45c1-845f-f13f0dffc0dd |
  | id | 44a7cf82-8606-45d9-be45-a1f5253ce6f4 |
  | name   | f1   |
  | status | PENDING_CREATE   |
  | tenant_id  | 9bc0f43fcefe46ceb1124d714467a788 |
  ++--+
   
  2. admin is able to update shared attribute for that tenant firewall
   
  root@overcloud-controller0-eq56cfdcitoq:~# fwu f1 --shared true

[Yahoo-eng-team] [Bug 1337245] [NEW] Changing user password is totally mishandled

2014-07-03 Thread mouadino
Public bug reported:

Problems:


 1. There is a special RBAC entry for identity:change_password in v2 but not in 
the v3 default policy.json that comes with the keystone repository.
 
 2. In v2 the set_user_password controller method calls update_user, which means 
that setting only 'identity:change_password' to 'rule:owner' will not work 
unless 'identity:update_user' is also changed to 'rule:owner' or similar.
 
 3. Both the keystoneclient and openstackclient do a GET /v./users/ before 
sending a PUT /users//password, which means that, to allow a user to change his 
password from the command line, the user must also be able to do a GET, i.e. 
'identity:get_user' should also be changed to 'rule:owner'.

 4. The openstackclient v3 doesn't use
identityclient.users.update_password for just updating the password;
instead it uses the full user update, which will not work with just
changing 'identity:change_password'.

NOTE: Stating the obvious, I picked 'rule:owner' as an example, which
is what makes sense in our case, but the problem is not specific to this
rule. (An illustrative set of policy entries follows below.)
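
For illustration only, the v3 policy entries implied by points 1-3 above,
written here as a Python dict in the same shape as keystone's policy.json (the
choice of 'rule:owner' mirrors the example in this report):

    policy_overrides = {
        # the v2-only entry that is missing from the v3 policy.json
        "identity:change_password": "rule:owner",
        # needed because the clients GET the user before the password PUT
        "identity:get_user": "rule:owner",
        # needed as long as change_password is routed through update_user
        "identity:update_user": "rule:owner",
    }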

** Affects: keystone
 Importance: Undecided
 Assignee: mouadino (mouadino)
 Status: New

** Affects: python-keystoneclient
 Importance: Undecided
 Assignee: mouadino (mouadino)
 Status: New

** Affects: python-openstackclient
 Importance: Undecided
 Assignee: mouadino (mouadino)
 Status: New

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Summary changed:

- Changing own password is totally mishandled
+ Changing user password is totally mishandled

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1337245

Title:
  Changing user password is totally mishandled

Status in OpenStack Identity (Keystone):
  New
Status in Python client library for Keystone:
  New
Status in OpenStack Command Line Client:
  New

Bug description:
  Problems:
  

   1. There is a special RBAC entry for identity:change_password in v2 but not 
in the v3 default policy.json that comes with the keystone repository.
   
   2. In v2 the set_user_password controller method calls update_user, which 
means that setting only 'identity:change_password' to 'rule:owner' will not 
work unless 'identity:update_user' is also changed to 'rule:owner' or similar.
   
   3. Both the keystoneclient and openstackclient do a GET /v./users/ 
before sending a PUT /users//password, which means that, to allow a user to 
change his password from the command line, the user must also be able to do a 
GET, i.e. 'identity:get_user' should also be changed to 'rule:owner'.

   4. The openstackclient v3 doesn't use
  identityclient.users.update_password for just updating the password;
  instead it uses the full user update, which will not work with just
  changing 'identity:change_password'.

  NOTE: Stating the obvious, I picked 'rule:owner' as an example,
  which is what makes sense in our case, but the problem is not specific
  to this rule

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1337245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337236] [NEW] vmware: nova-compute will not start if some instance relying datastore not available

2014-07-03 Thread zhu zhu
Public bug reported:

First use vcenter driver to spawn some instances to one of the
datastores that esxi is binding to.  Later this datastore became
unavailable due to certain reason(power off or network problem).  Then
when restart nova-compute, found that compute service will exist with
errors.  This will openstack compute not usable.

2014-07-03 01:38:13.961 3634 DEBUG nova.compute.manager 
[req-11bc0618-8696-464d-8820-7565db8f44c3 None None] [instance: 9428cf95-5
37f-48f6-b79e-faa981f6066d] NV-AC7AA80 Checking state _get_power_state 
/usr/lib/python2.6/site-packages/nova/compute/manager.py:10
54
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 97, in 
wait
readers.get(fileno, noop).cb(fileno)
  File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in 
main
result = function(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", 
line 480, in run_service
service.start()
  File "/usr/lib/python2.6/site-packages/nova/service.py", line 180, in start
self.manager.init_host()
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1037, 
in init_host
self._init_instance(context, instance)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 865, in 
_init_instance
try_reboot, reboot_type = self._retry_reboot(context, instance)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 963, in 
_retry_reboot
current_power_state = self._get_power_state(context, instance)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1056, 
in _get_power_state
return self.driver.get_info(instance)["state"]
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 
862, in get_info
return _vmops.get_info(instance)
  File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 
1376, in get_info
max_mem = int(query['summary.config.memorySizeMB']) * 1024
KeyError: 'summary.config.memorySizeMB'
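
A sketch of one way get_info could tolerate the missing property instead of
killing init_host (illustrative only, not the actual vmops.py patch):

    from nova import exception

    def _memory_kb_from_query(query, instance_uuid):
        # If the backing datastore is unreachable the property may be absent;
        # raising InstanceNotFound lets the compute manager treat the VM as
        # unavailable instead of crashing the whole service.
        try:
            return int(query['summary.config.memorySizeMB']) * 1024
        except KeyError:
            raise exception.InstanceNotFound(instance_id=instance_uuid)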

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337236

Title:
  vmware: nova-compute will not start if some instance relying datastore
  not available

Status in OpenStack Compute (Nova):
  New

Bug description:
  First, use the vCenter driver to spawn some instances onto one of the
  datastores that the ESXi host is bound to.  Later, this datastore becomes
  unavailable for some reason (power off or a network problem).  Then,
  when nova-compute is restarted, the compute service exits with
  errors.  This leaves OpenStack Compute unusable.

  2014-07-03 01:38:13.961 3634 DEBUG nova.compute.manager [req-11bc0618-8696-464d-8820-7565db8f44c3 None None] [instance: 9428cf95-537f-48f6-b79e-faa981f6066d] NV-AC7AA80 Checking state _get_power_state /usr/lib/python2.6/site-packages/nova/compute/manager.py:1054
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 97, in wait
      readers.get(fileno, noop).cb(fileno)
    File "/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 194, in main
      result = function(*args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 480, in run_service
      service.start()
    File "/usr/lib/python2.6/site-packages/nova/service.py", line 180, in start
      self.manager.init_host()
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1037, in init_host
      self._init_instance(context, instance)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 865, in _init_instance
      try_reboot, reboot_type = self._retry_reboot(context, instance)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 963, in _retry_reboot
      current_power_state = self._get_power_state(context, instance)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1056, in _get_power_state
      return self.driver.get_info(instance)["state"]
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 862, in get_info
      return _vmops.get_info(instance)
    File "/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/vmops.py", line 1376, in get_info
      max_mem = int(query['summary.config.memorySizeMB']) * 1024
  KeyError: 'summary.config.memorySizeMB'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337216] [NEW] Table 'agents' is missing for bigswitch plugin

2014-07-03 Thread Ann Kamyshnikova
Public bug reported:

Running the migrations for the Bigswitch plugin fails with an error:
http://paste.openstack.org/show/85380/. The 'networkdhcpagentbindings'
table can only be created if the 'agents' table already exists, but the
Bigswitch plugin was not added to the migration_for_plugins list in
migration 511471cc46b_agent_ext_model_supp, so the 'agents' table is
never created.
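
A sketch of the likely shape of the fix, following the Icehouse-era
convention where each Alembic migration declares the plugins it applies
to.  The Bigswitch class path below is an assumption and should be
checked against the tree; the rest is a fragment mirroring the structure
of 511471cc46b_agent_ext_model_supp.py.

from neutron.db import migration

migration_for_plugins = [
    # ... plugins already listed in the migration ...
    'neutron.plugins.bigswitch.plugin.NeutronRestProxyV2',  # assumed path
]


def upgrade(active_plugins=None, options=None):
    # Without the Bigswitch entry above, should_run() returns False for
    # Bigswitch deployments, the 'agents' table is never created, and the
    # later networkdhcpagentbindings migration fails on the missing table.
    if not migration.should_run(active_plugins, migration_for_plugins):
        return
    # ... op.create_table('agents', ...) as in the original migration ...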

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: bigswitch db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337216

Title:
  Table 'agents' is missing for bigswitch plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Running the migrations for the Bigswitch plugin fails with an error:
  http://paste.openstack.org/show/85380/. The 'networkdhcpagentbindings'
  table can only be created if the 'agents' table already exists, but the
  Bigswitch plugin was not added to the migration_for_plugins list in
  migration 511471cc46b_agent_ext_model_supp, so the 'agents' table is
  never created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337214] [NEW] VMware: Fails to boot VM when using VDS or when the port group is created on a different vSwitch

2014-07-03 Thread David Geng
Public bug reported:

Booting an instance fails when using nova-network; the log file shows
the following error:

'InvalidVLANPortGroup: vSwitch which contains the port group
VS5_GUEST_SCODEV_G1_V231 is not associated with the desired physical
adapter. Expected vSwitch is vSwitch0, but the one associated is
vSwitch5.'

Currently the VMware driver assumes that all ESXi hosts have exactly the
same networking configuration (the same PortGroup/vSwitch/pNIC mapping),
which typically isn't the case in customer environments.

In my case, the port group may be on vSwitch1 on ESX1 but on vSwitch2 on
ESX2, so one of the two hosts fails the check.  Also, a VDS has no
physical adapter associated with it; instead, a virtual router/firewall
connected to that vSwitch acts as the gateway for the different port
groups on it.

So our vSwitch validation should be enhanced.
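
One possible way to relax the check, sketched below: only reject a port
group when it clearly cannot carry traffic (a standard vSwitch with no
uplink), and skip the physical-adapter check for distributed vSwitch
port groups.  This illustrates the proposed behaviour and is not the
actual vif.py code; the function name and parameters are invented for
the example.

def validate_port_group(pg_name, pg_vswitch, vswitch_uplinks, is_dvs=False):
    """Return None if the port group is usable, else an error string.

    pg_name         -- name of the port group being validated
    pg_vswitch      -- name of the vSwitch that hosts the port group
    vswitch_uplinks -- mapping of vSwitch name -> list of physical NICs
    is_dvs          -- True when the port group sits on a distributed vSwitch
    """
    if is_dvs:
        # A distributed vSwitch has no host-local uplink to check; the
        # gateway may be a virtual router/firewall on the same switch.
        return None
    if not vswitch_uplinks.get(pg_vswitch):
        # Only reject when the hosting vSwitch has no physical adapter
        # at all, rather than insisting on one specific vSwitch name.
        return ("port group %s is on vSwitch %s, which has no physical "
                "adapter attached" % (pg_name, pg_vswitch))
    return None


# Example: the same port group on different vSwitches per host is accepted.
print(validate_port_group('VS5_GUEST_SCODEV_G1_V231', 'vSwitch5',
                          {'vSwitch0': ['vmnic0'], 'vSwitch5': ['vmnic5']}))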

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337214

Title:
  VMware: Fails to boot VM when using VDS or when the port group is
  created on a different vSwitch

Status in OpenStack Compute (Nova):
  New

Bug description:
  Booting an instance fails when using nova-network; the log file shows
  the following error:

  'InvalidVLANPortGroup: vSwitch which contains the port group
  VS5_GUEST_SCODEV_G1_V231 is not associated with the desired physical
  adapter. Expected vSwitch is vSwitch0, but the one associated is
  vSwitch5.'

  Currently the VMware driver assumes that all ESXi hosts have exactly
  the same networking configuration (the same PortGroup/vSwitch/pNIC
  mapping), which typically isn't the case in customer environments.

  In my case, the port group may be on vSwitch1 on ESX1 but on vSwitch2
  on ESX2, so one of the two hosts fails the check.  Also, a VDS has no
  physical adapter associated with it; instead, a virtual router/firewall
  connected to that vSwitch acts as the gateway for the different port
  groups on it.

  So our vSwitch validation should be enhanced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337185] [NEW] Brocadeport table is not up to date with the models

2014-07-03 Thread Ann Kamyshnikova
Public bug reported:

In the brocadeport model the admin_state_up and network_id columns are
defined with nullable=False, but this was skipped in the migrations.
Also, vlan_id has type String(10) instead of String(36), and the foreign
key on the network_id column is missing.

Difference is shown here http://paste.openstack.org/show/85376/
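
A hypothetical Alembic migration that would bring the table back in line
with the model.  The table name 'brocadeports', the column types, and
the constraint name are taken from the bug text or assumed, and should
be verified against the Brocade plugin models and the paste above.

import sqlalchemy as sa
from alembic import op


def upgrade():
    # Enforce the NOT NULL constraints declared in the model.
    op.alter_column('brocadeports', 'admin_state_up',
                    existing_type=sa.Boolean(), nullable=False)
    op.alter_column('brocadeports', 'network_id',
                    existing_type=sa.String(36), nullable=False)
    # Widen vlan_id to match the model (assuming the model says String(36)).
    op.alter_column('brocadeports', 'vlan_id',
                    existing_type=sa.String(10), type_=sa.String(36))
    # Add the missing foreign key from network_id to networks.id.
    op.create_foreign_key('brocadeports_network_id_fk', 'brocadeports',
                          'networks', ['network_id'], ['id'])


def downgrade():
    op.drop_constraint('brocadeports_network_id_fk', 'brocadeports',
                       type_='foreignkey')
    op.alter_column('brocadeports', 'vlan_id',
                    existing_type=sa.String(36), type_=sa.String(10))
    op.alter_column('brocadeports', 'network_id',
                    existing_type=sa.String(36), nullable=True)
    op.alter_column('brocadeports', 'admin_state_up',
                    existing_type=sa.Boolean(), nullable=True)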

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1337185

Title:
  Brocadeport table is not up to date with the models

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the brocadeport model the admin_state_up and network_id columns are
  defined with nullable=False, but this was skipped in the migrations.
  Also, vlan_id has type String(10) instead of String(36), and the
  foreign key on the network_id column is missing.

  Difference is shown here http://paste.openstack.org/show/85376/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1337185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp