[Yahoo-eng-team] [Bug 1370869] [NEW] Cannot display project overview page due to cannot convert float infinity to integer error

2014-09-18 Thread Akihiro Motoki
Public bug reported:

Due to nova bug 1370867, nova absolute-limits sometimes returns -1 for *Used
fields rather than 0.
If this happens, the project overview page cannot be displayed and fails with a
"cannot convert float infinity to integer" error.
Users cannot use the dashboard without specifying URLs directly, so the
dashboard should guard against this situation.
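
A minimal sketch of the kind of guard the dashboard could add (the function and
its arguments are illustrative, not Horizon's actual quota code):

# Hypothetical guard: clamp a negative "used" value reported by
# nova absolute-limits (bug 1370867) to 0 before computing the quota
# percentage, so the progress-bar math never produces float('inf').
def quota_percent(used, limit):
    used = max(used or 0, 0)          # nova may report -1 instead of 0 here
    if limit is None or limit <= 0:   # -1 / 0 mean "unlimited"
        return 0
    return int(min(float(used) / limit, 1.0) * 100)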

** Affects: horizon
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370869

Title:
  Cannot display project overview page due to cannot convert float
  infinity to integer error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Due to nova bug 1370867, nova absolute-limits sometimes returns -1 for *Used
  fields rather than 0.
  If this happens, the project overview page cannot be displayed and fails with
  a "cannot convert float infinity to integer" error.
  Users cannot use the dashboard without specifying URLs directly, so the
  dashboard should guard against this situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370803] Re: Neutron client fetches 'neutron user' token but tries to create port on 'demo' tenant network

2014-09-18 Thread wangrich
** Project changed: openstack-manuals => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370803

Title:
  Neutron client fetches 'neutron user' token but tries to create port
  on 'demo' tenant network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm following the OpenStack Icehouse installation guide on Ubuntu 14.04.
  After Glance, Nova, Neutron and Keystone were deployed, I tried to
  boot a CirrOS instance. However, it failed.

  I checked nova-compute.log and found that before the Neutron client tried
  to create a port for the VM on the tenant network (username: demo,
  password: demopass, tenant: demo), it connected to the Keystone server for
  a token, with the credentials of user 'neutron' (username: neutron,
  password: REDACTED) attached to the request. After the token was returned
  by Keystone, the Neutron client put that token in the request to the
  Neutron server to create the port. Finally, the Neutron server returned
  'HTTP 401'.

  Is there a bug in the neutron client misusing the credentials, or did the
  manual mislead me in configuring Neutron?

  I don't know which manual page should be attached in this report.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370868] [NEW] SDNVE plugin sets the tenant-name in the controller as the UUID instead of using the openstack project name

2014-09-18 Thread Mamta Prabhu
Public bug reported:

During the neutron network-create operation, the IBM SDN-VE plugin implicitly
also creates the tenant in the SDN-VE controller.

It extracts the tenant details using the keystone client and issues a
POST for the tenant creation.

When this tenant gets created on the SDN-VE controller, the tenant name
is being set to the UUID of the OpenStack tenant instead of the actual
project name.

The name of the tenant in the controller should be the same as the
OpenStack project/tenant name.
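
A hedged sketch of the intended behaviour, assuming the keystone v2 client; the
helper and parameter names are illustrative, not the actual SDN-VE plugin code:

from keystoneclient.v2_0 import client as ks_client

def resolve_tenant_name(auth_url, user, password, admin_tenant, tenant_id):
    # Look up the OpenStack project name for the given UUID so the
    # SDN-VE controller tenant can be created with the real name.
    keystone = ks_client.Client(username=user, password=password,
                                tenant_name=admin_tenant, auth_url=auth_url)
    try:
        return keystone.tenants.get(tenant_id).name
    except Exception:
        # Fall back to the UUID if the project cannot be resolved.
        return tenant_id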

** Affects: neutron
 Importance: Undecided
 Assignee: Mamta Prabhu (mamtaprabhu)
 Status: In Progress


** Tags: ibm

** Changed in: neutron
 Assignee: (unassigned) => Mamta Prabhu (mamtaprabhu)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: Mamta Prabhu (mamtaprabhu) => (unassigned)

** Changed in: neutron
 Assignee: (unassigned) => Mamta Prabhu (mamtaprabhu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370868

Title:
  SDNVE plugin sets the tenant-name in the controller as the UUID
  instead of using the openstack project name

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  During the neutron network-create operation, the IBM SDN-VE plugin implicitly
  also creates the tenant in the SDN-VE controller.

  It extracts the tenant details using the keystone client and issues a
  POST for the tenant creation.

  When this tenant gets created on the SDN-VE controller, the tenant name
  is being set to the UUID of the OpenStack tenant instead of the actual
  project name.

  The name of the tenant in the controller should be the same as the
  OpenStack project/tenant name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370885] [NEW] The log info is wrong in the method '_sync_instance_power_state'

2014-09-18 Thread Zi Lian Ji
Public bug reported:

In the method '_sync_instance_power_state', the log info is wrong.

if self.host != db_instance.host:
    # on the sending end of nova-compute _sync_power_state
    # may have yielded to the greenthread performing a live
    # migration; this in turn has changed the resident-host
    # for the VM; However, the instance is still active, it
    # is just in the process of migrating to another host.
    # This implies that the compute source must relinquish
    # control to the compute destination.
    LOG.info(_("During the sync_power process the "
               "instance has moved from "
               "host %(src)s to host %(dst)s") %
             {'src': self.host,
              'dst': db_instance.host},
             instance=db_instance)
    return

The 'src' value should be 'db_instance.host' and the 'dst' value should
be 'self.host'.  The method '_post_live_migration' is only invoked after
the live migration completes, and it is what updates the database.

In this situation the instance has been migrated to another host
successfully, but the database has not been updated yet. When the
'_sync_instance_power_state' method runs, Nova finds the instance on the
destination host via the driver, while the database still records the
source host.
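
Based on the description above, a hedged sketch of the corrected call (quotes
restored; not necessarily the exact upstream fix) would simply swap the two
values:

LOG.info(_("During the sync_power process the "
           "instance has moved from "
           "host %(src)s to host %(dst)s") %
         {'src': db_instance.host,   # where the DB still thinks the VM lives
          'dst': self.host},         # where the VM is actually observed now
         instance=db_instance)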

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370885

Title:
  The log info is wrong in the method '_sync_instance_power_state'

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the method '_sync_instance_power_state', the log info is wrong.

  if self.host != db_instance.host:
      # on the sending end of nova-compute _sync_power_state
      # may have yielded to the greenthread performing a live
      # migration; this in turn has changed the resident-host
      # for the VM; However, the instance is still active, it
      # is just in the process of migrating to another host.
      # This implies that the compute source must relinquish
      # control to the compute destination.
      LOG.info(_("During the sync_power process the "
                 "instance has moved from "
                 "host %(src)s to host %(dst)s") %
               {'src': self.host,
                'dst': db_instance.host},
               instance=db_instance)
      return

  The 'src' value should be 'db_instance.host' and the 'dst' value should
  be 'self.host'.  The method '_post_live_migration' is only invoked after
  the live migration completes, and it is what updates the database.

  In this situation the instance has been migrated to another host
  successfully, but the database has not been updated yet. When the
  '_sync_instance_power_state' method runs, Nova finds the instance on the
  destination host via the driver, while the database still records the
  source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370898] [NEW] Big Switch unit tests leave exceptions in subunit log

2014-09-18 Thread Kevin Benton
Public bug reported:

The Big Switch capabilities check code throws an exception during the
unit tests because a MagicMock is passed into json.loads. This doesn't
affect the unit test results, but it leaves stack traces in the test log
that take up unnecessary space.


ties. Newer API calls won't be supported.
Traceback (most recent call last):
  File neutron/plugins/bigswitch/servermanager.py, line 116, in 
get_capabilities
self.capabilities = jsonutils.loads(body)
  File neutron/openstack/common/jsonutils.py, line 172, in loads
return json.loads(strutils.safe_decode(s, encoding), **kwargs)
  File /usr/lib/python2.7/json/__init__.py, line 326, in loads
return _default_decoder.decode(s)
  File /usr/lib/python2.7/json/decoder.py, line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File /usr/lib/python2.7/json/decoder.py, line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 1 column 1 (char 1)
2014-09-17 01:11:55,083 INFO [neutron.plugins.bigswitch.servermanager] The 
following capabilities were received for localhost: []
2014-09-17 01:11:55,083ERROR [neutron.plugins.bigswitch.servermanager] 
Couldn't retrieve capabilities. Newer API calls won't be supported.
Traceback (most recent call last):
  File neutron/plugins/bigswitch/servermanager.py, line 116, in 
get_capabilities
self.capabilities = jsonutils.loads(body)
  File neutron/openstack/common/jsonutils.py, line 172, in loads
return json.loads(strutils.safe_decode(s, encoding), **kwargs)
  File /usr/lib/python2.7/json/__init__.py, line 326, in loads
return _default_decoder.decode(s)
  File /usr/lib/python2.7/json/decoder.py, line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File /usr/lib/python2.7/json/decoder.py, line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 1 column 1 (char 1)
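
One hedged way to avoid the spurious traceback in a test is to give the mocked
HTTP response a real JSON body, so jsonutils.loads() receives a string rather
than a MagicMock; the helper below is illustrative, not the plugin's actual
test code:

import json

import mock


def fake_capabilities_response():
    # Build a mock HTTP response whose read() returns a real JSON string,
    # so json.loads() gets a str instead of a MagicMock.
    response = mock.Mock()
    response.status = 200
    response.read.return_value = json.dumps([])  # empty capabilities list
    return response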

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370898

Title:
  Big Switch unit tests leave exceptions in subunit log

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Big Switch capabilities check code throws an exception during the
  unit tests because a MagicMock is passed into json.loads. This doesn't
  affect the unit test results, but it leaves stack traces in the test log
  that take up unnecessary space.

  
  ties. Newer API calls won't be supported.
  Traceback (most recent call last):
File neutron/plugins/bigswitch/servermanager.py, line 116, in 
get_capabilities
  self.capabilities = jsonutils.loads(body)
File neutron/openstack/common/jsonutils.py, line 172, in loads
  return json.loads(strutils.safe_decode(s, encoding), **kwargs)
File /usr/lib/python2.7/json/__init__.py, line 326, in loads
  return _default_decoder.decode(s)
File /usr/lib/python2.7/json/decoder.py, line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File /usr/lib/python2.7/json/decoder.py, line 382, in raw_decode
  obj, end = self.scan_once(s, idx)
  ValueError: Expecting property name: line 1 column 1 (char 1)
  2014-09-17 01:11:55,083 INFO [neutron.plugins.bigswitch.servermanager] 
The following capabilities were received for localhost: []
  2014-09-17 01:11:55,083ERROR [neutron.plugins.bigswitch.servermanager] 
Couldn't retrieve capabilities. Newer API calls won't be supported.
  Traceback (most recent call last):
File neutron/plugins/bigswitch/servermanager.py, line 116, in 
get_capabilities
  self.capabilities = jsonutils.loads(body)
File neutron/openstack/common/jsonutils.py, line 172, in loads
  return json.loads(strutils.safe_decode(s, encoding), **kwargs)
File /usr/lib/python2.7/json/__init__.py, line 326, in loads
  return _default_decoder.decode(s)
File /usr/lib/python2.7/json/decoder.py, line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File /usr/lib/python2.7/json/decoder.py, line 382, in raw_decode
  obj, end = self.scan_once(s, idx)
  ValueError: Expecting property name: line 1 column 1 (char 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370901] [NEW] Nova EC2 doesn't create empty volume while launching an instance

2014-09-18 Thread Feodor Tersin
Public bug reported:

AWS is able to create and attach a new empty volume while launching an 
instance. See 
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RunInstances.html:
---
To create an empty Amazon EBS volume, omit the snapshot ID and specify a volume 
size instead. For example: /dev/sdh=:20.
---
This can be set by run_instances parameters, and by image block device mapping 
structure.

But Nova EC2 isn't able to do this:

$ euca-run-instances --instance-type m1.nano ami-0001 
--block-device-mapping /dev/vdd=:1
euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is Invalid: 
Unrecognized legacy format.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370901

Title:
  Nova EC2 doesn't create empty volume while launching an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  AWS is able to create and attach a new empty volume while launching an 
instance. See 
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RunInstances.html:
  ---
  To create an empty Amazon EBS volume, omit the snapshot ID and specify a 
volume size instead. For example: /dev/sdh=:20.
  ---
  This can be set by run_instances parameters, and by image block device 
mapping structure.

  But Nova EC2 isn't able to do this:

  $ euca-run-instances --instance-type m1.nano ami-0001 
--block-device-mapping /dev/vdd=:1
  euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is 
Invalid: Unrecognized legacy format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370914] [NEW] When two ovs ports contain same external_ids:iface-id field, ovs agent might fail finding correct port.

2014-09-18 Thread John Schwarz
Public bug reported:

As the title says, if there are two different ovs ports with the same
external_ids:iface-id field (which is the port_id), and at least one of
them is managed by the ovs agent, the agent might fail to find the correct
one if they are not connected to the same bridge.

Steps to reproduce:
1. Create a router with an internal port to some Neutron network
2. Find the port in 'ovs-vsctl show'
3. Use the following command to find the port_id in ovs: sudo ovs-vsctl  
--columns=external_ids list Interface port_name
4. Use the following commands to create a new port with the same field in a new 
bridge:
 sudo ovs-vsctl add-br br-a
 sudo ip link add dummy12312312 type dummy
 sudo ovs-vsctl add-port br-a dummy12312312
 sudo ovs-vsctl set Interface dummy12312312 external_ids:iface-id=port_id # 
port_id was obtained in point 3.
5. Restart the ovs agent.

At this point the ovs agent's log should show "Port: dummy12312312 is on
br-a, not on br-int".

Expected result: the ovs agent should iterate through the candidates and
find the correct port on the correct bridge.
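
A hedged sketch of the expected behaviour; run_vsctl and pick_port_on_bridge
are hypothetical helpers, not actual agent methods:

import subprocess

def run_vsctl(args):
    # Hypothetical helper: invoke ovs-vsctl and return its stdout.
    return subprocess.check_output(['ovs-vsctl'] + args,
                                   universal_newlines=True)

def pick_port_on_bridge(candidate_names, integration_bridge='br-int'):
    # When several interfaces carry the same external_ids:iface-id,
    # prefer the one that is actually plugged into the integration bridge.
    for name in candidate_names:
        if run_vsctl(['iface-to-br', name]).strip() == integration_bridge:
            return name
    return None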

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370914

Title:
  When two ovs ports contain same external_ids:iface-id field, ovs agent
  might fail finding correct port.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As the title says, if there are two different ovs ports with the same
  external_ids:iface-id field (which is the port_id), and at least one
  of them is managed by the ovs agent, the agent might fail to find the
  correct one if they are not connected to the same bridge.

  Steps to reproduce:
  1. Create a router with an internal port to some Neutron network
  2. Find the port in 'ovs-vsctl show'
  3. Use the following command to find the port_id in ovs: sudo ovs-vsctl  
--columns=external_ids list Interface port_name
  4. Use the following commands to create a new port with the same field in a 
new bridge:
   sudo ovs-vsctl add-br br-a
   sudo ip link add dummy12312312 type dummy
   sudo ovs-vsctl add-port br-a dummy12312312
   sudo ovs-vsctl set Interface dummy12312312 external_ids:iface-id=port_id 
# port_id was obtained in point 3.
  5. Restart the ovs agent.

  At this point the ovs agent's log should show "Port: dummy12312312 is
  on br-a, not on br-int".

  Expected result: the ovs agent should iterate through the candidates
  and find the correct port on the correct bridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311401] Re: nova.virt.ironic tries to remove vif_port_id unnecessarily

2014-09-18 Thread James Polley
** Changed in: ironic
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311401

Title:
  nova.virt.ironic tries to remove vif_port_id unnecessarily

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  While spawning an instance, Ironic logs the following warning every
  time:

  2014-04-22 17:23:21.967 15379 WARNING wsme.api [-] Client-side error:
  Couldn't apply patch '[{'path': '/extra/vif_port_id', 'op':
  'remove'}]'. Reason: u'vif_port_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1311401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343359] Re: VMware: the retrieval of Datacenter is incorrect

2014-09-18 Thread Radoslav Gerganov
@Thang Correct, this looks fine now, thanks for looking in.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343359

Title:
  VMware: the retrieval of Datacenter is incorrect

Status in OpenStack Compute (Nova):
  Invalid
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  The implementation of get_datacenter_ref_and_name() in vmops.py is
  incorrect -- it simply returns the first Datacenter found instead of
  searching for the relevant one.

  We need to return the datacenter which contains the corresponding
  cluster in VMwareVMOps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370935] [NEW] Description column in Projects table under 'Identity' when empty has '-' for only some cells

2014-09-18 Thread mariam john
Public bug reported:

For the default projects in the Projects table under Identity, the
description field has the value '-' to represent an empty description,
whereas for newly created projects that do not have any description, the
field value is empty. The value for an empty description field needs to
be consistent.
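
A minimal sketch of one way to make the column render consistently; the table
and column definitions here are illustrative, not Horizon's actual projects
table:

from django.utils.translation import ugettext_lazy as _

from horizon import tables


def _get_description(project):
    # Treat a missing description and an empty-string description the same,
    # so every row shows "-" when there is nothing to display.
    return getattr(project, 'description', None) or '-'


class TenantsTable(tables.DataTable):
    name = tables.Column('name', verbose_name=_("Name"))
    description = tables.Column(_get_description, verbose_name=_("Description"))

    class Meta(object):
        name = "tenants"
        verbose_name = _("Projects")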

** Affects: horizon
 Importance: Undecided
 Assignee: mariam john (mariamj)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => mariam john (mariamj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370935

Title:
  Description column in Projects table under 'Identity' when empty has
  '-' for only some cells

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For the default projects in the Projects table under Identity, the
  description field has the value '-' to represent an empty description,
  whereas for newly created projects that do not have any description,
  the field value is empty. The value for an empty description field
  needs to be consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370934] Re: Add details on dhcp_agents_per_network option for DHCP agent HA

2014-09-18 Thread Tom Fifield
This option is currently listed in:

http://docs.openstack.org/trunk/config-reference/content
/section_networking-options-reference.html

however, the help text isn't very good:

dhcp_agents_per_network = 1 (IntOpt) Number of DHCP agents scheduled
to host a network.


To fix this bug, I think we need two patches:
1. a patch to neutron to fix the help text to better explain why/when you would
alter this option
2. ensure the new networking guide provides detailed information in the agent
configuration area
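
For reference, a minimal illustration of the option in /etc/neutron/neutron.conf
(the value 2 is only an example and must not exceed the number of DHCP agents
actually deployed):

[DEFAULT]
# Schedule two DHCP agents onto every tenant network so DHCP keeps
# working if one network node fails.
dhcp_agents_per_network = 2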

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => Confirmed

** Changed in: openstack-manuals
   Status: Confirmed => Triaged

** Changed in: openstack-manuals
   Importance: Undecided => Low

** Changed in: openstack-manuals
 Milestone: None => juno

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370934

Title:
  Add details on dhcp_agents_per_network option for DHCP agent HA

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Manuals:
  Triaged

Bug description:
  The current documentation does not specify any information about the
  dhcp_agents_per_network configuration option in
  /etc/neutron/neutron.conf. Using this option, it is possible to have
  the scheduler automatically assign multiple DHCP agents to a tenant
  network, which provides high availability.

  If that option is not set, you have to manually assign multiple DHCP
  agents to each network, which does not scale in terms of management.

  Would it be possible to document this option here?

  ---
  Built: 2014-04-17T10:27:55 00:00
  git SHA: 1842612f99f1fe87149db9a3cb0bd43e7892e22b
  URL: 
http://docs.openstack.org/trunk/config-reference/content/multi_agent_demo_configuration.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/networking/section_networking-multi-dhcp-agents.xml
  xml:id: multi_agent_demo_configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370948] [NEW] ComputeCapabilitiesFilter doesn't provide enough information on failure

2014-09-18 Thread Matthew Gilliard
Public bug reported:

ComputeCapabilitiesFilter code is convoluted.  There are at least 3
different ways it can fail, and 2 of them don't provide any output at
all.  The one which does, logs at debug (it should be info) and does not
actually provide enough info to diagnose the problem.

The code around here
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L33
and below could do with a nice bit of TLC.  For example:

- the for loop on line #49 can only ever iterate over range(0, 1), i.e. it runs
once, so there is no need for a loop.
- the redefinition of 'cap' makes it difficult to reason about what data is
being worked on.
- the above-mentioned lack of logging.

I recommend checking the strength of the unit tests also, to give
confidence that a refactoring doesn't introduce any regressions.
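
A hedged sketch of the kind of logging the report asks for (variable names
assumed from the filter code; not Nova's actual implementation):

# When a capability comparison fails, say which key failed and what the
# host actually advertised, at info level rather than debug.
if not extra_specs_ops.match(str(cap), req):
    LOG.info("%(host)s fails extra_spec requirement %(req)s for key "
             "%(key)s; host reports %(cap)s",
             {'host': host_state, 'req': req, 'key': key, 'cap': cap})
    return False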

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- ComputeCapabilitiesFilter doesn't provide enough debug output on failure
+ ComputeCapabilitiesFilter doesn't provide enough information on failure

** Description changed:

  ComputeCapabilitiesFilter code is convoluted.  There are at least 3
- different ways it can fail, and 2 of them don't provide any debug output
- at all.  The one which does logs at debug (should be info), and does not
+ different ways it can fail, and 2 of them don't provide any output at
+ all.  The one which does logs at debug (should be info), and does not
  actually provide enough info to diagnose the problem.
  
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L69
  Could do with a nice bit of TLC.

** Description changed:

  ComputeCapabilitiesFilter code is convoluted.  There are at least 3
  different ways it can fail, and 2 of them don't provide any output at
  all.  The one which does logs at debug (should be info), and does not
  actually provide enough info to diagnose the problem.
- 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L69
- Could do with a nice bit of TLC.
+ 
+ The code around here
+ 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L33
+ and below could do with a nice bit of TLC.  For example:
+ 
+ - the for loop on line #49 can only ever be iterating on range(0,1) which is 
once, ie no need for a loop.
+ - the redefinition of cap makes it difficult to reason about what data is 
being worked on.
+ - the above-mentioned lack of logging.
+ 
+ Recommend checking the strength of the unit tests also, to give
+ confidence that a refactoring doesn't introduce any regressions.

** Description changed:

  ComputeCapabilitiesFilter code is convoluted.  There are at least 3
  different ways it can fail, and 2 of them don't provide any output at
  all.  The one which does logs at debug (should be info), and does not
  actually provide enough info to diagnose the problem.
  
  The code around here
  
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L33
  and below could do with a nice bit of TLC.  For example:
  
  - the for loop on line #49 can only ever be iterating on range(0,1) which is 
once, ie no need for a loop.
  - the redefinition of cap makes it difficult to reason about what data is 
being worked on.
  - the above-mentioned lack of logging.
  
- Recommend checking the strength of the unit tests also, to give
+ I recommend checking the strength of the unit tests also, to give
  confidence that a refactoring doesn't introduce any regressions.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370948

Title:
  ComputeCapabilitiesFilter doesn't provide enough information on
  failure

Status in OpenStack Compute (Nova):
  New

Bug description:
  ComputeCapabilitiesFilter code is convoluted.  There are at least 3
  different ways it can fail, and 2 of them don't provide any output at
  all.  The one which does, logs at debug (it should be info) and does
  not actually provide enough info to diagnose the problem.

  The code around here
  https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_capabilities_filter.py#L33
  and below could do with a nice bit of TLC.  For example:

  - the for loop on line #49 can only ever iterate over range(0, 1), i.e. it
runs once, so there is no need for a loop.
  - the redefinition of 'cap' makes it difficult to reason about what data is
being worked on.
  - the above-mentioned lack of logging.

  I recommend checking the strength of the unit tests also, to give
  confidence that a refactoring doesn't introduce any regressions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1370954] [NEW] glance 500's when passed image name with a 4-byte utf-8 character

2014-09-18 Thread Christopher Yeoh
Public bug reported:

Glance currently 500's when passed an image name with a 4-byte utf-8
character in the name. This is because the MySQL utf8 type only handles
up to 3-byte utf-8 characters:

See
http://stackoverflow.com/questions/10957238/incorrect-string-value-when-trying-to-insert-utf-8-into-mysql-via-jdbc

You can replicate this by using the tempest test
test_create_image_specify_multibyte_character_image_name (the positive one,
not the negative one). You'll need to change utf8_name to:

utf8_name = data_utils.rand_name('\xF0\x9F\x92\xA9')

(note the removal of the unicode prefix)
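
For reference, the usual way to let MySQL accept 4-byte UTF-8 is the utf8mb4
character set; a hedged sketch of such a schema change (table name assumed from
Glance's schema, index and column-length implications not addressed):

-- Convert the images table to utf8mb4 so 4-byte characters are accepted.
ALTER TABLE images CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;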


Backtrace from nova request to glance that triggers this:

2014-09-18 07:35:47.343 843 DEBUG routes.middleware 
[797abe72-9a70-488e-9254-c71888536278 a2bb050fa6e647398991ebc635741cb1 
33760203c2944644
a9ee2a0433f45d0b - - -] Route path: '/images', defaults: {'action': u'create', 
'controller': glance.common.wsgi.Resource object at 0x7f90c
47a6d10} __call__ 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py:102
2014-09-18 07:35:47.343 843 DEBUG routes.middleware 
[797abe72-9a70-488e-9254-c71888536278 a2bb050fa6e647398991ebc635741cb1 
33760203c2944644
a9ee2a0433f45d0b - - -] Match dict: {'action': u'create', 'controller': 
glance.common.wsgi.Resource object at 0x7f90c47a6d10} __call__ /u
sr/local/lib/python2.7/dist-packages/routes/middleware.py:103
2014-09-18 07:35:47.348 843 ERROR glance.registry.api.v1.images 
[797abe72-9a70-488e-9254-c71888536278 a2bb050fa6e647398991ebc635741cb1 3376
0203c2944644a9ee2a0433f45d0b - - -] Unable to create image None
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images Traceback (most 
recent call last):
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/opt/stack/glance/glance/registry/api/v1/images.py, line 424, in c
reate
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images image_data 
= self.db_api.image_create(req.context, image_data)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/opt/stack/glance/glance/db/sqlalchemy/api.py, line 124, in image_
create
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images return 
_image_update(context, values, None, purge_props=False)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/local/lib/python2.7/dist-packages/retrying.py, line 92, in wr
apped_f
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images return 
Retrying(*dargs, **dkw).call(f, *args, **kw)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/local/lib/python2.7/dist-packages/retrying.py, line 239, in c
all
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images return 
attempt.get(self._wrap_exception)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
reraise(self.value[0], self.value[1], self.value[2])
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/local/lib/python2.7/dist-packages/retrying.py, line 233, in call
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images attempt = 
Attempt(fn(*args, **kwargs), attempt_number, False)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/opt/stack/glance/glance/db/sqlalchemy/api.py, line 759, in _image_update
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
image_ref.save(session=session)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/opt/stack/glance/glance/db/sqlalchemy/models.py, line 77, in save
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
super(GlanceBase, self).save(session or db_api.get_session())
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/models.py, line 48, 
in save
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
session.flush()
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1818, in 
flush
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
self._flush(objects)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1936, in 
_flush
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
transaction.rollback(_capture_exception=True)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 58, in 
__exit__
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
compat.reraise(exc_type, exc_value, exc_tb)
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1900, in 
_flush
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images 
flush_context.execute()
2014-09-18 07:35:47.348 843 TRACE glance.registry.api.v1.images   File 

[Yahoo-eng-team] [Bug 1370986] [NEW] move set_id_as_name_if_empty to the api side for rule_list, fwaas_list and policy_list

2014-09-18 Thread Liyingjun
Public bug reported:

We can move set_id_as_name_if_empty into the API-side loop for rule_list,
firewall_list and policy_list to avoid an extra loop:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L61
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L47
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L82
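
A hedged sketch of the suggested refactor; the module-level names below are
assumed from openstack_dashboard.api.fwaas and are illustrative rather than the
actual implementation:

def rule_list(request, **kwargs):
    # Set the fallback name while the API wrapper is already iterating over
    # the Neutron response, so tabs.py no longer needs a second loop.
    rules = neutronclient(request).list_firewall_rules(**kwargs)['firewall_rules']
    result = []
    for r in rules:
        rule = Rule(r)                    # NeutronAPIDictWrapper subclass
        rule.set_id_as_name_if_empty()
        result.append(rule)
    return result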

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370986

Title:
  move set_id_as_name_if_empty to the api side for rule_list, fwaas_list
  and policy_list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We can move set_id_as_name_if_empty into the API-side loop for rule_list,
  firewall_list and policy_list to avoid an extra loop:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L61
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L47
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/firewalls/tabs.py#L82

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358206] Re: ovsdb_monitor.SimpleInterfaceMonitor throws eventlet.timeout.Timeout(5)

2014-09-18 Thread Salvatore Orlando
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358206

Title:
  ovsdb_monitor.SimpleInterfaceMonitor throws
  eventlet.timeout.Timeout(5)

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  This is found during functional testing, when .start() is called with
  block=True during slightly high load.

  This suggests the default timeout needs to be raised to make this module
  work in all situations.

  
https://review.openstack.org/#/c/112798/14/neutron/agent/linux/ovsdb_monitor.py
  (I will extract patch from here)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370989] [NEW] s3_store_host including port num not working

2014-09-18 Thread Masashi Ozawa
Public bug reported:

Due to the following change in boto, s3_store_host with a :port number
no longer works. If s3_store_host is configured with a port number as
below, Glance will try to connect to port 80, the standard http port,
instead.

https://github.com/boto/boto/commit/0205cadde7b68c6648c410a9b1ef653655eae3b8

[/etc/glance/glance-api.conf]
s3_store_host=s3.cloudian.com:18080
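
A hedged sketch of a possible workaround (not glance_store's actual code):
split an explicit port out of s3_store_host and pass it to boto separately,
since the boto change above no longer accepts a "host:port" value in the host
argument:

from boto.s3.connection import OrdinaryCallingFormat, S3Connection

def connect_s3(access_key, secret_key, store_host):
    # "s3.cloudian.com:18080" -> host="s3.cloudian.com", port=18080
    host, _, port = store_host.partition(':')
    return S3Connection(aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key,
                        host=host,
                        port=int(port) if port else None,
                        is_secure=False,
                        calling_format=OrdinaryCallingFormat())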

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1370989

Title:
  s3_store_host including port num not working

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Due to the following change in boto, s3_store_host with a :port number
  no longer works. If s3_store_host is configured with a port number as
  below, Glance will try to connect to port 80, the standard http port,
  instead.

  https://github.com/boto/boto/commit/0205cadde7b68c6648c410a9b1ef653655eae3b8

  [/etc/glance/glance-api.conf]
  s3_store_host=s3.cloudian.com:18080

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1370989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370999] [NEW] xenapi: windows agent unreliable due to reboots

2014-09-18 Thread John Garbutt
Public bug reported:

The windows nova-agent can now trigger a guest reboot during
resetnetwork, so that the hostname is correctly updated.

There has also always been a reboot during the first stages of polling
for the agent version, which can force us to wait for a call to time out
rather than detecting the reboot.

Either way, we need to take more care to detect reboots while talking to
the agent.

** Affects: nova
 Importance: Medium
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: xenserver

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370999

Title:
  xenapi: windows agent unreliable due to reboots

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The windows nova-agent can now trigger a guest reboot during
  resetnetwork, so that the hostname is correctly updated.

  There has also always been a reboot during the first stages of polling
  for the agent version, which can force us to wait for a call to time
  out rather than detecting the reboot.

  Either way, we need to take more care to detect reboots while talking
  to the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371017] [NEW] Code comments still refer to tenant_id when the code says project_id

2014-09-18 Thread Sam Betts
Public bug reported:

Several comments in the openstack_dashboard code still refer to
tenant_id even after the code has been updated to use project_id; this
can be confusing when reading the code without knowing the history. At
least one example of this is L93 in openstack_dashboard/policy.py.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371017

Title:
  Code comments still refer to tenant_id when the code says project_id

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Several comments in the openstack_dashboard code still refer to
  tenant_id even after the code has been updated to use project_id; this
  can be confusing when reading the code without knowing the history. At
  least one example of this is L93 in openstack_dashboard/policy.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223695] Re: nova server creation with inexistent security group raises 404 with neutron

2014-09-18 Thread Oleg Bondarev
Checked on latest master - both for nova-net and neutron the response is
400. Marking as invalid.

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223695

Title:
  nova server creation with inexistent security group raises 404 with
  neutron

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Trying to create a new server with a security group that does not
  exist raises a 404 error with neutron, but not with nova-network.
  Since the security group not existing is not directly related to the
  server creation, I agree with the nova-network API, that the raised
  error should be 400, not 404.

  The following test fails in tempest when openstack is configured with
  neutron:
  
tempest.tests.compute.servers.test_servers_negative:ServersNegativeTest.test_create_with_nonexistent_security_group

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1223695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371040] [NEW] glance store rbd driver get method returns a tuple with more values than the base driver defines

2014-09-18 Thread Yaguang Tang
Public bug reported:

glance_store/_drivers/rbd.py

def get(self, location, offset=0, chunk_size=None, context=None):
    """
    Takes a `glance_store.location.Location` object that indicates
    where to find the image file, and returns a tuple of generator
    (for reading the image file) and image_size

    :param location `glance_store.location.Location` object, supplied
                    from glance_store.location.get_location_from_uri()
    :raises `glance_store.exceptions.NotFound` if image does not exist
    """
    loc = location.store_location
    return (ImageIterator(loc.image, self),
            self.get_size(location), chunk_size)


The returned tuple should contain two values, not three; this is what all
callers expect.

@@ -453,7 +453,7 @@ class Controller(controller.BaseController):
 if dest is not None:
 src_store.READ_CHUNKSIZE = dest.WRITE_CHUNKSIZE
 
 image_data, image_size = src_store.get(loc, context=context)

When using the rbd backend, an exception is raised: "too many values to
unpack".

Tested with the latest trunk code base.
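
A hedged sketch of the return shape the caller above expects (docstring
omitted; not necessarily the exact upstream fix):

def get(self, location, offset=0, chunk_size=None, context=None):
    # Return (data_iterator, image_size): two values, matching the base
    # driver contract quoted above, with chunk_size no longer appended.
    loc = location.store_location
    return (ImageIterator(loc.image, self), self.get_size(location))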

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371040

Title:
  glance store rbd driver get method returns a tuple with more values than
  the base driver defines

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  glance_store/_drivers/rbd.py

  def get(self, location, offset=0, chunk_size=None, context=None):
      """
      Takes a `glance_store.location.Location` object that indicates
      where to find the image file, and returns a tuple of generator
      (for reading the image file) and image_size

      :param location `glance_store.location.Location` object, supplied
                      from glance_store.location.get_location_from_uri()
      :raises `glance_store.exceptions.NotFound` if image does not exist
      """
      loc = location.store_location
      return (ImageIterator(loc.image, self),
              self.get_size(location), chunk_size)

  
  The returned tuple should contain two values, not three; this is what
  all callers expect.

  @@ -453,7 +453,7 @@ class Controller(controller.BaseController):
   if dest is not None:
   src_store.READ_CHUNKSIZE = dest.WRITE_CHUNKSIZE
   
   image_data, image_size = src_store.get(loc, context=context)

  When using the rbd backend, an exception is raised: "too many values to
  unpack".

  Tested with the latest trunk code base.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371046] [NEW] No type check for resource-id value in EC2 describe_tags filter

2014-09-18 Thread Feodor Tersin
Public bug reported:

$ euca-describe-tags --filter resource-id=vol-nnn
returns tags for ami-nnn instance.

For example:
$ euca-describe-tags
TAG i-000e  instancexxx yyy

$ euca-describe-tags --filter resource-id=vol-000e
TAG i-000e  instancexxx yyy

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371046

Title:
  No type check for resource-id value in EC2 describe_tags filter

Status in OpenStack Compute (Nova):
  New

Bug description:
  $ euca-describe-tags --filter resource-id=vol-nnn
  returns tags for ami-nnn instance.

  For example:
  $ euca-describe-tags
  TAG   i-000e  instancexxx yyy

  $ euca-describe-tags --filter resource-id=vol-000e
  TAG   i-000e  instancexxx yyy

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371045] [NEW] there is no sample configurations for glance_store

2014-09-18 Thread Yaguang Tang
Public bug reported:

There are a lot of options defined under the glance_store section,
but there are no sample configuration entries for them in the
/etc/glance-api.conf.sample file.
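
For reference, a hedged example of the kind of [glance_store] sample entries
that are missing (option names as defined by the glance_store library; values
are illustrative):

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/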

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371045

Title:
  there is no sample configurations for glance_store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  There are a lot of options defined under the glance_store section,
  but there are no sample configuration entries for them in the
  /etc/glance-api.conf.sample file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370763] Re: Not responding to grenade 'kill' command

2014-09-18 Thread Sean Dague
This isn't a nova issue, this is an issue with screen -X stuff sometimes
dropping commands. There is work to change that in devstack / grenade.

** Project changed: nova => grenade

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370763

Title:
  Not responding to grenade 'kill' command

Status in devstack - openstack dev environments:
  Confirmed
Status in Grenade - OpenStack upgrade testing:
  New

Bug description:
  From a few recent gate failures it seems like nova-compute is not
  responding to being killed by grenade.

  For example:

  http://logs.openstack.org/66/121966/1/check/check-grenade-
  dsvm/c37f70d/logs/grenade.sh.txt.gz#_2014-09-17_00_16_47_262

  The nova-compute process was requested to stop at 16_47,

  It very much appears that nova-compute was idle at and before this
  time.

  http://logs.openstack.org/66/121966/1/check/check-grenade-
  dsvm/c37f70d/logs/old/screen-n-cpu.txt.gz#_2014-09-17_00_16_28_611

  Notice the last log there is from 16_28, nearly 20 seconds before
  shutdown was requested.

  About 7 seconds (at 16_53) after being requested to stop via the
  command at 16_47 it appears like grenade still found nova-compute
  running.

  http://logs.openstack.org/66/121966/1/check/check-grenade-
  dsvm/c37f70d/logs/grenade.sh.txt.gz#_2014-09-17_00_16_53_671

  This then causes grenade to fail proceeding forward, and therefore
  stops the job from declaring success...

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1370763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1007116] Re: nova should support showing 'DELETED' servers

2014-09-18 Thread Phil Day
I don't think this is a valid bug. Admins can already see deleted
instances by including deleted=True in the search options. Non-admins
shouldn't be able to see deleted instances.
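
For example (hedged; requires admin credentials and a python-novaclient that
supports the flag):

# As an admin, deleted instances are already visible via the search option:
$ nova list --deleted

# or directly against the API:
GET /v2/<tenant_id>/servers/detail?deleted=True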

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1007116

Title:
  nova should support showing 'DELETED' servers

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Nova supports showing (HTTP GET) on deleted images and flavors. Trying
  to show a deleted server currently fails however:

  
  [root@nova1 ~]# nova delete 4e38efa4-6980-44b0-8774-3a28de88e22f
  [root@nova1 ~]# nova show 4e38efa4-6980-44b0-8774-3a28de88e22f
  ERROR: No server with a name or ID of '4e38efa4-6980-44b0-8774-3a28de88e22f' 
exists.

  
  It would seem for consistency that we should follow the model we do with 
images and flavors and allow 'DELETED' records that still exist in the database 
to be shown. See example of showing deleted image below:

  
  [root@nova1 ~]# nova image-show 01705a39-4deb-402c-a651-e6e8bbef83ef
  +--+--+
  | Property |Value |
  +--+--+
  | created  | 2012-05-31T20:39:36Z |
  | id   | 01705a39-4deb-402c-a651-e6e8bbef83ef |
  | minDisk  | 0|
  | minRam   | 0|
  | name | foo  |
  | progress | 0|
  | status   | DELETED  |
  | updated  | 2012-05-31T20:39:54Z |
  +--+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1007116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370769] Re: Ensure all metadata definition code uses six.iteritems

2014-09-18 Thread Pawel Skowron
** Changed in: glance
 Assignee: (unassigned) => Pawel Skowron (pawel-skowron)

** Changed in: python-glanceclient
 Assignee: (unassigned) => Pawel Skowron (pawel-skowron)

** Changed in: glance
   Status: New => In Progress

** Changed in: python-glanceclient
   Status: New => Invalid

** Changed in: python-glanceclient
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1370769

Title:
  Ensure all metadata definition code uses six.iteritems

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Python client library for Glance:
  In Progress

Bug description:
  Similar to https://review.openstack.org/#/c/95467/

  According to https://wiki.openstack.org/wiki/Python3 dict.iteritems()
  should be replaced with six.iteritems(dict).

  All metadata definition code added should ensure that six.iteritems is
  used.
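
  A small illustration of the replacement described above:

  import six

  metadata = {'os_distro': 'ubuntu', 'hw_vif_model': 'virtio'}

  # Python 2 only:  metadata.iteritems()
  # Python 2 and 3 compatible:
  for key, value in six.iteritems(metadata):
      print(key, value)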

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1370769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371072] [NEW] xenapi: should clean up old snapshots before creating a new one

2014-09-18 Thread John Garbutt
Public bug reported:

When nova-compute gets forcibly restarted, or fails, we get left-over
snapshots.

We have some clean up code for after nova-compute comes back up, but it
would be good to clean up older snapshots, and generally try to minimize
the size of the snapshot that goes to glance.

** Affects: nova
 Importance: Low
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: xenserver

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => John Garbutt (johngarbutt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371072

Title:
  xenapi: should clean up old snapshots before creating a new one

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When nova-compute gets forcibly restarted, or fails, we get left-over
  snapshots.

  We have some clean up code for after nova-compute comes back up, but
  it would be good to clean up older snapshots, and generally try to
  minimize the size of the snapshot that goes to glance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369945] Re: libvirt: libvirt reports even single cell NUMA topologies

2014-09-18 Thread Nikola Đipanov
Now that https://bugs.launchpad.net/nova/+bug/1369984 is fixed, we can
mark this as invalid.

** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369945

Title:
  libvirt: libvirt reports even single cell NUMA topologies

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Libvirt reports even single NUMA nodes in its hypervisor capabilities
  (which we use to figure out if a compute host is a NUMA host). This is
  technically correct, but in Nova we assume that to mean no NUMA
  capabilities when scheduling instances.

  Right now we just pass what we get from libvirt as is to the resource
  tracker, but we need to make sure that single NUMA node hypervisors
  are reported back to the resource tracker as non-NUMA.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371084] [NEW] nova-scheduler high cpu usage

2014-09-18 Thread Szymon
Public bug reported:

For no particular reason nova-scheduler CPU utilization can jump to
100%. I was unable to find any pattern or reason why this is happening.
We have a small cluster: 1 cloud controller and 7 node controllers. Apart
from the high CPU usage nothing bad happens; we're able to create/delete
instances, and after a nova-scheduler restart everything goes back to a
normal state.

I was able to strace 2 processes while nova-scheduler was using 100%
cpu.

The 1st process is in a loop and prints:
122014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 12.322771 sec\0

The 2nd process is in a loop as well, repeating:
epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, 
u64=40095890530107395}}) = 0
epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
epoll_ctl(6, EPOLL_CTL_DEL, 3, 
{EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, 
u64=23083065509183428}}) = 0
sendto(3, 142014-09-16 00:44:19.272 5673 INFO 
oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 
10.3.128.254:5672\0, 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not 
connected)

Other processes don't have any issues with the AMQP server; only nova-
scheduler does.

We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371084

Title:
  nova-scheduler high cpu usage

Status in OpenStack Compute (Nova):
  New

Bug description:
  For no particular reason nova-scheduler CPU utilization can jump to
  100%. I was unable to find any pattern or reason why this is
  happening. We have a small cluster: 1 cloud controller and 7 node
  controllers. Apart from the high CPU usage nothing bad happens; we're
  able to create/delete instances, and after a nova-scheduler restart
  everything goes back to a normal state.

  I was able to strace 2 processes while nova-scheduler was using 100%
  cpu.

  The 1st process is in a loop and prints:
  122014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall 
[-] task run outlasted interval by 12.322771 sec\0

  The 2nd process is in a loop as well, repeating:
  epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, 
u64=40095890530107395}}) = 0
  epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
  epoll_ctl(6, EPOLL_CTL_DEL, 3, 
{EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, 
u64=23083065509183428}}) = 0
  sendto(3, 142014-09-16 00:44:19.272 5673 INFO 
oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 
10.3.128.254:5672\0, 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not 
connected)

  Other processes don't have any issues with the AMQP server; only
  nova-scheduler does.

  We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262595] Re: getting flavor by id prefers deleted flavors

2014-09-18 Thread Sean Dague
*** This bug is a duplicate of bug 1246017 ***
https://bugs.launchpad.net/bugs/1246017

** This bug is no longer a duplicate of bug 1153926
   flavor show shouldn't read deleted flavors.
** This bug has been marked a duplicate of bug 1246017
   Flavors v2,v3/flavors/​{flavor_id}​ API should not return deleted flavor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262595

Title:
  getting flavor by id prefers deleted flavors

Status in OpenStack Compute (Nova):
  New

Bug description:
  After changing my m1.tiny flavor from disk_gb==1 to disk_gb==0 I
  noticed something strange:

  $ nova flavor-list
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1  | m1.tiny | 512       | 0    | 0         |      | 1     | 1.0         | True      |

  Here the correct flavor is being shown. However, when looking at it
  in detail:

  $ nova flavor-show 1

  ++-+
  | Property   | Value   |
  ++-+
  | name   | m1.tiny |
  | ram| 512 |
  | OS-FLV-DISABLED:disabled   | False   |
  | vcpus  | 1   |
  | extra_specs| {}  |
  | swap   | |
  | os-flavor-access:is_public | True|
  | rxtx_factor| 1.0 |
  | OS-FLV-EXT-DATA:ephemeral  | 0   |
  | disk   | 1   |
  | id | 1   |
  ++-+

  
  disk_gb (here shown as disk) is shown as 1. This is because the database
  query simply returns the .first() match, which in my case is returning the
  deleted instance_type.
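
  A minimal sketch of the kind of guard that avoids the deleted row; the
  in-memory list below is a made-up stand-in for the instance_types table,
  not Nova's actual DB API:

    flavors = [
        {'flavorid': '1', 'name': 'm1.tiny', 'disk': 1, 'deleted': True},
        {'flavorid': '1', 'name': 'm1.tiny', 'disk': 0, 'deleted': False},
    ]

    def flavor_show(flavorid):
        # Skip deleted rows so the active record wins, instead of blindly
        # taking the first match.
        return next(f for f in flavors
                    if f['flavorid'] == flavorid and not f['deleted'])

    print(flavor_show('1'))  # -> the record with disk == 0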

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280534] Re: Incorrect answer of 'flavor show' if deleted flavors with the same flavor_id exist

2014-09-18 Thread Sean Dague
*** This bug is a duplicate of bug 1246017 ***
https://bugs.launchpad.net/bugs/1246017

** This bug is no longer a duplicate of bug 1153926
   flavor show shouldn't read deleted flavors.
** This bug has been marked a duplicate of bug 1246017
   Flavors v2,v3/flavors/​{flavor_id}​ API should not return deleted flavor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280534

Title:
  Incorrect answer of 'flavor show' if deleted flavors with the same
  flavor_id exist

Status in OpenStack Compute (Nova):
  New

Bug description:
  'Flavor show' can't give the right answer if deleted flavors with the
  same flavor_id exist in the db.

  The result of 'flavor list' is right; it will show the new flavor's
  contents.

  But when you execute 'flavor show flavor_id', the output only shows the
  old deleted record; the new info is hidden and won't be given.

  That means that once a flavor_id has been assigned, 'flavor show' will
  always show that record, even if you delete it and create a new one with
  the same flavor_id.

  Moreover, if you have several deleted flavors with the same flavor_id,
  the result will only show the first deleted record.

  
  More test information can be found here:
  http://paste.openstack.org/show/65716/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371082] [NEW] nova-scheduler high cpu usage

2014-09-18 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

For no particular reason nova-scheduler CPU utilization can jump to
100%. I was unable to find any pattern or reason why this is happening.
We have a small cluster: 1 cloud controller and 7 node controllers. Apart
from the high CPU usage nothing bad happens; we're able to create/delete
instances, and after a nova-scheduler restart everything goes back to a
normal state.

I was able to strace 2 processes while nova-scheduler was using 100%
cpu.

The 1st process is in a loop and prints:
122014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 12.322771 sec\0

The 2nd process is in a loop as well, repeating:
epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, 
u64=40095890530107395}}) = 0
epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
epoll_ctl(6, EPOLL_CTL_DEL, 3, 
{EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, 
u64=23083065509183428}}) = 0
sendto(3, 142014-09-16 00:44:19.272 5673 INFO 
oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 
10.3.128.254:5672\0, 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not 
connected)

Other processes don't have any issues with the AMQP server; only nova-
scheduler does.

We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
nova-scheduler high cpu usage
https://bugs.launchpad.net/bugs/1371082
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298509] Re: nova server-group-delete allows deleting server group with members

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Opinion

** Changed in: nova
   Importance: Medium = Wishlist

** Changed in: nova
 Assignee: Balazs Gibizer (balazs-gibizer) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298509

Title:
  nova server-group-delete allows deleting server group with members

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently nova will let you do this:

  nova server-group-create --policy anti-affinity antiaffinitygroup
  nova boot --flavor=1 --image=cirros-0.3.1-x86_64-uec --hint 
group=group_uuid cirros0
  nova boot --flavor=1 --image=cirros-0.3.1-x86_64-uec --hint 
group=group_uuid cirros1
  nova boot --flavor=1 --image=cirros-0.3.1-x86_64-uec --hint 
group=group_uuid cirros2
  nova server-group-delete group_uuid

  Given that a server group is designed to logically group servers
  together, I don't think it makes sense to allow nova to delete a
  server group that currently has undeleted members in it.
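
  As a sketch of the guard being suggested, assuming a simple member list
  with a 'deleted' flag (illustrative only, not Nova's actual API code):

    class ServerGroupInUse(Exception):
        pass

    def delete_server_group(group_uuid, members):
        # Refuse to delete a group that still has live (undeleted) members.
        live = [m for m in members if not m.get('deleted')]
        if live:
            raise ServerGroupInUse('group %s still has %d member(s)'
                                   % (group_uuid, len(live)))
        print('deleting group %s' % group_uuid)

    delete_server_group('group_uuid', [])                      # allowed
    # delete_server_group('group_uuid', [{'deleted': False}])  # would raise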

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303481] Re: nova.scheduler.host_manager should cache CONF values

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Opinion

** Changed in: nova
 Assignee: Gary Kotton (garyk) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303481

Title:
  nova.scheduler.host_manager should cache CONF values

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Performance improvement is possible by caching the values from the
  global CONF object that are repeatedly accessed, including
  CONF.scheduler_default_filters (see patch here:
  https://review.openstack.org/#/c/85594/3/nova/scheduler/host_manager.py)
  and CONF.scheduler_weight_classes.

  Avoiding CONF lookups where possible is just good practice.
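
  A minimal sketch of the caching idea; the plain dict standing in for the
  global CONF object and the class below are illustrative, not Nova's
  actual host_manager code:

    CONF = {
        'scheduler_default_filters': ['AvailabilityZoneFilter', 'RamFilter',
                                      'ComputeFilter'],
        'scheduler_weight_classes': ['nova.scheduler.weights.all_weighers'],
    }

    class HostManager(object):
        def __init__(self):
            # Read the options once and keep local copies so the scheduling
            # hot path avoids repeated lookups on the global CONF object.
            self.default_filters = list(CONF['scheduler_default_filters'])
            self.weight_classes = list(CONF['scheduler_weight_classes'])

        def filter_names(self):
            return self.default_filters

    print(HostManager().filter_names())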

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190635] Re: default scheduler fails when CPUs are the limiting resource

2014-09-18 Thread Sean Dague
The default filters are dramatically different now. I think we should
take this to Invalid because the new defaults might cover this. Also
there have been a bunch of recent patches about clearing up the error
messages.

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
 Assignee: Julia Varlamova (jvarlamova) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1190635

Title:
  default scheduler fails when CPUs are the limiting resource

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
http://docs.openstack.org/folsom/openstack-compute/admin/content/ch_scheduling.html
  (likewise, 
http://docs.openstack.org/grizzly/openstack-compute/admin/content/ch_scheduling.html)

  Indicate that the default scheduler uses:
  scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
  but no CoreFilter is in the default list.

  Accordingly, if the compute node with the least amount of memory
  already has all available (possibly overcommitted) vCPUs in use, then
  the scheduler will happily launch the next instance on a node that
  CANNOT run the instance, and nova show will somewhat unclearly report:

  | fault  | {u'message': u'NoValidHost', u'code': 500,
  u'created': u'2013-06-13T14:40:46Z'} |

  At the very minimum, NO scheduler should be considered functional if
  it schedules an instance on a node that refuses to run it, while there
  are nodes that would be able to.

  This is running on Ubuntu Precise with Folsom deployed from the cloud
  archive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1190635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 962549] Re: Deleting a flavor can cause servers/detail to return HTTP 400

2014-09-18 Thread Sean Dague
This is fixed with the system metadata flavor saving

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
 Assignee: Sirisha Guduru (guduru-sirisha) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/962549

Title:
  Deleting a flavor can cause servers/detail to return HTTP 400

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Steps to reproduce on essex-4:

  1. Boot an instance using flavor ID 1
  2. After the instance is active, delete the flavor (nova-manage flavor delete 
m1.tiny)
  3. Make a call to /servers/details (ie nova list)

  # nova list
  The server could not comply with the request since it is either malformed or 
otherwise incorrect. (HTTP 400)

  from nova-api.log:

  2012-03-22 14:56:40 INFO nova.api.openstack.wsgi 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] GET 
http://10.10.10.90:8774/v1.1/test/servers/detail
  2012-03-22 14:56:40 DEBUG nova.api.openstack.wsgi 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] Unrecognized Content-Type 
provided in request from (pid=23300
  ) get_body /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:694
  2012-03-22 14:56:40 DEBUG nova.compute.api 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] Searching by: {'deleted': 
False, 'project_id': 'test'} from (pid=233
  00) get_all /usr/lib/python2.6/site-packages/nova/compute/api.py:1009
  2012-03-22 14:56:40 DEBUG nova.api.openstack.common 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] Generated ACTIVE from 
vm_state=active task_state=None. from
   (pid=23300) status_from_state 
/usr/lib/python2.6/site-packages/nova/api/openstack/common.py:98
  2012-03-22 14:56:40 ERROR nova.api.openstack.wsgi 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] Exception handling 
resource: 'NoneType' object is unsubscript
  able
  (nova.api.openstack.wsgi): TRACE: Traceback (most recent call last):
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 848, in 
_process_stack
  (nova.api.openstack.wsgi): TRACE: action_result = self.dispatch(meth, 
request, action_args)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 924, in 
dispatch
  (nova.api.openstack.wsgi): TRACE: return method(req=request, 
**action_args)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/servers.py, line 
382, in detail
  (nova.api.openstack.wsgi): TRACE: servers = self._get_servers(req, 
is_detail=True)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/servers.py, line 
465, in _get_servers
  (nova.api.openstack.wsgi): TRACE: return self._view_builder.detail(req, 
limited_list)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/views/servers.py, 
line 123, in detail
  (nova.api.openstack.wsgi): TRACE: return self._list_view(self.show, 
request, instances)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/views/servers.py, 
line 127, in _list_view
  (nova.api.openstack.wsgi): TRACE: server_list = [func(request, 
server)[server] for server in servers]
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/views/servers.py, 
line 61, in wrapped
  (nova.api.openstack.wsgi): TRACE: return func(self, request, instance)
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/views/servers.py, 
line 97, in show
  (nova.api.openstack.wsgi): TRACE: flavor: self._get_flavor(request, 
instance),
  (nova.api.openstack.wsgi): TRACE:   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/views/servers.py, 
line 172, in _get_flavor
  (nova.api.openstack.wsgi): TRACE: flavor_id = 
instance[instance_type][flavorid]
  (nova.api.openstack.wsgi): TRACE: TypeError: 'NoneType' object is 
unsubscriptable
  (nova.api.openstack.wsgi): TRACE: 
  2012-03-22 14:56:40 INFO nova.api.openstack.wsgi 
[req-43268d0b-0b3f-451f-b36f-bfcb49b68ff9 ryan test] 
http://10.10.10.90:8774/v1.1/test/servers/detail returned with
   HTTP 400

  Either users should be blocked from deleting a flavor until all
  instances booted from it terminate, or this should be more robust (I'd
  propose the latter).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/962549/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371082] Re: nova-scheduler high cpu usage

2014-09-18 Thread vishal yadav
*** This bug is a duplicate of bug 1371084 ***
https://bugs.launchpad.net/bugs/1371084

** Project changed: cinder = nova

** Tags added: nova-scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371082

Title:
  nova-scheduler high cpu usage

Status in OpenStack Compute (Nova):
  New

Bug description:
  For no particular reason nova-scheduler CPU utilization can jump to
  100%. I was unable to find any pattern or reason why this is
  happening. We have a small cluster: 1 cloud controller and 7 node
  controllers. Apart from the high CPU usage nothing bad happens; we're
  able to create/delete instances, and after a nova-scheduler restart
  everything goes back to a normal state.

  I was able to strace 2 processes while nova-scheduler was using 100%
  cpu.

  The 1st process is in a loop and prints:
  122014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall 
[-] task run outlasted interval by 12.322771 sec\0

  The 2nd process is in a loop as well, repeating:
  epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, 
u64=40095890530107395}}) = 0
  epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
  epoll_ctl(6, EPOLL_CTL_DEL, 3, 
{EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, 
u64=23083065509183428}}) = 0
  sendto(3, 142014-09-16 00:44:19.272 5673 INFO 
oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 
10.3.128.254:5672\0, 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not 
connected)

  Other processes don't have any issues with the AMQP server; only
  nova-scheduler does.

  We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1186065] Re: Instance status always being building after memory shortage error happen

2014-09-18 Thread Sean Dague
long incomplete bug

** Tags removed: havana-backport-potential

** Summary changed:

- Instance status always being building after memory shortage error happen
+ HyperV: Instance status always being building after memory shortage error 
happen

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1186065

Title:
  HyperV: Instance status always being building after memory shortage
  error happen

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  TEST STEPS:
  1)  Try to use an automation tool to deploy 600 VMs. (Including resize
      and delete)
      (Using flavors of m1.small)
  2)  When we had deployed 427 VMs, the compute node ran short of memory.
  3)  A timeout occurred.

  Expected result: Once the memory shortage happens, the deployment should
  be stopped and openstack should show the ERROR status to the user.
  Actual result: The status of the deployment is still BUILD and we can
  find the error message in the compute.log of the compute node.

  We can find the below message about the memory shortage in compute.log.

  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops HyperVException: 
WMI job failed with status 10. Error details: 'instance-02aa' could not 
initialize.
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops Not enough memory 
in the system to start the virtual machine instance-02aa. - 
'instance-02aa' could not initialize. (Virtual machine ID 
BAEB8C16-DBE5-488D-B55C-59543A8F0885)
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops Not enough memory 
in the system to start the virtual machine instance-02aa with ram size 2048 
megabytes. (Virtual machine ID BAEB8C16-DBE5-488D-B55C-59543A8F0885) - Error 
code: 32778
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops
  2013-05-29 01:07:20.289 ERROR nova.virt.hyperv.vmops 
[req-e2855532-03eb-484a-9b53-4c7a8cb37795 a6ab9c2b52554567b2c0281636398bfc 
aef9ae3aeace49a080d22e46295834b4] Failed to change vm state of 
instance-02aa to 2
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops Traceback (most 
recent call last):
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops   File C:\Program 
Files (x86)\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 184, in spawn
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops 
self.power_on(instance)
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops   File C:\Program 
Files (x86)\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 369, in 
power_on
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops 
constants.HYPERV_VM_STATE_ENABLED)
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops   File C:\Program 
Files (x86)\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py, line 380, in 
_set_vm_state
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops raise 
vmutils.HyperVException(msg)
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops HyperVException: 
Failed to change vm state of instance-02aa to 2
  2013-05-29 01:07:20.289 5600 TRACE nova.virt.hyperv.vmops
  2013-05-29 01:07:20.289 INFO nova.virt.hyperv.vmops 
[req-e2855532-03eb-484a-9b53-4c7a8cb37795 a6ab9c2b52554567b2c0281636398bfc 
aef9ae3aeace49a080d22e46295834b4] Got request to destroy instance: 
instance-02aa
  2013-05-29 01:07:20.352 DEBUG nova.virt.hyperv.vmops 
[req-e2855532-03eb-484a-9b53-4c7a8cb37795 a6ab9c2b52554567b2c0281636398bfc 
aef9ae3aeace49a080d22e46295834b4] [instance: 
f82a4a9d-48a7-4361-8dbd-c906ca94b008] Power off instance power_off C:\Program 
Files (x86)\Hyper-V 
Agent\Python27\lib\site-packages\nova\virt\hyperv\vmops.py:361

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1186065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1238393] Re: Xenapi glance plugin should not try uploading when the auth token in invalid

2014-09-18 Thread Sean Dague
There is actually an ML thread now about the general architecture issue
here. It's way more complicated than a simple bug and probably means a big
change to the keystone auth protocol.

** Changed in: nova
   Status: Incomplete = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

** Changed in: nova
 Assignee: Sridevi Koushik (sridevik) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238393

Title:
  Xenapi glance plugin should not try uploading when the auth token in
  invalid

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The glance plugin tries to upload even when the auth token is invalid.
  In nova/image/glance, the X-Auth-Token is taken from the context and
  passed on to upload. In the upload_tarball method, the plugin starts to
  put the headers and upload the image, and only after all the chunks are
  uploaded does it receive an Unauthorized response.

  Suggested fix: have a HEAD call made with the X-Auth-Token. If that
  returns a 401, then abandon the upload process. Otherwise, continue
  with the upload.
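
  A minimal sketch of that pre-check using only the standard library; the
  host, port and v1 URL layout are assumptions, and this is not the actual
  XenAPI plugin code:

    import http.client

    def token_still_valid(glance_host, glance_port, image_id, auth_token):
        # Probe the image endpoint with a cheap HEAD request before
        # streaming any chunks; a 401 means the token is already invalid.
        conn = http.client.HTTPConnection(glance_host, glance_port)
        try:
            conn.request('HEAD', '/v1/images/%s' % image_id,
                         headers={'X-Auth-Token': auth_token})
            resp = conn.getresponse()
            resp.read()
            return resp.status != 401
        finally:
            conn.close()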

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268895] Re: Possible infinity loop on sqlachemy.api._retry_on_deadlock

2014-09-18 Thread Sean Dague
Closing as Opinion.

** Changed in: nova
   Status: Incomplete = Opinion

** Changed in: nova
 Assignee: sahid (sahid-ferdjaoui) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268895

Title:
  Possible infinity loop on sqlachemy.api._retry_on_deadlock

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  We need to add max attempts logic to retry_on_deadlock

  Retrying on deadlock can continue to infinity and produce
  a timeout in another part of the code, making it harder
  to debug the real problem.

  
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n162
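
  A minimal sketch of a bounded retry decorator; DBDeadlock below is a
  placeholder for the real deadlock exception, and this is not the actual
  sqlalchemy.api code:

    import functools
    import time

    class DBDeadlock(Exception):
        """Placeholder for the real deadlock exception."""

    def retry_on_deadlock(max_attempts=5, delay=0.5):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                attempt = 0
                while True:
                    try:
                        return func(*args, **kwargs)
                    except DBDeadlock:
                        attempt += 1
                        if attempt >= max_attempts:
                            # Give up instead of retrying forever.
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator

    @retry_on_deadlock(max_attempts=3)
    def service_update():
        return 'updated'

    print(service_update())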

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240416] Re: nova client error when deleting secgroup rules

2014-09-18 Thread Sean Dague
** Project changed: nova = python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240416

Title:
  nova client error when deleting secgroup rules

Status in Python client library for Nova:
  Incomplete

Bug description:
  When you have a group rule added to a security group that has the same
  port instructions as a non-group rule, the NovaClient seemingly gets
  confused when you try to delete the non-group rule and spits out an
  error:

  example:

  $ nova secgroup-list-rules example
  +-------------+-----------+---------+-----------+--------------+
  | IP Protocol | From Port | To Port | IP Range  | Source Group |
  +-------------+-----------+---------+-----------+--------------+
  | tcp         | 22        | 22      |           | test         |
  | tcp         | 22        | 22      | 0.0.0.0/0 |              |
  +-------------+-----------+---------+-----------+--------------+

  $ nova secgroup-delete-rule example tcp 22 22 0.0.0.0/0
  ERROR: 'cidr'

  Attempting to delete this rule using the Neutron Client works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1240416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240317] Re: cant resize alter live migrate(block_migrate)

2014-09-18 Thread Sean Dague
Long incomplete bug, please reopen if you can provide the new info.

** Changed in: nova
 Assignee: jiangguoliang (jglyq1982) = (unassigned)

** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240317

Title:
  cant resize alter live migrate(block_migrate)

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I try to resize my instance after live migrate (block_migrate), I
  found some error messages in nova-compute.log on my compute node.
  Error message:
  2013-10-15 18:52:09.276 ERROR nova.compute.manager 
[req-8b4330c7-1ea6-404a-ad0d-4f064e6b9643 None None] [instance: 
28b509bb-dfe9-4793-a9f8-b121ab16aa6c] t3.uuzu.idc is not a valid node managed 
by this compute host.. Setting instance vm_state to ERROR
  2013-10-15 18:52:09.445 ERROR nova.openstack.common.rpc.amqp 
[req-8b4330c7-1ea6-404a-ad0d-4f064e6b9643 None None] Exception during message 
handling
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 430, 
in _process_data
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 133, in dispatch
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 
temp_level, payload)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 94, in wrapped
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 260, in 
decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 237, in 
decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 
self.gen.next()
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 224, in 
decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2050, in 
confirm_resize
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp rt = 
self._get_resource_tracker(migration['source_node'])
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 361, in 
_get_resource_tracker
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp raise 
exception.NovaException(msg)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 
NovaException: t3.uuzu.idc is not a valid node managed by this compute host.
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp 

  I found the instance is on the new host t3.uuzu.idc, but the status is
  ERROR. The version is Grizzly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371084] Re: nova-scheduler high cpu usage

2014-09-18 Thread Sean Dague
This looks like some sort of issue with the AMQP connection to rabbit. As
that's all hidden behind oslo.messaging, I think we need to
figure out what's wrong there.

** Tags added: oslo

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371084

Title:
  nova-scheduler high cpu usage

Status in OpenStack Compute (Nova):
  Incomplete
Status in Messaging API for OpenStack:
  New

Bug description:
  For no particular reason nova-scheduler CPU utilization can jump to
  100%. I was unable to find any pattern or reason why this is
  happening. We have a small cluster: 1 cloud controller and 7 node
  controllers. Apart from the high CPU usage nothing bad happens; we're
  able to create/delete instances, and after a nova-scheduler restart
  everything goes back to a normal state.

  I was able to strace 2 processes while nova-scheduler was using 100%
  cpu.

  The 1st process is in a loop and prints:
  122014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall 
[-] task run outlasted interval by 12.322771 sec\0

  The 2nd process is in a loop as well, repeating:
  epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, 
u64=40095890530107395}}) = 0
  epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
  epoll_ctl(6, EPOLL_CTL_DEL, 3, 
{EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, 
u64=23083065509183428}}) = 0
  sendto(3, 142014-09-16 00:44:19.272 5673 INFO 
oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 
10.3.128.254:5672\0, 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not 
connected)

  Other processes don't have any issues with the AMQP server; only
  nova-scheduler does.

  We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327028] Re: add availability_zone for host show

2014-09-18 Thread Sean Dague
Long incomplete bug

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
 Assignee: tinytmy (tangmeiyan77) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327028

Title:
  add availability_zone for host show

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When we get a host by hostname, the returned content does not contain
  availability_zone.
  I think this needs to be included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317723] Re: There should be a log if the admin password is not set

2014-09-18 Thread Sean Dague
I don't know why we should be logging unsupported operations.

** Changed in: nova
 Assignee: Wei T (nuaafe) = (unassigned)

** Changed in: nova
   Status: Incomplete = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317723

Title:
  There should be a log if the admin password is not set

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  There is a configurable option, libvirt_inject_password, that controls
  whether the libvirt driver injects the admin password or not.
  But if the value is set to False, the password will not be injected and
  the boot will succeed without any warning.

  Also, in horizon, there isn't any configurable value in the config file to
  enable/disable setting the root password during launch instance.

  Without any log or warning in nova-compute/api, horizon makes the user
  think it worked as expected, so I believe it's a bug; fixing it needs
  work both in horizon and nova.
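
  A minimal sketch of the kind of warning being asked for; the function and
  flag handling below are illustrative, not the actual libvirt driver code:

    import logging

    LOG = logging.getLogger(__name__)

    def maybe_inject_admin_password(admin_password, inject_password_enabled):
        if admin_password and not inject_password_enabled:
            # Surface the silent no-op described above instead of letting
            # the user believe the password was set.
            LOG.warning('An admin password was supplied but password '
                        'injection is disabled; the password will not be '
                        'set in the guest.')
            return False
        return bool(admin_password)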

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371115] [NEW] fix full synchronization between neutron and ODL

2014-09-18 Thread Cédric OLLIVIER
Public bug reported:

The ODL MD doesn't handle pending operations in sync_full().
This induces a desynchronization when an update or a delete operation
triggers the full sync.
To reproduce this desynchronization:
  - create a network
  - restart neutron
  - delete or update this network

When neutron and ODL aren't synchronized, all the update and delete
operations are also lost (from the ODL point of view).
To reproduce the test case:
  - create a network
  - set wrong credentials in ml2_conf_odl.ini and restart neutron
  - delete or update this network
  - set the correct credentials in ml2_conf_odl.ini and restart neutron
  - trigger the full synchronization via any operation

The ODL MD will also send POST requests with empty data (i.e. {'networks':
[]}) to ODL if all the resources (UUIDs) already exist in ODL.

** Affects: neutron
 Importance: Undecided
 Assignee: Cédric OLLIVIER (m.col)
 Status: New


** Tags: icehouse-backport-potential opendaylight

** Changed in: neutron
 Assignee: (unassigned) = Cédric OLLIVIER (m.col)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371115

Title:
  fix full synchronization between neutron and ODL

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The ODL MD doesn't handle pending operations in sync_full().
  This induces a desynchronization when an update or a delete operation
  triggers the full sync.
  To reproduce this desynchronization:
- create a network
- restart neutron
- delete or update this network

  When neutron and ODL aren't synchronized, all the update and delete
  operations are also lost (from the ODL point of view).
  To reproduce the test case:
- create a network
- set wrong credentials in ml2_conf_odl.ini and restart neutron
- delete or update this network
- set the correct credentials in ml2_conf_odl.ini and restart neutron
- trigger the full synchronization via any operation

  The ODL MD will also send POST requests with empty data (i.e. {'networks':
  []}) to ODL if all the resources (UUIDs) already exist in ODL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371116] [NEW] AngularJS code is evaluated twice after modal form submission

2014-09-18 Thread Kamil Rykowski
Public bug reported:

When we send a form which is placed inside a modal box, the proper AJAX
request is made. If it succeeds, no redirection is going to be made, and
the returned HTML response contains any angular syntax, it will be
evaluated twice. It means, e.g., that every controller used inside the
returned response will be initialized twice, which leads to serious
problems in the angular logic.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371116

Title:
  AngularJS code is evaluated twice after modal form submission

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When we send a form which is placed inside a modal box, the proper AJAX
  request is made. If it succeeds, no redirection is going to be made, and
  the returned HTML response contains any angular syntax, it will be
  evaluated twice. It means, e.g., that every controller used inside the
  returned response will be initialized twice, which leads to serious
  problems in the angular logic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371121] [NEW] Glance image-create stuck in SAVING state and when deleted image remains on the filesystem

2014-09-18 Thread semy
Public bug reported:

I'm using Icehouse (Ubuntu 14.04) and a directory as a backend is
configured in Glance to store images.

When I perform image-create and it takes a long period of time, glance
ends up with a NotAuthenticated: Authentication required exception and
Failed to upload image in the logs. Later glance reports Unable to kill
image because of the very same NotAuthenticated error and because the
previous action failed, and generally it makes the image get stuck in
the SAVING state.

More magic happens after deleting that broken image. Glance reports that
the image was deleted successfully and there is no error in the logs, but
it turns out only the data from the DB was removed, while the image itself
(the file in the directory) remains untouched. It's no longer visible in
glance image-list, but it is still in glance's images directory.

The NotAuthenticated error is probably caused by Keystone tokens. When I
increased the keystone token expiration time, it looked like the amount of
time needed to cause the very same situation also increased. However, when
I was uploading multiple images during the same session in Horizon, I ended
up with all image-creates successful except the last one, which ran during
the time when the token was due to expire, so it is still an issue.

Nevertheless, Glance should do the whole cleanup properly even if the
image is in e.g. the SAVING state, and it should look for a file matching
that broken image and purge it from the disk, if such a file exists.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371121

Title:
  Glance image-create stuck in SAVING state and when deleted image
  remains on the filesystem

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I'm using Icehouse (Ubuntu 14.04) and a directory as a backend is
  configured in Glance to store images.

  When I perform image-create and it takes a long period of time, glance
  ends up with a NotAuthenticated: Authentication required exception and
  Failed to upload image in the logs. Later glance reports Unable to kill
  image because of the very same NotAuthenticated error and because the
  previous action failed, and generally it makes the image get stuck in
  the SAVING state.

  More magic happens after deleting that broken image. Glance reports that
  the image was deleted successfully and there is no error in the logs, but
  it turns out only the data from the DB was removed, while the image itself
  (the file in the directory) remains untouched. It's no longer visible in
  glance image-list, but it is still in glance's images directory.

  The NotAuthenticated error is probably caused by Keystone tokens. When I
  increased the keystone token expiration time, it looked like the amount of
  time needed to cause the very same situation also increased. However, when
  I was uploading multiple images during the same session in Horizon, I
  ended up with all image-creates successful except the last one, which ran
  during the time when the token was due to expire, so it is still an issue.

  Nevertheless, Glance should do the whole cleanup properly even if the
  image is in e.g. the SAVING state, and it should look for a file matching
  that broken image and purge it from the disk, if such a file exists.
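
  A minimal sketch of the kind of orphan scan being asked for, assuming the
  filesystem store keeps one file per image named by its UUID (the paths and
  the source of known_image_ids are placeholders, not Glance's actual code):

    import os

    def purge_orphan_image_files(store_dir, known_image_ids):
        # Remove files in the filesystem store that no image record
        # references any more.
        removed = []
        for name in os.listdir(store_dir):
            path = os.path.join(store_dir, name)
            if os.path.isfile(path) and name not in known_image_ids:
                os.remove(path)
                removed.append(path)
        return removed

    # known_image_ids would normally come from the glance registry; here it
    # would just be a hard-coded set for illustration:
    # purge_orphan_image_files('/var/lib/glance/images', {'some-image-uuid'})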

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371118] [NEW] Image file stays in store if image has been deleted during upload

2014-09-18 Thread Mike Fedosin
Public bug reported:

When I create a new task in v2 to upload an image, it creates the image
record in the db, sets its status to 'saving' and then begins the upload.

If the image is deleted by the appropriate API call while its content is
still being uploaded, an exception is raised and it is not handled in
the API code. As a result, the uploaded image file stays in the storage
and clogs it.

File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 62, 
in _execute 
uri)
File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 95, 
in import_image
new_image = image_repo.get(image_id)
File /opt/stack/glance/glance/api/authorization.py, line 106, in get
image = self.image_repo.get(image_id)
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/api/policy.py, line 179, in get
return super(ImageRepoProxy, self).get(image_id)
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get 
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/db/__init__.py, line 72, in get raise 
exception.NotFound(msg)
NotFound: No image found with ID e2285448-a56f-45b1-9e6e-216d2b304967

This bug is very similar to
https://bugs.launchpad.net/glance/+bug/1188532, but it relates to the
task mechanism in v2.

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New

** Changed in: glance
 Assignee: (unassigned) = Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371118

Title:
  Image file stays in store if image has been deleted during upload

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When I create a new task in v2 to upload an image, it creates the
  image record in the db, sets its status to 'saving' and then begins
  the upload.

  If the image is deleted by the appropriate API call while its content
  is still being uploaded, an exception is raised and it is not handled
  in the API code. As a result, the uploaded image file stays in the
  storage and clogs it.

  File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 62, 
in _execute 
  uri)
  File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 95, 
in import_image
  new_image = image_repo.get(image_id)
  File /opt/stack/glance/glance/api/authorization.py, line 106, in get
  image = self.image_repo.get(image_id)
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/api/policy.py, line 179, in get
  return super(ImageRepoProxy, self).get(image_id)
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get 
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/db/__init__.py, line 72, in get raise 
exception.NotFound(msg)
  NotFound: No image found with ID e2285448-a56f-45b1-9e6e-216d2b304967

  This bug is very similar to
  https://bugs.launchpad.net/glance/+bug/1188532, but it relates to the
  task mechanism in v2.
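
  A minimal sketch of the missing handling; NotFound, FakeStore and FakeRepo
  below are placeholders, not the actual import_image task code:

    class NotFound(Exception):
        """Placeholder for glance's NotFound exception."""

    class FakeStore(object):
        def add(self, image_id, data):
            return 'file:///var/lib/glance/images/%s' % image_id

        def delete(self, location):
            print('removing orphaned data at %s' % location)

    class FakeRepo(object):
        def get(self, image_id):
            raise NotFound('No image found with ID %s' % image_id)

    def import_image(image_repo, store, image_id, data):
        location = store.add(image_id, data)
        try:
            image_repo.get(image_id)
        except NotFound:
            # The image was deleted while the data was being uploaded:
            # clean up the orphaned file instead of leaving it in the store.
            store.delete(location)
            raise

    # import_image(FakeRepo(), FakeStore(), 'some-image-uuid', b'...')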

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224972] Re: When createing a volume from an image - nova leaves the volume name empty

2014-09-18 Thread Duncan Thomas
Far from convinced cinder should make any assumptions about volume names
- it's a free text string, and an empty name is entirely valid.

** Changed in: cinder
   Status: In Progress = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224972

Title:
  When createing a volume from an image - nova leaves the volume name
  empty

Status in Cinder:
  Opinion
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When a block device with source=image, dest=volume is passed to nova
  instance boot, nova will instruct Cinder to create the volume; however,
  it will not set any name. It would be helpful to set a descriptive name
  so that the user knows where the volume came from.
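
  As a sketch of the kind of descriptive name nova could pass along when it
  asks Cinder for the volume (purely illustrative, not the actual block
  device mapping code):

    def boot_volume_name(instance_name, image_name):
        # e.g. 'web01-boot-from-cirros-0.3.1-x86_64'
        return '%s-boot-from-%s' % (instance_name, image_name)

    print(boot_volume_name('web01', 'cirros-0.3.1-x86_64'))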

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1224972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368372] Re: nova client help text for rescue

2014-09-18 Thread Michal Dulko
I guess this is fixed now:

nova help rescue
usage: nova rescue server

Reboots a server into rescue mode, which starts the machine from the initial
image, attaching the current boot disk as secondary.

Positional arguments:
  server  Name or ID of server.

** Changed in: python-novaclient
   Status: Confirmed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368372

Title:
  nova client help text for rescue

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Nova:
  Fix Released

Bug description:
  The nova client help text for rescue is very terse.   Improving this
  would not only help the command but also the doc since it is
  autogenerated.

  rescue  Rescue a server.

  A basic explanation of what rescue is would be very helpful. Some of
  the other commands provide more assistance.

  quota-deleteDelete quota for a tenant/user so their quota will
  Revert back to default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2014-09-18 Thread Thierry Carrez
** Changed in: oslo.db
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Orchestration API (Heat):
  New
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in The Oslo library incubator:
  Fix Released
Status in Oslo Database library:
  Fix Released

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services.binary AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361230] Re: ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)

2014-09-18 Thread Doug Hellmann
** Changed in: oslo.serialization
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1361230

Title:
  ad248f6 jsonutils sync breaks if simplejson < 2.2.0 (under python 2.6)

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in The Oslo library incubator:
  Fix Committed
Status in Oslo library for sending and saving object:
  Fix Released
Status in Taskflow for task-oriented systems.:
  Fix Committed

Bug description:
  This keystone sync:

  
https://github.com/openstack/keystone/commit/94efafd6d6066f63a9226a6b943d0e86699e7edd

  Pulled in this change to jsonutils:

  https://review.openstack.org/#/c/113760/

  That uses a flag in json.dumps which is only in simplejson >= 2.2.0.
  If you don't have a new enough simplejson the keystone database
  migrations fail.

  Keystone doesn't even list simplejson as a requirement and oslo-
  incubator lists simplejson >= 2.0.9 as a test-requirement since it's
  optional in the code.
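
  A hedged sketch of one way to stay compatible with older simplejson (the
  keyword name below is hypothetical; the real review passed a different,
  version-dependent flag):

    import json  # may actually be simplejson on Python 2.6

    def dumps_compat(obj, **kwargs):
        try:
            return json.dumps(obj, **kwargs)
        except TypeError:
            # simplejson < 2.2.0 rejects keyword arguments it does not
            # know; drop the optional ones and retry instead of failing.
            kwargs.pop('hypothetical_new_flag', None)
            return json.dumps(obj, **kwargs)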

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1361230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370769] Re: Ensure all metadata definition code uses six.iteritems

2014-09-18 Thread Pawel Skowron
I did not find any issues with dict.iteritems in the python-glanceclient
part of this bug.

** Changed in: python-glanceclient
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1370769

Title:
  Ensure all metadata definition code uses six.iteritems

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Python client library for Glance:
  Invalid

Bug description:
  Similar to https://review.openstack.org/#/c/95467/

  According to https://wiki.openstack.org/wiki/Python3 dict.iteritems()
  should be replaced with six.iteritems(dict).

  All metadata definition code added should ensure that six.iteritems is
  used.
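
  A minimal illustration of the requested change:

    import six

    extra_specs = {'ram': '4096', 'vcpus': '2'}

    # Python-2-only spelling:   extra_specs.iteritems()
    # Portable spelling that works on Python 2 and 3:
    for key, value in six.iteritems(extra_specs):
        print('%s=%s' % (key, value))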

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1370769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371154] [NEW] increase default setting of workers to n-cpu

2014-09-18 Thread Morgan Fainberg
Public bug reported:

Eventlet workers should default to n-cpu not 1 for both main and admin
APIs. The default of 1 with UUID tokens causes eventlet to perform
extremely poorly under a default installation.

It is possible the minimum default should always be 2 workers. (Open for
discussion)

** Affects: keystone
 Importance: Medium
 Status: Triaged

** Description changed:

  Eventlet workers should default to n-cpu not 1 for both main and admin
  APIs. The default of 1 with UUID tokens causes eventlet to perform
  extremely poorly under a default installation.
+ 
+ It is possible the minimum default should always be 2 workers. (Open for
+ discussion)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1371154

Title:
  increase default setting of workers to n-cpu

Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  Eventlet workers should default to n-cpu not 1 for both main and admin
  APIs. The default of 1 with UUID tokens causes eventlet to perform
  extremely poorly under a default installation.

  It is possible the minimum default should always be 2 workers. (Open
  for discussion)
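
  A hedged sketch of the proposed default (illustrative only; the final
  option name and the wiring into keystone's eventlet server are not
  decided here):

    import multiprocessing

    def default_worker_count():
        # One worker per CPU, but never fewer than 2, so a single slow
        # request cannot starve the whole API on a small machine.
        return max(2, multiprocessing.cpu_count())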

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1371154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371160] [NEW] HTTP 500 while retrieving metadata by non-existent key

2014-09-18 Thread Ilya Shakhat
Public bug reported:

HTTP 500 error occurs when one tries to get metadata by path constructed
from folder name with appended value.

Steps to repro:
1. Launch VM and access its terminal
2. curl http://169.254.169.254/latest/meta-data/instance-id  -- this returns 
some string, e.g. i-0001
3. curl http://169.254.169.254/latest/meta-data/instance-id/i-0001  -- this 
returns HTTP 500
It is expected that the last call returns a meaningful message and does not 
produce tracebacks in the logs.

Errors:
--
In VM terminal:
$ curl http://169.254.169.254/latest/meta-data/instance-id/i-0001
<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />
 </body>
</html>$ 

In Neutron metadata agent:
2014-09-18 14:44:37.563 ERROR neutron.agent.metadata.agent [-] Unexpected error.
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent Traceback (most 
recent call last):
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent   File 
/opt/stack/neutron/neutron/agent/metadata/agent.py, line 130, in __call__
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent return 
webob.exc.HTTPNotFound()
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent   File 
/opt/stack/neutron/neutron/agent/metadata/agent.py, line 248, in 
_proxy_request
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent def 
_sign_instance_id(self, instance_id):
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent Exception: 
Unexpected response code: 400
2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent 
2014-09-18 14:44:37.566 INFO eventlet.wsgi.server [-] 10.0.0.2,local - - 
[18/Sep/2014 14:44:37] GET /latest/meta-data/instance-id/i-0001 HTTP/1.1 
500 229 0.348877

In Nova API service:
2014-09-18 14:31:19.030 ERROR nova.api.ec2 
[req-5c84e0ae-7d18-4113-a08b-ed068e5333ed None None] FaultWrapper: string 
indices must be integers, not unicode
2014-09-18 14:31:19.030 TRACE nova.api.ec2 Traceback (most recent call last):
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/ec2/__init__.py, line 87, in __call__
2014-09-18 14:31:19.030 TRACE nova.api.ec2 return 
req.get_response(self.application)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-09-18 14:31:19.030 TRACE nova.api.ec2 application, 
catch_exc_info=False)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
2014-09-18 14:31:19.030 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-09-18 14:31:19.030 TRACE nova.api.ec2 resp = self.call_func(req, 
*args, **self.kwargs)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-09-18 14:31:19.030 TRACE nova.api.ec2 return self.func(req, *args, 
**kwargs)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/ec2/__init__.py, line 99, in __call__
2014-09-18 14:31:19.030 TRACE nova.api.ec2 rv = 
req.get_response(self.application)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-09-18 14:31:19.030 TRACE nova.api.ec2 application, 
catch_exc_info=False)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
2014-09-18 14:31:19.030 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-09-18 14:31:19.030 TRACE nova.api.ec2 resp = self.call_func(req, 
*args, **self.kwargs)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-09-18 14:31:19.030 TRACE nova.api.ec2 return self.func(req, *args, 
**kwargs)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/metadata/handler.py, line 128, in __call__
2014-09-18 14:31:19.030 TRACE nova.api.ec2 data = 
meta_data.lookup(req.path_info)
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/metadata/base.py, line 418, in lookup
2014-09-18 14:31:19.030 TRACE nova.api.ec2 data = 
self.get_ec2_item(path_tokens[1:])
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/metadata/base.py, line 300, in get_ec2_item
2014-09-18 14:31:19.030 TRACE nova.api.ec2 return find_path_in_tree(data, 
path_tokens[1:])
2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/metadata/base.py, line 565, in find_path_in_tree
2014-09-18 14:31:19.030 TRACE nova.api.ec2 data = 

[Yahoo-eng-team] [Bug 1371160] Re: HTTP 500 while retrieving metadata by non-existent key

2014-09-18 Thread Sean Dague
So I agree that n-api should not stack trace... however, it successfully
returns a 400 to the service, which I think was expected. Neutron is
exploding on that 400, though, which is not expected.
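
A hedged sketch of the direction a fix in the metadata agent could take
(simplified; the real _proxy_request builds its responses from httplib2
results, and mapping a nova 400 to a 404 here is an assumption, not the
agreed fix):

    import webob.exc

    def translate_nova_status(status):
        if status == 200:
            return None                      # caller returns the proxied body
        if status in (400, 404):
            return webob.exc.HTTPNotFound()  # unknown or invalid metadata path
        if status == 403:
            return webob.exc.HTTPForbidden()
        raise Exception('Unexpected response code: %s' % status)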

** Tags added: ec2

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371160

Title:
  HTTP 500 while retrieving metadata by non-existent key

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  HTTP 500 error occurs when one tries to get metadata by path
  constructed from folder name with appended value.

  Steps to repro:
  1. Launch VM and access its terminal
  2. curl http://169.254.169.254/latest/meta-data/instance-id  -- this returns 
some string, e.g. i-0001
  3. curl http://169.254.169.254/latest/meta-data/instance-id/i-0001  -- 
this returns HTTP 500
  It is expected that the last call returns a meaningful message and does 
not produce tracebacks in the logs.

  Errors:
  --
  In VM terminal:
  $ curl http://169.254.169.254/latest/meta-data/instance-id/i-0001
  <html>
   <head>
    <title>500 Internal Server Error</title>
   </head>
   <body>
    <h1>500 Internal Server Error</h1>
    Remote metadata server experienced an internal server error.<br /><br />
   </body>
  </html>$ 

  In Neutron metadata agent:
  2014-09-18 14:44:37.563 ERROR neutron.agent.metadata.agent [-] Unexpected 
error.
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent Traceback (most 
recent call last):
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent   File 
/opt/stack/neutron/neutron/agent/metadata/agent.py, line 130, in __call__
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent return 
webob.exc.HTTPNotFound()
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent   File 
/opt/stack/neutron/neutron/agent/metadata/agent.py, line 248, in 
_proxy_request
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent def 
_sign_instance_id(self, instance_id):
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent Exception: 
Unexpected response code: 400
  2014-09-18 14:44:37.563 TRACE neutron.agent.metadata.agent 
  2014-09-18 14:44:37.566 INFO eventlet.wsgi.server [-] 10.0.0.2,local - - 
[18/Sep/2014 14:44:37] GET /latest/meta-data/instance-id/i-0001 HTTP/1.1 
500 229 0.348877

  In Nova API service:
  2014-09-18 14:31:19.030 ERROR nova.api.ec2 
[req-5c84e0ae-7d18-4113-a08b-ed068e5333ed None None] FaultWrapper: string 
indices must be integers, not unicode
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 Traceback (most recent call last):
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/ec2/__init__.py, line 87, in __call__
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 return 
req.get_response(self.application)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 resp = self.call_func(req, 
*args, **self.kwargs)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 return self.func(req, *args, 
**kwargs)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/opt/stack/nova/nova/api/ec2/__init__.py, line 99, in __call__
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 rv = 
req.get_response(self.application)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 application, 
catch_exc_info=False)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 app_iter = 
application(self.environ, start_response)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 resp = self.call_func(req, 
*args, **self.kwargs)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-09-18 14:31:19.030 TRACE nova.api.ec2 return self.func(req, *args, 
**kwargs)
  2014-09-18 14:31:19.030 TRACE nova.api.ec2   File 

[Yahoo-eng-team] [Bug 1371175] [NEW] Delete the libvirt volume_drivers config parameter

2014-09-18 Thread Daniel Berrange
Public bug reported:

A followup from

  https://bugs.launchpad.net/nova/+bug/1362191

This bug is to track the actual deletion of the volume_drivers config
parameter.

** Affects: nova
 Importance: Low
 Status: Confirmed

** Changed in: nova
   Status: New = Confirmed

** Changed in: nova
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371175

Title:
   Delete the libvirt volume_drivers config parameter

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  A followup from

https://bugs.launchpad.net/nova/+bug/1362191

  This bug is to track the actual deletion of the volume_drivers config
  parameter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310589] Re: Able to list the L3 stats of neutron meter labels for either ingress or egress direction but NOT both.

2014-09-18 Thread Alan Pevec
** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Changed in: neutron/havana
   Importance: Undecided = Low

** Changed in: neutron/havana
Milestone: None = 2013.2.4

** Changed in: neutron/havana
 Assignee: (unassigned) = Fei Long Wang (flwang)

** Changed in: neutron/icehouse
 Assignee: (unassigned) = Fei Long Wang (flwang)

** Changed in: neutron/havana
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1310589

Title:
  Able to list the L3 stats of neutron meter labels  for either ingress
  or egress direction but NOT both.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  In Progress
Status in neutron icehouse series:
  Fix Released

Bug description:
  I created a meter label and added a rule to it, but I am not able to
  list the meters.

  1. neutron meter-label-create --tenant-id xxx --description label for tenant 
xxx label_xxx 
  2. neutron meter-label-rule-create  --direction ingress --tenant-id xxx 
label_xxx 10.64.201.0/24
  3. To get the counters from ceilometer: 
  ceilometer sample-list -m bandwidth -q project_id=xxx -l 10

  following error observed in /var/log/neutron/metering_agent.log

  2014-04-21 04:33:39.400 19954 ERROR neutron.openstack.common.notifier.api 
[req-0ad3e301-7b7e-4740-b210-893a7fc1b49a None] Failed to load notifier 
neutron.openstack.common.notifier.list_notifier. These notifications will not 
be sent.
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api 
Traceback (most recent call last):
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api   
File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/notifier/api.py, 
line 168, in add_driver
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api 
driver = importutils.import_module(notification_driver)
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api   
File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/importutils.py, 
line 57, in import_module
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api 
__import__(import_str)
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api 
ImportError: No module named list_notifier
  2014-04-21 04:33:39.400 19954 TRACE neutron.openstack.common.notifier.api

  Regards,
  Kotewsar

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1310589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256043] Re: Need to add Development environment files to ignore list

2014-09-18 Thread gordon chung
we've decided in ceilometer to let users configure their own gitignore
for such cases... closing the ceilometer items.

** No longer affects: python-ceilometerclient

** No longer affects: ceilometer

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1256043

Title:
  Need to add Development environment files to ignore list

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Python client library for Cinder:
  Fix Committed
Status in Python client library for Glance:
  In Progress
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Neutron:
  In Progress
Status in Python client library for Nova:
  Fix Released
Status in Python client library for Swift:
  Won't Fix
Status in OpenStack Object Storage (Swift):
  Won't Fix

Bug description:
  The following files, generated by the Eclipse development environment,
  should be in the ignore list to avoid their inclusion during a git push.

  .project
  .pydevproject

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1256043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371185] [NEW] Flavor Tables Extra Specs link is broken

2014-09-18 Thread Aaron Sahlin
Public bug reported:

Clicking on any Extra Specs link puts the browser into a never-ending
spinner.

I attached the browser console showing the error causing the issue.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: ExtraSpecsHangs.png
   
https://bugs.launchpad.net/bugs/1371185/+attachment/4207623/+files/ExtraSpecsHangs.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371185

Title:
  Flavor Tables Extra Specs link is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Clicking on any Extra Specs link puts the browser into a never-ending
  spinner.

  I attached the browser console showing the error causing the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346525] Re: Snapshots when using RBD backend make full copy then upload unnecessarily

2014-09-18 Thread Abel Lopez
*** This bug is a duplicate of bug 1226351 ***
https://bugs.launchpad.net/bugs/1226351

I think this is a duplicate of 1226351

** This bug has been marked a duplicate of bug 1226351
   Make RBD Usable for Ephemeral Storage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346525

Title:
  Snapshots when using RBD backend make full copy then upload
  unnecessarily

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When performing a snapshot, a local copy is made. In the case of RBD,
  nova reads what libvirt thinks is a raw block device and then converts
  it to a local raw file. The file is then uploaded to glance, which
  reads the whole raw file and stores it in its backend. If the backend
  is Ceph, this is completely unnecessary and defeats the whole point of
  having a Ceph cluster. The fix should go something like this (a sketch
  follows the list):

  1. Tell Ceph to make a snapshot of the RBD
  2. Get Ceph metadata from backend, send that to Glance
  3. Glance gets metadata, if it has Ceph backend no download is necessary
  4. If it doesn't, download image from Ceph location, store in backend
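
  A hedged sketch of steps 1 and 2 using the librados/librbd Python
  bindings and the Glance v2 locations call (simplified; assumes the
  disk lives in the given pool and that Glance allows adding locations):

    import rados
    import rbd

    def snapshot_in_ceph(conffile, pool, image_name, snap_name):
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            fsid = cluster.get_fsid()
            ioctx = cluster.open_ioctx(pool)
            try:
                image = rbd.Image(ioctx, image_name)
                try:
                    image.create_snap(snap_name)    # step 1: server-side snapshot
                    image.protect_snap(snap_name)   # needed before cloning from it
                finally:
                    image.close()
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()
        # step 2: hand Glance a location instead of uploading the bytes
        return 'rbd://%s/%s/%s/%s' % (fsid, pool, image_name, snap_name)

    # url = snapshot_in_ceph('/etc/ceph/ceph.conf', 'vms', disk_name, snap)
    # glance.images.add_location(image_id, url, {})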

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304181] Re: neutron should validate gateway_ip is in subnet

2014-09-18 Thread Alan Pevec
Too late for Havana, Ihar has provided a relnote
https://wiki.openstack.org/wiki/ReleaseNotes/2013.2.4#Neutron

** Changed in: neutron/havana
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304181

Title:
  neutron should validate gateway_ip is in subnet

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Won't Fix
Status in neutron icehouse series:
  In Progress
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I don't believe this is actually a valid network configuration:

  arosen@arosen-MacBookPro:~/devstack$ neutron subnet-show  
be0a602b-ea52-4b13-8003-207be20187da
  +--++
  | Field| Value  |
  +--++
  | allocation_pools | {start: 10.11.12.1, end: 10.11.12.254} |
  | cidr | 10.11.12.0/24  |
  | dns_nameservers  ||
  | enable_dhcp  | True   |
  | gateway_ip   | 10.0.0.1   |
  | host_routes  ||
  | id   | be0a602b-ea52-4b13-8003-207be20187da   |
  | ip_version   | 4  |
  | name | private-subnet |
  | network_id   | 53ec3eac-9404-41d4-a899-da4f32045abd   |
  | tenant_id| f2d9c1726aa940d3bd5a8ee529ea2480   |
  +--++
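
  A hedged sketch of the validation being asked for (netaddr is already a
  Neutron dependency; where exactly the check belongs is not decided here):

    import netaddr

    def validate_gateway(cidr, gateway_ip):
        if gateway_ip is None:
            return
        if netaddr.IPAddress(gateway_ip) not in netaddr.IPNetwork(cidr):
            raise ValueError('gateway_ip %s is not in subnet %s'
                             % (gateway_ip, cidr))

    # validate_gateway('10.11.12.0/24', '10.11.12.1')  # ok
    # validate_gateway('10.11.12.0/24', '10.0.0.1')    # raises, as above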

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-09-18 Thread Sean Dague
I managed to reproduce this by creating a slow VM: Ubuntu 14.04 in
VirtualBox, 1 GB RAM, 2 vCPUs, capped at 50% CPU performance.

tox -epy27 -- --until-fail multiprocess

On the 3rd time through I got the following:

running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmp7YX5uh
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./nova/tests  --load-list 
/tmp/tmpU5Qsw_
{0} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_killed_worker_recover
 [5.688803s] ... ok

Captured stderr:


/home/sdague/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:35:
 SAWarning: The IN-predicate on instances.uuid was invoked with an empty 
sequence. This results in a contradiction, which nonetheless can be expensive 
to evaluate.  Consider alternative strategies for improved performance.
  return o[0](self, self.expr, op, *(other + o[1:]), **kwargs)

{0} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_killed_worker_recover
 [2.634592s] ... ok
{0} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_restart_sighup
 [1.565492s] ... ok
{0} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_terminate_sigterm
 [2.400319s] ... ok
{1} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_restart_sighup
 [160.043131s] ... FAILED
{1} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigkill
 [2.317150s] ... ok
{1} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
 [2.274788s] ... ok
{1} 
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITestV3.test_terminate_sigkill
 [2.089225s] ... ok

and then the run hangs.

So, testr is correctly killing the restart test when it times out. It is
also correctly moving on to additional tests. However it is then in a
hung state and can't finish once the tests are done.

Why the test timed out, I don't know. However the fact that testr is
going crazy is an issue all by itself.

** Also affects: testrepository
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Confirmed
Status in Test Repository:
  New

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py,
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  

[Yahoo-eng-team] [Bug 1370191] Re: db deadlock on service_update()

2014-09-18 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New = In Progress

** Changed in: nova/havana
   Status: New = In Progress

** Changed in: nova/havana
   Importance: Undecided = Medium

** Changed in: nova/icehouse
   Importance: Undecided = Medium

** Changed in: nova/havana
 Assignee: (unassigned) = Russell Bryant (russellb)

** Changed in: nova/icehouse
 Assignee: (unassigned) = Russell Bryant (russellb)

** Changed in: nova/havana
Milestone: None = 2013.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370191

Title:
  db deadlock on service_update()

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  In Progress

Bug description:
  Several methods in nova.db.sqlalchemy.api are decorated with
  @_retry_on_deadlock.  service_update() is not currently one of them,
  but it should be based on the following backtrace:

  2014-09-15 15:40:22.574 34384 ERROR nova.servicegroup.drivers.db [-] model
  server went away
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db Traceback
  (most recent call last):
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py, line
  95, in _report_state
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service.service_ref, state_catalog)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/conductor/api.py, line 218, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return self._manager.service_update(context, service, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/utils.py, line 967, in wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139,
  in inner
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 491, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db svc =
  self.db.service_update(context, service['id'], values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/api.py, line 148, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return IMPL.service_update(context, service_id, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 146, in
  wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return f(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 533, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service_ref.update(values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 447,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.rollback()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line
  58, in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  compat.reraise(exc_type, exc_value, exc_tb)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 444,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 358,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  t[1].commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 1195,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self._do_commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  

[Yahoo-eng-team] [Bug 1371230] [NEW] Flavor Update Metadata 'invalid key name' on textbox focus

2014-09-18 Thread Cindy Lu
Public bug reported:

Admin > Flavors > Update Metadata modal

When you first focus on the Other input field, it is fine.  Enter in
something and then delete it.  Now you will consistently get the red
error text Invalid key name every time you focus on the field.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Untitled.png
   
https://bugs.launchpad.net/bugs/1371230/+attachment/4207788/+files/Untitled.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371230

Title:
  Flavor Update Metadata 'invalid key name' on textbox focus

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Admin > Flavors > Update Metadata modal

  When you first focus on the Other input field, it is fine.  Enter in
  something and then delete it.  Now you will consistently get the red
  error text Invalid key name every time you focus on the field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371249] [NEW] Fix typos in the Description section of 'Create QOS Spec' form under Admin-Volumes-Volume Type

2014-09-18 Thread mariam john
Public bug reported:

Under the Admin->Volumes->Volume Types panel, when we click the 'Create
QOS Spec' button, the description section has a few typos.

** Affects: horizon
 Importance: Undecided
 Assignee: mariam john (mariamj)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = mariam john (mariamj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371249

Title:
  Fix typos in the Description section of 'Create QOS Spec' form under
  Admin-Volumes-Volume Type

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Under the Admin->Volumes->Volume Types panel, when we click the
  'Create QOS Spec' button, the description section has a few typos.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371251] [NEW] Replace term Extra Specs with Metadata in Flavor table

2014-09-18 Thread Cindy Lu
Public bug reported:

Flavors table has a column called Extra Specs.  It should be replaced
with Metadata for consistency.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371251

Title:
  Replace term Extra Specs with Metadata in Flavor table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Flavors table has a column called Extra Specs.  It should be
  replaced with Metadata for consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241222] Re: Cells is using the wrong flavorid for resize

2014-09-18 Thread Alan Pevec
** Changed in: nova
   Importance: Undecided = Medium

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New = In Progress

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1241222

Title:
  Cells is using the wrong flavorid for resize

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  Cells is trying to resize incorrectly with the flavor 'id', not the
  API's 'flavorid'.
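
  A small illustration of the distinction (values are hypothetical):

    flavor = {'id': 7,            # internal database primary key
              'flavorid': '42'}   # identifier exposed by the compute API

    # wrong: resizing with the internal key
    #     compute_api.resize(context, instance, flavor_id=flavor['id'])
    # right: resizing with the API-visible flavorid
    #     compute_api.resize(context, instance, flavor_id=flavor['flavorid'])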

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1241222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269448] Re: VMware: VC driver lacks support for firewall rules

2014-09-18 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New = In Progress

** Tags removed: havana-backport-potential

** Changed in: nova/havana
   Importance: Undecided = High

** Changed in: nova/havana
 Assignee: (unassigned) = Yaguang Tang (heut2008)

** Changed in: nova/havana
Milestone: None = 2013.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269448

Title:
  VMware: VC driver lacks support for firewall rules

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  Confirmed

Bug description:
  Issuing
  [root@jhenner-node ~(keystone_admin)]# nova secgroup-add-rule  default tcp 33 
33 0.0.0.0/0
  +-+---+-+---+--+
  | IP Protocol | From Port | To Port | IP Range  | Source Group |
  +-+---+-+---+--+
  | tcp | 33| 33  | 0.0.0.0/0 |  |
  +-+---+-+---+--+

  causes:
  [root@jhenner-node ~(keystone_admin)]# tail -f /var/log/nova/compute.log | 
grep -v DEBUG
  2014-01-15 14:43:33.040 19359 ERROR nova.openstack.common.rpc.amqp 
[req-8273843f-cf2f-4638-8e41-ad7b5278773b c617ab6c5a9c45ac97d59b3d799e431e 
89cec4e2039c4344b30e74575444afd1] Exception during message handling
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp **args)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 90, in wrapped
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp 
payload)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/exception.py, line 73, in wrapped
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 857, in 
refresh_instance_security_rules
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp return 
_sync_refresh()
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 
246, in inner
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib64/python2.6/contextlib.py, line 34, in __exit__
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp 
self.gen.throw(type, value, traceback)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 
210, in lock
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp yield 
sem
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 
246, in inner
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp return 
f(*args, **kwargs)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 856, in 
_sync_refresh
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp return 
self.driver.refresh_instance_security_rules(instance)
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp 
AttributeError: 'VMwareVCDriver' object has no attribute 
'refresh_instance_security_rules'
  2014-01-15 14:43:33.040 19359 TRACE nova.openstack.common.rpc.amqp 

  The secgroups seem to be ineffective; there seems to be no
  firewalling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229625] Re: hairpin mode on vnet bridge ports causes false positives on IPv6 duplicate address detection

2014-09-18 Thread Alan Pevec
** Changed in: nova
   Importance: Undecided = Medium

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New = In Progress

** Changed in: nova/havana
   Importance: Undecided = Medium

** Changed in: nova/havana
Milestone: None = 2013.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229625

Title:
   hairpin mode on vnet bridge ports causes false positives on IPv6
  duplicate address detection

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  This is bug 1011134 happening again, in a cloud that does not have the
  ipv6 flag set, so the previous patch from
  https://review.openstack.org/14017 is not used.
  Guest VMs will try to configure IPv6 link-local addresses even without
  the surrounding infrastructure supporting it, and can throw errors
  when they see inbound packets with their own MAC address.

  Note: I think this bug cannot be unit-tested, as it requires a complex
  setup including running a VM in a cloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1229625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371285] [NEW] [data processing] Job type dropdown needs to be data plugin specific

2014-09-18 Thread Chad Roberts
Public bug reported:

*wishlist for Juno, more likely for Kilo*

When choosing the job type (project - data processing - Jobs - Create
Job), right now all job types display, even if there is no plugin
present to support that job type.  The best example of this is Spark.
Right now, even if you do not have the spark plugin loaded, the spark
job type still shows up.  This is mostly harmless, but is misleading to
the user and will likely lead to frustration when they discover that
they cannot run their job.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371285

Title:
  [data processing] Job type dropdown needs to be data plugin specific

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  *wishlist for Juno, more likely for Kilo*

  When choosing the job type (project - data processing - Jobs -
  Create Job), right now all job types display, even if there is no
  plugin present to support that job type.  The best example of this is
  Spark.  Right now, even if you do not have the spark plugin loaded,
  the spark job type still shows up.  This is mostly harmless, but is
  misleading to the user and will likely lead to frustration when they
  discover that they cannot run their job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254320] Re: Network cache should not be refreshed for instances that are still building

2014-09-18 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Importance: Undecided = Medium

** Changed in: nova/havana
   Status: New = In Progress

** Changed in: nova/havana
 Assignee: (unassigned) = Alan Pevec (apevec)

** Changed in: nova/havana
Milestone: None = 2013.2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254320

Title:
  Network cache should not be refreshed for instances that are still
  building

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  heal_instance_info_cache is a periodic task which refreshes the
  network cache for instances on a host.  Currently it processes
  instances which are still in the building state, which is both
  unnecessary (the build itself will update the cache when it completes)
  and can lead to a race condition: if the periodic task gets null
  network information (likely because the instance is still being built)
  but then gets pre-empted by the build thread and updates the cache
  after the build has finished, the cache is in effect cleared.

  Simply skipping instances in the building state avoids this situation,
  as sketched below.
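
  A hedged sketch of that guard (simplified; the real periodic task keeps
  extra state between runs):

    BUILDING = 'building'   # nova.compute.vm_states.BUILDING

    def instances_to_heal(instances):
        for instance in instances:
            if instance.get('vm_state') == BUILDING:
                # The build path refreshes the cache itself; touching it
                # here can race with the build thread and wipe the result.
                continue
            yield instance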

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362854] Re: Incorrect regex on rootwrap for encrypted volumes ln creation

2014-09-18 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided = Critical

** Changed in: nova/havana
   Status: New = In Progress

** Changed in: nova/havana
   Importance: Undecided = Critical

** Changed in: nova/icehouse
   Status: New = In Progress

** Changed in: nova/havana
 Assignee: (unassigned) = John Griffith (john-griffith)

** Changed in: nova/havana
Milestone: None = 2013.2.4

** Tags removed: havana-backport-potential icehouse-backport-potential

** Changed in: nova/icehouse
 Assignee: (unassigned) = John Griffith (john-griffith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362854

Title:
  Incorrect regex on rootwrap for encrypted volumes ln creation

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  In Progress

Bug description:
  While running Tempest tests against my device, the encryption tests
  consistently fail to attach.  It turns out the problem is an attempt to
  create a symbolic link for the encryption process; however, the
  rootwrap spec is restricted to targets with the default openstack.org
  IQN.

  Error Message from n-cpu:

  Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln
  --symbolic --force /dev/mapper/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-
  6b4269af9d4f.4710-lun-0 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.sol

  
  Rootwrap entry currently implemented:

  ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip
  -.*-iscsi-iqn.2010-10.org.openstack:volume-.*, /dev/disk/by-path/ip
  -.*-iscsi-iqn.2010-10.org.openstack:volume-.*
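
  A hedged illustration of why the filter rejects the command: the target
  path embeds the backend's own IQN rather than the default openstack.org
  one the regex insists on (the relaxed pattern and the shortened example
  path below are illustrative, not the final fix):

    import re

    current = r'/dev/disk/by-path/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*'
    relaxed = r'/dev/disk/by-path/ip-.*-iscsi-iqn\..*'

    target = ('/dev/disk/by-path/ip-10.10.8.112:3260-iscsi-'
              'iqn.2010-01.com.solidfire:example-lun-0')

    print(bool(re.match(current, target)))   # False -> rootwrap refuses the ln
    print(bool(re.match(relaxed, target)))   # True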

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371298] [NEW] libvirt: AMI-based Linux instances /dev/console unusable

2014-09-18 Thread Nicolas Simonds
Public bug reported:

In Linux, the last console= option listed in /proc/cmdline becomes
/dev/console, which is used for things like rescue mode, single-user
mode, etc.  In the case of AMI-based Linux images, libvirt defines the
serial console (tied to the console.log) last, which means a crashed
instance ends up being unrecoverable.

Steps to Reproduce:

1.  Upload the AMI/AKI/ARI images attached to this bug into Glance and tie them 
together (if how to do this is not common knowledge, I can follow-on with exact 
steps)
2.  Boot an instance against the image.  It has been altered so that it will 
crash on startup, believing there is filesystem corruption

Expected Behaviour:

A Press enter for maintenance (or type Control-D to continue): prompt
on the interactive console (Spice/VNC/etc.)

Actual Behaviour:

The aforementioned prompt appears in the libvirt console.log, and the
instance is effectively bricked.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371298

Title:
  libvirt: AMI-based Linux instances /dev/console unusable

Status in OpenStack Compute (Nova):
  New

Bug description:
  In Linux, the last console= option listed in /proc/cmdline becomes
  /dev/console, which is used for things like rescue mode, single-user
  mode, etc.  In the case of AMI-based Linux images, libvirt defines the
  serial console (tied to the console.log) last, which means a crashed
  instance ends up being unrecoverable.

  Steps to Reproduce:

  1.  Upload the AMI/AKI/ARI images attached to this bug into Glance and tie 
them together (if how to do this is not common knowledge, I can follow-on with 
exact steps)
  2.  Boot an instance against the image.  It has been altered so that it will 
crash on startup, believing there is filesystem corruption

  Expected Behaviour:

  A Press enter for maintenance (or type Control-D to continue):
  prompt on the interactive console (Spice/VNC/etc.)

  Actual Behaviour:

  The aforementioned prompt appears in the libvirt console.log, and the
  instance is effectively bricked.
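
  For illustration only (this command line is hypothetical, not taken from
  the affected images): with a kernel command line such as

  console=ttyS0,115200n8 console=tty0

  the last console= entry wins, so /dev/console is the interactive virtual
  console and the maintenance prompt appears on Spice/VNC, while boot output
  is still copied to the serial port. The affected AMIs effectively get the
  reverse order, console=tty0 console=ttyS0, which sends the prompt to the
  serial log instead.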

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212347] Re: destroy() method of nova-compute's driver contract doesn't specify exception handling

2014-09-18 Thread Sean Dague
So I'm putting this into an Opinion / Wishlist state because this feels
like it's part of a larger refactoring conversation. The driver contract
really isn't a contract at all at this point.

** Changed in: nova
   Status: Triaged = Opinion

** Changed in: nova
   Importance: High = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212347

Title:
  destroy() method of nova-compute's driver contract doesn't specify
  exception handling

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  nova-compute's contract for virtual machine drivers doesn't specify
  how those drivers should handle errors and exceptions when the
  driver's destroy() operation is called.

  The contract should say what strategies drivers might use when the
  destroy() call encounters an error, which exceptions may be passed to
  nova-compute, and how those exceptions may cause nova-compute to
  modify the instance's state.

  The virtual machine driver destroy() contract is here:
  https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L244

  Driver exceptions are handled by nova-compute:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py

  Possibly from a lack of this specification, the current set of drivers
  take different approaches to handling exceptions in their destroy()
  operations: some save and re-raise, some wrap and re-raise, some
  retry, some do nothing, and some hide the error.

  Also, without some specification, developers can't write unit tests to
  verify the driver's behaviour against the contract.
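
  As an illustration of the sort of wording the contract could carry
  (hypothetical names; this is not the existing driver API):

      class InstanceDestroyFailed(Exception):
          """Hypothetical exception a driver is allowed to raise from destroy()."""
          pass

      class ComputeDriver(object):
          def destroy(self, instance, network_info, block_device_info=None):
              """Destroy the instance.

              Contract sketch: implementations must be idempotent, must not
              leave partial state behind, and may only raise
              InstanceDestroyFailed; anything else is a driver bug.
              """
              raise NotImplementedError()

  With something like this in place, a generic unit test could drive each
  driver's destroy() through its failure paths and assert that only the
  documented exception escapes.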

  
  Reported against nova master: commit 8fb450fb3aa033d42c5dddb907392efd70f54a6b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1212347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 969537] Re: Quota-classes API extension requires tenant-to-quota-class mappings

2014-09-18 Thread Sean Dague
Moving to opinion status, this seems to be a long stalled effort

** Changed in: nova
 Assignee: Eoghan Glynn (eglynn) = (unassigned)

** Changed in: nova
   Status: Confirmed = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/969537

Title:
  Quota-classes API extension requires tenant-to-quota-class mappings

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In order for the quota-classes API extension to be functional in the
  absence of Turnstile, nova support for tenant-to-quota-class mappings
  would be required.

  This could include:

   - a  project_quota_class_association table in the nova DB

   - additional logic in the quota-classes API extension to establish
  and tear-down of such mappings

   - middleware support for setting the quota_class attribute on the
  request context
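
   A minimal sketch of the first item in the list above, using the SQLAlchemy
   style of the other nova models (table and column names here are only
   illustrative, not a committed schema):

       from sqlalchemy import Column, Integer, String
       from sqlalchemy.ext.declarative import declarative_base

       BASE = declarative_base()

       class ProjectQuotaClassAssociation(BASE):
           """Maps a project (tenant) to a quota class."""
           __tablename__ = 'project_quota_class_association'
           id = Column(Integer, primary_key=True)
           project_id = Column(String(255), nullable=False)
           quota_class = Column(String(255), nullable=False)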

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/969537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1028495] Re: Separate image_snapshot from image_uploading

2014-09-18 Thread Sean Dague
very old refactoring wishlist item

** Changed in: nova
   Status: Confirmed = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1028495

Title:
  Separate image_snapshot from image_uploading

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The current code does this:

  1. Set task_state=image_snapshot
  2. Snapshot Instance
  3. Upload Image
  4. Set task_state=None

  There are a couple of problems with this.

  First, there's a race condition between when the Upload-Image task
  completes and when the task_state is set to None. This means that if
  we're polling the image to see when it goes active and then taking a
  snapshot immediately afterwards, it *may* fail.

  The second issue is that snapshotting an instance is quick, but
  uploading is slow, so we're preventing more snapshots from being taken
  for an *unnecessarily* long period of time.

  Really, we should only forbid snapshots while the instance is
  snapshotting. Uploading the Image would be handled asynchronously.

  So, the revised proposal is:

  Compute Manager:

  1. Set task_state=image_snapshot
  2. Snapshot Instance
  3. Queue Snapshot for Upload
  4. Set task_state=None

  Image-Upload Worker

  1. Dequeue Image Upload Job
  2. Upload to Glance
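
  A rough sketch of the split, using a plain in-process queue for
  illustration (take_local_snapshot and upload_to_glance are placeholders,
  not existing nova calls):

      import queue
      import threading

      upload_queue = queue.Queue()

      def take_local_snapshot(instance):
          # placeholder for the fast local snapshot step
          return '/tmp/%s.qcow2' % instance

      def upload_to_glance(image_id, snapshot_path):
          # placeholder for the slow upload step
          pass

      def snapshot_instance(instance, image_id):
          # task_state=image_snapshot only needs to cover this fast part
          snapshot_path = take_local_snapshot(instance)
          upload_queue.put((image_id, snapshot_path))
          # task_state can be reset to None here; the upload continues below

      def image_upload_worker():
          while True:
              image_id, snapshot_path = upload_queue.get()
              upload_to_glance(image_id, snapshot_path)
              upload_queue.task_done()

      threading.Thread(target=image_upload_worker, daemon=True).start()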

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1028495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 897807] Re: OS API: Treatment of Nova and Glance generated metadata inconsistent

2014-09-18 Thread Sean Dague
This is seriously old, I'm going to close as Invalid for now. Please
feel free to reopen with updated details.

** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/897807

Title:
  OS API: Treatment of Nova and Glance generated metadata inconsistent

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In testing, I noted that when I use the OS API create image request
  (and using Glance as the image provider), additional metadata is added
  to an image beyond that provided with the create image request. For
  create image, the metadata provided by Glance appears to not be
  counted against the image metadata limits.  In contradiction to that,
  when using any of the /images/metadata range of requests, both the
  Nova and Glance generated metadata are counted towards the image's
  metadata limit. It seems that the behavior should be consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/897807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262679] Re: Range() in for loop need a refactor

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Triaged = Opinion

** Changed in: nova
 Assignee: Liang Bo (liang-bo-os) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262679

Title:
   Range() in for loop need a refactor

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In some files, range was called as:
  for i in range(0, 10):
      pass
  Actually the start index arg is useless, since its default value is 0.
  They should be refactored as:
  for i in range(10):
      pass

  Stats in the nova code: range(N) = 230 lines,  range(0, N) = 30 lines

  range(N) seems clearer and more graceful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-09-18 Thread Robert Collins
** No longer affects: testrepository

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py,
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  2014-08-15 13:46:09.158 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py,
 line 52, in signal_handler
  2014-08-15 13:46:09.158 | raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1092714] Re: unit test state leaks

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Confirmed = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1092714

Title:
  unit test state leaks

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  If you run testr with --concurrency=255 (make sure to have a machine
  with at least 24 GB of memory) we still have a few state leaks between
  tests. The following are those failures.

  ==
  ERROR: 
nova.tests.integrated.test_api_samples.SimpleTenantUsageSampleJsonTest.test_get_tenant_usage_details
  tags: worker-198
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'nova': {{{
  Loading compute driver 'nova.virt.fake.FakeDriver'
  Loading network driver 'nova.network.linux_net'
  Starting compute node (version 2013.1)
  Updating host status
  Auditing locally available compute resources
  Free ram (MB): 7680
  Free disk (GB): 1028
  Free VCPUS: 1
  Compute_service record created for 257bd93648c9434cb4b598471b920e9c 
  Starting cert node (version 2013.1)
  Loading network driver 'nova.network.linux_net'
  Starting network node (version 2013.1)
  Starting scheduler node (version 2013.1)
  Starting conductor node (version 2013.1)
  Initializing extension manager.
  Loaded extension: os-simple-tenant-usage
  Initializing extension manager.
  Loaded extension: os-simple-tenant-usage
  osapi_compute listening on 127.0.0.1:38209
  http://127.0.0.1:38209/v2
  Doing GET on /v2
  (12170) wsgi starting up on http://127.0.0.1:38209/

  127.0.0.1 GET /v2 HTTP/1.1 status: 204 len: 216 time: 0.0005970

  Doing POST on /v2/openstack/servers
  Body: {
  server : {
  name : new-server-test,
  imageRef : 
http://openstack.example.com/openstack/images/70a599e0-31e7-49b7-b260-868f441e862b;,
  flavorRef : http://openstack.example.com/openstack/flavors/1;,
  metadata : {
  My Server Name : Apache1
  },
  personality : [
  {
  path : /etc/banner.txt,
  contents : 
ICAgICAgDQoiQSBjbG91ZCBkb2VzIG5vdCBrbm93IHdoeSBpdCBtb3ZlcyBpbiBqdXN0IHN1Y2ggYSBkaXJlY3Rpb24gYW5kIGF0IHN1Y2ggYSBzcGVlZC4uLkl0IGZlZWxzIGFuIGltcHVsc2lvbi4uLnRoaXMgaXMgdGhlIHBsYWNlIHRvIGdvIG5vdy4gQnV0IHRoZSBza3kga25vd3MgdGhlIHJlYXNvbnMgYW5kIHRoZSBwYXR0ZXJucyBiZWhpbmQgYWxsIGNsb3VkcywgYW5kIHlvdSB3aWxsIGtub3csIHRvbywgd2hlbiB5b3UgbGlmdCB5b3Vyc2VsZiBoaWdoIGVub3VnaCB0byBzZWUgYmV5b25kIGhvcml6b25zLiINCg0KLVJpY2hhcmQgQmFjaA==
  }
  ]
  }
  }
  POST http://127.0.0.1:38209/v2/openstack/servers
  Starting instance...
  Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1
  Total Memory: 8192 MB, used: 512 MB
  Memory limit not specified, defaulting to unlimited
  Total Disk: 1028 GB, used: 0 GB
  Disk limit not specified, defaulting to unlimited
  Total CPU: 1 VCPUs, used: 0 VCPUs
  CPU limit not specified, defaulting to unlimited
  Claim successful
  http://127.0.0.1:38209/v2/openstack/servers returned with HTTP 202
  127.0.0.1 POST /v2/openstack/servers HTTP/1.1 status: 202 len: 606 time: 
20.4780099

  Doing GET on 
/v2/openstack/os-simple-tenant-usage/openstack?start=2012-12-20+21%3A47%3A12.784353end=2012-12-20+22%3A47%3A12.784353
  GET 
http://127.0.0.1:38209/v2/openstack/os-simple-tenant-usage/openstack?start=2012-12-20+21%3A47%3A12.784353end=2012-12-20+22%3A47%3A12.784353
  
http://127.0.0.1:38209/v2/openstack/os-simple-tenant-usage/openstack?start=2012-12-20+21%3A47%3A12.784353end=2012-12-20+22%3A47%3A12.784353
 returned with HTTP 200
  127.0.0.1 GET 
/v2/openstack/os-simple-tenant-usage/openstack?start=2012-12-20+21%3A47%3A12.784353end=2012-12-20+22%3A47%3A12.784353
 HTTP/1.1 status: 200 len: 707 time: 0.8514791

  Stopping WSGI server.
  This shouldn't be getting called except during testing.
  This shouldn't be getting called except during testing.
  This shouldn't be getting called except during testing.
  This shouldn't be getting called except during testing.
  }}}

  Traceback (most recent call last):
File /home/stack/code/nova/nova/tests/integrated/test_api_samples.py, 
line 1667, in test_get_tenant_usage_details
  response)
File /home/stack/code/nova/nova/tests/integrated/test_api_samples.py, 
line 240, in _verify_response
  response_data)
File /home/stack/code/nova/nova/tests/integrated/test_api_samples.py, 
line 215, in _verify_something
  return self._compare_result(subs, expected, result)
File /home/stack/code/nova/nova/tests/integrated/test_api_samples.py, 
line 158, in _compare_result
  res = self._compare_result(subs, expected[key], result[key])
File /home/stack/code/nova/nova/tests/integrated/test_api_samples.py, 
line 158, in _compare_result
  res = self._compare_result(subs, expected[key], result[key])
File 

[Yahoo-eng-team] [Bug 1087493] Re: clients need to try multiple IPs when connecting to hosts

2014-09-18 Thread Sean Dague
Not a nova bug

** Changed in: nova
   Status: Triaged = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1087493

Title:
  clients need to try multiple IPs when connecting to hosts

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Since most (all?) of the api daemons are unable to listen on an IPv6
  socket, problems arise in IPv6 environments. If a server is listening
  on IPv4 and its hostname has both IPv4 and IPv6 addresses associated
  with it, then a client attempting to connect to that host *by name*
  will fail, as IPv6 is the default protocol to use if it is available.

  This is aggravating for a period of time while various OpenStack
  clients report connection refused but a telnet session (or other
  simple test client) to a specific api port works. Eg:

  tims@horizon telnet os-api1 8776
  Trying 2607:f088:0:2:5054:ff:fe26:801...
  Trying 192.168.2.100...
  Connected to os-api1.
  Escape character is '^]'.
  ^]
  telnet quit
  Connection closed.
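
  A client-side sketch of the desired behaviour (plain socket code for
  illustration; the real fix belongs in the HTTP layer of the clients):

      import socket

      def connect_any(host, port, timeout=5):
          """Try every address returned for host (IPv6 and IPv4) in turn."""
          last_err = None
          for family, socktype, proto, _name, addr in socket.getaddrinfo(
                  host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
              sock = socket.socket(family, socktype, proto)
              sock.settimeout(timeout)
              try:
                  sock.connect(addr)
                  return sock
              except OSError as err:
                  last_err = err
                  sock.close()
          raise last_err or OSError("no address for %s worked" % host)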

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1087493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1130881] Re: Fix Snapshots index() to show summary list

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Confirmed = Opinion

** Changed in: nova
 Assignee: Giampaolo Lauria (lauria) = (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1130881

Title:
  Fix Snapshots index() to show summary list

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently, Snapshots index() is producing the same output as Snapshots list().
  Also, a unit test case is missing for index().

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1130881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1204019] Re: nova packages install error

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1204019

Title:
  nova packages install error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  hello

  I have an error with nova :

  usermod: no changes
  Command failed, please check log for more info
  2013-07-23 10:34:28.916 10963 CRITICAL nova [-] vernum(83)
  2013-07-23 10:34:28.916 10963 TRACE nova Traceback (most recent call last):
  2013-07-23 10:34:28.916 10963 TRACE nova File /usr/bin/nova-manage, line 
1263, in module
  2013-07-23 10:34:28.916 10963 TRACE nova main()
  2013-07-23 10:34:28.916 10963 TRACE nova File /usr/bin/nova-manage, line 
1255, in main
  2013-07-23 10:34:28.916 10963 TRACE nova fn(*fn_args, **fn_kwargs)
  2013-07-23 10:34:28.916 10963 TRACE nova File /usr/bin/nova-manage, line 
798, in sync
  2013-07-23 10:34:28.916 10963 TRACE nova return migration.db_sync(version)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/lib/python2.7/dist-packages/nova/db/migration.py, line 32, in db_sync
  2013-07-23 10:34:28.916 10963 TRACE nova return IMPL.db_sync(version=version)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 78, in 
db_sync
  2013-07-23 10:34:28.916 10963 TRACE nova return 
versioning_api.upgrade(get_engine(), repository, version)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py, line 186, 
in upgrade
  2013-07-23 10:34:28.916 10963 TRACE nova return _migrate(url, repository, 
version, upgrade=True, err=err, **opts)
  2013-07-23 10:34:28.916 10963 TRACE nova File string, line 2, in _migrate
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.py, line 43, in 
patched_with_engine
  2013-07-23 10:34:28.916 10963 TRACE nova return f(*a, **kw)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py, line 345, 
in _migrate
  2013-07-23 10:34:28.916 10963 TRACE nova changeset = schema.changeset(version)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py, line 80, 
in changeset
  2013-07-23 10:34:28.916 10963 TRACE nova changeset = 
self.repository.changeset(database, start_ver, version)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/repository.py, line 
225, in changeset
  2013-07-23 10:34:28.916 10963 TRACE nova changes = 
[self.version(v).script(database, op) for v in versions]
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/repository.py, line 
189, in version
  2013-07-23 10:34:28.916 10963 TRACE nova return self.versions.version(*p, **k)
  2013-07-23 10:34:28.916 10963 TRACE nova File 
/usr/local/lib/python2.7/dist-packages/migrate/versioning/version.py, line 
140, in version
  2013-07-23 10:34:28.916 10963 TRACE nova return self.versions[VerNum(vernum)]
  2013-07-23 10:34:28.916 10963 TRACE nova KeyError: vernum(83)
  2013-07-23 10:34:28.916 10963 TRACE nova
  dpkg: error processing nova-common (--configure):
  subprocess installed post-installation script returned error exit status 1
  Errors were encountered while processing:
  nova-common
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  How to solve?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1204019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1166087] Re: nova net-list not working on client side

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1166087

Title:
  nova net-list not working on client side

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have installed the nova client on my local machine and am trying to use 
nova-client with the following command
  nova --os_usernmame=admin --os_passwo=x --os_tenant_name=demo 
--os_auth_url=http://10.99.130.230:5000/v2.0/ net_list

  It is showing maximum tries exceeded for url :
  /v1.1/e72a1350522c4b9ea9b69661bc4ed265/os-tenant-networks

  Is there anything extra that I need to supply to make
  this command work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1166087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220599] Re: xenapi: Support tgz compression level in tgz upload

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Triaged = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1220599

Title:
  xenapi: Support tgz compression level in tgz upload

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Once https://review.openstack.org/41651 is landed, we should be able
  to configure the compression level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1220599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1178555] Re: deploy ramdisk and kernel must be public glance images

2014-09-18 Thread Sean Dague
I assume this is a baremetal bug, in which case I'm making it invalid
because we just deprecated that driver

** Changed in: nova
   Status: Triaged = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1178555

Title:
  deploy ramdisk and kernel must be public glance images

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This isn't desirable, as deploy images run privileged so having them
  public may disclose operational config.

  To reproduce just register a deploy ramdisk + kernel with is-public
  False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1178555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226342] Re: nova delete when a baremetal node is not responding to power management leaves the node orphaned

2014-09-18 Thread Sean Dague
baremetal deprecated, please retriage if it's still an ironic issue

** Changed in: nova
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226342

Title:
  nova delete when a baremetal node is not responding to power
  management leaves the node orphaned

Status in OpenStack Compute (Nova):
  Won't Fix
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  If you nova delete an instance on baremetal and the baremetal power
  manager fails for some reason, you end up with a stale instance_uuid
  in the bm_nodes table. This is unrecoverable via the API - db surgery
  is needed.

  To reproduce, configure a bad power manager, nova boot something on
  bm, then nova delete, and check the DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226170] Re: bm node instance provisioning delay with 45 nodes

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226170

Title:
  bm node instance provisioning delay with 45 nodes

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Nova - 1:2013.1

  While doing a scale test, within a batch of 42 cartridges, a few of them get 
provisioned in ~7 mins and a few take 35 mins.
  In total there are 10 instances which span between 30-36 minutes of 
provisioning time.

  nova-compute log snippet of the node which is taking 36 minutes [the
  highest time in the batch].

  In the first minute, the Claim is successful.
  The next 30 minutes are wait time with the message "During sync_power_state 
the instance has a pending task. Skip.", and eventually the provisioning gets 
completed.

  Per blueprint, https://blueprints.launchpad.net/nova/+spec/improve-
  baremetal-pxe-deploy

  the current approach is that nova-baremetal-deploy-helper mounts the baremetal 
node's disks via iSCSI, fdisks the partitions, and dd's the updated glance image 
over iSCSI, while the desired approach is that the ramdisk fdisks the local disks, 
pulls the specified image from glance, writes it to local disk, and reboots into it.

  Is the current approach causing this performance bottleneck? Is there
  any parameter which can be tuned to improve the performance?

  Line 388: 2013-09-06 12:09:12.843 AUDIT nova.compute.manager 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Starting instance...
  Line 801: 2013-09-06 12:10:15.230 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Attempting claim: memory 2048 MB, disk 30 
GB, VCPUs 1
  Line 802: 2013-09-06 12:10:15.232 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Total Memory: 2048 MB, used: 512 MB
  Line 803: 2013-09-06 12:10:15.233 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Memory limit: 3072 MB, free: 2560 MB
  Line 804: 2013-09-06 12:10:15.235 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Total Disk: 80 GB, used: 0 GB
  Line 805: 2013-09-06 12:10:15.236 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Disk limit not specified, defaulting to 
unlimited
  Line 806: 2013-09-06 12:10:15.238 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Total CPU: 1 VCPUs, used: 0 VCPUs
  Line 807: 2013-09-06 12:10:15.240 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] CPU limit not specified, defaulting to 
unlimited
  Line 808: 2013-09-06 12:10:15.241 AUDIT nova.compute.claims 
[req-894f8127-4b15-4046-9919-fbd0123f3555 93aabe9ff2064688bdd070f16e6de768 
40c74bd952f8470991117d54f0c03f0f] [instance: 
15047e3b-b4c8-4b45-906e-5c2e4650a9e2] Claim successful
  Line 1047: 2013-09-06 12:12:14.514 28732 INFO nova.compute.manager [-] 
[instance: 15047e3b-b4c8-4b45-906e-5c2e4650a9e2] During sync_power_state the 
instance has a pending task. Skip.
  Line 1354: 2013-09-06 12:22:23.266 28732 INFO nova.compute.manager [-] 
[instance: 15047e3b-b4c8-4b45-906e-5c2e4650a9e2] During sync_power_state the 
instance has a pending task. Skip.
  Line 1405: 2013-09-06 12:32:30.858 28732 INFO nova.compute.manager [-] 
[instance: 15047e3b-b4c8-4b45-906e-5c2e4650a9e2] During sync_power_state the 
instance has a pending task. Skip.
  Line 1486: 2013-09-06 12:42:38.456 28732 INFO nova.compute.manager [-] 
[instance: 15047e3b-b4c8-4b45-906e-5c2e4650a9e2] During sync_power_state the 
instance has a pending task. Skip.
  Line 1490: 2013-09-06 12:43:58.078 28732 INFO nova.virt.baremetal.pxe [-] PXE 
deploy started for instance 15047e3b-b4c8-4b45-906e-5c2e4650a9e2
  Line 1493: 2013-09-06 12:44:41.641 28732 INFO nova.virt.baremetal.pxe [-] PXE 
deploy completed for instance 15047e3b-b4c8-4b45-906e-5c2e4650a9e2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226170/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1221911] Re: test_list_hosts_with_zone failed in a swift gate job

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1221911

Title:
  test_list_hosts_with_zone failed in a swift gate job

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  See http://logs.openstack.org/57/45057/1/gate/gate-tempest-devstack-
  vm-postgres-full/55e3888/console.html

  2013-09-06 20:05:37.846 | 
==
  2013-09-06 20:05:37.847 | FAIL: 
tempest.api.compute.admin.test_hosts.HostsAdminTestJSON.test_list_hosts_with_zone[gate]
  2013-09-06 20:05:37.847 | 
tempest.api.compute.admin.test_hosts.HostsAdminTestJSON.test_list_hosts_with_zone[gate]
  2013-09-06 20:05:37.847 | 
--
  2013-09-06 20:05:37.847 | _StringException: Empty attachments:
  2013-09-06 20:05:37.847 |   stderr
  2013-09-06 20:05:37.848 |   stdout
  2013-09-06 20:05:37.848 | 
  2013-09-06 20:05:37.848 | pythonlogging:'': {{{
  2013-09-06 20:05:37.848 | 2013-09-06 19:47:00,711 Request: GET 
http://127.0.0.1:8774/v2/a32a2342f5b346f7a5a4a98e7db22ab3/os-hosts
  2013-09-06 20:05:37.849 | 2013-09-06 19:47:00,786 Response Status: 200
  2013-09-06 20:05:37.849 | 2013-09-06 19:47:00,786 Nova request id: 
req-7b85655e-04a1-489d-8e7b-a0e68c26a7fa
  2013-09-06 20:05:37.849 | 2013-09-06 19:47:00,786 Request: GET 
http://127.0.0.1:8774/v2/a32a2342f5b346f7a5a4a98e7db22ab3/os-hosts?zone=None
  2013-09-06 20:05:37.849 | 2013-09-06 19:47:00,813 Response Status: 200
  2013-09-06 20:05:37.849 | 2013-09-06 19:47:00,813 Nova request id: 
req-155ed027-42db-48a6-84ae-3b77f95eb580
  2013-09-06 20:05:37.849 | }}}
  2013-09-06 20:05:37.849 | 
  2013-09-06 20:05:37.850 | Traceback (most recent call last):
  2013-09-06 20:05:37.850 |   File tempest/api/compute/admin/test_hosts.py, 
line 50, in test_list_hosts_with_zone
  2013-09-06 20:05:37.850 | self.assertTrue(len(hosts) = 1)
  2013-09-06 20:05:37.850 |   File /usr/lib/python2.7/unittest/case.py, line 
420, in assertTrue
  2013-09-06 20:05:37.850 | raise self.failureException(msg)
  2013-09-06 20:05:37.850 | AssertionError: False is not true

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1221911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195095] Re: virtual power driver config is global

2014-09-18 Thread Sean Dague
** Changed in: nova
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1195095

Title:
  virtual power driver config is global

Status in OpenStack Compute (Nova):
  Won't Fix
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  This limits it to managing nodes on a single virsh environment. It
  would be great if instead of global config it was per-node, so we
  could configure nodes on multiple machines.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1195095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371324] [NEW] Arista ML2 driver should let EOS know when it is syncing

2014-09-18 Thread Shashank Hegde
Public bug reported:

The Arista ML2 driver performs a periodic sync with EOS to figure out if
EOS data is inconsistent with the Neutron database. The driver needs to let
EOS know when the sync starts and when the sync completes.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371324

Title:
  Arista ML2 driver should let EOS know when it is syncing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Arista ML2 driver performs a periodic sync with EOS to figure out
  if EOS data is inconsistent with the Neutron database. The driver needs to
  let EOS know when the sync starts and when the sync completes.
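
  As a sketch of the shape of the change (the method names on the EOS RPC
  wrapper are assumptions, not the actual driver API):

      class AristaSyncWorker(object):
          """Sketch only; rpc is whatever object talks to EOS."""

          def __init__(self, rpc):
              self.rpc = rpc

          def synchronize(self):
              # Tell EOS a sync has started so it can treat its data as in flux
              self.rpc.sync_start()
              try:
                  self._sync_networks_and_ports()
              finally:
                  # Always tell EOS the sync is over, even if it failed
                  self.rpc.sync_end()

          def _sync_networks_and_ports(self):
              pass  # placeholder for the existing reconciliation logic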

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314129] Re: jsonutils should use simplejson on python 2.6 if available

2014-09-18 Thread Sean Dague
** No longer affects: zaqar

** No longer affects: python-neutronclient

** No longer affects: trove

** No longer affects: python-novaclient

** No longer affects: horizon

** No longer affects: keystone

** No longer affects: glance

** No longer affects: ceilometer

** No longer affects: cinder

** No longer affects: ironic

** No longer affects: sahara

** No longer affects: oslo-incubator

** No longer affects: neutron

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1314129

Title:
  jsonutils should use simplejson on python 2.6 if available

Status in Orchestration API (Heat):
  In Progress
Status in Messaging API for OpenStack:
  In Progress
Status in Taskflow for task-oriented systems.:
  Fix Committed
Status in Tuskar:
  Fix Committed

Bug description:
  Python 2.6 ships a 'json' module that is very slow because it's written
  in pure Python. Python 2.7 updated [1] its 'json' module from the
  simplejson PyPI repo with a version that is based on a C extension (and
  quick). Quoting: "Updated module: The json module was upgraded to
  version 2.0.9 of the simplejson package, which includes a C extension
  that makes encoding and decoding faster. (Contributed by Bob Ippolito;
  issue 4136.)"

  We should strive to use the simplejson library when running on Python 2.6.

  [1]: https://docs.python.org/dev/whatsnew/2.7.html
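
  The usual pattern, as a sketch:

      try:
          import simplejson as json   # fast C extension on Python 2.6
      except ImportError:
          import json                 # stdlib json is fast enough on 2.7+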

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1314129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247030] Re: admin user cannot delete other tenants' instances by name

2014-09-18 Thread Sean Dague
** Project changed: nova = python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1247030

Title:
  admin user cannot delete other tenants' instances by name

Status in Python client library for Nova:
  Incomplete

Bug description:
  1. Set to the admin user first. Normally a user can delete an instance by
  instance-id or instance-name (when there aren't duplicated names).

  2. Use a non-admin user to create an instance server1459893667.

  3. Use the admin user to list instances.
  [root@localhost ˜]# nova list --all-tenant 1
  
+--+--+-++-+---+
  | ID   | Name | Status  | Task 
State | Power State | Networks  |
  
+--+--+-++-+---+
  | 59a285a0-d9a1-4bae-969c-7577e673dbb6 | kvm1 | SHUTOFF | None
   | Shutdown| network1=10.0.1.3 |
  | 718edd9b-6ce5-4700-9325-f63a2ecf94ee | server1459893667 | ERROR   | None
   | NOSTATE |   |
  
+--+--+-++-+---+

  4. Try to delete an instance of a non-admin user by its name using the admin user.
  [root@localhost ˜]# nova delete server1459893667
  No server with a name or ID of 'server1459893667' exists.
  ERROR: Unable to delete any of the specified servers.

  5. Then try to delete this instance by its ID and this completes successfully.
  [root@localhost ˜]# nova delete 718edd9b-6ce5-4700-9325-f63a2ecf94ee
  [root@localhost ˜]# nova list --all-tenant 1
  
+--+--+-++-+---+
  | ID   | Name | Status  | Task State | Power 
State | Networks  |
  
+--+--+-++-+---+
  | 59a285a0-d9a1-4bae-969c-7577e673dbb6 | kvm1 | SHUTOFF | None   | 
Shutdown| network1=10.0.1.3 |
  
+--+--+-++-+---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1247030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161657] Re: nova.compute.manager.py needs better rollbacks

2014-09-18 Thread Sean Dague
This isn't really a bug; it's really something which should come in
via the specs process.

** Changed in: nova
   Status: Confirmed = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161657

Title:
  nova.compute.manager.py needs better rollbacks

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  As documented at
  https://review.openstack.org/#/c/25075/2/nova/compute/manager.py there
  are cases in the compute manager that cause the database, network, or
  instances themselves to be in an inconsistent (or entirely wrong)
  state. It would be useful to verify that when a plugin is called there
  is a defined interface and a known set of errors that said
  interface can throw, and how to roll back from all of those allowed
  errors. The top-level manager code must correctly roll back state
  (as needed) so that the compute node is left in a pristine state when
  an underlying driver does not behave correctly (or just doesn't work).

  Lets first attack one function, a critical path one, _run_instance(),
  and its direct _spawn(), _prep_block_device()

  Certain calls noted:

  - Deallocating networks/volumes (not always done) - 
_setup_block_device_mapping is never rolled back...
  - Un-preparing a block device (on later failure)
  - A driver can affect the macs for an instance 
(self.driver.macs_for_instance) and since this is 3rd-party driver code, if 
said driver 'locks' said macs (via whatever mechanism) then those macs are 
never rolled back.
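
  One generic shape for pairing each step with its rollback, as a sketch
  (the usage shown in the comment is hypothetical, not existing manager
  code):

      import contextlib

      @contextlib.contextmanager
      def undo_on_error(rollback, *args, **kwargs):
          """Run the wrapped block; call rollback(*args, **kwargs) if it raises."""
          try:
              yield
          except Exception:
              rollback(*args, **kwargs)
              raise

      # Hypothetical usage inside _run_instance:
      #
      #   with undo_on_error(self._deallocate_network, context, instance):
      #       block_device_info = self._prep_block_device(context, instance, bdms)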

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1062097] Re: virtual interface create error

2014-09-18 Thread Sean Dague
Going to assume the unique keys blueprint addressed this

** Changed in: nova
 Assignee: Boris Pavlovic (boris-42) = (unassigned)

** Changed in: nova
   Status: Triaged = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1062097

Title:
  virtual interface create error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I saw instance creation fail when nova-network attempted to create a duplicated 
MAC address for a vif.
  In the nova code there is exception-handling code, but it doesn't appear to 
catch the IntegrityError exception.

  This is my test code.

  #!/usr/bin/python
  from nova import utils
  from nova import flags
  import nova.context
  import sys
  from nova import db

  def main(sys):
      context = nova.context.RequestContext('t...@test.com', 'prj-test',
                                            True, False)
      vif = {'address': '02:16:3e:63:c9:39',
             'instance_id': 1,
             'network_id': 1,
             'uuid': str(utils.gen_uuid())}

      db.virtual_interface_create(context, vif)

  if __name__ == '__main__':
      utils.default_flagfile()
      FLAGS = flags.FLAGS(sys.argv)
      main(sys)

  '02:16:3e:63:c9:39' already exists in the db table. So I expected
  exception.VirtualInterfaceCreateException(), because in
  db/sqlalchemy/api.py,

   @require_context
   def virtual_interface_create(context, values):
       """Create a new virtual interface record in teh database.

       :param values: = dict containing column values
       """
       try:
           vif_ref = models.VirtualInterface()
           vif_ref.update(values)
           vif_ref.save()
       except IntegrityError:
           raise exception.VirtualInterfaceCreateException()

       return vif_ref

  But the next error occurred when I tested.
  Traceback (most recent call last):
File ./test_create_vif.sh, line 23, in module
  main(sys)
File ./test_create_vif.sh, line 17, in main
  db.virtual_interface_create(context, vif)
File 
/usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/nova/db/api.py, 
line 448, in virtual_interface_create
  return IMPL.virtual_interface_create(context, values)
File 
/usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/nova/db/sqlalchemy/api.py,
 line 120, in wrapper
  return f(*args, **kwargs)
File 
/usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/nova/db/sqlalchemy/api.py,
 line 1002, in virtual_interface_create
  vif_ref.save()
File 
/usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/nova/db/sqlalchemy/models.py,
 line 59, in save
  session.flush()
File 
/usr/local/lib/python2.7/dist-packages/nova-2012.1-py2.7.egg/nova/exception.py,
 line 98, in _wrap
  raise DBError(e)
  nova.exception.DBError: (IntegrityError) (1062, Duplicate entry 
'02:16:3e:63:c9:39' for key 'address') 'INSERT INTO virtual_interfaces 
(created_at, updated_at, deleted_at, deleted, address, network_id, instance_id, 
uuid) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)' (datetime.datetime(2012, 10, 5, 
8, 7, 30, 868674), None, None, 0, '02:16:3e:63:c9:39', 1, 1, 
'9452abe3-3fea-4706-94e3-876753e8bcb1')

  For this reason, when a VIF's mac address is duplicated, instance
  creation may fail.

  When an instance is created, the code below is executed.

  nova/network/manager.py

  def add_virtual_interface(self, context, instance_uuid, network_id):
      vif = {'address': utils.generate_mac_address(),
             'instance_uuid': instance_uuid,
             'network_id': network_id,
             'uuid': str(utils.gen_uuid())}
      # try FLAG times to create a vif record with a unique mac_address
      for i in xrange(FLAGS.create_unique_mac_address_attempts):
          try:
              return self.db.virtual_interface_create(context, vif)
          except exception.VirtualInterfaceCreateException:
              vif['address'] = utils.generate_mac_address()
      else:
          self.db.virtual_interface_delete_by_instance(context,
                                                       instance_uuid)
          raise exception.VirtualInterfaceMacAddressException()
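
  The traceback shows why the except clause never fires: session.flush() goes
  through nova.exception's wrapping decorator, which re-raises the sqlalchemy
  IntegrityError as nova.exception.DBError before virtual_interface_create()
  can catch it. A sketch of one possible fix, assuming DBError keeps a
  reference to the wrapped exception (as the wrapper shown in the traceback
  appears to do):

   from sqlalchemy.exc import IntegrityError

   @require_context
   def virtual_interface_create(context, values):
       try:
           vif_ref = models.VirtualInterface()
           vif_ref.update(values)
           vif_ref.save()
       except exception.DBError as e:
           # save() wraps DB errors, so inspect the wrapped exception instead
           if isinstance(getattr(e, 'inner_exception', None), IntegrityError):
               raise exception.VirtualInterfaceCreateException()
           raise
       return vif_ref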

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1062097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 989977] Re: scheduler host_manager should sanity check compute_node entries

2014-09-18 Thread Sean Dague
This really isn't a bug; it's not enough to move forward on. Happy to
reopen with a larger description of what we should fix.

** Changed in: nova
 Assignee: Chris Behrens (cbehrens) = (unassigned)

** Changed in: nova
   Status: Confirmed = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/989977

Title:
  scheduler host_manager should sanity check compute_node entries

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Somehow through DB manipulations or something, a compute_node entry
  can point to a non-compute service_id.  We should sanity check this
  when scheduling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/989977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1152382] Re: nova-all fork bomb

2014-09-18 Thread Sean Dague
Closing as Opinion; nova-all really isn't something we even talk about
any more.

** Changed in: nova
   Status: Confirmed = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1152382

Title:
  nova-all fork bomb

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When I Ctrl-C out of nova-all, at random times it ceases to terminate
  the processes and begins an endless cycle of forking off processes.

  2013-03-07 23:47:34.357 9170 AUDIT nova.service [-] Starting conductor node 
(version 2013.1)
  2013-03-07 23:47:34.372 9170 INFO nova.service [-] Parent process has died 
unexpectedly, exiting
  2013-03-07 23:47:34.381 7396 INFO nova.service [-] Child 9170 exited with 
status 1
  2013-03-07 23:47:34.400 7396 INFO nova.service [-] Started child 9171
  2013-03-07 23:47:34.406 9171 AUDIT nova.service [-] Starting conductor node 
(version 2013.1)
  2013-03-07 23:47:34.421 9171 INFO nova.service [-] Parent process has died 
unexpectedly, exiting
  2013-03-07 23:47:34.430 7396 INFO nova.service [-] Child 9171 exited with 
status 1
  2013-03-07 23:47:34.431 7396 INFO nova.service [-] Forking too fast, sleeping
  2013-03-07 23:47:35.451 7396 INFO nova.service [-] Started child 9172
  2013-03-07 23:47:35.457 9172 AUDIT nova.service [-] Starting conductor node 
(version 2013.1)
  2013-03-07 23:47:35.479 9172 INFO nova.service [-] Parent process has died 
unexpectedly, exiting
  2013-03-07 23:47:35.491 7396 INFO nova.service [-] Child 9172 exited with 
status 1
  2013-03-07 23:47:35.510 7396 INFO nova.service [-] Started child 9173
  2013-03-07 23:47:35.516 9173 AUDIT nova.service [-] Starting conductor node 
(version 2013.1)
  2013-03-07 23:47:35.530 9173 INFO nova.service [-] Parent process has died 
unexpectedly, exiting
  2013-03-07 23:47:35.540 7396 INFO nova.service [-] Child 9173 exited with 
status 1
  2013-03-07 23:47:35.540 7396 INFO nova.service [-] Forking too fast, sleeping

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1152382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161664] Re: Rescheduling can DDOS itself

2014-09-18 Thread Sean Dague
This isn't really a bug. If you have a reproducer with an expected
behavior and actual behavior, maybe we can turn it into a bug.

** Changed in: nova
   Status: Triaged = Opinion

** Changed in: nova
   Importance: Low = Wishlist

** Tags added: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161664

Title:
  Rescheduling can DDOS itself

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Due to the way nova currently handles rescheduling there is a
  tendency, when a large number of compute nodes need to reschedule (for
  whatever error), for them to swamp the message queue (and nova
  scheduler) with rescheduling messages. This can cascade so that
  further rescheduling messages occur (and repeat...) until the
  MQ piles up and/or the scheduler falls over.

  Even with a reschedule 'count', in situations where rescheduling is
  happening en masse the rescheduling itself can cause more problems for
  your system than it helps solve (aka, just leave the scheduled
  instance in error state). Doing this in a more
  centralized manner (aka with an orchestration unit that can do this
  rescheduling on behalf of the request) could help rate limit itself and
  its requests to the scheduler for new locations to schedule to. Having
  each compute node perform this same operation means rate limiting is
  not possible (and allows your system to DDOS itself).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224921] Re: notification.info_from_instance should use get()

2014-09-18 Thread Sean Dague
Long incomplete bug, marking as Opinion. It might be valid, but it's not
anything actionable.

** Changed in: nova
   Status: Incomplete = Invalid

** Changed in: nova
   Status: Invalid = Opinion

** Changed in: nova
   Importance: Undecided = Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224921

Title:
  notification.info_from_instance should use get()

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  notification.info_from_instance() reads many values from the instance
  structure, including capacity values that are populated from the
  instance_system_metadata table.  However there are cases where these
  values are not present - for example if a deleted instance is passed
  in then the DB queries do not always do the joins.  This results in a
  KeyError exception.

  Whilst such cases are triggered by bugs, the notification code should
  be more robust and use .get() methods instead.
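
  For illustration, the defensive form is along these lines (the field name
  is only an example):

      # fragile: raises KeyError when the system metadata was not joined in
      value = instance['some_capacity_field']

      # robust: falls back to a default when the value is missing
      value = instance.get('some_capacity_field', 0)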

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1224921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

