[Yahoo-eng-team] [Bug 1485408] Re: Deadlock when querying user quota usage.

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485408

Title:
  Deadlock when querying user quota usage.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently, when querying user quota usages, the query is not order
  preserved, which may cause deadlocks during bulk creations. For
  example, with two rows in the DB, one call may fetch and lock the
  first row while, at the same time, another call fetches and locks the
  second row; each call then waits for the other to release its row,
  producing a deadlock.

  Simply adding order_by(id) to the query avoids deadlocks between
  calls that are not order preserved.
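
  For illustration, the idea in SQLAlchemy terms looks roughly like this
  (a sketch only; models.QuotaUsage and session stand in for nova's
  actual quota-usage query code):

      # lock the rows in a deterministic order so that concurrent
      # callers cannot acquire row locks in opposite orders and deadlock
      rows = session.query(models.QuotaUsage).\
          filter_by(project_id=project_id, user_id=user_id).\
          order_by(models.QuotaUsage.id).\
          with_lockmode('update').\
          all()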

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486590] Re: nova.conf.sample does not have neutron<->keystone auth plugin configuration

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486590

Title:
  nova.conf.sample does not have neutron<->keystone auth plugin
  configuration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Support was added in https://review.openstack.org/#/c/136931/ to allow
  specifying and using the standard session and auth plugin helpers from
  keystoneclient, to standardize the options available for talking to
  neutron.

  However, in both Kilo and master, these config options do not show up
  when we generate the sample configuration file. Jamie Lennox has
  details in his blog as well:
  http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/

  Essentially, only someone who has gone through neutronv2/api.py and
  read the blog post above will be able to configure keystone v3 instead
  of v2. Even devstack uses only v2 for this handshake, which prompted
  me to look deeper and find this problem
  (https://review.openstack.org/#/c/209566/3/lib/neutron-legacy,cm)
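
  For illustration, the generated sample would then expose options along
  these lines in the [neutron] section (a sketch based on the
  keystoneclient plugin loading described above; exact option names
  depend on the keystoneclient release):

      [neutron]
      url = http://neutron-host:9696
      # standard keystoneclient session/auth plugin options
      auth_plugin = v3password
      auth_url = http://keystone-host:5000/v3
      username = neutron
      password = secret
      project_name = service
      project_domain_name = Default
      user_domain_name = Default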

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486880] Re: Can't delete an instance if boot from encryption volume

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486880

Title:
  Can't delete an instance if boot from encryption volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova configuration:

  [ephemeral_storage_encryption]
  enabled = True
  cipher = aes-xts-plain64
  key_size = 512

  [default]
  # use lvm as images_type
  images_type=lvm
  # use that volume group
  # /opt/stack/data/stack-volumes-default-backing-file
  images_volume_group=stack-volumes-default
  reproduce:

  root@taget-ThinkStation-P300:/opt/stack/nova# nova boot --security-groups default --key-name testkey --image 57b26b8f-0e8c-4ffd-87b9-0177e6331b29 --nic net-id=e1d6382e-0e01-4172-9772-19d83058f8f3 --flavor 1 test3

  root@taget-ThinkStation-P300:/opt/stack/nova# nova delete test3

  root@taget-ThinkStation-P300:/opt/stack/nova# nova list
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | ID                                   | Name  | Status | Task State | Power State | Networks |
  +--------------------------------------+-------+--------+------------+-------------+----------+
  | 9ac19e94-06e0-42c9-bbb5-aff44a70959e | test1 | ERROR  | deleting   | NOSTATE     |          |
  | 6b8c201c-6e83-4eed-a12e-190e2fa9c1a5 | test2 | ERROR  | deleting   | NOSTATE     |          |
  | d58b12ca-f983-47b8-93d8-a570fe4458d0 | test3 | ERROR  | -          | Shutdown    |          |
  +--------------------------------------+-------+--------+------------+-------------+----------+

  
  2015-08-20 15:52:22.113 ERROR oslo_messaging.rpc.dispatcher [req-5a58977e-04d6-47d4-b962-a98601d400d6 admin admin] Exception during message handling: Failed to remove volume(s): (Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf lvremove -f /dev/stack-volumes-default/d58b12ca-f983-47b8-93d8-a570fe4458d0_disk
  Exit code: 5
  Stdout: u''
  Stderr: u'  WARNING: Ignoring duplicate config node: global_filter (seeking global_filter)\n  WARNING: Ignoring duplicate config node: global_filter (seeking global_filter)\n  Logical volume stack-volumes-default/d58b12ca-f983-47b8-93d8-a570fe4458d0_disk is used by another device.\n')
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 129, in _do_dispatch
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     payload)
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in __exit__
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 72, in wrapped
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 345, in decorated_function
  2015-08-20 15:52:22.113 TRACE oslo_messaging.rpc.dispatcher

[Yahoo-eng-team] [Bug 1486791] Re: Fix mistake in UT: test_detach_unattached_volume

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486791

Title:
  Fix mistake in UT: test_detach_unattached_volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Simple fix:

  In nova/tests/unit/compute/test_compute.py:

      def test_detach_unattached_volume(self):
          # Ensure exception is raised when volume's idea of attached
          # instance doesn't match.
          fake_instance = self._fake_instance({'uuid': 'uuid1',
                                               'locked': False,
                                               'launched_at': timeutils.utcnow(),
                                               'vm_state': vm_states.ACTIVE,
                                               'task_state': None})
          volume = {'id': 1, 'attach_status': 'in-use',
                    'instance_uuid': 'uuid2'}

  The volume's attach_status should be 'attached' or 'detached', and it
  is the status field that should be 'in-use'; this should be fixed.
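
  With the fields fixed as described above, the test data would
  presumably read:

          volume = {'id': 1, 'status': 'in-use',
                    'attach_status': 'attached',
                    'instance_uuid': 'uuid2'}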

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485059] Re: bdm image property should overload 'mappings' one for instance launch

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485059

Title:
  bdm image property should overload 'mappings' one for instance launch

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If an image has settings for the same device in both the 'mappings'
  and 'block_device_mapping' properties, the latter should be used for
  instance launch. But currently 'mappings' takes precedence.

  =
  Steps to reproduce:

  1 Create a flavor with an ephemeral disk:
  openstack flavor create m1.nano-ephemeral --id 142 --ram 64 --ephemeral 1

  2 Set the mappings property on an image:
  openstack image set --property mappings='[{"device": "vdb", "virtual": "ephemeral0"}]' cirros-0.3.4-x86_64-uec

  3 Check the expected mappings:
  euca-describe-images
  IMAGE   ami-0001   ...
  BLOCKDEVICEMAPPING   /dev/vdb

  4 Create a volume:
  openstack volume create empty --size 1

  5 Create a snapshot:
  openstack snapshot create empty --name empty
  | Field | Value                                |
  | id    | af84c37e-3521-435f-9976-e45aaf7fa2c7 |

  6 Set the bdm property on the image with the snapshot id and the same device name:
  openstack image set --property block_device_mapping='[{"device_name": "/dev/vdb", "boot_index": -1, "source_type": "snapshot", "destination_type": "volume", "volume_size": 1, "delete_on_termination": true, "snapshot_id": "af84c37e-3521-435f-9976-e45aaf7fa2c7"}]' --property bdm_v2=true cirros-0.3.4-x86_64-uec

  7 Check the expected mappings:
  euca-describe-images
  IMAGE   ami-0001   ...
  BLOCKDEVICEMAPPING   /dev/vdb   snap-0001   1   true

  Here we see that bdm overloads mappings.

  8 Boot an instance with the image:
  nova boot --flavor m1.nano-ephemeral --image cirros-0.3.4-x86_64-uec inst

  9 Wait for the instance to become active and view its volumes:
  nova show inst
  | Property                             | Value |
  | os-extended-volumes:volumes_attached | []    |

  =
  Actual result: no volume is attached to the instance.
  Expected result: the id of an attached volume.

  =
  Since Nova EC2 has not been changed for a long time, it still processes
  these properties in the right order, so these steps use the EC2 API to
  display the expected results.
  Also see _setUpImageSet in test_cloud.py. Pay attention to the
  mapping1, block_device_mapping1 and _expected_bdms1 vars. The tests
  ensure that bdm overloads mappings when showing images.

  The behavior is broken by https://review.openstack.org/#/c/83805
  See there in L636 of api.py:

      if image_mapping:
          image_defined_bdms += self._prepare_image_mapping(
              instance_type, image_mapping)

  Compare with the left side's L1163:

      for mapping in (image_mapping, block_device_mapping):
          if not mapping:
              continue
          self._update_block_device_mapping(context,
                                            instance_type, instance_uuid,
                                            mapping)

  I think it's safe to restore the old behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486541] Re: Using cells, local instance deletes incorrectly use legacy bdms instead of objects

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486541

Title:
  Using cells, local instance deletes incorrectly use legacy bdms
  instead of objects

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The instance delete code paths were changed to use new-world bdm
  objects in commit f5071bd1ac00ed68102d37c8025d36df6777cd9e.

  However, the cells code still uses the legacy format for local delete
  operations, which is clearly wrong. Code that gets called in the
  parent class in nova/compute/api.py uses dot-notation and calls
  bdm.destroy() as well.
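
  For illustration, the difference looks roughly like this (a sketch;
  the exact call sites in the cells code may differ):

      # legacy format: plain dicts, so dot-notation and bdm.destroy() fail
      bdms = db.block_device_mapping_get_all_by_instance(
          context, instance.uuid)

      # new-world objects expected by the delete path in nova/compute/api.py
      bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
          context, instance.uuid)
      for bdm in bdms:
          bdm.destroy()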

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486458] Re: Hyper-V: snapshot fails if the instance is destroyed beforehand

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486458

Title:
  Hyper-V: snapshot fails if the instance is destroyed beforehand

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Instance destroy and instance snapshot are locking operations, meaning
  that they can only occur sequentially. This is the result of the
  bugfix for https://bugs.launchpad.net/nova/+bug/1461970 .

  The problem is that the instance destroy can occur before the
  snapshot, resulting in a VM NotFound exception being raised while
  snapshotting, as the VM no longer exists. In the logs it can be
  observed that the lock was held by "do_terminate_instance" and then
  acquired by "instance_synchronized_snapshot".

  This causes failures in some tempest tests.

  Logs: http://paste.openstack.org/show/421642/
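
  A minimal sketch of the kind of guard that avoids the race (names such
  as vm_exists are assumptions here, not necessarily the real driver
  API):

      @utils.synchronized(instance.uuid)
      def instance_synchronized_snapshot():
          # re-check inside the lock: the VM may have been destroyed
          # while we were waiting on do_terminate_instance
          if not self._vmutils.vm_exists(instance.name):
              LOG.info("Instance %s was destroyed before the snapshot "
                       "started; skipping.", instance.uuid)
              return
          self._snapshot(context, instance, image_id, update_task_state)

      instance_synchronized_snapshot()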

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485429] Re: Unit test case for unshelving a volume backed instance is not correct

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485429

Title:
  Unit test case for unshelving a volume backed instance is not correct

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the case of a volume-backed instance, no snapshot is taken when the
  instance is shelved, so the "shelved_image_id" key is not set in the
  instance system metadata.

  The unit test case
  "test_unshelve_instance_schedule_and_rebuild_volume_backed" sets
  "shelved_image_id" in the instance system metadata, which is not
  correct.
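
  For illustration, a volume-backed shelve would leave system metadata
  along these lines (a sketch; the exact keys follow nova's shelve
  code):

      sys_meta = {'shelved_at': timeutils.utcnow().isoformat(),
                  'shelved_host': 'fake-host'}
      # and notably no sys_meta['shelved_image_id'], since no snapshot
      # image exists for a volume-backed instance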

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486410] Re: Hyper-V: detach_interface raises NotImplementedError during neutron event

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486410

Title:
  Hyper-V: detach_interface raises NotImplementedError during neutron
  event

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Neutron events have been added to nova. If the event "network-vif-
  unplugged" is received, the nova compute manager will proceed to call
  the compute driver's detach_interface method. The mentioned method is
  not implemented in the HyperVDriver, causing errors. See log.

  LOG: http://paste.openstack.org/show/421561/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient

2015-09-24 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in Cinder:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Committed

Bug description:
  All projects should use only `novaclient.client` as the entry point.
  It is designed with version checks and backward compatibility in mind.
  Direct import of a versioned client object (i.e. novaclient.v2.client)
  is a way to "shoot yourself in the foot".

  Python-novaclient's doc:
  http://docs.openstack.org/developer/python-novaclient/api.html
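
  For illustration, the supported entry point looks like this (a minimal
  sketch; the credentials are placeholders):

      from novaclient import client

      # version-checked entry point returning the right versioned client
      nova = client.Client('2', 'user', 'password', 'project',
                           'http://keystone:5000/v2.0')

      # fragile across releases -- what the affected projects do instead:
      # from novaclient.v2 import client as nova_v2_client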

  Affected projects:
   - Horizon -
  https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31

   - Manila -
  https://github.com/openstack/manila/blob/473b46f6edc511deaba88b48392b62bfbb979787/manila/compute/nova.py#L23

   - Cinder -
  https://github.com/openstack/cinder/blob/de64f5ad716676b7180365798efc3ea69a4fef0e/cinder/compute/nova.py#L23

   - Mistral -
  https://github.com/openstack/mistral/blob/f42b7f5f5e4bcbce8db7e7340b4cac12de3eec4d/mistral/actions/openstack/actions.py#L23

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494574] Re: Logging missing value types

2015-09-24 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: cinder
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494574

Title:
  Logging missing value types

Status in Cinder:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  In Progress
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Released
Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Committed
Status in os-brick:
  Fix Released
Status in oslo.versionedobjects:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  There are a few locations in the code where the log string is missing
  the formatting type, causing log messages to fail.

  
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vnx_cli.py
  LOG.debug('EMC: Command Exception: %(rc) %(result)s. '

  FILE: ../OpenStack/cinder/cinder/consistencygroup/api.py
  LOG.error(_LE("CG snapshot %(cgsnap) not found when "
  LOG.error(_LE("Source CG %(source_cg) not found when "

  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vmax_masking.py
  "Storage group %(sgGroupName) "

  FILE: ../OpenStack/cinder/cinder/volume/manager.py
  '%(image_id) will not create cache entry.'),
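
  For illustration, the fix is to add the missing conversion type (a
  sketch):

      # broken: '%(rc)' lacks a type, so %-formatting raises ValueError
      LOG.debug('EMC: Command Exception: %(rc) %(result)s.',
                {'rc': rc, 'result': result})

      # fixed:
      LOG.debug('EMC: Command Exception: %(rc)s %(result)s.',
                {'rc': rc, 'result': result})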

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1494574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486134] Re: Reservations code triggers deadlocks and lock wait timeouts

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486134

Title:
  Reservations code triggers deadlocks and lock wait timeouts

Status in neutron:
  Fix Released

Bug description:
  Switching the gate tests to multiple workers + pymysql is triggering a
  lot of errors like [1] in the reservation logic.
  These errors are not hitting the gate at the moment (no instances of
  lock wait timeout or deadlock errors emerged from logstash).

  Nevertheless, the Rally failure rate has now jumped to 100%. This
  means that the issue with the reservation logic will surely end up
  affecting production environments, and is a time bomb waiting to
  explode in the upstream gate.

  The logic must be fixed, or otherwise reverted.
  Please also cut the fingers of the developer that wrote that code.

  [1] http://logs.openstack.org/60/213360/4/check/gate-rally-dsvm-neutron-neutron/4297681/logs/screen-q-svc.txt.gz?level=TRACE#_2015-08-18_10_28_01_314

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489014] Re: ovs agent _bind_devices should query only existing ports

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489014

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  Fix Released

Bug description:
  If a port is deleted right before _bind_devices is called,
  get_ports_attributes will throw an exception because the row is not
  found.
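
  A minimal sketch of the idea, assuming the ovsdb if_exists flag (the
  actual fix may differ):

      # query only rows that still exist instead of raising when a port
      # was deleted between the scan and this call
      port_details = self.int_br.get_ports_attributes(
          'Interface', columns=['name', 'external_ids', 'ofport'],
          ports=port_names, if_exists=True)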

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487598] Re: Remove vendor AGENT_TYPE_* constants

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487598

Title:
  Remove vendor AGENT_TYPE_* constants

Status in neutron:
  Fix Released

Bug description:
  Neutron defines vendor AGENT_TYPE_* constants in
  neutron.common.constants, but they are only used by current or future
  out-of-tree code ... such constants should be moved to the out-of-tree
  repos

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489671] Re: Neutron L3 sync_routers logic processes all router ports from the database even when syncing a specific router

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489671

Title:
  Neutron L3 sync_routers logic processes all router ports from the
  database even when syncing a specific router

Status in neutron:
  Fix Released

Bug description:
  Recreate Steps:
  1) Create multiple routers and allocate router interfaces for neutron router ports from different networks.
  For example, below there are 4 routers with 4, 2, 1 and 2 ports respectively (so 9 router ports in total in the database).
  [root@controller ~]# neutron router-list
  +--------------------------------------+-------------+-----------------------+-------------+-------+
  | id                                   | name        | external_gateway_info | distributed | ha    |
  +--------------------------------------+-------------+-----------------------+-------------+-------+
  | b2b466d2-1b1a-488d-af92-9d83d1c0f2c0 | routername1 | null                  | False       | False |
  | 919f4312-41d1-47a8-b2b5-dc7f14d3f331 | routername2 | null                  | False       | False |
  | 2854df21-7fe8-4968-a372-3c4a5c3d4ecf | routername3 | null                  | False       | False |
  | daf51173-0084-4881-9ba3-0a9ac80d7d7b | routername4 | null                  | False       | False |
  +--------------------------------------+-------------+-----------------------+-------------+-------+

  [root@controller ~]# neutron router-port-list routername1
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | 6194f014-e7c1-4d0b-835f-3cbf94839b9b |      | fa:16:3e:a9:43:7a | {"subnet_id": "84b1e75e-9ce3-4a85-a9c6-32133fca081d", "ip_address": "77.0.0.1"}  |
  | bcac4f23-b74d-4cb3-8bbe-f1d59dff724f |      | fa:16:3e:72:59:a1 | {"subnet_id": "80dc7dfe-d353-4c51-8882-934da8bbbe8b", "ip_address": "77.1.0.1"}  |
  | 39bb4b6c-e439-43a3-85f2-cade8bce8d3c |      | fa:16:3e:9a:65:e6 | {"subnet_id": "b54cb217-98b8-41e1-8b6f-fb69d84fcb56", "ip_address": "80.0.0.1"}  |
  | 3349d441-4679-4176-9f6f-497d39b37c74 |      | fa:16:3e:eb:43:b5 | {"subnet_id": "8fad7ca7-ae0d-4764-92d9-a5e23e806eba", "ip_address": "81.0.0.1"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  [root@controller ~]# neutron router-port-list routername2
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | 77ac0964-57bf-4ed2-8822-332779e427f2 |      | fa:16:3e:ea:83:f8 | {"subnet_id": "2f07dbf4-9c5c-477c-b992-1d3dd284b987", "ip_address": "95.0.0.1"}  |
  | aeeb920e-5c73-45ba-8fe9-f6dafabdab68 |      | fa:16:3e:ee:43:a8 | {"subnet_id": "15c55c9f-2051-4b4d-9628-552b86543e4e", "ip_address": "97.0.0.1"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  [root@controller ~]# neutron router-port-list routername3
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | f792ac7d-0bdd-4dbe-bafb-7822ce388c71 |      | fa:16:3e:fe:b7:f7 | {"subnet_id": "b62990de-0468-4efd-adaf-d421351c6a8b", "ip_address": "66.0.0.1"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  [root@controller ~]# neutron router-port-list routername4
  | id                                   | name | mac_address       | fixed_ips
[Yahoo-eng-team] [Bug 1489091] Re: neutron l3-agent-router-remove is not unscheduling dvr routers from L3-agents

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489091

Title:
  neutron l3-agent-router-remove is not unscheduling dvr routers from
  L3-agents

Status in neutron:
  Fix Released

Bug description:
  My environment has a compute node and a controller node.
  On the compute node the L3-agent mode is 'dvr'.
  On the controller node the L3-agent mode is 'dvr-snat'.
  Nova-compute is only running on the compute node.

  Start: the compute node has no VMs running, there are no namespaces on
  the compute node.

  1. Created a network and a router
     neutron net-create my-net
     neutron subnet-create sb-my-net my-net 10.1.2.0/24
     neutron router-create my-router
     neutron router-interface-add my-router sb-my-net
     neutron router-gateway-set my-router public

  my-net's UUID is 1162f283-6efc-424a-af37-0fbeeaf5d02a
  my-router's UUID is 4f357733-9320-4c67-a0f6-81054d40fdaa

  2. Boot a VM
     nova boot --flavor 1 --image  --nic net-id=1162f283-6efc-424a-af37-0fbeeaf5d02a myvm
     - The VM is hosted on the compute node.

  3. Assign a floating IP to the VM
  neutron port-list --device-id 
  neutron floatingip-create --port-id  public

  The fip namespace and the qrouter-4f357733-9320-4c67-a0f6-81054d40fdaa namespace are found on the compute node.

  4. Delete the VM. On the compute node, the fip namespace went away as expected, but the qrouter namespace is left behind when it should have been deleted. Neutron l3-agent-list-hosting-router shows the router is still scheduled on the compute node's L3-agent.
  stack@Dvr-Ctrl2:~/DEVSTACK/manage$ nova list
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
  +--------------------------------------+-------------+----------------+-------+----------+
  | id                                   | host        | admin_state_up | alive | ha_state |
  +--------------------------------------+-------------+----------------+-------+----------+
  | 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True           | :-)   |          |
  | 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True           | :-)   |          |
  +--------------------------------------+-------------+----------------+-------+----------+

  5. Attempting to use neutron l3-agent-router-remove to remove the router from the compute node's L3-agent also didn't work; the router is still scheduled on the agent.
  stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-router-remove 733e31eb-b49e-488b-aaf1-0dbcda802f66 4f357733-9320-4c67-a0f6-81054d40fdaa
  Removed router 4f357733-9320-4c67-a0f6-81054d40fdaa from L3 agent

  stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
  +--------------------------------------+-------------+----------------+-------+----------+
  | id                                   | host        | admin_state_up | alive | ha_state |
  +--------------------------------------+-------------+----------------+-------+----------+
  | 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True           | :-)   |          |
  | 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True           | :-)   |          |
  +--------------------------------------+-------------+----------------+-------+----------+

  The errors in (4) and (5) did not happen with the stable/kilo or stable/juno code:
     i.) In (4) the router should no longer be scheduled on the compute node's L3 agent.
     ii.) In (5) neutron l3-agent-router-remove should have removed the router from the compute node's L3 agent.

  Both (4) and (5) indicate that no notification to remove the router is sent to the L3-agent on the compute node. They represent regressions in the latest neutron code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490043] Re: test_keepalived_respawns fails when trying to kill -15

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490043

Title:
  test_keepalived_respawns fails when trying to kill -15

Status in neutron:
  Fix Released

Bug description:
  test_keepalived_respawns spawns keepalived, asserts that it's up,
  kills it,  then waits for the process monitor to respawn it.
  Sometimes, the test seems to fail when sending signal 15 to the
  process.

  Logstash:
  message:"pm.disable(sig='15')'" AND tags:"console"

  
  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicG0uZGlzYWJsZShzaWc9JzE1JyknXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA4MDA0MTU5OTZ9

  3 hits in the last 7 days.

  Example console log:
  
  http://logs.openstack.org/91/215491/2/gate/gate-neutron-dsvm-functional/a5ea84a/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487139] Re: We need more strict filters for functional tests

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487139

Title:
  We need more strict filters for functional tests

Status in neutron:
  Fix Released

Bug description:
  Currently the tee utility is not used in functional tests, but (very
  weak) filters for it exist in functional-testing.filters. Also, in the
  current state the curl filter allows curl to be used to replace system
  files, e.g.: curl -o /bin/su http://mycoolsite.in/my-virus-needs-suid
  We need stricter filters for functional tests.
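
  For illustration, a stricter rootwrap-style filter could pin the
  allowed arguments, roughly like this (a hypothetical example, not the
  actual filter file):

      [Filters]
      # only allow curl to write under /tmp and fetch from localhost
      curl: RegExpFilter, curl, root, curl, -o, /tmp/.+, http://127\.0\.0\.1(:\d+)?/.*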

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488730] Re: ping the external network ip, clear external-gateway and then set the external-gateway back, the connection is not recovered due to conntrack

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488730

Title:
  ping the external network ip, clear external-gateway and then set the
  external-gateway back, the connection is not recovered due to conntrack

Status in neutron:
  Fix Released

Bug description:
  I found a small bug in the kilo version: ping an external network ip,
  clear the external-gateway and then set the external-gateway back; the
  ping connection is not recovered due to conntrack.

  Here are the detailed operations:
  1. Make sure a VM is connected to a router ("router1"), and the router
  is connected to an external network.
  2. Ping the external network from the VM; it should be successful:
  # ping 8.8.8.8
  3. Clear the external-gateway; now the ping connection is dropped.
  # neutron router-gateway-clear router1
  4. Set the external-gateway back ("public" is an external network).
  # neutron router-gateway-set router1 public

  Now I found the ping connection is not recovered; my investigation
  shows this is due to a conntrack state issue.
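
  A minimal sketch of the kind of cleanup that would recover the
  connection (an assumption about the fix, shown with neutron's ip_lib
  for illustration):

      # hypothetical: flush stale conntrack entries in the router
      # namespace after the gateway is re-plugged
      ns = ip_lib.IPWrapper(namespace=router_namespace)
      ns.netns.execute(['conntrack', '-D', '-d', gateway_ip],
                       check_exit_code=False)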

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488730/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486861] Re: get_extensions_path fails to remove duplicate paths when the path gets appended with another path

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486861

Title:
  get_extensions_path fails to remove duplicate paths when the path gets
  appended with another path

Status in neutron:
  Fix Released

Bug description:
  get_extensions_path contains logic to eliminate duplicated paths.
  However, in the case where the 'api_extensions_path' of config
  contains multiple concatenated paths, it treats the concatenated list
  as one extension path.

  This could be a problem for Neutron services, specifically fwaas,
  lbaas and vpnaas.  For these, their extension paths are automatically
  added in get_extensions_path.  If the fwaas extension path is also
  specified in CONF.api_extensions_path, for example, and that path
  is appended with a different extension path, the duplicated fwaas
  extension paths are not recognized as duplicates.

  In an offending case, you would have:

  paths = ['fw_ext_path', 'fw_ext_path:some_other_path']

  and since 'fw_ext_path' != 'fw_ext_path:some_other_path', both copies
  of 'fw_ext_path' remain, which causes an error later on when the
  Firewall object's 'super' is called, since the module containing the
  Firewall class definition was loaded twice (python doesn't like
  this).

  In the above scenario, the paths should have been evaluated as:

  paths = ['fw_ext_path', 'fw_ext_path', 'some_other_path']
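
  A minimal sketch of the intended normalization (illustrative; the real
  get_extensions_path may differ):

      # split each configured entry on ':' before de-duplicating, so a
      # concatenated value cannot hide a duplicate path
      paths = []
      for entry in cfg.CONF.api_extensions_path.split(':'):
          if entry and entry not in paths:
              paths.append(entry)
      extensions_path = ':'.join(paths)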

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486300] Re: ICMP port code should be checked in range [0, 255]

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486300

Title:
  ICMP port code should be checked in range [0,255]

Status in neutron:
  Fix Released

Bug description:
  ICMP allows values between 0 and 255; port-range-min also needs to be
  checked, but that variable was not checked in the function
  _validate_port_range.
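
  A minimal sketch of the missing check (illustrative; neutron's actual
  validation differs in detail):

      def _validate_icmp_value(value):
          # ICMP type (port_range_min) and code (port_range_max) must
          # both fall within [0, 255]
          if value is not None and not (0 <= value <= 255):
              raise exceptions.InvalidInput(
                  error_message="ICMP value %s is out of range "
                                "[0, 255]" % value)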

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483315] Re: ebtables ARP rules don't account for floating IPs on LinuxBridge

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483315

Title:
  ebtables ARP rules don't account for floating IPs on LinuxBridge

Status in neutron:
  Fix Released

Bug description:
  The new ebtables ARP filtering rules don't account for floating IPs,
  which blocks ARP replies from the qrouter netns the float lives in,
  effectively blocking traffic to the float and thus the instance.
  Looking at the ebtables code, rules are currently only added for ports
  with port security enabled (port_filter:True), IPs in the fixed_ips
  list and IPs in the allowed-address pairs list for a given port.
  Floating IPs do not have port security enabled, aren't fixed_ips and
  aren't automatically inserted into router gateway port AAPs.

  This is an example ebtables -L --Lc list of the filter table in the
  root namespace where the router is:
  http://paste.openstack.org/show/412384/

  192.168.74.0/24 is the private instance network
  172.29.248.0/22 is the public network

  192.168.74.1 is the router inside IP
  192.168.74.2 is the DHCP server IP
  192.168.74.3 is the instance IP

  172.29.248.2 is the router gateway/outside IP
  172.29.248.3 is the DHCP server IP (forgot to disable for the public)
  172.29.248.8 is the floating IP

  As you can see, the floating IP is not in the rules, which results in
  ARP replies from the qrouter namespace being dropped.

  Adding the exception to ebtables results in working traffic, like this
  (line 18):
  http://paste.openstack.org/show/412386/

  For reference, here's ebtables from the compute node along with the
  instance information:
  http://paste.openstack.org/show/412387/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478949] Re: VPNaaS: mtu parameter isn't used in ipsec.conf template

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478949

Title:
  VPNaaS: mtu parameter isn't used in ipsec.conf template

Status in neutron:
  Fix Released

Bug description:
  It is possible to specify the MTU parameter when creating an IPSec
  Site Connection, but it will be ignored because it is missing from
  ipsec.conf.template
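
  For illustration, the template would need something along these lines
  (hypothetical variable names; the actual template structure may
  differ):

      conn {{ipsec_site_connection.id}}
          ...
          # honor the MTU configured on the IPSec site connection
          mtu={{ipsec_site_connection.mtu}}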

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499271] [NEW] MetricsWeigher can return a NoneType exception

2015-09-24 Thread Sylvain Bauza
Public bug reported:

If host_state.metrics is set to None, then calling the MetricsWeigher
returns:


  File "/opt/stack/new/nova/nova/scheduler/weights/metrics.py", line 89, in _weigh_object
    metrics_dict = {m.name: m for m in host_state.metrics}

TypeError: 'NoneType' object is not iterable

http://logs.openstack.org/05/226805/2/check/gate-ironic-inspector-dsvm/5cd5071/logs/screen-n-cond.txt.gz?level=WARNING

** Affects: nova
 Importance: Critical
 Assignee: Sylvain Bauza (sylvain-bauza)
 Status: In Progress


** Tags: liberty-rc-potential low-hanging-fruit scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499271

Title:
  MetricsWeigher can return a NoneType exception

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  If host_state.metrics is set to None, then calling the MetricsWeigher
  returns:


    File "/opt/stack/new/nova/nova/scheduler/weights/metrics.py", line 89, in _weigh_object
      metrics_dict = {m.name: m for m in host_state.metrics}

  TypeError: 'NoneType' object is not iterable

  http://logs.openstack.org/05/226805/2/check/gate-ironic-inspector-dsvm/5cd5071/logs/screen-n-cond.txt.gz?level=WARNING
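
  A one-line guard would presumably avoid the crash (a sketch, not
  necessarily the merged fix):

      # treat a missing/None metrics list as empty
      metrics_dict = {m.name: m for m in (host_state.metrics or [])}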

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499307] [NEW] Firewalls Details pages don't have actions

2015-09-24 Thread Rob Cresswell
Public bug reported:

The Details pages for FWaaS items (Rules, Policies, and Firewalls) don't
have the usual actions available.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1499307

Title:
  Firewalls Details pages don't have actions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Details pages for FWaaS items (Rules, Policies, and Firewalls)
  don't have the usual actions available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1499307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499319] [NEW] [kilo]Some instances are assigned with 2 IPs when 100 instances are launched concurrently.

2015-09-24 Thread IBM-Cloud-SH
Public bug reported:

[Summary]
[nova-docker] Some instances are assigned 2 IPs when 100 instances are launched concurrently.

[Topo]
Both controller and compute nodes are running on a centos7.0 img: 3.10.0-229.11.1.el7.x86_64

[Reproduceable or not]
Can be reproduced, but not easily.
This issue happened when compute nodes use the docker hypervisor; it also happened before with QEMU.

[Recreate Steps]
1. On the dashboard, create a tenant with a network and subnet.
2. Launch 100 instances concurrently.
3. After several minutes, check the result; 2 instances were each assigned 2 IPs.

[Log]
Logs of the neutron server and dhcp agent are attached.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: log neutron server

** Attachment added: "server.log"
   https://bugs.launchpad.net/bugs/1499319/+attachment/4473272/+files/server.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499319

Title:
   [kilo]Some instances are assigned with 2 IPs when 100 instances are
  launched concurrently.

Status in neutron:
  New

Bug description:
  [Summary]
  [nova-docker] Some instances are assigned 2 IPs when 100 instances are launched concurrently.

  [Topo]
  Both controller and compute nodes are running on a centos7.0 img: 3.10.0-229.11.1.el7.x86_64

  [Reproduceable or not]
  Can be reproduced, but not easily.
  This issue happened when compute nodes use the docker hypervisor; it also happened before with QEMU.

  [Recreate Steps]
  1. On the dashboard, create a tenant with a network and subnet.
  2. Launch 100 instances concurrently.
  3. After several minutes, check the result; 2 instances were each assigned 2 IPs.

  [Log]
  Logs of the neutron server and dhcp agent are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499312] [NEW] [kilo]Some instances are in error status after 20 instances launched concurrently due to neutron scheduling.

2015-09-24 Thread IBM-Cloud-SH
Public bug reported:

[Summary]
Launched 20 instances concurrently; 5 instances ended up in ERROR status.

[Topo]
Both controller and compute nodes are running on a centos7.0 img: 3.10.0-229.11.1.el7.x86_64
ovs_version: "2.3.1"

[Reproduceable or not]
Can be reproduced easily.

[Recreate Steps]
1. Launch 20 instances concurrently:
[root@quasarin-1 ~(keystone_admin)]$ nova --os-tenant-name herman-tenant-1 boot --flavor 1 --image dbeaeaf5-b996-48ee-8272-dada151cf34e --nic net-id=d5be47c2-f54e-49b9-9feb-d89cc6b86c56 --availability-zone scaling:quasarin-1 in1-20 --max 20

2. Check the instance states; 5 instances are in ERROR status:
[root@quasarin-1 ~(keystone_admin)]$ nova --os-tenant-name herman-tenant-1 list
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py:251: SecurityWarning: Certificate has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                         |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------+
| 5f048fb4-fd48-4b13-8ac3-8d3b371c14f5 | in1-20-1  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.2  |
| 0dffe66b-8299-4e2d-ac3e-e9e6a7c0c430 | in1-20-10 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.9  |
| eef2edd3-2fce-4a2d-a89a-17c06b744a4c | in1-20-11 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.12 |
| 003584f6-ff38-49d5-9f43-11e965cf3897 | in1-20-12 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.13 |
| bf41557b-466c-423f-a36f-f25dd1453981 | in1-20-13 | ERROR  | spawning   | NOSTATE     |                                  |
| a4412824-4b37-4c7e-87ba-ac2d5e38cd37 | in1-20-14 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.15 |
| 623439fd-90d0-45a5-86ef-b88fed1c2b5b | in1-20-15 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.16 |
| a12bbd01-f4db-430a-9818-37e6fee18796 | in1-20-16 | ERROR  | spawning   | NOSTATE     |                                  |
| ad3f1752-7fba-48fd-aceb-f0ff7face845 | in1-20-17 | ERROR  | spawning   | NOSTATE     |                                  |
| f31378f2-1c4c-42e9-92bc-85de500afbd2 | in1-20-18 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.19 |
| 44ec8355-12a3-4d8c-8754-61de3047e71e | in1-20-19 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.20 |
| 5b826315-9711-4051-aa6b-0bb8262074d6 | in1-20-2  | ERROR  | spawning   | NOSTATE     |                                  |
| a518dcb4-338b-41ef-9740-f3ad7da39d61 | in1-20-20 | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.21 |
| a418d237-7ce2-4880-8791-7b5e958be1ab | in1-20-3  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.4  |
| e3a4cc36-dd90-4030-b296-2c2a6964aae0 | in1-20-4  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.11 |
| e65bc507-6be5-4929-80c0-c0f37ea1eefd | in1-20-5  | ERROR  | spawning   | NOSTATE     |                                  |
| e3e21145-6e5f-4263-855a-2a95bf35d05d | in1-20-6  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.6  |
| 0f1abea6-7d4f-4acd-b17a-298b82a3c34b | in1-20-7  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.8  |
| 7f3a22b2-70d6-4dc3-95d4-168fbd0707fc | in1-20-8  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.7  |
| 470fbabe-fcce-4a1d-8a8c-065134b1c977 | in1-20-9  | ACTIVE | -          | Running     | herman-tenant-1-net-1=100.1.1.10 |
+--------------------------------------+-----------+--------+------------+-------------+----------------------------------+

[logs]
[root@quasarin-1 ~(keystone_admin)]$ nova --os-tenant-name herman-tenant-1 list
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py:251:
[Yahoo-eng-team] [Bug 1499269] [NEW] cannot attach direct type port (sr-iov) to existing instance

2015-09-24 Thread Pedro Sousa
Public bug reported:

Whenever I try to attach a direct port to an existing instance, it fails:

#neutron port-create Management --binding:vnic_type direct
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       |                                                                                      |
| binding:profile       | {}                                                                                   |
| binding:vif_details   | {}                                                                                   |
| binding:vif_type      | unbound                                                                              |
| binding:vnic_type     | direct                                                                               |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| fixed_ips             | {"subnet_id": "6c82ff4c-124e-469a-8444-1446cc5d979f", "ip_address": "10.92.29.123"} |
| id                    | ce455654-4eb5-4b89-b868-b426381951c8                                                 |
| mac_address           | fa:16:3e:6b:15:e8                                                                    |
| name                  |                                                                                      |
| network_id            | 5764ca50-1f30-4daa-8c86-a21fed9a679c                                                 |
| security_groups       | 5d2faf7b-2d32-49a8-978e-a91f57ece17d                                                 |
| status                | DOWN                                                                                 |
| tenant_id             | d5ecb0eea96f4996b565fd983a768b11                                                     |
+-----------------------+--------------------------------------------------------------------------------------+

# nova interface-attach --port-id ce455654-4eb5-4b89-b868-b426381951c8 voicisc4srv1
ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: req-11516bf4-7ab7-414c-a4ee-63e44aaf00a5)

nova-compute.log:

0a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Exception during message handling: Failed to attach network adapter device to 056d455a-314d-4853-839e-70229a56dfcd
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6632, in attach_interface
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     port_id, requested_ip)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 443, in decorated_function
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     payload)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
[Yahoo-eng-team] [Bug 1496557] Re: xenapi boot from volume is broken after move to ImageMeta object if volume passed in

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496557

Title:
  xenapi boot from volume is broken after move to ImageMeta object if
  volume passed in

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When a request is made to boot an instance from a volume and a volume
  is passed in as part of the request, rather than an image for Nova to
  create a volume from, the image id is not passed down as part of the
  build request. The boot_meta dict created in compute/api.py does not
  store an 'id' key/value, so when it eventually gets down to the virt
  layer and the dict is converted to an object, the 'id' attribute
  cannot be accessed. This causes a failure within the xenapi driver.

  2015-09-16 13:02:00.481 24755 DEBUG nova.virt.xenapi.vmops [req-14839809--53d088b99b2d dbf01adba9b245369ba32a46d93fdf5f 5930474 - - -] [instance: 897942e0] Updating progress to 10 _update_instance_progress /opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py:1017
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [req-14839809--53d088b99b2d dbf01adba9b245369ba32a46d93fdf5f 5930474 - - -] [instance: 897942e0] Failed to spawn, rolling back
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0] Traceback (most recent call last):
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]   File "/opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py", line 657, in _spawn
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]     name_label)
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]   File "/opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py", line 212, in inner
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]     rv = f(*args, **kwargs)
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]   File "/opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/nova/virt/xenapi/vmops.py", line 492, in create_disks_step
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]     image_meta.id, disk_image_type,
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]   File "/opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 66, in getter
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]     self.obj_load_attr(name)
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]   File "/opt/rackstack/rackstack.381.6/nova/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 555, in obj_load_attr
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]     _("Cannot load '%s' in the base class") % attrname)
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0] NotImplementedError: Cannot load 'id' in the base class
  2015-09-16 13:02:00.696 24755 ERROR nova.utils [instance: 897942e0]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497896] Re: Libvirt: unable to launch a VM with direct OVS plugging neutron drivers

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497896

Title:
  Libvirt: unable to launch a VM with direct OVS plugging neutron
  drivers

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When launching a VM with the NSX neutron driver we get the following
  exception:

  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2] Traceback (most recent call last):
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/compute/manager.py", line 2152, in _build_resources
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     yield resources
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/compute/manager.py", line 2006, in _build_and_run_instance
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     block_device_info=block_device_info)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2444, in spawn
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     block_device_info=block_device_info)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4516, in _create_domain_and_network
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     xml, pause=pause, power_on=power_on)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4446, in _create_domain
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     guest.launch(pause=pause)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 141, in launch
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     self._encoded_xml, errors='ignore')
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     six.reraise(self.type_, self.value, self.tb)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 136, in launch
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     return self._domain.createWithFlags(flags)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     result = proxy_call(self._autowrap, f, *args, **kwargs)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     rv = execute(f, *args, **kwargs)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]     six.reraise(c, e, tb)
  2015-09-21 22:08:48.964 TRACE nova.compute.manager [instance: a8f73028-1493-4ad2-b957-1d75422b3ff2]   File

[Yahoo-eng-team] [Bug 1498075] Re: Filter leading/trailing spaces for name field in v2.1 compat mode

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1498075

Title:
  Filter leading/trailing spaces for name field in v2.1 compat mode

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This has spun out of:
  https://bugs.launchpad.net/nova/+bug/1491511

  v2_legacy allows trailing whitespace, so v2.0 compat needs to also
  accept those requests.

  To make it simpler, it is best to strip all the trailing whitespace in v2.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1498075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496664] Re: V2.1 comp mode behavior should be fixed for diff of v2 and v2.1 APIs

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496664

Title:
  V2.1 comp mode behavior should be fixed for diff of v2 and v2.1 APIs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are some cases where v2.1 is different from v2.0:
  - VIF API - no net-id in v2.1
  - rate limit bits - not present in v2.1
  - extension info - namespace diff

  For the above cases, the current v2.1 compatibility mode behaves the same
  as v2.1, not v2. Failure in:
  http://logs.openstack.org/86/224386/1/check/gate-nova-tox-functional/b98e535/testr_results.html.gz

  As v2.1 compat mode should behave the same as v2 instead of v2.1, we should
  fix those cases to return the same responses as the v2 API does.

  I am not sure about the rate limit and extension info cases; should we
  fix those?

  This was found when we started running the v2.1 compat mode sample tests
  against the v2 sample files instead of the v2.1 ones:
  https://review.openstack.org/#/c/224386/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498634] Re: pep8 check not on api/openstack/common.py

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1498634

Title:
  pep8 check not on api/openstack/common.py

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  pep8 is not running on api/openstack/common.py because our exclude rule is:
  exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools/xenserver*
  and the *openstack/common* pattern also matches nova/api/openstack/common.py.
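
  A sketch of a tightened rule (anchoring the oslo-incubator copy to
  nova/openstack/common is an assumption about the intended fix, not the
  merged change):

      [flake8]
      exclude = .venv,.git,.tox,dist,doc,nova/openstack/common,*lib/python*,*egg,build,tools/xenserver*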

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1498634/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1072751] Re: Instance recovery needed when Compute service goes down during Reboot

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1072751

Title:
  Instance recovery needed when Compute service goes down during Reboot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Scenario:

  If the Compute service goes down just after destroying the instance and
  before recreating the domain on the hypervisor, then the instance's task
  state remains "rebooting" and the instance remains in an inconsistent state
  after the Compute service comes back. The admin has to recreate the
  instance on the hypervisor using the instance's xml.

  This is another corner scenario with low probability, but it could be
  managed by the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1072751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385489] Re: ResourceTracker._update_usage_from_migrations() is inefficient due to multiple Instance.get_by_uuid() lookups

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1385489

Title:
  ResourceTracker._update_usage_from_migrations() is inefficient due to
  multiple Instance.get_by_uuid() lookups

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Here is our ResourceTracker._update_usage_from_migrations() code:

  def _update_usage_from_migrations(self, context, resources,
                                    migrations):

      self.tracked_migrations.clear()

      filtered = {}

      # do some defensive filtering against bad migrations records in the
      # database:
      for migration in migrations:

          instance = migration['instance']

          if not instance:
              # migration referencing deleted instance
              continue

          uuid = instance['uuid']

          # skip migration if instance isn't in a resize state:
          if not self._instance_in_resize_state(instance):
              LOG.warn(_("Instance not resizing, skipping migration."),
                       instance_uuid=uuid)
              continue

          # filter to most recently updated migration for each instance:
          m = filtered.get(uuid, None)
          if not m or migration['updated_at'] >= m['updated_at']:
              filtered[uuid] = migration

      for migration in filtered.values():
          instance = migration['instance']
          try:
              self._update_usage_from_migration(context, instance, None,
                                                resources, migration)
          except exception.FlavorNotFound:
              LOG.warn(_("Flavor could not be found, skipping "
                         "migration."), instance_uuid=uuid)
              continue

  Unfortunately, when the migration object's 'instance' attribute is
  accessed, a call across RPC and DB occurs:

  
https://github.com/openstack/nova/blob/stable/icehouse/nova/objects/migration.py#L77-L80

  @property
  def instance(self):
      return instance_obj.Instance.get_by_uuid(self._context,
                                               self.instance_uuid)

  For some very strange reason, the code in
  _update_usage_from_migrations() builds a "filtered" dictionary with the
  migration objects that need to be accounted for in the resource
  usages, and then once it builds that filtered dictionary, it goes
  through the values and calls _update_usage_from_migration(), passing
  the migration object's instance object.

  There's no reason to do this at all. The filtered variable can go away
  and the call to _update_usage_from_migration() can occur in the main
  for loop, using the same instance variable from the original line:

   instance = migration['instance']

  That way, for each migration, we don't need to do two lookup by UUID
  calls through the conductor to get the migration's instance object...
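
  A minimal sketch of that restructuring (not the merged patch): keep the
  instance that was already fetched next to its migration, so the second
  loop never touches migration['instance'] again.

      filtered = {}
      for migration in migrations:
          instance = migration['instance']  # one RPC/DB lookup, done once
          if not instance:
              continue
          uuid = instance['uuid']
          if not self._instance_in_resize_state(instance):
              continue
          m = filtered.get(uuid)
          if not m or migration['updated_at'] >= m[0]['updated_at']:
              filtered[uuid] = (migration, instance)

      for migration, instance in filtered.values():
          self._update_usage_from_migration(context, instance, None,
                                            resources, migration)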

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1385489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494667] Re: Switch to load based schedulers

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494667

Title:
  Switch to load based schedulers

Status in neutron:
  Fix Released

Bug description:
  A load-based scheduler for dhcp was introduced in Kilo, whereas one for
  routers has been around for longer.

  This change proposes to switch from chance-based scheduling to load-based
  scheduling as the default. It's very likely that clouds deployed at scale
  already use these schedulers, so it may make sense to flip the defaults so
  that upstream CI exercises that code to avoid surprises down the road.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494336] Re: Neutron traceback when an external network without IPv6 subnet is attached to an HA Router

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494336

Title:
  Neutron traceback when an external network without IPv6 subnet is
  attached to an HA Router

Status in neutron:
  Fix Released

Bug description:
  For an HA router which does not have any subnets in the external network,
  Neutron sets the IPv6 proc entry [1] on the gateway interface to receive
  Router Advertisements from the external IPv6 router and to configure a
  default route pointing to the LLA of the external IPv6 router.

  Normally, for an HA router in the backup state, Neutron removes the Link
  Local Address (LLA) from the gateway interface.

  In kernel version 3.10, when the last IPv6 address is removed from the
  interface, IPv6 is shut down on the interface and the proc entries
  corresponding to it are deleted (i.e., /proc/sys/net/ipv6/conf/<iface>).
  This issue is resolved in later kernels [2], but it exists on platforms
  with kernel version 3.10. When the IPv6 proc entries are missing and
  Neutron tries to configure the proc entry, we see the following traceback
  [3] in Neutron.

  [1] /proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra
  [2] 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=876fd05ddbae03166e7037fca957b55bb3be6594
  [3] Trace:
  Command: ['ip', 'netns', 'exec', 
'qrouter-e66b99aa-e840-4a13-9311-6242710a5452', 'sysctl', '-w', 
'net.ipv6.conf.qg-1fc4061d-3c.accept_ra=2']
  Exit code: 255
  Stdin:
  Stdout:
  Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/qg-1fc4061d-3c/accept_ra: No such file or directory
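
  A defensive sketch of one possible agent-side workaround (namespace and
  interface names taken from the trace above): only write the sysctl if the
  per-interface proc directory still exists inside the namespace.

      ip netns exec qrouter-e66b99aa-e840-4a13-9311-6242710a5452 sh -c '
          [ -d /proc/sys/net/ipv6/conf/qg-1fc4061d-3c ] &&
              sysctl -w net.ipv6.conf.qg-1fc4061d-3c.accept_ra=2'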

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494021] Re: tests.unit.quota.test_resource can randomly fail

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494021

Title:
  tests.unit.quota.test_resource can randomly fail

Status in neutron:
  Fix Released

Bug description:
  The TestTrackedResource class is designed to inject random failures
  into the gate.  It generates random numbers within the range of
  0..1, and will fail if it generates duplicate random numbers
  during its run.

  class TestTrackedResource(testlib_api.SqlTestCaseLight):
      def _add_data(self, tenant_id=None):
          session = db_api.get_session()
          with session.begin():
              tenant_id = tenant_id or self.tenant_id
              session.add(test_quota.MehModel(
                  meh='meh_%d' % random.randint(0, 1),
                  tenant_id=tenant_id))
              session.add(test_quota.MehModel(
                  meh='meh_%d' % random.randint(0, 1),
                  tenant_id=tenant_id))


  Because the test repeatedly calls _add_data(), if the calls to
  randint() ever generate the same number during a test, it will fail.
  Aggregated over hundreds, or sometimes thousands, of test runs per
  day, I would estimate that this could cause several unnecessary
  check/gate failures in a busy day.

  I propose changing random.randint() to uuid.uuid4(), which gives us a
  much larger random number space and a much smaller probability of
  collision.
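
  A sketch of that proposal (switching the format specifier from %d to %s is
  part of the change, since uuid4() is not an integer):

      import uuid

      session.add(test_quota.MehModel(
          meh='meh_%s' % uuid.uuid4(),
          tenant_id=tenant_id))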

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494114] Re: Bad router request: Router already has a port on subnet

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494114

Title:
  Bad router request: Router already has a port on subnet

Status in neutron:
  Fix Released

Bug description:
  A number of Tempest tests fail in the gate.

  An example:

  http://logs.openstack.org/02/221502/5/gate/gate-tempest-dsvm-neutron-dvr/621ea48/logs/testr_results.html.gz

  The logstash query:

  message:"Bad router request: Router already has a port on subnet" AND
  build_status:"FAILURE" AND tags:"console"

  The logstash thingy:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQmFkIHJvdXRlciByZXF1ZXN0OiBSb3V0ZXIgYWxyZWFkeSBoYXMgYSBwb3J0IG9uIHN1Ym5ldFwiIEFORCBidWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDE4NTc2Nzc4MzEsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  This is affecting DVR jobs only by the looks of it.

  9 hits in 7 days by the time it was reported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492505] Re: py34 intermittent failure

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492505

Title:
  py34 intermittent failure

Status in neutron:
  Fix Released
Status in oslo.messaging:
  In Progress

Bug description:
  An instance here:

  http://logs.openstack.org/56/220656/1/gate/gate-neutron-python34/e2c4460/testr_results.html.gz

  message:"Bad checksum - calculated"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQmFkIGNoZWNrc3VtIC0gY2FsY3VsYXRlZFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDQxNDI0Nzk0Nzc1fQ==

  This has been observed in a couple of py34 jobs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493886] Re: Neutron notifier for Nova uses old hacks for nova extensions

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493886

Title:
  Neutron notifier for Nova uses old hacks for nova extensions

Status in neutron:
  Fix Released

Bug description:
  For a long time, novaclient did not provide a public interface for
  discovering extensions. A hack was required because extensions could not
  be imported directly (there was a novaclient module refactoring). This
  hack can produce other issues that may prevent communication with Nova,
  since it instantiates a versioned novaclient class directly.

  Since novaclient now provides a public, version-based way to discover
  extensions, we can remove this hack.
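
  A sketch of the public path (the exact novaclient version that first
  exposes the helper, and the pre-built keystone session, are assumptions):

      from novaclient import client as nova_client

      extensions = nova_client.discover_extensions('2')
      nova = nova_client.Client('2', session=session,
                                extensions=extensions)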

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492909] Re: QoS: Sr-IOV Agent doesn't clear VF rate when deleting VM

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492909

Title:
  QoS: Sr-IOV Agent doesn't clear VF rate when deleting VM

Status in neutron:
  Fix Released

Bug description:
  When launching a VM with a port that has a QoS policy attached and deleting
  the VM after a while, the SR-IOV agent does not clear the VF max rate.
  The expected behavior is to clear the VF max rate upon VM deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495040] Re: test_filtering_shared_networks fails intermittently

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495040

Title:
  test_filtering_shared_networks fails intermittently

Status in neutron:
  Fix Released

Bug description:
  An example:

  http://logs.openstack.org/25/212425/4/check/gate-neutron-dsvm-api/7a222e3/testr_results.html.gz

  Query:

  message:"in test_filtering_shared_networks" AND build_status:"FAILURE"

  Logstash URL:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9maWx0ZXJpbmdfc2hhcmVkX25ldHdvcmtzXCIgQU5EIGJ1aWxkX3N0YXR1czpcIkZBSUxVUkVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQ0MjA1MDM1MTQwNCwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  15 failures over the last 7 days. It's unclear when it started, most likely a 
latent race popping up after something else reshuffled the
  typical execution run.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497823] Re: LBaaS V2 Octavia driver should get its own context/session

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497823

Title:
  LBaaS V2 Octavia driver should get its own context/session

Status in neutron:
  Fix Released

Bug description:
  The LBaaS V2 Octavia driver currently uses the context that is passed
  from the plugin.  Since the driver spins up a thread to poll Octavia,
  each thread should create its own context.
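
  A minimal sketch of the intended pattern (the method names are
  hypothetical; get_admin_context is the standard neutron helper):

      from neutron import context as ncontext

      def _poll_octavia(self):
          # Build a fresh context inside the polling thread instead of
          # reusing the request context handed over by the plugin.
          admin_context = ncontext.get_admin_context()
          self._update_statuses(admin_context)  # hypothetical worker call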

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499054] Re: devstack VMs are not booting

2015-09-24 Thread Thierry Carrez
This was actually merged after the branch cut and might need a backport
now

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499054

Title:
  devstack VMs are not booting

Status in Ironic:
  Confirmed
Status in Ironic Inspector:
  Confirmed
Status in neutron:
  Fix Committed

Bug description:
  In devstack, VMs are failing to boot the deploy ramdisk consistently.
  It appears ipxe is failing to configure the NIC, which is usually
  caused by a DHCP timeout, but can also be caused by a bug in the PXE
  ROM that chainloads to ipxe. See also http://ipxe.org/err/040ee1

  Console output:

   SeaBIOS (version 1.7.4-20140219_122710-roseapple)
   Machine UUID 37679b90-9a59-4a85-8665-df8267e09a3b

  iPXE (http://ipxe.org) 00:04.0 CA00 PCI2.10 PnP PMM+3FFC2360+3FF22360 CA00

 

  
  Booting from ROM...
  iPXE (PCI 00:04.0) starting execution...ok
  iPXE initialising devices...ok


  iPXE 1.0.0+git-2013.c3d1e78-2ubuntu1.1 -- Open Source Network Boot Firmware -- http://ipxe.org
  Features: HTTP HTTPS iSCSI DNS TFTP AoE bzImage ELF MBOOT PXE PXEXT Menu

  net0: 52:54:00:7c:af:9e using 82540em on PCI00:04.0 (open)
[Link:up, TX:0 TXE:0 RX:0 RXE:0]
  Configuring (net0 52:54:00:7c:af:9e).. Error 0x040ee119 (http://ipxe.org/040ee119)
  No more network devices

  No bootable device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1499054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494574] Re: Logging missing value types

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494574

Title:
  Logging missing value types

Status in Cinder:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  In Progress
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Released
Status in networking-midonet:
  In Progress
Status in neutron:
  Fix Released
Status in os-brick:
  Fix Released
Status in oslo.versionedobjects:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  There are a few locations in the code where the log string is missing
  the formatting type, causing log messages to fail.

  
  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vnx_cli.py
  LOG.debug('EMC: Command Exception: %(rc) %(result)s. '

  FILE: ../OpenStack/cinder/cinder/consistencygroup/api.py
  LOG.error(_LE("CG snapshot %(cgsnap) not found when "
  LOG.error(_LE("Source CG %(source_cg) not found when "

  FILE: ../OpenStack/cinder/cinder/volume/drivers/emc/emc_vmax_masking.py
  "Storage group %(sgGroupName) "

  FILE: ../OpenStack/cinder/cinder/volume/manager.py
  '%(image_id) will not create cache entry.'),
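
  As a minimal illustration (not taken from the patch): a mapping key without
  a conversion type raises ValueError as soon as the message is formatted,
  and adding the trailing 's' fixes it.

      params = {'cgsnap': 'snap-01'}
      "CG snapshot %(cgsnap) not found when deleting" % params
      # ValueError: unsupported format character 'n' (0x6e)
      "CG snapshot %(cgsnap)s not found when deleting" % params
      # -> 'CG snapshot snap-01 not found when deleting'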

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1494574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499224] [NEW] lb not deployable but still added into instance_mapping when lbaas agent restarts

2015-09-24 Thread yaowei
Public bug reported:

The lb is not deployable but is still added into instance_mapping when the
lbaas agent restarts and reloads the loadbalancer.

** Affects: neutron
 Importance: Undecided
 Assignee: yaowei (yaowei)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yaowei (yaowei)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499224

Title:
  lb not deployable but still added into instance_mapping when lbaas agent
  restarts

Status in neutron:
  In Progress

Bug description:
  The lb is not deployable but is still added into instance_mapping when the
  lbaas agent restarts and reloads the loadbalancer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1156456] Re: libvirt CPU info doesn't count NUMA cells

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1156456

Title:
  libvirt CPU info doesn't count NUMA cells

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The libvirt driver, when counting sockets/cores/etc., does not take
  NUMA architectures into account.  This can cause applications using
  data from the Nova API to under-report the total number of
  sockets/cores/etc. on compute nodes with more than one NUMA cell.

  Example, on a production system with 2 NUMA cells:

  $ grep ^proc /proc/cpuinfo | wc -l
32

  $ python simple_test_script_to_ask_nova_for_cpu_topology.py
  {u'cores': u'8', u'threads': u'2', u'sockets': u'1'}

  So, if one were relying solely on Nova to obtain information about
  this system's capabilities, the results would be inaccurate: the reported
  topology describes a single NUMA cell, not the whole host.
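
  A quick check of the arithmetic (assuming the reported topology describes
  a single NUMA cell):

      cells = 2
      topology = {'sockets': 1, 'cores': 8, 'threads': 2}
      logical_cpus = (cells * topology['sockets'] *
                      topology['cores'] * topology['threads'])
      print(logical_cpus)  # 32, matching /proc/cpuinfo; the topology alone
                           # would suggest only 16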

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1156456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390336] Re: Libvirt: Raise wrong exception message when binding vif failed

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390336

Title:
  Libvirt: Raise wrong exception message when binding vif failed

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova gets a NovaException with a wrong message after a failed attempt to
  build an instance on a compute node.

  
  2014-11-07 14:40:54.446 ERROR nova.compute.manager [-] [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] Instance failed to spawn
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] Traceback (most recent call last):
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/compute/manager.py", line 2244, in _build_resources
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     yield resources
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/compute/manager.py", line 2114, in _build_and_run_instance
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     block_device_info=block_device_info)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2597, in spawn
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     write_to_disk=True)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4157, in _get_guest_xml
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     context)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4018, in _get_guest_config
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     flavor, virt_type)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]   File "/opt/stack/nova/nova/virt/libvirt/vif.py", line 352, in get_config
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6]     _("Unexpected vif_type=%s") % vif_type)
  2014-11-07 14:40:54.446 TRACE nova.compute.manager [instance: 9b56e64b-71b5-4598-b4df-45bb85b43ed6] NovaException: Unexpected vif_type=binding_failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390033] Re: Inconsistent info of availability zone (az) if the default az is replaced

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390033

Title:
  Inconsistent info of availability zone (az) if the default az is
  replaced

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Affected version: stable/juno

  Description:
  On a single-node deployment of OpenStack (using DevStack), if the default
  availability zone of Nova is replaced by another one, then the API
  api.nova.server_list returns a list of VMs in which the availability zone
  info is inconsistent. This results in a toggling effect on the Horizon
  dashboard when displaying the list of instances (under the tab
  "Project/Instances"). The toggling is caused by the inconsistent
  availability zone info, whose value is either the default zone or the
  newly-created one.

  This bug can easily be reproduced using the Horizon dashboard as follows:
   - Go to the tab "Admin/Host Aggregates" to create a new host aggregate
     assigned to the current host. Please note that if this newly-created
     host aggregate is not assigned to any host, the availability zone won't
     be defined.
   - After that, also under this view, we find (in the Availability zones)
     that the newly-created availability zone hides the default one.
   - Go to the tab "System information": the nova-compute service is running
     in the newly-created availability zone while all the cinder services
     are running in the default availability zone.
   - Go to the tab "Project/Image" and select an image to create a new
     bootable volume.
   - This newly-created volume is then used to launch a new VM.
   - After launching the new VM, the dashboard redirects to the "Instances"
     view. Here, we can observe the toggling effect on the availability
     zone info.

  Analysis:
  The root cause is the API api.nova.server_list, as described above.
  This can be seen by adding some more debug info as follows:

  2014-11-06 10:30:05,103 - my_logger - DEBUG -
  openstack_dashboard.dashboards.project.instances.views - Instance
  amount: 1, Instances: "[http://192.168.56.103:8774/v2/a0c581f7a88441ed84e9878fa9fc8e50/servers
  /d5c1575d-8ac8-4921-802c-e9b121acd82e', u'rel': u'self'}, {u'href':
  u'http://192.168.56.103:8774/a0c581f7a88441ed84e9878fa9fc8e50/servers
  /d5c1575d-8ac8-4921-802c-e9b121acd82e', u'rel': u'bookmark'}],
  'created': u'2014-11-06T10:30:03Z', 'key_name': None, 'image': u'',
  'OS-DCF:diskConfig': u'AUTO', 'image_name': '-', 'OS-EXT-
  STS:power_state': 0, 'OS-EXT-SRV-ATTR:host': None, 'OS-EXT-SRV-
  ATTR:instance_name': u'instance-0004', 'tenant_id':
  u'a0c581f7a88441ed84e9878fa9fc8e50', 'user_id':
  u'2f8c907029eb43e5ab98a55ac28c885e', 'flavor': {u'id': u'1', u'links':
  [{u'href':
  u'http://192.168.56.103:8774/a0c581f7a88441ed84e9878fa9fc8e50/flavors/1',
  u'rel': u'bookmark'}]}, 'OS-EXT-AZ:availability_zone': u'nova', 'id':
  u'd5c1575d-8ac8-4921-802c-e9b121acd82e', 'metadata': {}}>]"

  
  2014-11-06 10:31:02,037 - my_logger - DEBUG - 
openstack_dashboard.dashboards.project.instances.views - Instance amount: 1, 
Instances: "[http://192.168.56.103:8774/v2/a0c581f7a88441ed84e9878fa9fc8e50/servers/d5c1575d-8ac8-4921-802c-e9b121acd82e',
 u'rel': u'self'}, {u'href': 
u'http://192.168.56.103:8774/a0c581f7a88441ed84e9878fa9fc8e50/servers/d5c1575d-8ac8-4921-802c-e9b121acd82e',
 u'rel': u'bookmark'}], 'created': u'2014-11-06T10:30:03Z', 'key_name': None, 
'image': u'', 'OS-DCF:diskConfig': u'AUTO', 'image_name': '-', 
'OS-EXT-STS:power_state': 1, 'OS-EXT-SRV-ATTR:host': u'ubuntu', 
'OS-EXT-SRV-ATTR:instance_name': u'instance-0004', 'tenant_id': 
u'a0c581f7a88441ed84e9878fa9fc8e50', 'user_id': u'2f8c907029eb43e5ab98a55ac28
 c885e', 'flavor': {u'id': u'1', u'links': [{u'href': 
u'http://192.168.56.103:8774/a0c581f7a88441ed84e9878fa9fc8e50/flavors/1', 
u'rel': u'bookmark'}]}, 'OS-EXT-AZ:availability_zone': u'test_az', 'id': 
u'd5c1575d-8ac8-4921-802c-e9b121acd82e', 'metadata': {}}>]"

  
  2014-11-06 10:31:32,437 - my_logger - DEBUG - 
openstack_dashboard.dashboards.project.instances.views - Instance amount: 1, 
Instances: "[http://192.168.56.103:8774/v2/a0c581f7a88441ed84e9878fa9fc8e50/servers/d5c1575d-8ac8-4921-802c-e9b121acd82e',
 u'rel': u'self'}, {u'href': 
u'http://192.168.56.103:8774/a0c581f7a88441ed84e9878fa9fc8e50/servers/d5c1575d-8ac8-4921-802c-e9b121acd82e',
 u'rel': u'bookmark'}], 'created': u'2014-11-06T10:30:03Z', 'key_name': None, 
'image': u'', 'OS-DCF:diskConfig': u'AUTO', 'image_name': '-', 
'OS-EXT-STS:power_state': 1, 'OS-EXT-SRV-ATTR:host': u'ubuntu', 
'OS-EXT-SRV-ATTR:instance_name': u'instance-0004', 'tenant_id': 
u'a0c581f7a88441ed84e9878fa9fc8e50', 'user_id': u'2f8c907029eb43e5ab98a55ac28
 c885e', 'flavor': {u'id': u'1', u'links': [{u'href': 

[Yahoo-eng-team] [Bug 1429581] Re: [VMware] Failed to attach volume due to wrong host iqn

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429581

Title:
  [VMware] Failed to attach volume due to wrong host iqn

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Attaching an iSCSI volume for VMware uses the following steps:
  1. Nova gets the volume connector information (get_volume_connector), such
     as the ESX iqn;
  2. Nova calls cinder-volume's initialize_connection, which registers the
     iqn of the ESX host with the iSCSI server and finally returns the
     connection information, such as the iSCSI target;
  3. Nova attaches the volume to the VM with the connection_info.

  I tried to attach an iSCSI volume to an instance, but it failed in the 3rd
  step (unable to attach the volume to the VM). After analyzing the logs, I
  found that the reason is that the iqn returned in the 1st step was wrong.

  My environment:
  My vCenter cluster has two hosts, ESX-1 and ESX-2, and an instance (VM-2)
  on host ESX-2. When I try to attach an iSCSI volume to VM-2,
  get_volume_connector returns the iqn of ESX-1. It should return the iqn of
  the host running the VM (that is, the ESX-2 iqn) rather than the first
  host's iqn, but it always returns the first ESX iqn, as in the following
  code.

  vmwareapi/volumeutils.py:

  def get_volume_connector(self, instance):
      """Return volume connector information."""
      try:
          vm_ref = vm_util.get_vm_ref(self._session, instance)
      except exception.InstanceNotFound:
          vm_ref = None
      iqn = self._iscsi_get_host_iqn()
      connector = {'ip': CONF.vmware.host_ip,
                   'initiator': iqn,
                   'host': CONF.vmware.host_ip}
      if vm_ref:
          connector['instance'] = vm_ref.value
      return connector

  def _iscsi_get_host_iqn(self):
      """Return the host iSCSI IQN."""
      host_mor = vm_util.get_host_ref(self._session, self._cluster)
      hbas_ret = self._session._call_method(
          vim_util, "get_dynamic_property",
          host_mor, "HostSystem",
          "config.storageDevice.hostBusAdapter")

      # Meaning there are no host bus adapters on the host
      if hbas_ret is None:
          return
      host_hbas = hbas_ret.HostHostBusAdapter
      if not host_hbas:
          return
      for hba in host_hbas:
          if hba.__class__.__name__ == 'HostInternetScsiHba':
              return hba.iScsiName
  vmwareapi/vm_util.py:

  def get_host_ref(session, cluster=None):
      """Get reference to a host within the cluster specified."""
      if cluster is None:
          results = session._call_method(vim_util, "get_objects",
                                         "HostSystem")
          _cancel_retrieve_if_necessary(session, results)
          host_mor = results.objects[0].obj
      else:
          host_ret = session._call_method(vim_util, "get_dynamic_property",
                                          cluster, "ClusterComputeResource",
                                          "host")
          if not host_ret or not host_ret.ManagedObjectReference:
              msg = _('No host available on cluster')
              raise exception.NoValidHost(reason=msg)
          host_mor = host_ret.ManagedObjectReference[0]

      return host_mor
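
  A sketch of the direction the report suggests (not the merged patch):
  resolve the host that actually runs the VM through its "runtime.host"
  property, then read that host's HBA IQN, instead of taking the first host
  in the cluster.

      def _iscsi_get_host_iqn(self, instance):
          """Return the iSCSI IQN of the host running the given instance."""
          vm_ref = vm_util.get_vm_ref(self._session, instance)
          host_mor = self._session._call_method(
              vim_util, "get_dynamic_property", vm_ref,
              "VirtualMachine", "runtime.host")
          hbas_ret = self._session._call_method(
              vim_util, "get_dynamic_property",
              host_mor, "HostSystem",
              "config.storageDevice.hostBusAdapter")
          if hbas_ret is None:
              return
          for hba in hbas_ret.HostHostBusAdapter:
              if hba.__class__.__name__ == 'HostInternetScsiHba':
                  return hba.iScsiName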

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446032] Re: Missing delete policy in policy sample file

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446032

Title:
  Missing delete policy in policy sample file

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Policy checks for delete-related actions are supported [1], but they are
  missing from the sample file; it would be nice to add them.

  [1] https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n1816
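
  A sketch of the kind of entries the sample file lacks (the rule values are
  assumptions, mirroring the admin_or_owner default used for similar
  actions):

      "compute:delete": "rule:admin_or_owner",
      "compute:soft_delete": "rule:admin_or_owner",
      "compute:force_delete": "rule:admin_or_owner",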

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459468] Re: When doing resize action, CONF.allow_resize_to_same_host should check only once

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459468

Title:
  When doing resize action, CONF.allow_resize_to_same_host should check
  only once

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the current implementation, when doing an instance resize action,
  CONF.allow_resize_to_same_host is first checked in compute/api, which is
  on the controller node. If CONF.allow_resize_to_same_host = True, nothing
  is added to filter_properties['ignore_hosts']; if it is set to False, the
  source host is added to filter_properties['ignore_hosts'] and will be
  ignored when performing select_destinations.

  The value of CONF.allow_resize_to_same_host is checked again in
  compute/manager.py, on the destination host which has already been
  selected by the scheduler.

  This leads to a problem: if CONF.allow_resize_to_same_host is set to True
  on the controller node but set to False (or left unset) on the compute
  node, the scheduler may decide that the original compute node is the best
  one for the resize, but when the compute node performs the resize action
  it will throw an exception.

  The value of CONF.allow_resize_to_same_host should only be checked once,
  on the controller node (compute/api.py), letting the scheduler judge which
  host is best for the resize; the compute node should only perform the
  action once it has been selected.
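
  The inconsistent deployment described above, sketched as configuration:

      # controller node: nova.conf
      [DEFAULT]
      allow_resize_to_same_host = True

      # compute node: nova.conf -- option left unset, so it defaults to
      # False and a resize scheduled onto the same host raises on the
      # compute manager's second check.
      [DEFAULT]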

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443640] Re: Cells: race condition when saving an instance

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443640

Title:
  Cells: race condition when saving an instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When an instance is saved in a parent cell, the db update happens there
  and then the update is sent to a child cell. When the child cell updates
  its database, it sends that update back up to the parent to be saved there
  again. The propagation of the change back up to the parent can overwrite
  subsequent changes there, causing data to be lost. Updates from
  parent->child or child->parent should go in one direction only and not
  propagate back to the originating cell.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446638] Re: api has issues when Sorting and pagination params used as filters

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446638

Title:
  api has issues when Sorting and pagination params used as filters

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  While retrieving servers, the sort and pagination query string
  parameters are treated as search options.

  These parameters are passed down to the DB layer and eventually
  filtered out when an AttributeError is caught because they do not
  exist on the Instance model.

  This is taken from:
  https://review.openstack.org/#/c/147298/4

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428136] Re: Nova v3 API still listed in the paste pipeline

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428136

Title:
  Nova v3 API still listed in the paste pipeline

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova v3 is no more, but we still reference it in our shipped paste.ini.
  This should be removed, along with any other supporting code that is only
  used by it, such as NoAuthMiddlewareV3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460044] Re: Data loss can occur if cinder attach fails

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460044

Title:
  Data loss can occur if cinder attach fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Driver detach is not called while handling a failure in Cinder's attach
  API. This can result in volume data loss for the VMware driver, since
  during driver attach the instance VM is reconfigured with the volume's
  vmdk. A subsequent delete of the instance will then delete the volume's
  vmdk, because the instance is not reconfigured to remove the volume's
  vmdk even after the attach failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492111] Re: Support to set an optional csum attribute on ovs tunnels

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492111

Title:
  Support to set an optional csum attribute on ovs tunnels

Status in neutron:
  Fix Released

Bug description:
  During some of the performance tests, it was observed that on a few of
  the 10G nics (intel br kx4 dual-port, intel 82599es), enabling the csum
  option boosted vxlan/geneve performance by triggering GRO on the receiver.

  Kindly refer to this link for more details on the role of csum in vxlan
  performance:
  http://openvswitch.org/pipermail/dev/2015-August/059335.html

  This defect is created to add an additional option (tunnel_csum) to the
  ovs agent. This option will be passed to the ovs agent, which in turn will
  set "options:csum" during the creation of GRE, VXLAN, and GENEVE tunnels.

  The provision for this option is available in OVS 2.4 for vxlan/geneve.

  The changes for this option will be along similar lines to the
  "dont_fragment" flag.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491922] Re: ovs agent doesn't configure new ovs-port for an instance

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491922

Title:
  ovs agent doesn't configure new ovs-port for an instance

Status in neutron:
  Fix Released

Bug description:
  In case of massive resource deletion (networks, ports) it may take the
  agent quite a lot of time to process. Port delete processing happens
  during the ovs agent's periodic task, and it takes the agent ~0.25s to
  process one port deletion. From the attached log we can see that on a
  certain iteration the agent had to process the deletion of 1625 ports:
   1625 * 0.25 = 406 seconds.
  Indeed:

   2015-08-29 09:13:46.004 21292 DEBUG
  neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-
  55e0a577-e03b-4476-9bdd-f5480cfef966 ] Agent rpc_loop -
  iteration:25863 - starting polling. Elapsed:0.047 rpc_loop
  /usr/lib/python2.7/dist-
  packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1733

   ... (ports deletion handling)

   2015-08-29 09:20:28.569 21292 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-55e0a577-e03b-4476-9bdd-f5480cfef966 ] Agent rpc_loop - iteration:25863 - 
port information retrieved. Elapsed:402.612 rpc_loop 
/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1748
   ... (from here agent starts processing new ports)

  402 seconds is not acceptable: Nova waits for 300 seconds by default
  and then fails with a VIF plugging timeout.
  From the log we can also see that a new ovs port appeared while the
  agent was busy handling the deletions:

  2015-08-29 09:13:52.432 21292 DEBUG neutron.agent.linux.ovsdb_monitor [-] 
Output received from ovsdb monitor: 
{"data":[["8fd481a4-1267-445b-bedc-f1f6b3a47898","old",null,["set",[]]],["","new","qvoced59c11-1b",76]],"headings":["row","action","name","ofport"]}
   _read_stdout 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/ovsdb_monitor.py:44

  Port deletion handling needs to be optimised on the agent side.
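
  As a rough sketch of the kind of optimisation meant here, the RPC
  callback could record deletions cheaply and the agent could reclaim
  them in one batch per rpc_loop iteration (names are illustrative, not
  the actual ovs agent code):

    deleted_ports = set()

    def port_delete(context, port_id):
        # RPC callback: O(1), just remember the port
        deleted_ports.add(port_id)

    def process_deleted_ports(agent):
        # called once per rpc_loop iteration, drains the whole batch
        while deleted_ports:
            agent.reclaim_port(deleted_ports.pop())  # hypothetical helper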

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492069] Re: Enable the use of 'external' service providers

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492069

Title:
  Enable the use of 'external' service providers

Status in neutron:
  Fix Released

Bug description:
  Changes [1] allowed a service provider that does not belong to any of
  neutron *-aas projects (e.g. lbaas, fwaas, vpnaas) to be
  enabled/loaded through configuration.

  That was good.

  Now, this feature enhancement takes this to the next level of support
  for external drivers, and it allows service providers that may not be
  shipped with neutron-* and or networking_* projects to be loaded
  within the neutron service framework as well.

  [1] https://review.openstack.org/#/q/topic:bug/1473110,n,z

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490581] Re: the items will never be deleted from metering_info

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490581

Title:
  the items will never be deleted from metering_info

Status in neutron:
  Fix Released

Bug description:
  The function _purge_metering_info of the MeteringAgent class has a bug:
  items in the metering_info dictionary will never be deleted:
  if info['last_update'] > ts + report_interval:
  del self.metering_info[label_id]
  In this situation last_update will always be less than the current
  timestamp, so the condition never holds.
  Also, this function is not covered by the unit tests.
  Finally, _purge_metering_info uses a metering_info dict but it should
  use the metering_infos dict.
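
  A minimal sketch of the corrected purge logic, assuming the agent
  keeps per-label timestamps in self.metering_infos (attribute and
  option names may differ from the actual fix):

    from oslo_utils import timeutils

    def _purge_metering_info(self):
        ts = timeutils.utcnow_ts()
        # iterate over a copy so entries can be deleted while looping
        for label_id, info in list(self.metering_infos.items()):
            if info['last_update'] < ts - self.conf.report_interval:
                del self.metering_infos[label_id]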

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491668] Re: deprecate external_network_bridge option in L3 agent

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491668

Title:
  deprecate external_network_bridge option in L3 agent

Status in neutron:
  Fix Released

Bug description:
  The external_network_bridge option in the L3 agent allows the L3 agent
  to plug directly into a bridge and skip all of the management by the
  L2 agent. This creates two ways to accomplish the same wiring, but it
  results in differences that cause confusion and issues when debugging.

  When the external_network_bridge option is used, all of the provider
  properties (e.g. VLAN tags, VXLAN VNIs) of the external network are
  ignored. So we end up with scenarios where users will create an
  external network with a VLAN tag, attach a router to it, and then
  complain when it's not sending the correct tagged traffic. It also
  means that features added to the L2 agent will not apply to router
  ports (e.g. enhanced debugging, QoS, port mirroring, etc).

  The appropriate way to do this is to define a physnet for the external
  network (e.g. 'external') and then create a bridge_mapping entry for
  it on the L2 agent that maps it to the external bridge (e.g.
  'external:br-ex'). Then when the external Neutron network is created,
  it should be created with the 'flat' provider type and the 'external'
  provider physnet.

  We should deprecate external_network_bridge in L and remove it in M to
  migrate people to the more consistent approach with bridge_mappings.
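
  A hedged example of the recommended setup (physnet and bridge names
  are illustrative):

    # L2 agent configuration on the network node
    [ovs]
    bridge_mappings = external:br-ex

    # create the external network bound to that physnet
    neutron net-create public --router:external \
        --provider:network_type flat --provider:physical_network external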

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493396] Re: Enable rootwrap daemon logging during functional tests

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493396

Title:
  Enable rootwrap daemon logging during functional tests

Status in neutron:
  Fix Released

Bug description:
  When triaging bugs found during functional tests (Either legit bugs
  with Neutron, or issues related to the testing infrastructure), it is
  useful to view the Oslo rootwrap daemon logs. It has an option to log
  to syslog, but it is turned off by default. It should be turned on
  during functional tests to provide additional useful information.
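
  For reference, a hedged sketch of enabling this in the rootwrap.conf
  used by the functional tests (these are standard oslo.rootwrap
  options):

    [DEFAULT]
    use_syslog = True
    syslog_log_facility = syslog
    syslog_log_level = DEBUG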

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492283] Re: Add ability to use custom config in DHCP-agent

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492283

Title:
  Add ability to use custom config in DHCP-agent

Status in neutron:
  Fix Released

Bug description:
  Currently, dhcp-agent is hardcoded to use global oslo.config's CONF to
  get and register options. Adding an ability to pass the config as an
  argument will make dhcp-agent more flexible and reduce code coupling.
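
  A minimal sketch of the decoupling described above, assuming the agent
  accepts an optional conf object and falls back to the global one
  (illustrative, not the exact neutron change):

    from oslo_config import cfg

    class DhcpAgent(object):
        def __init__(self, host=None, conf=None):
            # use the injected config when given, else the global CONF
            self.conf = conf or cfg.CONF
            self.host = host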

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490990] Re: acceptance: neutron fails to start server service

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490990

Title:
  acceptance: neutron fails to start server service

Status in neutron:
  Fix Released
Status in oslo.config:
  Invalid
Status in puppet-neutron:
  Fix Committed

Bug description:
  This is a new error that started happening very recently, using RDO
  liberty packaging:

  With current state of beaker manifests, we have this error:
  No providers specified for 'LOADBALANCER' service, exiting

  Source: http://logs.openstack.org/50/216950/5/check/gate-puppet-
  neutron-puppet-beaker-rspec-dsvm-
  centos7/9e7e510/logs/neutron/server.txt.gz#_2015-09-01_12_40_22_734

  That means neutron-server can't start correctly.

  This is probably a misconfiguration in our manifests or a packaging
  issue in Neutron, because we don't have the issue in Trusty jobs.

  RDO packaging version: 7.0.0.0b3-dev606

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499204] [NEW] wrong check for physical function in pci utils

2015-09-24 Thread Moshe Levi
Public bug reported:

in pci utils the is_physical_function function checks based on existing
virtfn* symbolic links. The check is incorrect because
if the PF doesn't enable SR-IOV, meaning sriov_numvfs is set to zero, there
are no virtfn* links and nova-compute recognizes it as a VF.

see: 
root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent
class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor
commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem vpd
configdriver   infiniband_madlocal_cpus net 
   remove resource0_wc  subsystem_device
consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor
root@r-ufm160:/opt/stack/logs# cat 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs 
0


root@r-ufm160:/opt/stack/logs# echo 4 > 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs
root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent   virtfn3
class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor   vpd
commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem virtfn0
configdriver   infiniband_madlocal_cpus net 
   remove resource0_wc  subsystem_device  virtfn1
consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor  virtfn2
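
A minimal sketch of a check based on sriov_totalvfs instead of the
virtfn* links, following the sysfs layout shown above (illustrative, not
the exact nova fix):

    import os

    def is_physical_function(domain, bus, slot, function):
        # a PF exposes sriov_totalvfs even when sriov_numvfs is 0
        dev_path = "/sys/bus/pci/devices/%s:%s:%s.%s" % (
            domain, bus, slot, function)
        try:
            with open(os.path.join(dev_path, "sriov_totalvfs")) as f:
                return int(f.read().strip()) > 0
        except (IOError, ValueError):
            return False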

** Affects: nova
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress


** Tags: passthrough pci

** Tags added: pci-passthogth

** Tags removed: pci-passthogth
** Tags added: passthrough pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499204

Title:
  wrong check for physical function in pci utils

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  in pci utils the is_physical_function function checks based on
  existing virtfn* symbolic links. The check is incorrect because if the
  PF doesn't enable SR-IOV, meaning sriov_numvfs is set to zero, there
  are no virtfn* links and nova-compute recognizes it as a VF.

  see: 
  root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
  broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent
  class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor
  commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem vpd
  configdriver   infiniband_madlocal_cpus 
netremove resource0_wc  subsystem_device
  consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor
  root@r-ufm160:/opt/stack/logs# cat 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs 
  0

  
  root@r-ufm160:/opt/stack/logs# echo 4 > 
/sys/bus/pci/devices/\:03\:00.0/sriov_numvfs
  root@r-ufm160:/opt/stack/logs# ls /sys/bus/pci/devices/\:03\:00.0/
  broken_parity_status  d3cold_allowed   enableiommu_group
modalias   pools  reset sriov_numvfs  uevent   virtfn3
  class device   infinibandirq
msi_buspower  resource  sriov_totalvfsvendor   vpd
  commands_cachedma_mask_bitsinfiniband_cm local_cpulist  
msi_irqs   real_miss  resource0 subsystem virtfn0
  configdriver   infiniband_madlocal_cpus 
netremove resource0_wc  subsystem_device  virtfn1
  consistent_dma_mask_bits  driver_override  infiniband_verbs  mlx5_num_vfs   
numa_node  rescan sriov subsystem_vendor  virtfn2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499238] [NEW] logging_sample.conf has wrong formatter

2015-09-24 Thread Masaki Matsushita
Public bug reported:

etc/nova/logging_sample.conf has a wrong formatter.
nova.openstack.common.log.ContextFormatter no longer exists.
We should replace it with oslo_log.formatters.ContextFormatter.
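
A hedged sketch of the corresponding change in the sample config (the
section name follows the usual logging.conf layout):

    [formatter_context]
    class = oslo_log.formatters.ContextFormatter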

** Affects: nova
 Importance: Undecided
 Assignee: Masaki Matsushita (mmasaki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499238

Title:
  logging_sample.conf has wrong formatter

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  etc/nova/logging_sample.conf has a wrong formatter.
  nova.openstack.common.log.ContextFormatter no longer exists.
  We should replace it with oslo_log.formatters.ContextFormatter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226049] Re: instance_system_metadata rows not being deleted

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226049

Title:
  instance_system_metadata rows not being deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Deleting an instance no longer seems to delete the associated
  instance_system_metadata rows in the DB.

  To reproduce in Devstack:
  - Create an instance
  - Delete it
  - Look at the associated DB rows:

  mysql> select * from instances where 
uuid="9ad5d7c7-306f-44ff-84a2-06b9af7d5d36" \G ;
  *** 1. row ***
created_at: 2013-09-16 10:18:05
updated_at: 2013-09-16 10:21:44
deleted_at: 2013-09-16 10:21:44
id: 7
   internal_id: NULL
   user_id: eb50e969766f4cfcb392d307d1178a9e
project_id: 852d7be63c7c4540856b38be6226ff49
 image_ref: 28b52a16-4a62-497c-8c83-837cdcf6bf66
 kernel_id: 86f818dc-d836-4b1f-aae5-386bc594a746
ramdisk_id: 41e93f5e-06c5-49eb-9ed4-3b5e10f56d0a
  launch_index: 0
  key_name: NULL
  key_data: NULL
   power_state: 1
  vm_state: deleted
 memory_mb: 512
 vcpus: 1
  hostname: phil
  host: vm-reap
 user_data: NULL
reservation_id: r-8arbuy8m
  scheduled_at: 2013-09-16 10:18:06
   launched_at: 2013-09-16 10:18:10
 terminated_at: 2013-09-16 10:21:44
  display_name: phil
   display_description: phil
 availability_zone: NULL
locked: 0
   os_type: NULL
   launched_on: vm-reap
  instance_type_id: 2
   vm_mode: NULL
  uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
  architecture: NULL
  root_device_name: /dev/vda
  access_ip_v4: NULL
  access_ip_v6: NULL
  config_drive: 
task_state: NULL
  default_ephemeral_device: NULL
   default_swap_device: NULL
  progress: 0
  auto_disk_config: 0
shutdown_terminate: 0
 disable_terminate: 0
   root_gb: 1
  ephemeral_gb: 0
 cell_name: NULL
  node: vm-reap.novalocal
   deleted: 7
 locked_by: NULL
   cleaned: 0
  1 row in set (0.00 sec)

  ERROR: 
  No query specified

  mysql> select * from instance_metadata where 
instance_uuid="9ad5d7c7-306f-44ff-84a2-06b9af7d5d36" \G ;
  Empty set (0.00 sec)

  ERROR: 
  No query specified

  mysql> select * from instance_system_metadata where 
instance_uuid="9ad5d7c7-306f-44ff-84a2-06b9af7d5d36" \G ;
  *** 1. row ***
 created_at: 2013-09-16 10:18:05
 updated_at: NULL
 deleted_at: NULL
 id: 83
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: image_kernel_id
  value: 86f818dc-d836-4b1f-aae5-386bc594a746
deleted: 0
  *** 2. row ***
 created_at: 2013-09-16 10:18:06
 updated_at: NULL
 deleted_at: NULL
 id: 84
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: instance_type_memory_mb
  value: 512
deleted: 0
  *** 3. row ***
 created_at: 2013-09-16 10:18:06
 updated_at: NULL
 deleted_at: NULL
 id: 85
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: instance_type_swap
  value: 0
deleted: 0
  *** 4. row ***
 created_at: 2013-09-16 10:18:06
 updated_at: NULL
 deleted_at: NULL
 id: 86
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: instance_type_vcpu_weight
  value: NULL
deleted: 0
  *** 5. row ***
 created_at: 2013-09-16 10:18:06
 updated_at: NULL
 deleted_at: NULL
 id: 87
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: instance_type_root_gb
  value: 1
deleted: 0
  *** 6. row ***
 created_at: 2013-09-16 10:18:06
 updated_at: NULL
 deleted_at: NULL
 id: 88
  instance_uuid: 9ad5d7c7-306f-44ff-84a2-06b9af7d5d36
key: instance_type_name
  value: m1.tiny
deleted: 0
  *** 7. row ***
 

[Yahoo-eng-team] [Bug 1419785] Re: VMware: running a redundant nova compute deletes running instances

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419785

Title:
  VMware: running a redundant nova compute deletes running instances

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When running more than one nova compute configured for the same
  cluster, rebooting one of the computes will delete all running
  instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491325] Re: nova api v2.1 does not allow to use autodetection of volume device path

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491325

Title:
  nova api v2.1 does not allow to use autodetection of volume device
  path

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Won't Fix
Status in OpenStack Compute (nova) kilo series:
  Fix Committed
Status in python-novaclient:
  Fix Released

Bug description:
  Using API v2.1 we are forced to provide a device path when attaching a
  volume to an instance.

  Using API v2.0 it was allowed to provide 'auto', in which case Nova
  calculated the path by itself.

  This is very useful when we do not care about the exact device path.

  Using API v2.1, Nova first validates the request body [1] and only
  then reaches the logic to autodetect the device path. So either the
  autodetection is dead code now or the request validation should be
  changed.

  For the moment, this bug is a blocker for the Manila project.

  We get one of two errors:

  Returning 400 to user: Invalid input for field/attribute device.
  Value: None. None is not of type 'string' __call__

  or

  Returning 400 to user: Invalid input for field/attribute device.
  Value: auto. u'auto' does not match
  '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$'

  Whereas the Nova client says explicitly:

  $ nova help volume-attach
  usage: nova volume-attach <server> <volume> [<device>]

  Attach a volume to a server.

  Positional arguments:
    <server>  Name or ID of server.
    <volume>  ID of the volume to attach.
    <device>  Name of the device e.g. /dev/vdb. Use "auto" for autoassign
              (if supported)

  That "device" is optional and can be set to 'auto'.

  [1]
  
https://github.com/openstack/nova/blob/b7c8a73824211db9627962abd31b8801cc2c2880/nova/api/openstack/compute/volumes.py#L270

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443816] Re: cells: config drive doesn't work with cells when injecting an ssh key

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443816

Title:
  cells: config drive doesn't work with cells when injecting an ssh key

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  To reproduce the problem, build an instance with a config drive
  attached, and keypair selected, when the deployment is using cells.

  This is the change that caused this issue:
  
https://github.com/openstack/nova/commit/80aae8fcf45fdc38fcb6c9fea503cecbe42e42b6#diff-567f52edc17aff6c473d69c341a4cb0cR313

  The addition of reading the key from the database doesn't work for
  cells, where the key is stored in the api cell database.

  Ideally we might want to:
  * add keypair_type into the instance object, along side keypair_name, etc
  * consider sending a message to the parent cell to fetch the keypair
  I prefer the first idea.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492255] Re: Cells gate job fails because of 2 network tests

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492255

Title:
  Cells gate job fails because of 2 network tests

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  ===
  2015-09-04 11:41:29.466 | Failed 2 tests - output below:
  2015-09-04 11:41:29.467 | ==
  2015-09-04 11:41:29.467 | 
  2015-09-04 11:41:29.467 | 
tempest.api.compute.test_networks.ComputeNetworksTest.test_list_networks[id-3fe07175-312e-49a5-a623-5f52eeada4c2]
  2015-09-04 11:41:29.467 | 
-
  2015-09-04 11:41:29.467 | 
  2015-09-04 11:41:29.467 | Captured traceback:
  2015-09-04 11:41:29.467 | ~~~
  2015-09-04 11:41:29.467 | Traceback (most recent call last):
  2015-09-04 11:41:29.467 |   File "tempest/api/compute/test_networks.py", 
line 37, in test_list_networks
  2015-09-04 11:41:29.467 | self.assertNotEmpty(networks, "No networks 
found.")
  2015-09-04 11:41:29.467 |   File "tempest/test.py", line 588, in 
assertNotEmpty
  2015-09-04 11:41:29.467 | self.assertTrue(len(list) > 0, msg)
  2015-09-04 11:41:29.468 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  2015-09-04 11:41:29.468 | raise self.failureException(msg)
  2015-09-04 11:41:29.468 | AssertionError: False is not true : No networks 
found.
  2015-09-04 11:41:29.468 | 
  2015-09-04 11:41:29.468 | 
  2015-09-04 11:41:29.468 | Captured pythonlogging:
  2015-09-04 11:41:29.468 | ~~~
  2015-09-04 11:41:29.468 | 2015-09-04 11:31:55,672 10410 INFO 
[tempest_lib.common.rest_client] Request 
(ComputeNetworksTest:test_list_networks): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
  2015-09-04 11:41:29.468 | 2015-09-04 11:31:55,672 10410 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {}
  2015-09-04 11:41:29.468 | Body: None
  2015-09-04 11:41:29.468 | Response - Headers: 
{'x-openstack-request-id': 'req-d4c4b57f-495a-47d4-b157-4c6fa0c85796', 
'connection': 'close', 'vary': 'X-Auth-Token', 'status': '200', 'date': 'Fri, 
04 Sep 2015 11:31:55 GMT', 'content-length': '3863', 'content-type': 
'application/json', 'server': 'Apache/2.4.7 (Ubuntu)'}
  2015-09-04 11:41:29.469 | Body: None
  2015-09-04 11:41:29.469 | 2015-09-04 11:31:56,116 10410 INFO 
[tempest_lib.common.rest_client] Request 
(ComputeNetworksTest:test_list_networks): 200 GET 
http://127.0.0.1:8774/v2.1/3c0808e187e34cc998b1e08946c2a928/os-networks 0.443s
  2015-09-04 11:41:29.469 | 2015-09-04 11:31:56,116 10410 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': '', 'Accept': 'application/json'}
  2015-09-04 11:41:29.469 | Body: None
  2015-09-04 11:41:29.469 | Response - Headers: {'content-location': 
'http://127.0.0.1:8774/v2.1/3c0808e187e34cc998b1e08946c2a928/os-networks', 
'x-openstack-nova-api-version': '2.1', 'connection': 'close', 'vary': 
'X-OpenStack-Nova-API-Version', 'x-compute-request-id': 
'req-bc32c33a-31c7-4634-a3d2-70188c08e150', 'status': '200', 'date': 'Fri, 04 
Sep 2015 11:31:56 GMT', 'content-length': '16', 'content-type': 
'application/json'}
  2015-09-04 11:41:29.469 | Body: {"networks": []}
  2015-09-04 11:41:29.469 | 
  2015-09-04 11:41:29.469 | 
  2015-09-04 11:41:29.469 | 
tempest.api.compute.test_tenant_networks.ComputeTenantNetworksTest.test_list_show_tenant_networks[id-edfea98e-bbe3-4c7a-9739-87b986baff26]
  2015-09-04 11:41:29.469 | 
--
  2015-09-04 11:41:29.469 | 
  2015-09-04 11:41:29.469 | Captured traceback:
  2015-09-04 11:41:29.470 | ~~~
  2015-09-04 11:41:29.470 | Traceback (most recent call last):
  2015-09-04 11:41:29.470 |   File 
"tempest/api/compute/test_tenant_networks.py", line 29, in 
test_list_show_tenant_networks
  2015-09-04 11:41:29.470 | self.assertNotEmpty(tenant_networks, "No 
tenant networks found.")
  2015-09-04 11:41:29.470 |   File "tempest/test.py", line 588, in 
assertNotEmpty
  2015-09-04 11:41:29.470 | self.assertTrue(len(list) > 0, msg)
  2015-09-04 11:41:29.470 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  2015-09-04 11:41:29.470 | raise self.failureException(msg)
  2015-09-04 11:41:29.470 | AssertionError: False is not true : No tenant 
networks found.
  2015-09-04 

[Yahoo-eng-team] [Bug 1492121] Re: VMware: failed volume detachment leads to instances remaining on backend and volume still in 'in-use' state

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492121

Title:
  VMware: failed volume detachment leads to instances remaining on
  backend and volume still in 'in-use' state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When the volume detachment fails the termination of the instance will lead to 
the following:
  1. The Nova instance is deleted
  2. The Instance on the VC still exists
  3. The volume is in 'in-use' state

  The nova instance is deleted but the backend is not updated and the
  volumes are not set as available

  One example of this happening is when the spawning of the instance fails with 
an exception when attaching the volume.
  This issue could lead to a DDOS of the backend as the resources on the 
backend are not cleaned up correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491511] Re: Behavior change with latest nova paste config

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491511

Title:
  Behavior change with latest nova paste config

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/55/219655/1/check/gate-shade-dsvm-
  functional-nova/1154770/console.html#_2015-09-02_12_10_56_113

  This started failing about 12 hours ago. Looking at it with Sean, we
  think it's because it actually never worked, but nova was failing
  silently before. It's now throwing an error, which, while more correct
  (you know you didn't delete something), is a behavior change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479066] Re: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479066

Title:
  DeprecationWarning: BaseException.message has been deprecated as of
  Python 2.6

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.vmware:
  New

Bug description:
  I see these when running tests:

  Captured stderr:
  
  nova/virt/libvirt/volume/volume.py:392: DeprecationWarning: 
BaseException.message has been deprecated as of Python 2.6
if ('device is busy' in exc.message or

  Seems that bug 1447946 was meant to fix some of this but it only
  handles NovaException, not other usage.

  We should be able to use six.text_type(e) for 'if str in e' type
  checks.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGVwcmVjYXRpb25XYXJuaW5nOiBCYXNlRXhjZXB0aW9uLm1lc3NhZ2UgaGFzIGJlZW4gZGVwcmVjYXRlZCBhcyBvZiBQeXRob24gMi42XCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svbm92YVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM4MTA2MTkwOTI3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482444] Re: Abnormal changes of quota usage after instance restored by admin

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482444

Title:
  Abnormal changes of quota usage after instance restored by admin

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova version, output of 'git log -1':
  commit 676ba7bbc788a528b0fe4c87c1c4bf94b4bb6eb1
  Author: Dave McCowan 
  Date:   Tue Feb 24 21:35:48 2015 -0500

  Websocket Proxy should verify Origin header

  If the Origin HTTP header passed in the WebSocket handshake does
  not match the host, this could indicate an attempt at a
  cross-site attack.  This commit adds a check to verify
  the origin matches the host.

  Change-Id: Ica6ec23d6f69a236657d5ba0c3f51b693c633649
  Closes-Bug: 1409142
  Reproduce steps:
  1. Enable soft delete via set reclaim_instance_interval in nova.conf.
  2. A normal project: ProjectA create a new instance and then delete it, then 
it's status change to SOFT_DELETED.
  3. Now restore the instance by admin user in project: admin, the instance 
back to ACTIVE, but the quota usage of project: admin  has changed, the flavor 
of that instance has added on admin project quota usage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481084] Re: Keypair creation fails when ssh public key comment contains spaces

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481084

Title:
  Keypair creation fails when ssh public key comment contains spaces

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the Sahara project, we have been generating public keys to use with
  nova keypair creation for some time.  These keys have a key comment of
  the form "Generated by Sahara"

  This has worked until recently.  However, it fails currently as
  follows:

  $ more ~/public_key 
  ssh-rsa 
B3NzaC1yc2EDAQABAAABAQDYwUk/fuNiNoseN5tgKt2NsfxeZIE7cC4bcGeJ3WacY8Ss2s/vw1WrBwoicd4cjwkpmrxQkR1d1vBzLyrE/ovHStyu1Gv/Os+wVB0j64AKlG6MZFMeJVuP9M+O0uSqBuEYhzaTvKofiVcrLJat7bJ9S8
  
MpTWj7ZXRbKKD/+pT1jxll4vCHKLo9caazl7vFI/hRcqMWAr+oYNZYh1BZeNxMWGtEgf11zHiStR1tvs/4CEstajPWWlkHcVeUuGgs8/+kNToUZ22i8kORp8ZFwp11pvFtieAYtBFBWWrze2U1irct34JAHTmemk8SZ/RmN9tLpIP8BspFdWnFylzVyuPZ
   Generated by Sahara

  (openstack) keypair create --public-key ~/public_key bob
  ERROR: openstack Keypair data is invalid: failed to generate fingerprint 
(HTTP 400) (Request-ID: req-370e6a3a-d01d-44a4-8a10-160282ec9488)

  Removing or replacing the spaces in the key comment fixes the problem
  (or hides it)

  This seems to be happening because
  /usr/lib/python2.7/site-packages/cryptography/hazmat/primitives/serialization.py(36)
  load_ssh_public_key() is seeing the whole key and splitting on spaces.
  So the key comment is throwing off the key component count.
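
  A minimal sketch of parsing that tolerates a multi-word comment, by
  splitting off at most the first two fields (illustrative, not the
  cryptography library's actual fix):

    def parse_ssh_public_key(data):
        # key type, base64 body, then everything else is the comment
        parts = data.strip().split(None, 2)
        if len(parts) < 2:
            raise ValueError("invalid SSH public key")
        key_type, key_body = parts[0], parts[1]
        comment = parts[2] if len(parts) == 3 else ""
        return key_type, key_body, comment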

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477110] Re: Online snapshot delete fails for network disk type

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477110

Title:
  Online snapshot delete fails for network disk type

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I have a test where cinder uses GlusterFS (libgfapi) as storage.
  1. create a instance
  2. create a volume
  3. attach the volume to the instance
  4. make snapshot to the volume
  5. delete the snapshot

  It gets an error.

  OS: CentOS 7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1477110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478430] Re: nova failed to list old db instances

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478430

Title:
  nova failed to list old db instances

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I have an old version of nova with an instance running. When I upgrade
  to the latest upstream version of nova (commit
  183cd88cb2f9781b53f71b6b161df401c286c9ff)

  and sync the db, I get a 500 error when I try to list the old instance.

  nova-api get follow error:

  2015-07-27 12:00:11.397 TRACE nova.api.openstack sort_keys=sort_keys, 
sort_dirs=sort_dirs)
  2015-07-27 12:00:11.397 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/compute/api.py", line 2083, in _get_instances_by_filters
  2015-07-27 12:00:11.397 TRACE nova.api.openstack expected_attrs=fields, 
sort_keys=sort_keys, sort_dirs=sort_dirs)
  2015-07-27 12:00:11.397 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/objects/base.py", line 71, in wrapper
  2015-07-27 12:00:11.397 TRACE nova.api.openstack result = fn(cls, 
context, *args, **kwargs)
  2015-07-27 12:00:11.397 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/objects/instance.py", line 1221, in get_by_filters
  2015-07-27 12:00:11.397 TRACE nova.api.openstack expected_attrs)
  2015-07-27 12:00:11.397 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/objects/instance.py", line 1146, in _make_instance_list
  2015-07-27 12:00:11.397 TRACE nova.api.openstack 
expected_attrs=expected_attrs)
  2015-07-27 12:00:11.397 TRACE nova.api.openstack   File 
"/opt/stack/nova/nova/objects/instance.py", line 505, in _from_db_object
  2015-07-27 12:00:11.397 TRACE nova.api.openstack if db_inst['info_cache'] 
is None:
  2015-07-27 12:00:11.397 TRACE nova.api.openstack KeyError: 'info_cache'
  2015-07-27 12:00:11.397 TRACE nova.api.openstack 

  
  nova compute get follow error when restart:

  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/queue.py", line 117, 
in switch
  self.greenlet.switch(value)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
  result = function(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 
623, in run_service
  service.start()
File "/opt/stack/nova/nova/service.py", line 158, in start
  self.manager.init_host()
File "/opt/stack/nova/nova/compute/manager.py", line 1261, in init_host
  context, self.host, expected_attrs=['info_cache'])
File "/opt/stack/nova/nova/objects/base.py", line 69, in wrapper
  args, kwargs)
File "/opt/stack/nova/nova/conductor/rpcapi.py", line 243, in 
object_class_action
  objver=objver, args=args, kwargs=kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", 
line 158, in call
  retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", 
line 90, in _send
  timeout=timeout, retry=retry)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 431, in send
  retry=retry)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 422, in _send
  raise result
  KeyError: u'\'info_cache\'\nTraceback (most recent call last):\n\n  File 
"/opt/stack/nova/nova/conductor/manager.py", line 440, in _object_dispatch\n
return getattr(target, method)(*args, **kwargs)\n\n  File 
"/opt/stack/nova/nova/objects/base.py", line 71, in wrapper\nresult = 
fn(cls, context, *args, **kwargs)\n\n  File 
"/opt/stack/nova/nova/objects/instance.py", line 1229, in get_by_host\n
expected_attrs)\n\n  File "/opt/stack/nova/nova/objects/instance.py", line 
1146, in _make_instance_list\nexpected_attrs=expected_attrs)\n\n  File 
"/opt/stack/nova/nova/objects/instance.py", line 505, in _from_db_object\n
if db_inst[\'info_cache\'] is None:\n\nKeyError: \'info_cache\'\n'


  for reference, the old db_inst is as follows:

  {'vm_state': u'active', 'internal_id': None, 'availability_zone':
  None, 'terminated_at': None, 'ramdisk_id': u'd73b7b35-b89b-4e92-98fc-
  dc5a7929f214', 'instance_type_id': 2L, 'updated_at':
  datetime.datetime(2015, 7, 27, 3, 49, 29), 'cleaned': 0L, 'vm_mode':
  None, 'deleted_at': None, 'reservation_id': u'r-0s8p28s9', 'id': 76L,
  'disable_terminate': False, 'user_id':
  u'bea64c5634ae4f079c1c69ed29a68242', 'uuid': u'25976bc8-cd6f-471f-
  a7d4-7ce7614f9b9e', 'default_swap_device': None, 'hostname': u'test2',
  'launched_on': u'liyong', 'display_description': u'test2', 'key_data':
  None, 'deleted': 0L, 'power_state': 1L, 'default_ephemeral_device':
  None, 'progress': 0L, 'project_id':
  u'52a0ee83fb524376bd603547aea415a3', 'launched_at':
  

[Yahoo-eng-team] [Bug 1484223] Re: Revert "Revert "Add VIF_DELETED notification event to Nova""

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484223

Title:
  Revert "Revert "Add VIF_DELETED notification event to Nova""

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://review.openstack.org/187871
  commit d477dbcf58693743af409549691f4dd2a441035f
  Author: Kevin Benton 
  Date:   Wed Jun 3 00:03:25 2015 -0600

  Revert "Revert "Add VIF_DELETED notification event to Nova""
  
  This reverts commit 6575db592c92791a51540134192bc86465940283.
  
  Depends-on: I998b6bb80cc0a81d665b61b8c4a424d7219c666f
  
  DocImpact
  If Neutron is upgraded to Liberty before the Nova API is,
  the Nova API log will contain errors complaining that it doesn't
  understand this new event. Nothing will be broken, but there will
  be an error every time a port is deleted until Nova is upgraded.
  
  Change-Id: I7aae44e62d2b1170bae31c3492148bfd516fb78b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484223/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484776] Re: Nova api doesn't handle InstanceUnknownCell when doing live-migration

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484776

Title:
  Nova api doesn't handle InstanceUnknownCell when doing live-migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  compute_api.live_migrate may raise an InstanceUnknownCell exception
  when doing check_instance_cell.
  See https://github.com/openstack/nova/blob/master/nova/compute/api.py#L316
  for reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484830] Re: Fail early when live-migrate with block-migration and bdm

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484830

Title:
  Fail early when live-migrate with block-migration and bdm

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If a user does a live-migration with block migration and mapped
  volumes, the user won't get an exception from nova-api.

  The exception is raised on the nova-compute node, so the user needs to
  check the log on the compute node to tell why the triggered
  live-migration left the instance still in the running state:

  Migration error: Cannot block migrate instance 41b1d849-6a29-448e-
  8ace-4f4750514d72 with mapped volumes

  We can make this check happen early, when doing
  can-live-migration-source/destination etc.

  By doing this, the user will get the exception from nova-api, which
  gives a better user experience. Besides, if we raise this exception
  early, nova won't do pre-live-migration and _rollback_live_migration,
  and it will partially fix
  https://bugs.launchpad.net/nova/+bug/1457291
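
  A minimal sketch of such an early check at the API layer, assuming
  access to the instance's block device mappings (names are
  illustrative; the real check belongs with the other live-migration
  checks):

    from nova import exception

    def _check_block_migrate(instance, block_migration, bdms):
        # fail fast instead of letting nova-compute discover this later
        if block_migration and any(bdm.is_volume for bdm in bdms):
            raise exception.MigrationError(
                reason="Cannot block migrate instance %s with mapped "
                       "volumes" % instance.uuid)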

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480029] Re: lack index of instance_system_metadata.instance_uuid in pgsql

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480029

Title:
  lack index of instance_system_metadata.instance_uuid in pgsql

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Nova db migration scripts don't create an index on
  instance_system_metadata.instance_uuid for postgresql, which causes
  listing instances to perform poorly.

  The following is our performance testing result, we have 200,000
  instances, and 4,000,000 instance_system_metadata records in
  background.

  list 1000 instances in one API request

  GET /${tenant_id}/servers/detail=1000

  No index:

  4~5 minutes on average

  After add index:

  5 seconds on average

  The instance_system_metadata.instance_uuid index is created for mysql;
  now we need to add it for postgresql.
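
  A hedged sketch of a sqlalchemy-migrate style migration adding the
  missing index (migration number and index name are illustrative):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        table = Table('instance_system_metadata', meta, autoload=True)
        Index('instance_system_metadata_instance_uuid_idx',
              table.c.instance_uuid).create(migrate_engine)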

  Code base:

  $ git log -1
  commit a8dd5722035784847fd7f7a915628d5feaaf5ff9
  Merge: a74f07a 990ef48
  Author: Jenkins 
  Date:   Thu Jul 30 21:36:10 2015 +

  Merge "Add DiskNotFound and VolumeNotFound test"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479129] Re: Hyper-V doesn't boot instances from volume with ephemeral disk

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479129

Title:
  Hyper-V doesn't boot instances from volume with ephemeral disk

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Instance boot from volume fails in Hyper-V if the flavor has an
  ephemeral disk. Boot fails with the following error:

  "HyperVException: WMI job failed with status 10. Error details: Failed
  to add device 'Physical Disk Drive'."

  This happens because Hyper-V tries to attach both the ephemeral disk
  and the boot volume to the same slot on the IDE controller.
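
  A minimal sketch of the kind of fix implied: pick the next free
  (controller, slot) pair instead of recomputing the same slot for both
  disks (constants and names are illustrative):

    IDE_SLOTS_PER_CONTROLLER = 2

    def next_free_ide_slot(attached):
        # attached: set of (controller, slot) pairs already in use
        for ctrl in (0, 1):
            for slot in range(IDE_SLOTS_PER_CONTROLLER):
                if (ctrl, slot) not in attached:
                    return ctrl, slot
        raise RuntimeError("no free IDE slot")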

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481123] Re: Typos of 'address', 'current' and 'concurrent'

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481123

Title:
  Typos of 'address', 'current' and 'concurrent'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are some typos.

  adress => address (in nova/virt/libvirt/volume/volume.py)
  curent => current (in nova/virt/hyperv/hostops.py)
  concurent => concurrent (in nova/tests/unit/compute/test_compute_api.py)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1481123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480514] Re: Removing an error instance fails when serial_console is enabled

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480514

Title:
  Removing an error instance fails when serial_console is enabled

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When I fixed https://bugs.launchpad.net/nova/+bug/1478607
  I found I couldn't remove those error instances whose xml configuration
  had failed.

  This is because of the following block:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L894

  When nova tries to destroy an instance, it cleans up related resources.
  If we enable the serial console, nova will try to find the ports that
  were assigned to the instance and release them.
  But the instance creation failed, so nova will throw InstanceNotFound.
  Yes, the block looks like it handles the instance-not-found exception.
  But the function "_get_serial_ports_from_instance" uses the yield
  keyword, so it does not raise the exception immediately; it raises only
  when the program tries to iterate the yielded items.
  Therefore the InstanceNotFound exception will be raised at L894 instead
  of L889.
  You can check out the following sample code:
  http://www.tutorialspoint.com/execute_python_online.php?PID=0Bw_CjBb95KQMU05ycERQdUFfcms
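
  The gist of that sample, as a self-contained sketch of the yield
  pitfall (the exception and lookup are simulated):

    class InstanceNotFound(Exception):
        pass

    def get_serial_ports():
        raise InstanceNotFound()  # simulates the failing libvirt lookup
        yield                     # makes this function a generator

    try:
        ports = get_serial_ports()  # L889-equivalent: nothing raised here
    except InstanceNotFound:
        print("never reached")
    for port in ports:              # L894-equivalent: raises here instead
        pass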

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478800] Re: Libvirt migrations with rsync are slow

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478800

Title:
  Libvirt migrations with rsync are slow

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Setup:
  CentOS 6 + RDO Icehouse (code seems to be the same in trunk)

  When doing a nova migrate, the actual backing disk file is copied over with 
rsync. I assume the code came from this report
  https://bugs.launchpad.net/nova/+bug/1025259

  The rsync code uses the "-z" flag for compression. This is probably
  fine for cases with lightly used disks. However, with a disk full of
  content, it gets very slow. Rsync is not multithreaded so with a
  single E5-2670v2 core, we get ~12MB/s transfer speed (CPU bound). With
  the modest compression that is achieved this is significantly slower
  than no compression.

  If possible, some speed test should be done without compression for
  disk files with different content. There might not be a reason to use
  compression here at all.
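
  A minimal sketch of making the compression optional rather than
  hardcoded, in nova's style (the config option itself is hypothetical):

    args = ['rsync', '-a', '--sparse']
    if CONF.libvirt.remote_rsync_compress:  # hypothetical option
        args.append('-z')
    args.extend([src_path, dest_path])
    utils.execute(*args)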

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484738] Re: keyerror when refreshing instance security groups

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484738

Title:
  keyerror when refreshing instance security groups

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Committed

Bug description:
  On a clean kilo install using source security groups I am seeing the
  following trace on boot and delete


  a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
  2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6772, in 
refresh_instance_security_rules
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 434, in 
decorated_function
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher args = 
(_load_instance(args[0]),) + args[1:]
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 425, in 
_load_instance
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
expected_attrs=metas)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 506, in 
_from_db_object
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
instance['metadata'] = utils.instance_meta(db_inst)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 817, in instance_meta
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher if 
isinstance(instance['metadata'], dict):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'metadata'
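
  For illustration, a guard along these lines would avoid the KeyError
  when the metadata relation was never joined in; this is only a sketch
  of the idea, not nova's merged fix:

    def instance_meta(db_inst):
        # tolerate rows where 'metadata' was not loaded at all
        metadata = db_inst.get('metadata')
        if isinstance(metadata, dict):
            return metadata
        # DB rows carry metadata as a list of {'key': ..., 'value': ...}
        return {item['key']: item['value'] for item in metadata or []}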

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468332] Re: sriov agent causes incorrect port state if sriov driver doesn't support 'ip link vf state' setting

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468332

Title:
  sriov agent causes incorrect port state if sriov driver doesn't
  support 'ip link vf state' setting

Status in neutron:
  Fix Released

Bug description:
  Some devices don't seem to support link-state setting:

  ubuntu@devstack1:~$ ip l sh p2p1
  189: p2p1:  mtu 1500 qdisc noop state DOWN mode DEFAULT 
group default qlen 1000
  link/ether 0c:c4:7a:1e:ac:0e brd ff:ff:ff:ff:ff:ff
  vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  ubuntu@devstack1:~$

  ubuntu@devstack1:~$ sudo ip l set dev p2p1 vf 6 state disable
  RTNETLINK answers: Operation not supported
  ubuntu@devstack1:~$

  ubuntu@devstack1:~$ ls -all /sys/class/net/p2p1/device/driver/module
  lrwxrwxrwx 1 root root 0 Jun 24 14:30 
/sys/class/net/p2p1/device/driver/module -> ../../../../module/ixgbe
  ubuntu@devstack1:~$

  As you can see, this happens with the 'ixgbe' driver.

  This confuses the sriov agent:

  In neutron/plugins/sriovnicagent/sriov_nic_agent.py there's a
  'treat_device' method that's called after port binding for example.
  The sriov agent tries to set VF state to UP and fails, so the code
  doesn't reach self.plugin_rpc.update_device_up() and the port ends up
  hanging in BUILD state.
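
  A sketch of the general fix idea (the helper and its names are
  hypothetical, not the merged patch): treat an unsupported VF
  link-state setting as best-effort, so the agent still reaches
  update_device_up():

    import subprocess

    def set_vf_state(device, vf_index, up):
        cmd = ['ip', 'link', 'set', 'dev', device, 'vf', str(vf_index),
               'state', 'enable' if up else 'disable']
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            if 'Operation not supported' in result.stderr:
                return  # e.g. ixgbe: VF link-state is fixed; not fatal
            raise RuntimeError(result.stderr)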

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477348] Re: Creating a neutron lbaas pool with session persistence type HTTP_COOKIE, I could see errors in lbaasv2 screen

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477348

Title:
  Creating a neutron lbaas pool with session persistence type
  HTTP_COOKIE, I could see errors in lbaasv2 screen

Status in neutron:
  Fix Released

Bug description:
  HAProxy version used: 1.5

  When creating a pool with session persistence type HTTP_COOKIE, the
  pool is created, but I could see the bunch of errors shown below.
  Also, when using curl against the load balancer IP with two backend
  servers running a simple web server, I get a 503 error.

  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 
--protocol HTTP --session-persistence type=HTTP_COOKIE --name pool1
  Created a new pool:
  +---------------------+------------------------------------------------+
  | Field               | Value                                          |
  +---------------------+------------------------------------------------+
  | admin_state_up      | True                                           |
  | description         |                                                |
  | healthmonitor_id    |                                                |
  | id                  | 5a9df493-d3c7-479d-8e06-c5ced62a4af9           |
  | lb_algorithm        | ROUND_ROBIN                                    |
  | listeners           | {"id": "ef8704b6-0fc7-4566-b97b-af8b4e1cc3e2"} |
  | members             |                                                |
  | name                | pool1                                          |
  | protocol            | HTTP                                           |
  | session_persistence | {"cookie_name": null, "type": "HTTP_COOKIE"}   |
  | tenant_id           | 1d967cf6cd024efc87d0bd5a1091dc1e               |
  +---------------------+------------------------------------------------+

  2015-07-22 16:46:43.126 ERROR neutron_lbaas.agent.agent_manager 
[req-068a510d-3eff-4adc-b650-98fe7bc950ab admin 
1d967cf6cd024efc87d0bd5a1091dc1e] Create pool 
66a232b7-ca04-4cdb-abf0-842f9899c8fa failed on device driver haproxy_ns
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager Traceback 
(most recent call last):
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/agent/agent_manager.py", line 328, in 
create_pool
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
driver.pool.create(pool)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/haproxy/namespace_driver.py", 
line 425, in create
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(pool.listener.loadbalancer)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/haproxy/namespace_driver.py", 
line 370, in refresh
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager if (not 
self.driver.deploy_instance(loadbalancer) and
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
252, in inner
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager return 
f(*args, **kwargs)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/haproxy/namespace_driver.py", 
line 172, in deploy_instance
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
self.update(loadbalancer)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/haproxy/namespace_driver.py", 
line 181, in update
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer, extra_args)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/drivers/haproxy/namespace_driver.py", 
line 353, in _spawn
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 89, in save_config
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 221, in render_loadbalancer_obj
  2015-07-22 16:46:43.126 TRACE neutron_lbaas.agent.agent_manager 
loadbalancer = _transform_loadbalancer(loadbalancer, haproxy_base_dir)
  2015-07-22 16:46:43.126 TRACE 

[Yahoo-eng-team] [Bug 1473110] Re: Can't declare service providers both in neutron.conf and in neutron_*aas.conf

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473110

Title:
  Can't declare service providers both in neutron.conf and in
  neutron_*aas.conf

Status in neutron:
  Fix Released

Bug description:
  The Neutron server's behavior is to load service providers from
  neutron.conf or from the neutron_*aas.conf files, but not both. [1]

  If we have a service provider that does not belong to any of
  neutron_(lbaas|fwaas|vpnaas).conf (e.g. bgpvpn from the
  networking-bgpvpn project), we would enable it through neutron.conf.
  But of course we want to be able to enable both bgpvpn and other
  services defined through neutron_*aas.conf at the same time.

  Example error where both neutron.conf and neutron_lbaas.conf are used:

  root@devstack-juno-compute02:/etc/neutron# grep service_provider 
neutron*.conf  |grep -v :#
  neutron.conf:[service_providers]
  
neutron.conf:service_provider=BGPVPN:BaGPipe:networking_bgpvpn.neutron.services.bgpvpn.service_drivers.bagpipe.bagpipe.BaGPipeBGPVPNDriver:default
  neutron_lbaas.conf:[service_providers]
  
neutron_lbaas.conf:service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  2015-07-09 16:56:39.778 INFO neutron.manager 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Loading Plugin: 
networking_bgpvpn.neutron.services.bgpvpn.plugin.BGPVPNPlugin
  2015-07-09 16:56:39.992 WARNING neutron.services.provider_configuration 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Reading service_providers 
from legacy location in neutron.conf, and ignoring values in neutron_*aas.conf 
files; this override will be going away soon.
  2015-07-09 16:56:39.993 DEBUG neutron.services.provider_configuration 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Service providers = 
['BGPVPN:BaGPipe:networking_bgpvpn.neutron.services.bgpvpn.service_drivers.bagpipe.bagpipe.BaGPipeBGPVPNDriver:default']
 parse_service_provider_opt 
/opt/stack/neutron/neutron/services/provider_configuration.py:93
  2015-07-09 16:56:39.998 DEBUG neutron.services.service_base 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Loaded 
'networking_bgpvpn.neutron.services.bgpvpn.service_drivers.bagpipe.bagpipe.BaGPipeBGPVPNDriver'
 provider for service BGPVPN load_drivers 
/opt/stack/neutron/neutron/services/service_base.py:85
  2015-07-09 16:56:39.999 INFO networking_bgpvpn.neutron.services.bgpvpn.plugin 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] BGP VPN Service Plugin 
using Service Driver: bagpipe
  2015-07-09 16:56:40.000 DEBUG neutron.manager 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Successfully loaded BGPVPN 
plugin. Description: Neutron BGP VPN connection Service Plugin 
_load_service_plugins /opt/stack/neutron/neutron/manager.py:196
  2015-07-09 16:56:40.001 INFO neutron.manager 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] Loading Plugin: 
neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPlugin
  2015-07-09 16:56:40.536 ERROR neutron.services.service_base 
[req-0d5c679c-d9de-42eb-8aa3-c889869486d4 None None] No providers specified for 
'LOADBALANCER' service, exiting

  The solution could consist of extending the hardcoded list of
  neutron_*.conf files that neutron reads to load service_providers [2],
  but this may or may not be the most sensible approach.

  [1] 
https://review.openstack.org/gitweb?p=openstack%2Fneutron.git;a=commitdiff;h=fb3138c8d718be67505f247ca776abf15ba1504a
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/common/repos.py#L80
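
  As a rough sketch of that extension (paths are illustrative, and the
  real code would go through oslo.config rather than ConfigParser):

    import configparser
    import glob

    def collect_service_providers():
        providers = []
        files = (['/etc/neutron/neutron.conf'] +
                 glob.glob('/etc/neutron/neutron_*aas.conf'))
        for path in files:
            # strict=False tolerates duplicate keys (last one wins), so
            # this remains a sketch rather than a faithful parser
            parser = configparser.ConfigParser(strict=False)
            parser.read(path)
            if parser.has_option('service_providers', 'service_provider'):
                providers.append(parser.get('service_providers',
                                            'service_provider'))
        return providers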

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477253] Re: ovs arp_responder unsuccessfully inserts IPv6 address into arp table

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477253

Title:
  ovs arp_responder unsuccessfully inserts IPv6 address into arp table

Status in neutron:
  Fix Released

Bug description:
  The ml2 openvswitch arp_responder agent attempts to install IPv6
  addresses into the OVS arp response tables. The action obviously
  fails, reporting:

  ovs-ofctl: -:4: 2001:db8::x:x:x:x invalid IP address

  The end result is that the OVS br-tun arp tables are incomplete.

  The submitted patch verifies that the address is IPv4 before
  attempting to add the address to the table.
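
  The shape of the check is roughly the following (a sketch only; the
  bridge helper name is hypothetical):

    import netaddr

    def install_arp_responder(bridge, ip, mac):
        if netaddr.IPAddress(ip).version != 4:
            return  # ARP only covers IPv4; IPv6 neighbours use NDP
        bridge.install_arp_responder_entry(ip, mac)  # hypothetical helper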

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475498] Re: test_l2_ovs_agent: "port value out of range for in_port"

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475498

Title:
  test_l2_ovs_agent: "port value out of range for in_port"

Status in neutron:
  Fix Released

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcInBvcnQgdmFsdWUgb3V0IG9mIHJhbmdlIGZvciBpbl9wb3J0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzcxMTI1Mzg1Mzl9

  2015-07-16 01:59:18.139 | 2015-07-16 01:59:18.101 | 2015-07-16 
01:57:28,478ERROR [neutron.agent.common.ovs_lib] Unable to execute 
['ovs-ofctl', 'del-flows', 'br-int873069058', '-']. Exception: 
  2015-07-16 01:59:18.139 | 2015-07-16 01:59:18.102 | Command: 
['ovs-ofctl', 'del-flows', 'br-int873069058', '-']
  2015-07-16 01:59:18.140 | 2015-07-16 01:59:18.103 | Exit code: 1
  2015-07-16 01:59:18.140 | 2015-07-16 01:59:18.104 | Stdin: 
in_port=c7a59a7b-56f4-47aa-a475-a1bc35c91118
  2015-07-16 01:59:18.140 | 2015-07-16 01:59:18.106 | Stdout: 
  2015-07-16 01:59:18.140 | 2015-07-16 01:59:18.107 | Stderr: ovs-ofctl: 
-:1: c7a59a7b-56f4-47aa-a475-a1bc35c91118: port value out of range for in_port
  2015-07-16 01:59:18.140 | 2015-07-16 01:59:18.108 | 
  2015-07-16 01:59:18.141 | 2015-07-16 01:59:18.109 | 2015-07-16 
01:57:28,478 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Configuration 
for device 6eae2e73-ba7c-4deb-95e9-0384c2865383 completed.
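
  The Stdin line shows the Neutron port UUID being passed where OVS
  expects a numeric ofport. A hedged sketch of the fix shape (the bridge
  helpers are assumptions, not a quote of the real agent code):

    def delete_port_flows(bridge, port_name):
        # resolve the numeric ofport first; "del-flows ... in_port=<uuid>"
        # is exactly what ovs-ofctl rejects as out of range
        ofport = bridge.get_port_ofport(port_name)  # assumed helper
        if ofport is not None and int(ofport) > 0:
            bridge.delete_flows(in_port=ofport)     # assumed helper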

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473217] Re: Cisco Nexus1000V: Remove support for the monolithic plugin

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473217

Title:
  Cisco Nexus1000V: Remove support for the monolithic plugin

Status in networking-cisco:
  Fix Committed
Status in neutron:
  Fix Released

Bug description:
  Remove support for Cisco Meta plugin and the N1KV monolithic plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1473217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469871] Re: OVS Neutron Agent support for ovs+dpdk netdev datapath

2015-09-24 Thread Thierry Carrez
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469871

Title:
  OVS Neutron Agent support for ovs+dpdk netdev datapath

Status in neutron:
  Fix Released

Bug description:
  The OVS Neutron Agent currently supports managing two datapaths:
  the Linux kernel datapath and the newly added Open vSwitch Windows datapath.

  Based on feedback from the summit, this wishlist bug has been created
  in place of a blueprint to capture the changes required to enable
  the OVS L2 agent to manage the userspace netdev datapath.

  Two new config options should be added to allow configuration of OVS
  and the OVS L2 agent:

  cfg.StrOpt('ovs_datapath', default='system', choices=['system', 'netdev'],
             help=_("ovs datapath to use.")),

  and

  cfg.StrOpt('agent_type', default=q_const.AGENT_TYPE_OVS,
             choices=[q_const.AGENT_TYPE_OVS, q_const.AGENT_TYPE_OVS_DPDK],
             help=_("Selects the Agent Type reported"))

  The ovs_datapath config option will provide a mechanism at deploy time to
  select which datapath to enable. The 'system' (kernel) datapath will be
  enabled by default, as it is today. The netdev (userspace) datapath option
  will enable the OVS agent to configure and manage the netdev datapath. This
  config option will be added to the ovs section of ml2_conf.ini.
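
  For example, a hypothetical ml2_conf.ini fragment using the proposed
  options (the section placement for agent_type is an assumption) might
  look like:

    [ovs]
    ovs_datapath = netdev
    agent_type = DPDK OVS Agent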

  The agent_type config option will provide a mechanism to enable coexistence
  of DPDK-enabled OVS nodes and vanilla OVS nodes. By allowing a configurable
  agent_type, both the standard openvswitch ml2 mechanism driver and the
  ovsdpdk mechanism driver can be used. By default the agent_type reported
  will be unchanged, 'Open vSwitch agent'. During deployment an operator can
  choose to specify an agent_type of 'DPDK OVS Agent' if they have deployed a
  DPDK-enabled OVS.

  These are the only changes required to extend the OVS agent to support
  the netdev datapath.

  Documentation and unit tests will be provided to cover these changes.
  A new job can be added to the intel-networking-ci to continue to validate
  this configuration if additional third-party testing is desired.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-09-24 Thread Ekaterina Chernova
** Also affects: murano
   Importance: Undecided
   Status: New

** Changed in: murano
   Importance: Undecided => Medium

** Changed in: murano
   Status: New => Confirmed

** Changed in: murano
Milestone: None => liberty-rc1

** Changed in: murano
 Assignee: (unassigned) => Tetiana Lashchova (tlashchova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in Glance:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  In Progress
Status in Manila:
  In Progress
Status in murano:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in Sahara:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
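
  A minimal illustration of the convention (under testtools/unittest the
  first argument is reported as the expected value; compute_something is
  a stand-in for real code under test):

    import unittest

    def compute_something():
        return 'expected-value'  # stand-in for the code under test

    class ExampleTest(unittest.TestCase):
        def test_order(self):
            # correct order: assertEqual(expected, observed); with the
            # arguments swapped, a failure message would present the
            # actual result as if it were the expected value
            self.assertEqual('expected-value', compute_something())

    if __name__ == '__main__':
        unittest.main()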

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499177] [NEW] Performance: L2 agent takes too much time to refresh sg rules

2015-09-24 Thread Lan Qi song
Public bug reported:

This issue describes a performance problem in the L2 agents (both the
LinuxBridge and OVS agents) on a compute node that hosts lots of
networks and instances (e.g. 500 instances).

The performance problem shows up in two places:

1. When the LinuxBridge agent service starts up (this seems to happen
only for the LinuxBridge agent, not the OVS agent), I found two methods
that take too much time:

   1.1 get_interface_by_ip(): we need to find the interface that was
assigned the "local ip" defined in the configuration file, and check
whether this interface supports "vxlan" or not. This method iterates
over all the interfaces on the compute node and executes "ip link show
[interface] to [local ip]" on each to judge the result. I think there
should be a faster way (a possible lookup is sketched below).

   1.2 prepare_port_filter(): in this method we need to make sure the
ipsets are created correctly, but it executes too many "ipset" commands
and takes too much time.

2. When devices' sg rules are changed, the L2 agent has to refresh the
firewall rules.

2.1 refresh_firewall(): this method calls "modify_rules" to make the
rules predictable, but it also takes too much time.

It would be of great benefit for large-scale deployments if this
performance problem could be fixed or optimized.
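
For the 1.1 case, a single address-scoped query could replace the
per-interface probing; a rough sketch (illustrative only, and the
separate "vxlan" capability check is out of scope here):

    import subprocess

    def get_interface_by_ip(local_ip):
        # one "ip -o addr show to <ip>" call lists only the matching
        # interfaces, e.g. "2: eth0    inet 10.0.0.5/24 ..."
        out = subprocess.run(['ip', '-o', 'addr', 'show', 'to', local_ip],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            return line.split()[1]
        return None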

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499177

Title:
  Performance: L2 agent takes too much time to refresh sg rules

Status in neutron:
  New

Bug description:
  This issue describes a performance problem in the L2 agents (both
  the LinuxBridge and OVS agents) on a compute node that hosts 

[Yahoo-eng-team] [Bug 1470690] Re: No 'OS-EXT-VIF-NET' extension in v2.1

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470690

Title:
  No 'OS-EXT-VIF-NET' extension in v2.1

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The v2 API has an extension for virtual interfaces, 'OS-EXT-VIF-NET',
  but it is not present in the v2.1 API.

  Because of this, there is a difference between the v2 and v2.1
  responses of the virtual interface API.

  v2 list virtual interfaces response (with all extensions enabled):

  {
  "virtual_interfaces": [
  {
  "id": "%(id)s",
  "mac_address": "%(mac_addr)s",
  "OS-EXT-VIF-NET:net_id": "%(id)s"
  }
  ]
  }

  v2.1 List virtual interface Response

  {
  "virtual_interfaces": [
  {
  "id": "%(id)s",
  "mac_address": "%(mac_addr)s"
  }
  ]
  }

  As v2.1 was released in Kilo, we should backport this fix to the kilo
  branch as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462901] Re: Remove v3 and plugins from nova code tree

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462901

Title:
  Remove v3 and plugins from nova code tree

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We have already released the v2.1 API, but all the v2.1 API code still
  lives under an old directory (nova/api/openstack/compute/plugins/v3)
  containing the word 'v3'. The v3 API no longer exists, so we plan to
  remove this path in Liberty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469942] Re: Error message of quota exceeded don't contain enough information

2015-09-24 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova
Milestone: None => liberty-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469942

Title:
  Error message of quota exceeded don't contain enough information

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. base version

  stack@devstack:/opt/stack/nova$  [master]$ git log -1
  commit 6969f270c5035325c603ce7a98b1647b72bf5eaa
  Merge: ae4ae93 930da44
  Author: Jenkins 
  Date:   Sat Jun 27 08:40:25 2015 +

  Merge "Fix typos detected by toolkit misspellings."

  2. nova-api.log

  2015-06-30 10:55:26.637 DEBUG nova.compute.api 
[req-2bba2e1b-da00-477a-94e8-01eee8e17401 admin demo] cores,ram quota exceeded 
for 08751f5a95464f5db73d9f57d55fa6b9, tried to run 1 instances. Cannot run any 
mo
  re instances of this type. _check_num_instances_quota 
/opt/stack/nova/nova/compute/api.py:442
  2015-06-30 10:55:26.638 INFO nova.api.openstack.wsgi 
[req-2bba2e1b-da00-477a-94e8-01eee8e17401 admin demo] HTTP exception thrown: 
Quota exceeded for cores,ram: Requested 1, but already used 1 of 1 cores

  3. reproduce steps:

  * set the tenant quota, cores=1, ram=512

  * boot instance with flavor m1.tiny (1 core, 512 ram)

  * boot instance again with flavor m1.tiny

  Expected result:

  * booting the instance fails the second time, and the user is told
  which resources are limited: cores and RAM.

  Actual result:

  * the raised exception message only contains the core limit details,
  but no RAM details.

  stack@devstack:/home/devstack/logs$  [master]$ nova boot --image 
cirros-0.3.2-x86_64-disk --flavor m1.tiny --nic 
net-id=00d3142f-f2d1-4427-a0d3-d31c089f3c7e chenrui_demo
  ERROR (Forbidden): Quota exceeded for cores,ram: Requested 1, but already 
used 1 of 1 cores (HTTP 403) (Request-ID: 
req-2bba2e1b-da00-477a-94e8-01eee8e17401)

  
  As an end user, they should get the full information from the
  exception message; without the RAM limit details, they have no idea
  which flavor can be used to boot an instance successfully.
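
  As an illustration of the messaging fix idea (a hypothetical helper,
  not nova's actual code), the error could include per-resource detail
  for every exceeded resource:

    def format_quota_message(overs, quotas, usages, requested):
        # e.g. overs=['cores', 'ram'] produces one clause per resource
        parts = ['%s (requested %d, already used %d of %d)' %
                 (r, requested[r], usages[r], quotas[r]) for r in overs]
        return 'Quota exceeded for: %s' % '; '.join(parts)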

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

