[Yahoo-eng-team] [Bug 1438550] Re: az cache is changed unexpectedly

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and there appear to be no open reviews for it. To keep the bug list
sane, I am closing this bug as "Won't Fix". This does not mean that it
is not a valid bug report; it is rather an acknowledgement that no
further progress can be expected here. You are still free to push a new
patch for this bug. If you can reproduce it on the current master code
or on a maintained stable branch, please switch it back to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: Trung Trinh (trung-t-trinh) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438550

Title:
  az cache is changed unexpectedly

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Affected version: stable/juno

  Description:
  This bug report is a follow-up to bug report
  https://bugs.launchpad.net/nova/+bug/1390033 and addresses another
  strange behavior: the az info stored in the cache is changed
  unexpectedly.

  For the detailed procedure to reproduce the bug, please refer to the
  "Bug Description" section of the link above.

  Analysis:
  Please refer to the comment by "Trung Trinh (trung-t-trinh)" written on
  2014-11-26.

  Proposal:
  The strange behavior should be analysed to find its root cause and a
  possible fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445863] Re: Unable to pass the parameter hostname to nova-api, when creating an instance.

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and there appear to be no open reviews for it. To keep the bug list
sane, I am closing this bug as "Won't Fix". This does not mean that it
is not a valid bug report; it is rather an acknowledgement that no
further progress can be expected here. You are still free to push a new
patch for this bug. If you can reproduce it on the current master code
or on a maintained stable branch, please switch it back to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: javeme (javaloveme) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445863

Title:
  Unable to pass the parameter hostname to nova-api, when creating an
  instance.

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When we create an instance, it's unable to pass the parameter hostname to 
nova-api.
  Now, we use display_name as hostname[1], but obviously this is not a good 
practice because they are independent, In addition, hostname must conform to 
RFC 952, RFC 1123 specification, but the display name is not necessary.
  So we need to pass hostname from the Rest API, and set it into the instance.

  Change the method API.create() [nova/compute/api.py] from:

  def create(self, context, instance_type,
             image_href, kernel_id=None, ramdisk_id=None,
             min_count=None, max_count=None,
             display_name=None, display_description=None,
             key_name=None, key_data=None, security_group=None,
             availability_zone=None, user_data=None, metadata=None,
             injected_files=None, admin_password=None,
             block_device_mapping=None, access_ip_v4=None,
             access_ip_v6=None, requested_networks=None, config_drive=None,
             auto_disk_config=None, scheduler_hints=None, legacy_bdm=True,
             shutdown_terminate=False, check_server_group_quota=False)

  into:

  def create(self, context, instance_type,
             image_href, kernel_id=None, ramdisk_id=None,
             min_count=None, max_count=None,
             display_name=None, display_description=None,
             key_name=None, key_data=None, security_group=None,
             availability_zone=None, user_data=None, metadata=None,
             injected_files=None, admin_password=None,
             block_device_mapping=None, access_ip_v4=None,
             access_ip_v6=None, requested_networks=None, config_drive=None,
             auto_disk_config=None, scheduler_hints=None, legacy_bdm=True,
             shutdown_terminate=False, check_server_group_quota=False,
             hostname=None)

  ps.
  [1] nova/compute/api.py, class API._populate_instance_names():

  def _populate_instance_names(self, instance, num_instances):
      """Populate instance display_name and hostname."""
      display_name = instance.get('display_name')
      if instance.obj_attr_is_set('hostname'):
          hostname = instance.get('hostname')
      else:
          hostname = None

      if display_name is None:
          display_name = self._default_display_name(instance.uuid)
          instance.display_name = display_name

      if hostname is None and num_instances == 1:
          # NOTE(russellb) In the multi-instance case, we're going to
          # overwrite the display_name using the
          # multi_instance_display_name_template.  We need the default
          # display_name set so that it can be used in the template, though.
          # Only set the hostname here if we're only creating one instance.
          # Otherwise, it will be built after the template based
          # display_name.
          hostname = display_name
      instance.hostname = utils.sanitize_hostname(hostname)
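
  Since a hostname must satisfy RFC 952/RFC 1123 while a display name is
  unconstrained, some normalization step is needed however the value
  reaches the API. The sketch below illustrates the kind of rules
  involved; it is not nova's actual utils.sanitize_hostname, just a
  hypothetical stand-in:

```python
import re

def sanitize_hostname_sketch(name):
    """Illustrative RFC 952/1123-style normalization: lowercase, map
    spaces/underscores to hyphens, drop other disallowed characters,
    trim leading/trailing hyphens, cap the label at 63 characters."""
    name = name.lower()
    name = re.sub(r'[ _]', '-', name)       # spaces/underscores -> hyphen
    name = re.sub(r'[^a-z0-9-]', '', name)  # drop disallowed characters
    name = name.strip('-')                  # no leading/trailing hyphen
    return name[:63]

# A free-form display name becomes a legal hostname label:
assert sanitize_hostname_sketch('My VM #1 (test)') == 'my-vm-1-test'
```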

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1445863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430521] Re: nova network-create is ignoring --bridge when using the VlanManager

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and there appear to be no open reviews for it. To keep the bug list
sane, I am closing this bug as "Won't Fix". This does not mean that it
is not a valid bug report; it is rather an acknowledgement that no
further progress can be expected here. You are still free to push a new
patch for this bug. If you can reproduce it on the current master code
or on a maintained stable branch, please switch it back to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430521

Title:
  nova network-create is ignoring --bridge when using the VlanManager

Status in OpenStack Compute (nova):
  Won't Fix
Status in python-novaclient:
  Confirmed

Bug description:
  The `nova network-create` is *silently* ignoring the --bridge when
  using the VlanManager. The bridge name is hardcoded to "br%s" % vlan.

  
https://github.com/openstack/nova/blob/stable/juno/nova/network/manager.py#L1361

  [root@jhenner-vmware ~(keystone_admin)]# nova network-create novanetwork --fixed-range-v4 192.168.32.0/22 --bridge br101 --bridge-interface br101 --uuid 874ab2f1-c57d-464a-8a3f-5dda76ac7613 --vlan 102
  [root@jhenner-vmware ~(keystone_admin)]# nova network-show novanetwork
  +---------------------+--------------------------------------+
  | Property            | Value                                |
  +---------------------+--------------------------------------+
  | bridge              | br102                                |
  | bridge_interface    | br101                                |
  | broadcast           | 192.168.35.255                       |
  | cidr                | 192.168.32.0/22                      |
  | cidr_v6             | -                                    |
  | created_at          | 2015-03-10T20:25:41.00               |
  | deleted             | False                                |
  | deleted_at          | -                                    |
  | dhcp_server         | 192.168.32.1                         |
  | dhcp_start          | 192.168.32.3                         |
  | dns1                | 8.8.4.4                              |
  | dns2                | -                                    |
  | enable_dhcp         | True                                 |
  | gateway             | 192.168.32.1                         |
  | gateway_v6          | -                                    |
  | host                | -                                    |
  | id                  | ca6b6f02-9f40-4578-89c6-9c8ca6d29bee |
  | injected            | False                                |
  | label               | novanetwork                          |
  | mtu                 | -                                    |
  | multi_host          | True                                 |
  | netmask             | 255.255.252.0                        |
  | netmask_v6          | -                                    |
  | priority            | -                                    |
  | project_id          | -                                    |
  | rxtx_base           | -                                    |
  | share_address       | False                                |
  | updated_at          | -                                    |
  | vlan                | 102                                  |
  | vpn_private_address | 192.168.32.2                         |
  | vpn_public_address  | -                                    |
  | vpn_public_port     | 1000                                 |
  +---------------------+--------------------------------------+


  It would be good to have the bridge name configurable when using the
  VlanManager because, when the nova vCenter driver is used, nova checks
  whether there is a "port group" in vCenter with the same name as the
  bridge used in nova-network.
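
  The precedence problem can be stated in two lines. Below is a minimal
  sketch of the current behavior versus one that honors the caller's
  value (helper names are hypothetical; only the "br%s" % vlan
  expression is from the report):

```python
def bridge_name_current(vlan, bridge=None):
    # Behavior described in the report: the bridge is always derived
    # from the VLAN; a user-supplied name is silently ignored.
    return 'br%s' % vlan

def bridge_name_proposed(vlan, bridge=None):
    # Honor an explicitly requested bridge; fall back to the derived name.
    return bridge or 'br%s' % vlan

# With --bridge br101 --vlan 102, as in the reproduction above:
assert bridge_name_current(102, 'br101') == 'br102'   # user value ignored
assert bridge_name_proposed(102, 'br101') == 'br101'  # user value honored
assert bridge_name_proposed(102) == 'br102'           # fallback still works
```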

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430521/+subscriptions



[Yahoo-eng-team] [Bug 1544103] [NEW] Tautology in libvirt driver cleanup method

2016-02-10 Thread Timofey Durakov
Public bug reported:

During the cleanup phase, the destroy_disks parameter given to the driver
is computed in the compute manager
https://github.com/openstack/nova/blob/8615de1bb7afac8ffbd7d9c8f8e7235c49df9b39/nova/compute/manager.py#L5303
as:
 destroy_disks = not is_shared_block_storage
Then the libvirt driver's cleanup method contains the expression
https://github.com/openstack/nova/blob/8615de1bb7afac8ffbd7d9c8f8e7235c49df9b39/nova/virt/libvirt/driver.py#L1117

if destroy_disks or is_shared_block_storage:

which doesn't make sense: it is a tautology and always evaluates to True.
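
The identity can be checked mechanically: substituting the compute
manager's assignment into the driver's condition yields `(not x) or x`,
which is true for any boolean x. A minimal sketch (the function name is
mine; the two expressions are quoted from the report):

```python
def cleanup_condition(is_shared_block_storage):
    # Compute manager: destroy_disks is derived from the shared-storage flag.
    destroy_disks = not is_shared_block_storage
    # Libvirt driver check: (not x) or x -- a tautology.
    return destroy_disks or is_shared_block_storage

# Both possible inputs satisfy the condition, so the guard never filters.
assert cleanup_condition(True) is True
assert cleanup_condition(False) is True
```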

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544103

Title:
  Tautology in libvirt driver cleanup method

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544103/+subscriptions



[Yahoo-eng-team] [Bug 1539690] Re: volume:upload_to_image policy action does not match action in cinder and is missing from policy.json file

2016-02-10 Thread Doug Schveninger
Correction: the action
"volume:upload_to_image": [],
was added in August 2014; I was looking at the Juno code base when
reporting the issue.

I will be closing the bug and resubmitting a blueprint to align the
horizon policy.json file with the other components for support issues.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539690

Title:
  volume:upload_to_image policy action does not match action in cinder
  and is missing from policy.json file

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The policy.json file at
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/conf/cinder_policy.json
  does not contain the action volume:upload_to_image, yet the horizon code
  uses that action:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/volumes/tables.py
  around line 220: policy_rules = (("volume", "volume:upload_to_image"),)

  The action is also not a valid action in Cinder: the cinder code uses the
  policy action volume_actions:upload_image, see
  https://github.com/openstack/cinder/blob/master/cinder/api/contrib/volume_actions.py
  around line 35:  action = 'volume_actions:%s' % action_name
  around line 272: authorize(context, "upload_image")

  The action should match the cinder action and be present in the
  policy.json file. Glance also uses the action name upload_image.

  The following bug https://bugs.launchpad.net/cinder/+bug/1539650
  points out that the action is missing from the cinder policy.json
  file.
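
  If the three components are to agree, horizon's cinder_policy.json
  would need an entry under cinder's rule name. A sketch of the missing
  entry (rule name taken from the cinder code quoted above; the empty
  [] default is assumed, matching the file's existing style):

```json
{
    "volume_actions:upload_image": []
}
```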

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539690/+subscriptions



[Yahoo-eng-team] [Bug 1543181] Re: Raw and qcow2 disks are never preallocated on systems with newer util-linux

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277402
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=33749d2875b4e63ad8f663a734cd087980489b6e
Submitter: Jenkins
Branch: master

commit 33749d2875b4e63ad8f663a734cd087980489b6e
Author: Matthew Booth 
Date:   Mon Feb 8 12:41:06 2016 +

Fix fallocate test on newer util-linux

Newer util-linux raises an error when calling fallocate with the -n
option if the target file does not already exist. This is because the
-n option directs it to retain the file's existing size. A
non-existent file does not have an existing size. fallocate in older
releases of util-linux creates a zero-sized file in this case. This
results in _can_fallocate() always returning false, and therefore
never preallocating.

While this may reasonably be argued to be a regression in util-linux,
the -n option doesn't make sense here anyway, so we remove it.

Closes-Bug: #1543181

Change-Id: Ie96fa71e7d2641d30572b8eda5609dd3ca5b6708


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543181

Title:
  Raw and qcow2 disks are never preallocated on systems with newer util-
  linux

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  imagebackend.Image._can_fallocate tests if fallocate works by running
  the following command:

fallocate -n -l 1 .fallocate_test

  where the target directory exists, but the .fallocate_test file does not.
  This command line is copied from the code which actually fallocates a
  disk. However, while this works on systems with an older version of
  util-linux, such as RHEL 7, it does not work on systems with a newer
  version of util-linux, such as Fedora 23. The result of this is that
  this test will always fail, and preallocation with fallocate will be
  erroneously disabled.

  On RHEL 7, which has util-linux-2.23.2-26.el7.x86_64 on my system:

  $ fallocate -n -l 1 foo
  $ ls -lh foo
  -rw-r--r--. 1 mbooth mbooth 0 Feb  8 15:33 foo
  $ du -sh foo
  4.0K  foo

  On Fedora 23, which has util-linux-2.27.1-2.fc23.x86_64 on my system:

  $ fallocate -n -l 1 foo
  fallocate: cannot open foo: No such file or directory

  The F23 behaviour actually makes sense. From the fallocate man page:

-n, --keep-size
Do  not modify the apparent length of the file.

  This doesn't make any sense if the file doesn't exist. That is, the -n
  option makes sense when preallocating an existing disk image, but not
  when testing if fallocate works on a given filesystem and the test
  file doesn't already exist.

  You could also reasonably argue that util-linux probably should not be
  breaking an interface like this, even when misused. However, that's a
  separate discussion. We shouldn't be misusing it.
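
  The preallocation nova wants can be exercised from Python on an
  existing, open file; a small sketch, assuming a POSIX platform where
  os.posix_fallocate is available (this mirrors what `fallocate -l`
  does once the -n flag is dropped):

```python
import os
import tempfile

# Preallocate 1 MiB for an existing, open file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    os.posix_fallocate(f.fileno(), 0, 1024 * 1024)

# Without the -n (keep-size) semantics, the apparent size grows
# to the allocated length.
size = os.path.getsize(path)
os.unlink(path)
```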

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543181/+subscriptions



[Yahoo-eng-team] [Bug 1544058] [NEW] Nova event callback ON - Nova deletes an instance before it gets notification from Neutron that the port of the instance was deleted.

2016-02-10 Thread Toni Freger
Public bug reported:

Tested on Liberty
Step to reproduce:
Enabling of nova-neutron notifications:
On nova.conf:
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
On neutron.conf:
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True

1) Delete an instance.
2) Deletion succeeds.
By the time the notification is sent, nova has already deleted the instance.

Please see attached errors:
1032:2016-01-12 23:47:14.526 8331 INFO nova.api.openstack.wsgi
[req-dac514a0-a869-42bb-bbf7-52a5e2986831 fe71c506bd124db5a5b2081fa1e97785
733a421ebd494cc88cf502ad635a48e5 - - -] HTTP exception thrown: No instances
found for any event
1033:2016-01-12 23:47:14.527 8331 DEBUG nova.api.openstack.wsgi
[req-dac514a0-a869-42bb-bbf7-52a5e2986831 fe71c506bd124db5a5b2081fa1e97785
733a421ebd494cc88cf502ad635a48e5 - - -] Returning 404 to user: No instances
found for any event __call__
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1175
1034:2016-01-12 23:47:14.528 8331 INFO nova.osapi_compute.wsgi.server
[req-dac514a0-a869-42bb-bbf7-52a5e2986831 fe71c506bd124db5a5b2081fa1e97785
733a421ebd494cc88cf502ad635a48e5 - - -] 192.0.2.9 "POST
/v2/733a421ebd494cc88cf502ad635a48e5/os-server-external-events HTTP/1.1"
status: 404 len: 296 time: 0.1210408
1036:==> /var/log/neutron/server.log <==
1038:RESP BODY: {"itemNotFound": {"message": "No instances found for any
event", "code": 404}}
1040:2016-01-12 23:47:14.530 5129 WARNING neutron.notifiers.nova [-]
Nova returned NotFound for event:
[{'tag': u'fedc0c28-9435-4812-ae04-f9f60a269d41',
'name': 'network-vif-deleted',
'server_uuid': u'1f9959c2-62db-425a-bc14-7449dc9231e6'}]

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544058

Title:
  Nova event callback ON - Nova deletes an instance before it gets
  notification from Neutron that the port of the instance was deleted.

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544058/+subscriptions



[Yahoo-eng-team] [Bug 1544133] [NEW] Can't choose the interface to delete from router on curvature network topology

2016-02-10 Thread Ido Ovadia
Public bug reported:

Description of problem:
===
Can't choose the interface to delete from a router on the curvature network
topology

Version-Release number of selected component:
=
python-django-horizon-8.0.0-10.el7ost.noarch
openstack-dashboard-8.0.0-10.el7ost.noarch

How reproducible:
=
100%

Steps to Reproduce:
===
1. Create a private network
2. Create an external network
3. Create a router and connect it to both networks
4. From the network topology, click on the network icon
5. Click 'Delete Interface'

Actual results:
===
Can't choose the interface to delete

Expected results:
=
The user can choose the interface to delete from the router

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544133

Title:
  Can't choose the interface to delete from router on curvature network
  topology

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544133/+subscriptions



[Yahoo-eng-team] [Bug 1437902] Re: nova redeclares the `nova` named exchange zillion times without a real need

2016-02-10 Thread Markus Zoeller (markus_z)
It looks like this only affected "oslo.messaging" and not Nova, which is
why I am switching the status to "Invalid".

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437902

Title:
  nova redeclares the `nova` named exchange zillion times without a real
  need

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Fix Released

Bug description:
  The AMQP broker preserves exchanges; they are replicated to all brokers
  even in non-HA mode.
  A transient exchange can disappear ONLY when the user explicitly requests
  its deletion or when the full rabbit cluster dies.

  It is more efficient to declare exchanges only when they are really
  missing.

  An application MUST redeclare the exchange when it is reported as Not
  Found.
  Note: channel exceptions cause channel termination, but not connection
  termination.
  An application MAY try to redeclare the exchange on connection breakage,
  when it can assume the messaging cluster is dead.
  An application SHOULD redeclare the exchange at application start-up to
  verify its attributes (before the first usage).
  An application does not need to redeclare the exchange in any other case.

  Currently, a significant share of the AMQP request/response pairs are
  Exchange.Declare -> Exchange.Declare-Ok. (One per publish?)
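
  The declare-once discipline described above can be sketched as a small
  cache in front of the declare call (the channel interface here is
  hypothetical, not oslo.messaging's or kombu's API):

```python
class DeclareOnceChannel:
    """Declare each exchange at most once per connection; reset the
    cache on reconnect, since redeclaration is only needed after
    connection breakage (or a Not Found channel error)."""

    def __init__(self, channel):
        self._channel = channel
        self._declared = set()

    def declare_exchange(self, name, **kwargs):
        if name not in self._declared:
            self._channel.exchange_declare(name, **kwargs)
            self._declared.add(name)

    def on_reconnect(self, new_channel):
        self._channel = new_channel
        self._declared.clear()  # broker state is unknown again


# With a stub channel, repeated publishes trigger only one declare:
class StubChannel:
    def __init__(self):
        self.declares = 0

    def exchange_declare(self, name, **kwargs):
        self.declares += 1

stub = StubChannel()
channel = DeclareOnceChannel(stub)
for _ in range(100):
    channel.declare_exchange('nova')
assert stub.declares == 1
```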

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437902/+subscriptions



[Yahoo-eng-team] [Bug 1240317] Re: can't resize after live migrate(block_migrate)

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and there appear to be no open reviews for it. To keep the bug list
sane, I am closing this bug as "Won't Fix". This does not mean that it
is not a valid bug report; it is rather an acknowledgement that no
further progress can be expected here. You are still free to push a new
patch for this bug. If you can reproduce it on the current master code
or on a maintained stable branch, please switch it back to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: Timofey Durakov (tdurakov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240317

Title:
  can't resize after live migrate(block_migrate)

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When I try to resize my instance after a live migration (block_migrate),
  I find error messages in nova-compute.log on my compute node.
  Error message:
  2013-10-15 18:52:09.276 ERROR nova.compute.manager [req-8b4330c7-1ea6-404a-ad0d-4f064e6b9643 None None] [instance: 28b509bb-dfe9-4793-a9f8-b121ab16aa6c] t3.uuzu.idc is not a valid node managed by this compute host.. Setting instance vm_state to ERROR
  2013-10-15 18:52:09.445 ERROR nova.openstack.common.rpc.amqp [req-8b4330c7-1ea6-404a-ad0d-4f064e6b9643 None None] Exception during message handling
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 94, in wrapped
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 260, in decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     function(self, context, *args, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 237, in decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 224, in decorated_function
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2050, in confirm_resize
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     rt = self._get_resource_tracker(migration['source_node'])
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 361, in _get_resource_tracker
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp     raise exception.NovaException(msg)
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp NovaException: t3.uuzu.idc is not a valid node managed by this compute host.
  2013-10-15 18:52:09.445 17864 TRACE nova.openstack.common.rpc.amqp

  I found the instance on the new host t3.uuzu.idc, but its status is ERROR.
  The version is Grizzly.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1544155] [NEW] Can't delete subnet on curvature network topology

2016-02-10 Thread Ido Ovadia
Public bug reported:

Description of problem:
===
Can't delete a subnet on the curvature network topology

Version-Release number of selected component:
===
python-django-horizon-8.0.0-10.el7ost.noarch
openstack-dashboard-8.0.0-10.el7ost.noarch

How reproducible:
==
100%

Steps to Reproduce:

1. Create a network and subnet
2. From the network topology, click on the network icon
3. Click 'Delete Subnet'

Actual results:
===
The subnet is not deleted

Expected results:
==
The subnet should be deleted


Additional info /var/log/horizon/horizon.log
==
2016-02-10 16:33:00,323 5577 WARNING horizon.exceptions Recoverable error: The 
server has either erred or is incapable of performing the requested operation. 
(HTTP 500) (Request-ID: req-ddd3f281-ea79-4bf0-939f-4a8267ad47d1)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544155

Title:
  Can't delete subnet on curvature network topology

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544155/+subscriptions



[Yahoo-eng-team] [Bug 1542486] Re: nova-compute stack traces with BadRequest: Specifying 'tenant_id' other than authenticated tenant in request requires admin privileges

2016-02-10 Thread Adam Young
Adding Nova to the bug report because it absolutely should not require a
specific version of the Keystone API to make things work.  I suspect
that there is a workaround here, but the Keystone API and auth plugins
are designed to be versionless.  This is a step backwards, and should be
treated as a stopgap solution only.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542486

Title:
  nova-compute stack traces with BadRequest: Specifying 'tenant_id'
  other than authenticated tenant in request requires admin privileges

Status in OpenStack Compute (nova):
  New
Status in puppet-nova:
  New

Bug description:
  The puppet-openstack-integration tests (rebased on
  https://review.openstack.org/#/c/276773/ ) currently fail on the
  latest version of RDO Mitaka (delorean current) due to what seems to
  be a problem with the neutron configuration.

  Everything installs fine but tempest fails:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/console.html#_2016-02-05_20_26_35_569

  And there are stack traces in nova-compute.log:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/logs/nova/nova-compute.txt.gz#_2016-02-05_20_22_16_151

  I talked with #openstack-nova and they pointed out a difference between what 
devstack yields as a [neutron] configuration versus what puppet-nova configures:
  
  # puppet-nova via puppet-openstack-integration
  
  [neutron]
  service_metadata_proxy=True
  metadata_proxy_shared_secret =a_big_secret
  url=http://127.0.0.1:9696
  region_name=RegionOne
  ovs_bridge=br-int
  extension_sync_interval=600
  auth_url=http://127.0.0.1:35357
  password=a_big_secret
  tenant_name=services
  timeout=30
  username=neutron
  auth_plugin=password
  default_tenant_id=default

  
  # Well, it worked in devstack™
  
  [neutron]
  service_metadata_proxy = True
  url = http://127.0.0.1:9696
  region_name = RegionOne
  auth_url = http://127.0.0.1:35357/v3
  password = secretservice
  auth_strategy = keystone
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  username = neutron
  auth_plugin = v3password
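
  For comparison, here is a sketch of the puppet-nova [neutron] section
  brought in line with the devstack settings above. The versioned auth_url,
  the v3password plugin, and the domain options are taken from the devstack
  sample; the remaining values (secret, project name, timeout) are carried
  over from the puppet-nova sample and should be treated as assumptions,
  not a verified fix:

```ini
# Sketch only -- combines the puppet-nova options above with the v3
# auth settings from the devstack sample. Values are assumptions.
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = a_big_secret
url = http://127.0.0.1:9696
region_name = RegionOne
ovs_bridge = br-int
extension_sync_interval = 600
# Versioned Keystone endpoint and the matching plugin, as devstack uses:
auth_url = http://127.0.0.1:35357/v3
auth_plugin = v3password
project_domain_name = Default
user_domain_name = Default
project_name = services
username = neutron
password = a_big_secret
timeout = 30
```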

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544190] [NEW] Options missing from metadata_agent.ini

2016-02-10 Thread Matt Kassawara
Public bug reported:

The metadata agent requires at least the following authentication
options, but they are missing from the auto-generated metadata_agent.ini
file:

[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544190

Title:
  Options missing from metadata_agent.ini

Status in neutron:
  New

Bug description:
  The metadata agent requires at least the following authentication
  options, but they are missing from the auto-generated
  metadata_agent.ini file:

  [DEFAULT]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  auth_region = RegionOne
  auth_type = password
  project_domain_id = default
  user_domain_id = default
  project_name = service
  username = neutron
  password = password

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365804] Re: Did not find the volume after live_migration

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I close this bug with "won't fix". This does not
mean that it is not a valid bug report, it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you could reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: Eli Qiao (taget-9) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365804

Title:
  Did not find the volume after live_migration

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I migrated the VM to another host, but an error occurred while the
  source host executed the driver's live_migration. During the
  live_migration process, the BDM was detached from the source host and
  attached on the destination; after this step the error occurred, and
  nova-compute rolled back the BDM: it was detached on the destination
  but was not re-attached on the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1365804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423644] Re: CPU/MEM usage not accurate. Hits quotas while resources are available

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I close this bug with "won't fix". This does not
mean that it is not a valid bug report, it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you could reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".

Juno has reached its EOL, so it's "invalid" for that release.

** Changed in: juno
   Status: New => Invalid

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423644

Title:
  CPU/MEM usage not accurate. Hits quotas while resources are available

Status in Juno:
  Invalid
Status in OpenStack Compute (nova):
  Won't Fix
Status in Ubuntu:
  New

Bug description:
  I tried to set my quotas to exactly the hardware specs of my compute
  node for accurate reporting. My compute node runs on bare metal. After
  I did this I got "quotas exceeded, unable to launch instance". I
  checked the hypervisor from the web interface and CLI: 5 of 12 VCPUs
  used and 6 GB of 16 GB used. When you try to change the quotas to
  match the hardware, it errors. The CLI reports that the quota can't be
  set to 12 VCPUs because 17 are in use; likewise for memory, it says
  17 GB are in use. But they clearly aren't in use. So the band-aid is
  to set the quotas really high, after which they are ignored and you
  can create instances against the actual nova usage stats. This also
  causes some "failure to launch instance" errors ("no host is
  available", meaning there aren't enough resources) even though there
  are. I'm running this as a production environment for my domain, so I
  have spent hundreds of hours chasing my tail. Hope this is helpful
  for debugging the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/juno/+bug/1423644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385318] Re: Nova fails to add fixed IP

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I close this bug with "won't fix". This does not
mean that it is not a valid bug report, it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you could reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385318

Title:
  Nova fails to add fixed IP

Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I created instance with one NIC attached.
  Then I try to attach another NIC:

  nova add-fixed-ip  ServerId NetworkId

  Nova compute raises exception:

  2014-10-24 15:40:33.925 31955 ERROR oslo.messaging.rpc.dispatcher 
[req-43570a05-937a-4ddf-a0e9-e05d42660817 ] Exception during message handling: 
Network could not be found for instance 09b6e137-37d6-475d-992c-bdcb7d3cb841.
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 414, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 326, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 314, in decorated_function
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 3915, in add_fixed_ip_to_instance
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher 
network_id)
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/base_api.py",
 line 61, in wrapper
  2014-10-24 15:40:33.925 31955 TRACE oslo.messaging.rpc.dispatcher res = 
f(self, context, *args, **kwargs)
  

[Yahoo-eng-team] [Bug 1544176] [NEW] br-int with Normal action

2016-02-10 Thread Sothy
Public bug reported:

Hello,
I have two VMs (VM A and VM B) connected to a single host. Traffic going to and
coming from VM B is mirrored to VM C on another compute node.

After creating the tap flow and tap service, I ping from VM A to
VM B. I am able to see the ICMP reply messages at VM C (thanks to the
configuration done by the TaaS agent).

When I look at the problem in br-int on the compute node where VM A and
VM B are located, the OF rules are able to catch the ICMP reply but not
the request. Ping still works because of the NORMAL action in br-int.

See the sample flow in br-int.

 cookie=0x0, duration=21656.671s, table=0, n_packets=10397,
n_bytes=981610, idle_age=0, priority=20,in_port=6
actions=NORMAL,mod_vlan_vid:3913,output:12

[ICMP reply caught in the OpenFlow pipeline]
 
cookie=0x0, duration=15.937s, table=0, n_packets=0, n_bytes=0, idle_age=15, 
priority=20,dl_vlan=4,dl_dst=fa:16:3e:c7:b5:42 
actions=normal,mod_vlan_vid:3913,output:12

[ICMP request supposed to be caught by this rule, but it wasn't :-)]

My question is: why has this rule not been hit? Note that ping works as
normal.

Any clue? It may be a problem with the NORMAL action.
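
A speculative way to picture the shadowing (a toy model, not the actual
Open vSwitch matching code; the dictionary field names and flows below are
hypothetical, modelled on the two rules quoted above): both rules carry
priority=20, and OpenFlow leaves the ordering of overlapping flows at the
same priority unspecified, so the broad in_port=6 rule may absorb the ICMP
request before the more specific dl_dst rule is ever consulted.

```python
# Toy model of OpenFlow flow selection; NOT the real OVS implementation.
# Field names and flows are hypothetical, based on the rules quoted above.

def select_flow(packet, flows):
    """Return a matching flow of the highest priority.

    With several overlapping flows at the *same* priority, OpenFlow does
    not define which one wins; here the first one listed wins the tie.
    """
    matching = [f for f in flows
                if all(packet.get(k) == v for k, v in f["match"].items())]
    return max(matching, key=lambda f: f["priority"], default=None)

flows = [
    # Broad rule: matches anything entering on port 6.
    {"match": {"in_port": 6}, "priority": 20,
     "actions": "NORMAL,mod_vlan_vid:3913,output:12"},
    # Specific rule: matches on VLAN and destination MAC.
    {"match": {"dl_vlan": 4, "dl_dst": "fa:16:3e:c7:b5:42"}, "priority": 20,
     "actions": "normal,mod_vlan_vid:3913,output:12"},
]

# An ICMP request arriving on port 6 for that MAC matches BOTH flows at
# priority 20, so the broad rule can shadow the specific one.
packet = {"in_port": 6, "dl_vlan": 4, "dl_dst": "fa:16:3e:c7:b5:42"}
print(select_flow(packet, flows)["match"])
```

On the real bridge, the n_packets counters shown by ovs-ofctl dump-flows
(as in the output quoted above) are the way to see which of the
overlapping rules is actually absorbing the traffic.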

Best regards
Sothy

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544176

Title:
  br-int with Normal action

Status in neutron:
  New

Bug description:
  Hello,
  I have two VMs (VM A and VM B) connected to a single host. Traffic going to
  and coming from VM B is mirrored to VM C on another compute node.

  After creating the tap flow and tap service, I ping from VM A to
  VM B. I am able to see the ICMP reply messages at VM C (thanks to the
  configuration done by the TaaS agent).

  When I look at the problem in br-int on the compute node where VM A and
  VM B are located, the OF rules are able to catch the ICMP reply but not
  the request. Ping still works because of the NORMAL action in br-int.

  See the sample flow in br-int.

   cookie=0x0, duration=21656.671s, table=0, n_packets=10397,
  n_bytes=981610, idle_age=0, priority=20,in_port=6
  actions=NORMAL,mod_vlan_vid:3913,output:12

  [ICMP reply caught in the OpenFlow pipeline]
   
  cookie=0x0, duration=15.937s, table=0, n_packets=0, n_bytes=0, idle_age=15, 
priority=20,dl_vlan=4,dl_dst=fa:16:3e:c7:b5:42 
actions=normal,mod_vlan_vid:3913,output:12

  [ICMP request supposed to be caught by this rule, but it wasn't :-)]

  My question is: why has this rule not been hit? Note that ping works
  as normal.

  Any clue? It may be a problem with the NORMAL action.

  Best regards
  Sothy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543169] Re: Nova os-volume-types endpoint doesn't exist

2016-02-10 Thread Anne Gentle
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-api-site
   Status: In Progress => Confirmed

** Changed in: openstack-api-site
 Assignee: Anne Gentle (annegentle) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543169

Title:
  Nova os-volume-types endpoint doesn't exist

Status in OpenStack Compute (nova):
  New
Status in openstack-api-site:
  Confirmed

Bug description:
  The Nova v2.1 documentation shows an endpoint "os-volume-types" which
  lists the available volume types. http://developer.openstack.org/api-
  ref-compute-v2.1.html#listVolumeTypes

  I am using OpenStack Liberty and that endpoint doesn't appear to exist
  anymore. GET requests sent to /v2.1/{tenant_id}/os-volume-types
  return 404 not found. When I searched the Nova codebase on GitHub, I
  could only find a reference to volume types in the policy.json but not
  implemented anywhere.

  Does this endpoint still exist, and if so what is the appropriate
  documentation?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331537] Re: nova service-list shows nova-compute as down and is required to be restarted frequently in order to provision new vms

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I close this bug with "won't fix". This does not
mean that it is not a valid bug report, it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you could reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: Roman Podoliaka (rpodolyaka) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1331537

Title:
  nova service-list shows nova-compute as down and is required to be
  restarted frequently in order to provision new vms

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Nova compute services in OpenStack Havana frequently go down, as listed
  by "nova service-list", and have to be restarted very frequently,
  multiple times every day. All the compute nodes have their NTP times
  in sync.

  When a node shows as down, it cannot be used for launching new VMs,
  and we quickly run out of compute resources. Hence our workaround is
  to restart the compute services on those servers hourly.

  In the nova-compute node I've found the following errors, and they did match
  with the "Updated_at" field from nova service-list.
  2014-06-07 00:21:15.690 511340 ERROR nova.servicegroup.drivers.db [-] model
  server went away
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db Traceback
  (most recent call last):
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db File
  "/usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py", line 92,
  in _report_state
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db
  report_count = service.service_ref['report_count'] + 1
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db
  TypeError: 'NoneType' object has no attribute '__getitem__'
  2014-06-07 00:21:15.690 511340 TRACE nova.servicegroup.drivers.db

  It looks like the ones that are shown as down haven't been able to update the
  database with the latest status, and they did match with the Traceback seen
  above (2014-06-07 00:21:15.690) on at least two compute nodes that I have seen.

  +--------------+-------+--------+---------+-------+------------------------+-----------------+
  | Binary       | Host  | Zone   | Status  | State | Updated_at             | Disabled Reason |
  +--------------+-------+--------+---------+-------+------------------------+-----------------+
  | nova-compute | nova1 | blabla | enabled | up    | 2014-06-07T00:37:42.00 | None            |
  | nova-compute | nova2 | blabla | enabled | down  | 2014-06-07T00:21:05.00 | None            |
  +--------------+-------+--------+---------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1331537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525543] Re: ClientException: Unexpected API Error

2016-02-10 Thread Markus Zoeller (markus_z)
Cleanup
===

This bug report has had the status "Incomplete" for more than 30 days
and it looks like there are no open reviews for it. To keep
the bug list sane, I close this bug with "won't fix". This does not
mean that it is not a valid bug report, it's more to acknowledge that
no progress can be expected here anymore. You are still free to push a
new patch for this bug. If you could reproduce it on the current master
code or on a maintained stable branch, please switch it to "Confirmed".

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525543

Title:
  ClientException: Unexpected API Error

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  OpenStack
  Nova version: 2.30.1

  2015-12-12 16:33:00.575 2722 INFO nova.osapi_compute.wsgi.server 
[req-e153dadf-bc09-4181-ac8b-3d83870414b1 e133daa339a047fb80a49be3b5f30c7b 
56e85a07917a4818ad98356192037b2d - - -] 10.0.0.40 "GET /v2/ HTTP/1.1" status: 
200 len: 572 time: 0.1086259
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions 
[req-89067bd6-ebb3-4b32-ac57-3bbea950f370 e133daa339a047fb80a49be3b5f30c7b 
56e85a07917a4818ad98356192037b2d - - -] Unexpected exception in API method
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/images.py", line 
145, in detail
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions 
**page_params)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/api.py", line 68, in get_all
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 284, in detail
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions for 
image in images:
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 254, in list
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions for 
image in paginate(params, return_request_id):
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 238, in 
paginate
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 63, in _list
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions resp, 
body = self.client.get(url)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 280, in get
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 272, in 
_request
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 93, in 
_handle_response
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: 500 Internal Server Error: The server has either erred 
or is incapable of performing the requested operation. (HTTP 500)
  2015-12-12 16:33:01.311 2722 ERROR nova.api.openstack.extensions
  2015-12-12 16:33:01.329 2722 INFO nova.api.openstack.wsgi 
[req-89067bd6-ebb3-4b32-ac57-3bbea950f370 e133daa339a047fb80a49be3b5f30c7b 
56e85a07917a4818ad98356192037b2d - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.
  
  2015-12-12 16:33:01.343 2722 INFO nova.osapi_compute.wsgi.server 
[req-89067bd6-ebb3-4b32-ac57-3bbea950f370 e133daa339a047fb80a49be3b5f30c7b 
56e85a07917a4818ad98356192037b2d - - -] 10.0.0.40 "GET 
/v2/56e85a07917a4818ad98356192037b2d/images/detail 

[Yahoo-eng-team] [Bug 1403836] Re: Nova volume attach fails for a iscsi disk with CHAP enabled.

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/249291
Committed: 
https://git.openstack.org/cgit/openstack/os-win/commit/?id=b72790bacfd356021b2dd870ade6c9c216fd14a0
Submitter: Jenkins
Branch:master

commit b72790bacfd356021b2dd870ade6c9c216fd14a0
Author: Lucian Petrut 
Date:   Fri Nov 20 16:20:40 2015 +0200

iSCSI initiator refactoring using iscsidsc.dll

This patch adds a new iscsi initiator utils class,
leveraging iscsidsc.dll functions.

The advantages are:
* Same error output as iscsicli, without the process spawn
  overhead
* Improved overall performance, having finer control over
  the iSCSI initiator and avoiding unnecessary operations
* Fixed bugs related to LUN discovery
* Static targets are used instead of having portal discovery
  sessions. This will let us use backends that require
  discovery credentials (which may be different than the
  credentials used when logging in targets)
* improved MPIO support (the caller must request logging in the
  target for each of the available portals. Logging in multiple
  targets exporting the same LUN is also supported). Also, a
  specific initiator can be requested when creating sessions.

Closes-Bug: #1403836
Closes-Bug: #1372823
Closes-Bug: #1372827

Co-Authored-By: Alin Balutoiu 
Change-Id: Ie037cf1712a28e85e5eca445eea3df883c6b6831


** Changed in: os-win
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403836

Title:
  Nova volume attach fails for a iscsi disk with CHAP enabled.

Status in OpenStack Compute (nova):
  In Progress
Status in os-win:
  Fix Released

Bug description:
  I was trying nova volume-attach of a disk with CHAP enabled on
  Windows (Hyper-V driver). I noticed that the volume attach fails with
  CHAP authentication enforced, while the same works without CHAP
  authentication set.

  My current setup is Juno based:

  I saw a similar bug reported as
  https://bugs.launchpad.net/nova/+bug/1397549  .  The fix of which is
  as per

  https://review.openstack.org/#/c/137623/ and
  https://review.openstack.org/#/c/134592/ .

  Even after incorporating  these changes  things do not work and it
  needs an additional fix.

  Issue: Even with the code from the commits mentioned earlier, when we
  try to do nova volume-attach, the Hyper-V host first logs in to the
  portal and then attaches the volume to the target.

  Now, if we log in to the portal without CHAP authentication, it fails
  (authentication failure), and hence the code needs to be changed here
  (https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L64-65).

  
  Resolution: While creating/adding the new portal, we need to add it
  with the CHAP credentials (the same way it is done in target.connect).

  Sample snippet of the fix would be:

  if portal:
      portal[0].Update()
  else:
      # Adding target portal to iscsi initiator. Sending targets.
      LOG.debug("Create a new portal")
      auth = {}
      if auth_username and auth_password:
          auth['AuthenticationType'] = self._CHAP_AUTH_TYPE
          auth['ChapUsername'] = auth_username
          auth['ChapSecret'] = auth_password
      LOG.debug(auth)
      portal = self._conn_storage.MSFT_iSCSITargetPortal
      portal.New(TargetPortalAddress=target_address,
                 TargetPortalPortNumber=target_port, **auth)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372827] Re: Improve efficiency of Hyper-V attaching iSCSI volumes

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/249291
Committed: 
https://git.openstack.org/cgit/openstack/os-win/commit/?id=b72790bacfd356021b2dd870ade6c9c216fd14a0
Submitter: Jenkins
Branch:master

commit b72790bacfd356021b2dd870ade6c9c216fd14a0
Author: Lucian Petrut 
Date:   Fri Nov 20 16:20:40 2015 +0200

iSCSI initiator refactoring using iscsidsc.dll

This patch adds a new iscsi initiator utils class,
leveraging iscsidsc.dll functions.

The advantages are:
* Same error output as iscsicli, without the process spawn
  overhead
* Improved overall performance, having finer control over
  the iSCSI initiator and avoiding unnecessary operations
* Fixed bugs related to LUN discovery
* Static targets are used instead of having portal discovery
  sessions. This will let us use backends that require
  discovery credentials (which may be different than the
  credentials used when logging in targets)
* improved MPIO support (the caller must request logging in the
  target for each of the available portals. Logging in multiple
  targets exporting the same LUN is also supported). Also, a
  specific initiator can be requested when creating sessions.

Closes-Bug: #1403836
Closes-Bug: #1372823
Closes-Bug: #1372827

Co-Authored-By: Alin Balutoiu 
Change-Id: Ie037cf1712a28e85e5eca445eea3df883c6b6831


** Changed in: os-win
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372827

Title:
  Improve efficiency of Hyper-V attaching iSCSI volumes

Status in OpenStack Compute (nova):
  Triaged
Status in os-win:
  Fix Released

Bug description:
  The Hyper-V driver in Nova is not very efficient attaching Cinder
  volumes to the VMs.

  It always tries to refresh the entire connection to the iSCSI target:

  
https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L87

  This is a time-consuming task that also blocks additional calls while
  it runs.

  The class should be refactored to work in a more efficient way.
  Calling the 'Update' method every time a volume is attached should be
  replaced by a more intelligent mechanism. As reported in
  https://bugs.launchpad.net/nova/+bug/1372823 a call to
  'self._conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")'
  could help.
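
  The idea in the paragraph above can be sketched as follows. This is a
  minimal illustration, not nova's actual code: `query_sessions` is an
  injected stand-in for the WMI query
  "SELECT * FROM MSFT_iSCSISessionToDisk", and `portal` stands in for the
  target portal object whose `Update()` call is expensive.

```python
# Hypothetical sketch: skip the expensive portal.Update() when a session
# to the target already exists. `query_sessions` is injected so the
# decision logic can be exercised without a Hyper-V host.

def needs_portal_refresh(target_iqn, query_sessions):
    """Return True only when no session to target_iqn exists yet."""
    sessions = query_sessions()
    return not any(s.get('TargetNodeAddress') == target_iqn
                   for s in sessions)

def attach_volume(target_iqn, portal, query_sessions):
    # Refresh (and block) only when the target is genuinely unknown,
    # instead of calling portal.Update() on every single attach.
    if needs_portal_refresh(target_iqn, query_sessions):
        portal.Update()
    # ... proceed with login / LUN discovery ...
```

  With this shape, attaching a second volume to an already-known target
  skips the refresh entirely, which is the efficiency gain the report
  asks for.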

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372827/+subscriptions



[Yahoo-eng-team] [Bug 1544195] Re: User can not provision ironic node via nova when providing pre-created port

2016-02-10 Thread Devananda van der Veen
** Also affects: magnum
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544195

Title:
  User can not provision ironic node via nova when providing pre-created
  port

Status in Magnum:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When booting a nova instance with baremetal flavor, one can not
  provide a pre-created neutron port to "nova boot" command.

  The reason is obvious - to deploy successfully, the MAC address of the
  port must match the MAC address of the ironic port on the ironic node
  where provisioning will happen. Although it is possible to specify a
  MAC address during port creation, a user cannot know which ironic node
  nova-compute will assign provisioning to (moreover, an ordinary user
  has no access to the list of ironic ports/MACs at all).

  This is most probably a known limitation, but the big problem is that
  it breaks many kinds of cloud orchestration. For example, the most
  flexible approach in Heat is to pre-create a port and create the
  server with this port provided (actually this is the only way if one
  needs to assign a floating IP to the instance via Neutron). Some other
  consumers of Heat use this approach extensively.

  So this limitation prevents Murano or Sahara from provisioning their
  instances on bare metal via Nova/Ironic.

  The solution might be to update the MAC of the port to the correct one
  (MAC address update is possible with an admin context) when working
  with baremetal nodes/Ironic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1544195/+subscriptions



[Yahoo-eng-team] [Bug 1544240] [NEW] disassociate floating ip 500 response

2016-02-10 Thread Andrew Laski
Public bug reported:

From http://logs.openstack.org/88/276088/5/check/gate-grenade-
dsvm/8051980/logs/new/screen-n-api.txt.gz

2016-02-10 13:39:19.932 ERROR nova.api.openstack.extensions 
[req-644cea97-7d26-4e2a-984b-d346ebf96ccb cinder_grenade cinder_grenade] 
Unexpected exception in API method
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 293, in 
_remove_floating_ip
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
disassociate_floating_ip(self, context, instance, address)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 80, in 
disassociate_floating_ip
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
self.network_api.disassociate_floating_ip(context, instance, address)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 49, in wrapped
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(self, context, *args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 77, in wrapper
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions res = 
f(self, context, *args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 240, in disassociate_floating_ip
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
affect_auto_assigned)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/utils.py", line 1082, in wrapper
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
150, in inner
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 460, in 
disassociate_floating_ip
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
interface, host, fixed_ip.instance_uuid)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/rpcapi.py", line 324, in 
_disassociate_floating_ip
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
instance_uuid=instance_uuid)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
retry=self.retry)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
timeout=timeout, retry=retry)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 466, in send
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
retry=retry)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 455, in _send
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions result = 
self._waiter.wait(msg_id, timeout)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 338, in wait
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions message = 
self.waiters.get(msg_id, timeout=timeout)
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 240, in get
2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 'to 
message ID %s' % msg_id)
2016-02-10 13:39:19.932 22992 ERROR 

[Yahoo-eng-team] [Bug 1544248] [NEW] rename hz-table directive to hz-table-helper

2016-02-10 Thread Cindy Lu
Public bug reported:

Originally the hzTable namespace extended the Smart-Table module to
provide support for checkboxes and sorting.

However, we are writing a table directive in another patch that will
allow us to generate table HTML content given the data and the column
definition, so we want to use the hzTable namespace for that.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544248

Title:
  rename hz-table directive to hz-table-helper

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Originally the hzTable namespace extended the Smart-Table module to
  provide support for checkboxes and sorting.

  However, we are writing a table directive in another patch that will
  allow us to generate table HTML content given the data and the column
  definition, so we want to use the hzTable namespace for that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544248/+subscriptions



[Yahoo-eng-team] [Bug 1544257] [NEW] Add metadefs for Cinder volume type configuration

2016-02-10 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/268357
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 44629deb81d153b673d520af44d384b59bfb1a31
Author: Mitsuhiro Tanino 
Date:   Fri Jan 15 17:12:35 2016 -0500

Add metadefs for Cinder volume type configuration

The following Cinder patch adds support to specify Cinder volume type
via cinder_img_volume_type parameter in glance properties on image
metadata. The property should be added in the Metadata Definitions
catalog.

https://review.openstack.org/#/c/258649/

DocImpact: Add cinder_img_volume_type to Image service property
   keys list at CLI reference.
   http://docs.openstack.org/cli-reference/glance.html
Change-Id: I3bbb20fdf153e12b7461fa9ea9fa172a8d603093
Depends-On: I62f02d817d84d3a7b651db36d7297299b1af2fe3

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: doc glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1544257

Title:
  Add metadefs for Cinder volume type configuration

Status in Glance:
  New

Bug description:
  https://review.openstack.org/268357
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 44629deb81d153b673d520af44d384b59bfb1a31
  Author: Mitsuhiro Tanino 
  Date:   Fri Jan 15 17:12:35 2016 -0500

  Add metadefs for Cinder volume type configuration
  
  The following Cinder patch adds support to specify Cinder volume type
  via cinder_img_volume_type parameter in glance properties on image
  metadata. The property should be added in the Metadata Definitions
  catalog.
  
  https://review.openstack.org/#/c/258649/
  
  DocImpact: Add cinder_img_volume_type to Image service property
 keys list at CLI reference.
 http://docs.openstack.org/cli-reference/glance.html
  Change-Id: I3bbb20fdf153e12b7461fa9ea9fa172a8d603093
  Depends-On: I62f02d817d84d3a7b651db36d7297299b1af2fe3

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1544257/+subscriptions



[Yahoo-eng-team] [Bug 1468803] Re: [RFE] Create a modular L2 agent framework for linuxbridge, sriov and macvtap agents

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/273448
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7d153a671b5fcc77437bc1e9b41015da1acc57f8
Submitter: Jenkins
Branch:master

commit 7d153a671b5fcc77437bc1e9b41015da1acc57f8
Author: Andreas Scheuring 
Date:   Thu Jan 28 10:28:43 2016 +0100

Moving Common Agent into separate module

Moving the CommonAgent and all its unit tests into a separate module.

Closes-Bug: #1468803

Change-Id: Ifccc6ee1a77eef3928ad326cd5857092aeef4a17


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468803

Title:
  [RFE] Create a modular L2 agent framework for linuxbridge, sriov and
  macvtap agents

Status in neutron:
  Fix Released

Bug description:
  Problem Statement
  =

  Currently, the Open vSwitch, Linux Bridge, and sriov mechanism drivers
  for the ML2 plugin have their own agents. This means that when
  improvements are made to one agent implementation, they have to be
  ported over to the other agents to gain the improvement. This has
  already happened, with patches like [1].  Much of the agent
  functionality is common enough that much of the code could be shared.

  Discussion on the mailing list [2]

  Analysis of current agents
  ==

  Currently the following agents are in the neutron tree
  - openvswitch [4]
  - linuxbridge [3]
  - sriov [5]
  - mlnx agent [6]

  For Mitaka the following agent is proposed
  - Macvtap agent [7]

  The following agents use a classical agent loop, monitoring for new
  devices to show up: linuxbridge, sriov, mlnx, macvtap

  OVS uses ovsdb events (or similar things) to get notified about new
  events.

  Proposal
  

  High level architecture
  ---

  Today the linuxbridge agent consists of 4 classes
  - NetworkSegment
  - LinuxBridgeManager  --> encapsulating most of the bridge specifics
  - LinuxBridgeRpcCallbacks   --> Class containing all the rpc callback methods
  - LinuxBridgeNeutronAgentRPC --> The agent loop itself

  #1 Get a clear separation between agent loop and bridge impl specifics
  Move all bridge specific code from LinuxBridgeNeutronAgentRPC to 
LinuxBridgeManager, like config options, rpc registrations,...

  #2 Modify the LinuxBridgeNeutronAgentRPC to take the manager class as
  arg instead of creating it within the constructor. Manager class will
  be instantiated in lb main method

  #3 Merge LinuxBridgeRpcCallbacks into LinuxBridgeNeutronAgentRPC

  #4 Establish a clear interface for a manager class and enforce this in
  the common agent. Other managers must satisfy this interface in order
  to work properly with the common agent

  #5 Move common agent into a new location
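
  Steps #2 and #4 above could be sketched like this. All names are
  illustrative, not the actual neutron classes; the point is only the
  shape: a fixed manager interface, with the manager injected into the
  agent loop rather than constructed inside it.

```python
# Hypothetical sketch of a common agent loop with an injected manager.
# Each driver's main() would build its own manager (linuxbridge, sriov,
# macvtap) and hand it to the loop, keeping the loop backend-agnostic.
import abc

class CommonAgentManager(abc.ABC):
    """Interface every manager implementation must satisfy (step #4)."""

    @abc.abstractmethod
    def get_all_devices(self):
        """Return the set of currently plugged device identifiers."""

    @abc.abstractmethod
    def plug_interface(self, device):
        """Wire a newly appeared device into the backend."""

class CommonAgentLoop:
    def __init__(self, manager):
        # Manager is passed in (step #2), not created here, so the loop
        # contains no bridge-specific code.
        self.manager = manager
        self.known_devices = set()

    def scan_once(self):
        # Classical agent loop iteration: plug any device that appeared
        # since the last scan.
        current = self.manager.get_all_devices()
        for device in current - self.known_devices:
            self.manager.plug_interface(device)
        self.known_devices = current
```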

  Benefit
  ---

  Sharing agent code, getting improvements/fixes for all agents. No
  needs for porting anymore.

  Scope
  -
  The proposal will restructure the lb agent in this way and establish
the lb agent as its first user.

  NOT part of this proposal is to move over the sriov agent. However the
  common agent is designed in a way to make that easily possible. I'm
  just saying this is a separate effort.

  This proposal will have no impact on the OVS agent.

  Possible follow-up stages
  -

  - Implement macvtap agent as exploiter
  - Move over sriov agent as exploiter
  - Get shared code between the common agent and ovs agent?
  - mlnx agent?

  Sources
  ===

  [1] https://review.openstack.org/#/c/138512/
  [2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067605.html
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py
  [4] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
  [5] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py
  [6] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py
  [7] https://bugs.launchpad.net/neutron/+bug/1480979

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468803/+subscriptions



[Yahoo-eng-team] [Bug 1537526] Re: Integration tests @tables.bind_row_action() is too picky about buttons layout

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271805
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4b83421bf58e4a94c7b449b1f51f4512bfaf81b7
Submitter: Jenkins
Branch:master

commit 4b83421bf58e4a94c7b449b1f51f4512bfaf81b7
Author: Timur Sufiev 
Date:   Sun Jan 24 21:12:23 2016 +0300

Allow @tables.bind_row_action() to bind in 2 additional scenarios

First, it's now able to bind destructive actions rendered with
button.btn-danger. Second, it now successfully matches a single
primary action (button outside of div.btn-group container).

Change-Id: I1c55dc10b344c4899a80d83f4d18e59d5df266a6
Closes-Bug: #1537526


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1537526

Title:
  Integration tests @tables.bind_row_action() is too picky about buttons
  layout

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  First, it cannot bind an action to the button with destructive effects
  (marked with .btn-danger), which is rendered as  element
  instead of . Second, the @bind_row_action() decorator doesn't expect
  to find a single primary action outside of a div.btn-group container -
  thus failing to click the single button.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1537526/+subscriptions



[Yahoo-eng-team] [Bug 1543541] Re: When an unhandled Python error is raised in i9n tests, cleanup code becomes non-functional

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277755
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=51f35099bc0be32e26012ace18b1eb4502690349
Submitter: Jenkins
Branch:master

commit 51f35099bc0be32e26012ace18b1eb4502690349
Author: Timur Sufiev 
Date:   Tue Feb 9 12:38:19 2016 +0300

Don't overwrite original traceback in certain cases of i9n failures

Achieve this by muffling exceptions (raised because Selenium became
unresponsive) while taking the integration test failure screenshot.

Also extract the pattern of muffling and capturing exceptions common
for dump_browser_log, dump_html_page and save_screenshot in a common
@exceptions_captured context manager.

Closes-Bug: #1543541
Change-Id: I37fa18a302c553b43529df056d2beacff70f6189


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543541

Title:
  When an unhandled Python error is raised in i9n tests, cleanup code
  becomes non-functional

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  What's worse, the call which should take a screenshot raises another
  exception (since Selenium is no longer responding after that unhandled
  error has been raised) which completely overwrites the original
  exception and traceback. And without original traceback it's very
  difficult to understand what caused integration test to fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543541/+subscriptions



[Yahoo-eng-team] [Bug 1540939] Re: Instance delete causing port leak

2016-02-10 Thread Chuck Carmack
I think the problem we are having is that we added "VMAdmin" to the
delete port rule.

We have this:

"owner": "tenant_id:%(tenant_id)s",
"admin_or_vm_admin_owner": "role:admin or (tenant_id:%(tenant_id)s and 
role:VMAdmin)",
"admin_or_vm_admin_network_owner": "role:admin or 
(tenant_id:%(network:tenant_id)s and role:VMAdmin)",
"vm_admin_owner_or_vm_admin_network_owner": 
"rule:admin_or_vm_admin_network_owner or rule:admin_or_vm_admin_owner",

...

"delete_port": "rule:vm_admin_owner_or_vm_admin_network_owner or
rule:context_is_advsvc",

So it takes VMAdmin to delete a port, but the user in this case did not
have that role when deleting an instance.

I'm going to reopen this bug to see whether nova can be changed to use
admin credentials to delete the port when the neutron port binding
extension is enabled.


** Changed in: nova
   Status: Invalid => New

** Changed in: nova
 Assignee: (unassigned) => Chuck Carmack (chuckcarmack75)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1540939

Title:
  Instance delete causing port leak

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova can cause a neutron port leak after deleting an instance.

  If neutron has the port binding extension installed, then nova uses admin 
credentials to create the port during instance create:
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L537

  However, during instance delete, nova always uses the user creds:
  
https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L739

  Depending on the neutron policy settings, this can leak ports in
  neutron.

  Can someone explain this behavior?

  We are running on nova kilo.
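
  The symmetry problem described above can be sketched as a small
  decision helper. This is not nova's actual API: `get_admin_client` and
  `get_user_client` are hypothetical stand-ins for however the caller
  obtains admin- and user-scoped neutron clients.

```python
# Hypothetical sketch: use the same credential level for port deletion
# as was used for port creation. When the port binding extension is
# present, nova created the port with admin credentials, so deleting it
# with user credentials can be denied by neutron policy and leak the port.

def select_client_for_port_delete(binding_extension_enabled,
                                  get_admin_client, get_user_client):
    if binding_extension_enabled:
        return get_admin_client()
    return get_user_client()
```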

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1540939/+subscriptions



[Yahoo-eng-team] [Bug 1544295] [NEW] Unable to place static in panel directory

2016-02-10 Thread Thai Tran
Public bug reported:

Currently a static folder can live in a dashboard but not in a panel
(which is a subdirectory of a dashboard). It would be much nicer to be
able to place static files under each panel instead.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544295

Title:
  Unable to place static in panel directory

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently a static folder can live in a dashboard but not in a panel
  (which is a subdirectory of a dashboard). It would be much nicer to be
  able to place static files under each panel instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544295/+subscriptions



[Yahoo-eng-team] [Bug 1318721] Re: RPC timeout in all neutron agents

2016-02-10 Thread Corey Bryant
** Changed in: cloud-archive/juno
   Status: Fix Committed => Fix Released

** Changed in: cloud-archive/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1318721

Title:
  RPC timeout in all neutron agents

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  Fix Released
Status in Ubuntu Cloud Archive juno series:
  Fix Released
Status in neutron:
  Invalid
Status in oslo.messaging:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in oslo.messaging package in Ubuntu:
  Invalid
Status in neutron source package in Trusty:
  Fix Committed
Status in oslo.messaging source package in Trusty:
  Fix Released

Bug description:
  In the logs the first traceback that happen is this:

  [-] Unexpected exception occurred 1 time(s)... retrying.
  Traceback (most recent call last):
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/excutils.py",
 line 62, in inner_func
  return infunc(*args, **kwargs)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 741, in _consumer_thread

    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 732, in consume
  @excutils.forever_retry_uncaught_exceptions
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 660, in iterconsume
  try:
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 590, in ensure
  def close(self):
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 531, in reconnect
  # to return an error not covered by its transport
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 513, in _connect
  Will retry up to self.max_retries number of times.
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py",
 line 150, in reconnect
  use the callback passed during __init__()
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 508, in declare
  self.queue_bind(nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 541, in queue_bind
  self.binding_arguments, nowait=nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/kombu/entity.py", 
line 551, in bind_to
  nowait=nowait)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", 
line 1003, in queue_bind
  (50, 21),  # Channel.queue_bind_ok
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py",
 line 68, in wait
  return self.dispatch_method(method_sig, args, content)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/abstract_channel.py",
 line 86, in dispatch_method
  return amqp_method(self, args)
    File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/amqp/channel.py", 
line 241, in _close
  reply_code, reply_text, (class_id, method_id), ChannelError,
  NotFound: Queue.bind: (404) NOT_FOUND - no exchange 
'reply_8f19344531b448c89d412ee97ff11e79' in vhost '/'

  Than an RPC Timeout is raised each second in all the agents

  ERROR neutron.agent.l3_agent [-] Failed synchronizing routers
  TRACE neutron.agent.l3_agent Traceback (most recent call last):
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
 line 702, in _rpc_loop
  TRACE neutron.agent.l3_agent self.context, router_ids)
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
 line 79, in get_routers
  TRACE neutron.agent.l3_agent topic=self.topic)
  TRACE neutron.agent.l3_agent   File 
"/opt/cloudbau/neutron-virtualenv/lib/python2.7/site-packages/neutron/openstack/common/rpc/proxy.py",
 line 130, in call
  TRACE neutron.agent.l3_agent exc.info, real_topic, msg.get('method'))
  TRACE neutron.agent.l3_agent Timeout: Timeout while waiting on RPC response - 
topic: "q-l3-plugin", RPC method: "sync_routers" info: ""

  This actually makes the agents useless until they are all restarted.

  An analysis of what's going on is coming soon :)

  
  ---

  [Impact]

  This patch addresses an issue when a RabbitMQ cluster node goes down,
  OpenStack services try to reconnect to another RabbitMQ node and then
  re-create everything from scratch, and due to the

[Yahoo-eng-team] [Bug 1544313] [NEW] LBaaSv2 agent schedules LB's to agents that are offline

2016-02-10 Thread Major Hayden
Public bug reported:

When I create load balancers via the LBaaSv2 agent, the load balancer
will be scheduled to an agent that is offline.  I'm not sure if this
occurs in Mitaka, but it is happening with the latest commits from
Liberty.

To reproduce:
* Ensure you have two neutron lbaasv2 agents running, one on each server
* Stop one of the agents
* Use neutron lbaas-loadbalancer-create to create two new load balancers
* One will be PENDING_CREATE since it was scheduled to the agent that is down

I would expect that the load balancer would be scheduled to an agent
that is online.
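
The expected behaviour can be sketched as follows. This is an
illustrative sketch only, not the neutron-lbaas scheduler: the field
names (`heartbeat_timestamp`, `loadbalancer_count`) and the 75-second
down time are assumptions for the example.

```python
# Hypothetical sketch: schedule only to agents whose heartbeat is
# recent, picking the least loaded among the live candidates.
import datetime

AGENT_DOWN_TIME = datetime.timedelta(seconds=75)  # assumed threshold

def alive_agents(agents, now=None):
    now = now or datetime.datetime.utcnow()
    return [a for a in agents
            if now - a['heartbeat_timestamp'] <= AGENT_DOWN_TIME]

def schedule_loadbalancer(agents, now=None):
    candidates = alive_agents(agents, now)
    if not candidates:
        # Better to fail loudly than leave the LB in PENDING_CREATE
        # on an agent that will never pick it up.
        raise RuntimeError('No active lbaasv2 agent available')
    return min(candidates, key=lambda a: a['loadbalancer_count'])
```

With a filter like this, the stopped agent in the reproduction steps
would never be selected, and both load balancers would land on the
agent that is still online.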

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544313

Title:
  LBaaSv2 agent schedules LB's to agents that are offline

Status in neutron:
  New

Bug description:
  When I create load balancers via the LBaaSv2 agent, the load balancer
  will be scheduled to an agent that is offline.  I'm not sure if this
  occurs in Mitaka, but it is happening with the latest commits from
  Liberty.

  To reproduce:
  * Ensure you have two neutron lbaasv2 agents running, one on each server
  * Stop one of the agents
  * Use neutron lbaas-loadbalancer-create to create two new load balancers
  * One will be PENDING_CREATE since it was scheduled to the agent that is down

  I would expect that the load balancer would be scheduled to an agent
  that is online.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544313/+subscriptions



[Yahoo-eng-team] [Bug 1544195] [NEW] User can not provision ironic node via nova when providing pre-created port

2016-02-10 Thread Pavlo Shchelokovskyy
Public bug reported:

When booting a nova instance with baremetal flavor, one can not provide
a pre-created neutron port to "nova boot" command.

The reason is obvious: to deploy successfully, the MAC address of the
port must be the same as the MAC address of the ironic port
corresponding to the ironic node where provisioning will happen, and
although it is possible to specify a MAC address during port create, a
user cannot know which ironic node nova-compute will assign the
provisioning to (moreover, an ordinary user has no access to the list
of ironic ports/MACs whatsoever).

This is most probably a known limitation, but the big problem is that it
breaks many sorts of cloud orchestration attempts. For example, the most
flexible approach in Heat is to pre-create a port and create the server
with this port provided (actually this is the only way if one needs to
assign a floating IP to the instance via Neutron). Some other consumers
of Heat extensively use this approach.

This limitation therefore prevents Murano or Sahara from provisioning
their instances on bare metal via Nova/Ironic.

The solution might be to update the port's MAC address to the correct
one (a MAC address update is possible with an admin context) when
working with baremetal nodes/Ironic.
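The proposed workaround could look roughly like the sketch below. The client class and helper are hypothetical stand-ins, not nova or neutronclient code; only the request-body shape ({'port': {'mac_address': ...}}) mirrors the neutron port-update API:

```python
# Hypothetical sketch: once nova-compute knows which ironic node the
# instance landed on, rewrite the pre-created port's MAC address using
# an admin context. FakeNeutronAdminClient stands in for a real client.
class FakeNeutronAdminClient:
    def __init__(self):
        self.ports = {'port-1': {'mac_address': 'fa:16:3e:00:00:01'}}

    def update_port(self, port_id, body):
        self.ports[port_id].update(body['port'])
        return {'port': self.ports[port_id]}

def fix_port_mac(client, port_id, ironic_node_mac):
    # The body mirrors the neutron API: {'port': {'mac_address': ...}}
    return client.update_port(port_id, {'port': {'mac_address': ironic_node_mac}})

client = FakeNeutronAdminClient()
updated = fix_port_mac(client, 'port-1', '52:54:00:aa:bb:cc')
print(updated['port']['mac_address'])
```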

** Affects: nova
 Importance: Undecided
 Status: Confirmed


** Tags: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544195

Title:
  User can not provision ironic node via nova when providing pre-created
  port

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When booting a nova instance with baremetal flavor, one can not
  provide a pre-created neutron port to "nova boot" command.

  The reason is obvious: to deploy successfully, the MAC address of the
  port must be the same as the MAC address of the ironic port
  corresponding to the ironic node where provisioning will happen, and
  although it is possible to specify a MAC address during port create,
  a user cannot know which ironic node nova-compute will assign the
  provisioning to (moreover, an ordinary user has no access to the list
  of ironic ports/MACs whatsoever).

  This is most probably a known limitation, but the big problem is that
  it breaks many sorts of cloud orchestration attempts. For example,
  the most flexible approach in Heat is to pre-create a port and create
  the server with this port provided (actually this is the only way if
  one needs to assign a floating IP to the instance via Neutron). Some
  other consumers of Heat extensively use this approach.

  This limitation therefore prevents Murano or Sahara from provisioning
  their instances on bare metal via Nova/Ironic.

  The solution might be to update the port's MAC address to the correct
  one (a MAC address update is possible with an admin context) when
  working with baremetal nodes/Ironic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544195/+subscriptions



[Yahoo-eng-team] [Bug 1538482] Re: url_for does not exists for service catalog

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272990
Committed: 
https://git.openstack.org/cgit/openstack/glance_store/commit/?id=8677f9287761d848d529aed9ecf2cb2800b5418c
Submitter: Jenkins
Branch: master

commit 8677f9287761d848d529aed9ecf2cb2800b5418c
Author: kairat_kushaev 
Date:   Wed Jan 27 13:18:27 2016 +0300

Use url_for from keystoneclient in swift store

glance doesn't pass ServiceCatalog to glance_store
in the user context (glance just passes a list of service endpoints).
So when the swift multi-tenant store is enabled, there is no
url_for method on context.service_catalog.
We can use ServiceCatalog from keystoneclient for these
purposes and convert this list to a ServiceCatalog if the url_for
method is not present.
Please also note that keystone.middleware converts
X-Service-Catalog to v2, so we can safely initialize and use
ServiceCatalogV2 in glance_store.

Closes-Bug: #1538482

Change-Id: I3c4c56e91656f09067d28923ed45595395e9880e
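As a rough illustration of the fallback described in the commit message, a url_for lookup over a plain Keystone-v2-style endpoint list might look like the sketch below. The field names assume the v2 catalog layout; this is not the actual glance_store or keystoneclient code:

```python
# Minimal url_for over a v2-style catalog: a list of services, each with
# a 'type' and a list of endpoint dicts keyed by interface name.
def url_for(catalog, service_type, endpoint_type='publicURL', region=None):
    for service in catalog:
        if service.get('type') != service_type:
            continue
        for endpoint in service.get('endpoints', []):
            if region and endpoint.get('region') != region:
                continue
            if endpoint_type in endpoint:
                return endpoint[endpoint_type]
    raise LookupError('endpoint not found: %s' % service_type)

catalog = [{
    'type': 'object-store',
    'endpoints': [{'region': 'RegionOne',
                   'publicURL': 'http://swift.example.com/v1/AUTH_tenant'}],
}]
print(url_for(catalog, 'object-store'))
```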


** Changed in: glance-store
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1538482

Title:
  url_for does not exists for service catalog

Status in Glance:
  New
Status in glance_store:
  Fix Released

Bug description:
  Here we introduced url_for method for service_catalog:
  https://review.openstack.org/#/c/250857/

  Unfortunately, we are parsing service_catalog and passing it to
  glance_store as a list in context:
  https://github.com/openstack/glance/blob/master/glance/api/middleware/context.py#L117

  Because of this, current glance_store master is broken when the swift
  multi-tenant store is enabled, with an error like:
  "list doesn't have an attribute url_for".
  That happens every time somebody tries to download or upload images.

  We also need to pass ServiceCatalog from glance to glance_store. It is
  better than initializing ServiceCatalog in glance_store, and it also
  saves a lot of time when upgrading from Keystone v2 to v3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1538482/+subscriptions



[Yahoo-eng-team] [Bug 1526791] Re: test_keypair integration test consistently fails

2016-02-10 Thread Timur Sufiev
** Changed in: horizon
   Status: In Progress => Fix Released

** Changed in: horizon
Milestone: mitaka-2 => mitaka-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526791

Title:
  test_keypair integration test consistently fails

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  It fails with the following traceback:

  ERROR: openstack_dashboard.test.integration_tests.tests.test_keypair.TestKeypair.test_keypair
  --
  Traceback (most recent call last):
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/nose/case.py", line 133, in run
      self.runTest(result)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/nose/case.py", line 151, in runTest
      test(result)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 673, in __call__
      return self.run(*args, **kwds)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 606, in run
      return run_test.run(result)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 80, in run
      return self._run_one(actual_result)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 94, in _run_one
      return self._run_prepared_result(ExtendedToOriginalDecorator(result))
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 108, in _run_prepared_result
      self._run_core()
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 149, in _run_core
      self.case._run_teardown, self.result):
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 193, in _run_user
      return self._got_user_exception(sys.exc_info())
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py", line 213, in _got_user_exception
      self.case.onException(exc_info, tb_label=tb_label)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/testcase.py", line 558, in onException
      handler(exc_info)
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", line 130, in _save_screenshot
      self.driver.get_screenshot_as_file(filename)
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 758, in get_screenshot_as_file
      png = self.get_screenshot_as_png()
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 777, in get_screenshot_as_png
      return base64.b64decode(self.get_screenshot_as_base64().encode('ascii'))
    File 

[Yahoo-eng-team] [Bug 1534252] Re: fernet tokens don't support oauth1 authentication

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267781
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=03b4e821889a359403b592bec5e4d3a7e45b561f
Submitter: Jenkins
Branch: master

commit 03b4e821889a359403b592bec5e4d3a7e45b561f
Author: Lance Bragstad 
Date:   Thu Jan 14 18:13:13 2016 +

Make fernet work with oauth1 authentication

Previously, fernet didn't know how to handle oauth1 authentication
flows. This patch adds a new token version to the fernet provider and
allows it to issue oauth1 tokens.

This work is being done so that we can get fernet to be feature
equivalent with the uuid token provider. Then we will be slightly
closer to making fernet the default token provider in keystone.

Closes-Bug: 1534252

Change-Id: I638404952597bb23dff01f80efb728b653e5560c


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1534252

Title:
  fernet tokens don't support oauth1 authentication

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The fernet token provider doesn't issue or validate oauth1 token
  types.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1534252/+subscriptions



[Yahoo-eng-team] [Bug 1398999] Re: Block migrate with attached volumes copies volumes to themselves

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/227278
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=23fd0389f0e23e7969644079f4b1ad8504cbb8cb
Submitter: Jenkins
Branch: master

commit 23fd0389f0e23e7969644079f4b1ad8504cbb8cb
Author: Pawel Koniszewski 
Date:   Wed Feb 10 13:09:44 2016 +0100

Allow block live migration of an instance with attached volumes

Since libvirt 1.2.17 it is possible to select which block devices
should be migrated to destination host. Block devices that are not
provided will not be migrated. It means that it is possible to
exclude volumes from block migration and therefore prevent volumes
from being copied to themselves.

This patch implements new check of libvirt version. If version is
higher or equal to 1.2.17 it is possible to block live migrate vm
with attached volumes.

Co-Authored-By: Bartosz Fic 

Change-Id: I8fcc3ef3cb5d9fd3a95067929c496fdb5976fd41
Closes-Bug: #1398999
Partially implements: blueprint block-live-migrate-with-attached-volumes
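The version gate the commit describes boils down to a tuple comparison; a minimal sketch follows. The constant name is illustrative, not nova's actual symbol:

```python
# Block live migration with attached volumes is only allowed when the
# libvirt version is at least 1.2.17. Version tuples compare
# lexicographically, which matches major.minor.micro ordering.
MIN_LIBVIRT_BLOCK_MIGRATE_WITH_VOLUMES = (1, 2, 17)

def supports_block_migrate_with_volumes(libvirt_version):
    return tuple(libvirt_version) >= MIN_LIBVIRT_BLOCK_MIGRATE_WITH_VOLUMES

print(supports_block_migrate_with_volumes((1, 2, 16)))  # False
print(supports_block_migrate_with_volumes((1, 3, 0)))   # True
```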


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398999

Title:
  Block migrate with attached volumes copies volumes to themselves

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in libvirt package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Triaged
Status in libvirt source package in Trusty:
  Confirmed
Status in nova source package in Trusty:
  Triaged
Status in libvirt source package in Utopic:
  Won't Fix
Status in nova source package in Utopic:
  Won't Fix
Status in libvirt source package in Vivid:
  Confirmed
Status in nova source package in Vivid:
  Triaged
Status in libvirt source package in Wily:
  Fix Released
Status in nova source package in Wily:
  Triaged

Bug description:
  When an instance with attached Cinder volumes is block migrated, the
  Cinder volumes are block migrated along with it. If they exist on
  shared storage, then they end up being copied, over the network, from
  themselves to themselves. At a minimum, this is horribly slow and
  de-sparses a sparse volume; at worst, this could cause massive data
  corruption.

  More details at
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/038152.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398999/+subscriptions



[Yahoo-eng-team] [Bug 1542211] Re: Some Jenkins nodes fail create_instance integration test due to missing network

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276678
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0419b0e2e6a276b7dc08c99375617b795c5989af
Submitter: Jenkins
Branch: master

commit 0419b0e2e6a276b7dc08c99375617b795c5989af
Author: Timur Sufiev 
Date:   Fri Feb 5 13:23:21 2016 +0300

Ensure that integration tests are being run in proper project

Admin users run tests in an admin project, regular users run them in a
demo project. That should prevent situations when tests don't have an
access to a resources created in a project different from the current
one.

Closes-Bug: #1542211
Change-Id: I497648ce5126e187744b5795bd524b7aba22c7a6


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1542211

Title:
  Some Jenkins nodes fail create_instance integration test due to
  missing network

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Although devstack usually has a pre-created network, some Jenkins
  nodes don't have one. We should keep this possibility in mind when
  creating an instance in an integration test. As proof, below is an
  on-failure screenshot that was taken several times in a row for commit
  https://review.openstack.org/#/c/276123/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1542211/+subscriptions



[Yahoo-eng-team] [Bug 1520383] Re: Tests that need policy.json can never find it if run in isolation

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/278528
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=10773b79ad7e9738ced9afe65071c60d92c8ca55
Submitter: Jenkins
Branch: master

commit 10773b79ad7e9738ced9afe65071c60d92c8ca55
Author: David Stanek 
Date:   Wed Feb 10 16:06:03 2016 +

Moves policy setup into a fixture.

The original implementation worked only because of the ordering of the
tests: by the time the tests that needed policy.json ran, it had
already been properly loaded. When running certain tests in isolation
the policy is not properly set up, leading to a test failure.

Closes-Bug: #1520383
Change-Id: Icd041eb4ed8ddd580f49b4709ca5f05ab7315292


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1520383

Title:
  Tests that need policy.json can never find it if run in isolation

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  I am writing some tests that inherit from RestfulTestCase.  The
  server-side of the calls that the tests make are doing some checks
  that require policy.json to be read in.

  However, it seems as though the file can't ever be found.

  Looking at oslo_config, it searches these directories:

  def _get_config_dirs(project=None):
  """Return a list of directories where config files may be located.

  :param project: an optional project name

  If a project is specified, following directories are returned::

~/.${project}/
~/
/etc/${project}/
/etc/

  Otherwise, these directories::

~/
/etc/
  """

  So basically under $HOME or /etc. The ConfigOpts.find_file() function
  also looks in any directory specified with the --config-dir option.

  When running the tests, $HOME is set to a temp dir for each and every
  test, and --config-dir is not set at all.  This currently seems to
  make it impossible for policy.json to be ever discovered in test runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1520383/+subscriptions



[Yahoo-eng-team] [Bug 1544437] [NEW] messages in glance-manage purge are not formatted properly

2016-02-10 Thread Abhishek Kekane
Public bug reported:

Info messages in the glance-manage purge method are not formatted
properly: they show the raw variable names in the message instead of
the corresponding values.
For example:

$ glance-manage db purge 1

2016-02-08 04:22:20.373 INFO glance.db.sqlalchemy.api [req-1bcebb2f-
bc3a-468d-b4c5-5701ea2cfa19 None None] Deleted %(rows)d row(s) from
table %(tbl)s

2016-02-08 04:22:20.374 INFO glance.db.sqlalchemy.api [req-1bcebb2f-
bc3a-468d-b4c5-5701ea2cfa19 None None] Purging deleted rows older than
%(age_in_days)d day(s) from table %(tbl)s
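The symptom above is the classic pattern of emitting a %-style message without applying its argument mapping. A minimal sketch of the bug and the likely fix (the table name and row count are made up):

```python
msg = 'Deleted %(rows)d row(s) from table %(tbl)s'

# Buggy behaviour: the mapping is never applied, so the operator sees
# the raw placeholders, exactly as in the log lines above.
broken = msg
print(broken)

# Fixed behaviour: interpolate the values into the message (with
# oslo.log this would be LOG.info(msg, {'rows': ..., 'tbl': ...}),
# which applies the same % substitution lazily).
fixed = msg % {'rows': 5, 'tbl': 'images'}
print(fixed)
```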

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1544437

Title:
  messages in glance-manage purge are not formatted properly

Status in Glance:
  New

Bug description:
  Info messages in the glance-manage purge method are not formatted
  properly: they show the raw variable names in the message instead of
  the corresponding values.

  For example:

  $ glance-manage db purge 1

  2016-02-08 04:22:20.373 INFO glance.db.sqlalchemy.api [req-1bcebb2f-
  bc3a-468d-b4c5-5701ea2cfa19 None None] Deleted %(rows)d row(s) from
  table %(tbl)s

  2016-02-08 04:22:20.374 INFO glance.db.sqlalchemy.api [req-1bcebb2f-
  bc3a-468d-b4c5-5701ea2cfa19 None None] Purging deleted rows older than
  %(age_in_days)d day(s) from table %(tbl)s

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1544437/+subscriptions



[Yahoo-eng-team] [Bug 1544195] Re: User can not provision ironic node via nova when providing pre-created port

2016-02-10 Thread Sergey Kraynev
** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544195

Title:
  User can not provision ironic node via nova when providing pre-created
  port

Status in heat:
  New
Status in Magnum:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When booting a nova instance with baremetal flavor, one can not
  provide a pre-created neutron port to "nova boot" command.

  The reason is obvious: to deploy successfully, the MAC address of the
  port must be the same as the MAC address of the ironic port
  corresponding to the ironic node where provisioning will happen, and
  although it is possible to specify a MAC address during port create,
  a user cannot know which ironic node nova-compute will assign the
  provisioning to (moreover, an ordinary user has no access to the list
  of ironic ports/MACs whatsoever).

  This is most probably a known limitation, but the big problem is that
  it breaks many sorts of cloud orchestration attempts. For example,
  the most flexible approach in Heat is to pre-create a port and create
  the server with this port provided (actually this is the only way if
  one needs to assign a floating IP to the instance via Neutron). Some
  other consumers of Heat extensively use this approach.

  This limitation therefore prevents Murano or Sahara from provisioning
  their instances on bare metal via Nova/Ironic.

  The solution might be to update the port's MAC address to the correct
  one (a MAC address update is possible with an admin context) when
  working with baremetal nodes/Ironic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1544195/+subscriptions



[Yahoo-eng-team] [Bug 1534834] Re: Policy check forces impersonation for redelgation of trust

2016-02-10 Thread Steve Martinelli
Marking this as invalid: based on the latest keystone meeting it was
decided that the behaviour is correct.

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
Milestone: mitaka-3 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1534834

Title:
  Policy check forces impersonation for redelgation of trust

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When redelegating a trust, the API specifies that the trustor_id is
  the original trustor_id. However, the policy check for create_trust
  enforces that user_id = trust.trustor_user_id, effectively limiting
  the redelegation of trusts to trusts which provide impersonation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1534834/+subscriptions



[Yahoo-eng-team] [Bug 1470437] Re: ImageCacheManager raises Permission denied error on nova compute in race condition

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/185549
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ec9d5e375e208686d33b9259b039cc009bded42e
Submitter: Jenkins
Branch: master

commit ec9d5e375e208686d33b9259b039cc009bded42e
Author: Ankit Agrawal 
Date:   Mon Aug 10 16:27:57 2015 +1000

libvirt: Race condition leads to instance in error

ImageCacheManager deletes base image while image backend is copying
image to the instance path leading instance to go in the error state.

Acquired lock before removing image from cache. If libvirt is copying
image to the instance path, image cache manager won't be able to remove
it until libvirt finishes copying image completely.

Closes-Bug: 1256838
Closes-Bug: 1470437
Co-Authored-By: Michael Still 
Depends-On: I337ce28e2fc516c91bec61ca3639ebff0029ad49
Change-Id: I376cc951922c338669fdf3f83da83e0d3cea1532
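The locking pattern the commit describes can be sketched with a plain threading.Lock standing in for the per-image external lock nova actually uses; the names and data below are illustrative only:

```python
import threading

# One lock guards both "copy base image into instance path" and
# "age out base image", so the cache manager cannot delete a base
# file while libvirt is still copying it.
cache_lock = threading.Lock()
base_images = {'base-1': b'...image bits...'}

def copy_to_instance_path(image_id):
    with cache_lock:
        # Safe: the image cannot be aged out while we hold the lock.
        return base_images[image_id]

def age_out(image_id):
    with cache_lock:
        base_images.pop(image_id, None)

data = copy_to_instance_path('base-1')
age_out('base-1')
print(len(base_images))  # 0: the copy completed before the removal ran
```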


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470437

Title:
  ImageCacheManager raises Permission denied error on nova compute in
  race condition

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  ImageCacheManager raises Permission denied error on nova compute in
  race condition

  While creating an instance snapshot, nova calls the guest.launch
  method from the libvirt driver, which changes the base file
  permissions and updates the base file owner from openstack to
  libvirt-qemu (in the case of the qcow2 image backend). In a race
  condition, when ImageCacheManager is trying to update the last access
  time of this base file and guest.launch is called by an instance
  snapshot just before the access time update, ImageCacheManager raises
  a Permission denied error in nova compute from os.utime().

  Steps to reproduce:
  1. Configure image_cache_manager_interval=120 in nova.conf and use the qcow2 image backend.
  2. Add a sleep for 60 sec in the _handle_base_image method of libvirt.imagecache, just before calling os.utime().
  3. Restart nova services.
  4. Create an instance from an image.
  $ nova boot --image 5e1659aa-6d38-44e8-aaa3-4217337436c0 --flavor 1 instance-1
  5. Check that the instance is in the active state.
  6. Go to the n-cpu screen and watch the imagecache manager logs up to the point where it waits on the sleep statement added in step #2.
  7. Send an instance snapshot request while the imagecache manager is waiting on the sleep.
  $ nova image-create 19c7900b-73d5-4c2e-b129-5e2a6b13f396 instance-1-snap
  8. The instance snapshot request updates the base file owner to libvirt-qemu by calling the guest.launch method of the libvirt driver.
  9. Now, when the imagecache manager comes out of the sleep and executes os.utime, it raises the following Permission denied error in nova compute.

  2015-07-01 01:51:46.794 ERROR nova.openstack.common.periodic_task [req-a03fa45f-ffb9-48dd-8937-5b0414c6864b None None] Error during ComputeManager._run_image_cache_manager_pass
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/openstack/common/periodic_task.py", line 224, in run_periodic_tasks
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     task(self, context)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/compute/manager.py", line 6177, in _run_image_cache_manager_pass
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     self.driver.manage_image_cache(context, filtered_instances)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6252, in manage_image_cache
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     self.image_cache_manager.update(context, all_instances)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 668, in update
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     self._age_and_verify_cached_images(context, all_instances, base_dir)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 598, in _age_and_verify_cached_images
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     self._handle_base_image(img, base_file)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task   File "/opt/stack/nova/nova/virt/libvirt/imagecache.py", line 570, in _handle_base_image
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task     os.utime(base_file, None)
  2015-07-01 01:51:46.794 TRACE nova.openstack.common.periodic_task OSError: [Errno 13]

[Yahoo-eng-team] [Bug 1256838] Re: Race between imagebackend and imagecache

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/185549
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ec9d5e375e208686d33b9259b039cc009bded42e
Submitter: Jenkins
Branch: master

commit ec9d5e375e208686d33b9259b039cc009bded42e
Author: Ankit Agrawal 
Date:   Mon Aug 10 16:27:57 2015 +1000

libvirt: Race condition leads to instance in error

ImageCacheManager deletes base image while image backend is copying
image to the instance path leading instance to go in the error state.

Acquired lock before removing image from cache. If libvirt is copying
image to the instance path, image cache manager won't be able to remove
it until libvirt finishes copying image completely.

Closes-Bug: 1256838
Closes-Bug: 1470437
Co-Authored-By: Michael Still 
Depends-On: I337ce28e2fc516c91bec61ca3639ebff0029ad49
Change-Id: I376cc951922c338669fdf3f83da83e0d3cea1532


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256838

Title:
  Race between imagebackend and imagecache

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  After ImageCacheManager judges that a base image has not been used
  recently and marks it as to be removed, some time passes before the
  image is actually removed. So if an instance using the image is
  launched during that time, the image will unfortunately still be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1256838/+subscriptions



[Yahoo-eng-team] [Bug 1544028] [NEW] Cannot boot an instance on a Neutron network with network which has port-security disabled

2016-02-10 Thread Roey Chen
Public bug reported:

Nova raises a SecurityGroupCannotBeApplied error when running the
following steps:

1. neutron net-create MyNet port-security-enabled False
2. neutron subnet-create MyNet
3. neutron port-create MyNet --no-security-groups # 
4. nova boot ... --nic port-id= Ins1

Nova compute raises the exception below; the expected behavior is that
the instance boots with no issues.


ERROR nova.compute.manager [req-b25820f4-4210-4c57-acd2-4e3665186d75 admin 
demo] Instance failed network setup after 1 attempt(s)
ERROR nova.compute.manager Traceback (most recent call last):
ERROR nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", 
line 1564, in _allocate_network_async
ERROR nova.compute.manager bind_host_id=bind_host_id)
ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/network/neutronv2/api.py", line 621, in 
allocate_for_instance
ERROR nova.compute.manager raise exception.SecurityGroupCannotBeApplied()
ERROR nova.compute.manager SecurityGroupCannotBeApplied: Network requires 
port_security_enabled and subnet associated in order to apply security groups.
ERROR nova.compute.manager
ERROR nova.compute.manager [req-b25820f4-4210-4c57-acd2-4e3665186d75 admin 
demo] [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] Instance failed to spawn
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
Traceback (most recent call last):
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/compute/manager.py", line 2178, in _build_resources
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
yield resources
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/compute/manager.py", line 2024, in 
_build_and_run_instance
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
block_device_info=block_device_info)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 381, in spawn
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
admin_password, network_info, block_device_info)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 724, in spawn
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
metadata)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 304, in 
build_virtual_machine
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
network_info)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/virt/vmwareapi/vif.py", line 171, in get_vif_info
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
for vif in network_info:
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/network/model.py", line 519, in __iter__
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
return self._sync_wrapper(fn, *args, **kwargs)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/network/model.py", line 510, in _sync_wrapper
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
self.wait()
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/opt/stack/nova/nova/network/model.py", line 542, in wait
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
self[:] = self._gt.wait()
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
175, in wait
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
return self._exit_event.wait()
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in 
wait
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
return hubs.get_hub().switch()
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, 
in switch
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
return self.greenlet.switch()
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d] 
result = function(*args, **kwargs)
ERROR nova.compute.manager [instance: dd397c99-d77c-4aa4-9305-9cdcfcbdd86d]   
File 

[Yahoo-eng-team] [Bug 1505935] Re: Missing table refresh after associating a Floating IP address

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/256460
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b59ebcfaac67632520968353e61568c7aa637adc
Submitter: Jenkins
Branch: master

commit b59ebcfaac67632520968353e61568c7aa637adc
Author: Itxaka 
Date:   Fri Dec 11 16:07:11 2015 +0100

Refresh the networks on ajax update

If using neutron, the ajax will query the last status from
nova which could be out of date, so any floating ips
added won't show up unless you refresh the whole page.

Change-Id: Iad1684d1a2fb677ee8850a98c8219794698722e3
Closes-Bug: 1505935


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505935

Title:
  Missing table refresh after associating a Floating IP address

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After associating a floating IP address with an instance using the
  Instances panel, the table is not refreshed. It is necessary to
  manually reload the panel to see the assigned floating IP address in
  the table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505935/+subscriptions



[Yahoo-eng-team] [Bug 1544044] [NEW] Add new API to force live migration to complete

2016-02-10 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/245921
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit c9091d0871948377685feca0eb2e41d8ad38228a
Author: Pawel Koniszewski 
Date:   Mon Feb 8 08:59:52 2016 +0100

Add new API to force live migration to complete

This change adds a manual knob to force an ongoing live migration to
complete. It is implemented as a new server-migrations API.

DocImpact
ApiImpact

Implements: blueprint pause-vm-during-live-migration
Change-Id: I034b4041414a797f65ede52db2963107f2ef7456

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544044

Title:
  Add new API to force live migration to complete

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/245921
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit c9091d0871948377685feca0eb2e41d8ad38228a
  Author: Pawel Koniszewski 
  Date:   Mon Feb 8 08:59:52 2016 +0100

  Add new API to force live migration to complete
  
  This change adds a manual knob to force an ongoing live migration to
  complete. It is implemented as a new server-migrations API.
  
  DocImpact
  ApiImpact
  
  Implements: blueprint pause-vm-during-live-migration
  Change-Id: I034b4041414a797f65ede52db2963107f2ef7456

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544044/+subscriptions



[Yahoo-eng-team] [Bug 1495444] Re: MTU Option should be included in ICMPv6 Router Advertisements

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/244722
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=47713f58701af9e27792230dd64daa3e20a4260b
Submitter: Jenkins
Branch: master

commit 47713f58701af9e27792230dd64daa3e20a4260b
Author: sridhargaddam 
Date:   Thu Nov 12 15:49:15 2015 +

Support MTU advertisement using IPv6 RAs

RFC4861 allows us to specify the Link MTU using IPv6 RAs.
When advertise_mtu is set in the config, this patch supports
advertising the LinkMTU using Router Advertisements.

Partially Implements: blueprint mtu-selection-and-advertisement
Closes-Bug: #1495444
Change-Id: I50d40cd3b8eabf1899461a80e729d5bd1e727f28


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495444

Title:
  MTU Option should be included in ICMPv6 Router Advertisements

Status in neutron:
  Fix Released

Bug description:
  When using an overlay network on a physical network with standard
  Ethernet MTU (1500 octets), the instances' effective MTU is reduced.

  The Neutron Router should inform the nodes about this fact, by
  including the MTU Option in the ICMPv6 Router Advertisements it sends.
  The current situation leads to blackholing of traffic, as the absence
  of the MTU Option causes the instance to believe it will be able to
  successfully transmit 1500-octet frames to the network.
  However, these will be silently discarded. The symptom is usually
  that the TCP three-way handshake succeeds, but that the connection
  appears to hang the moment payload starts being transmitted.

  The MTU Option is documented here:
  https://tools.ietf.org/html/rfc4861#section-4.6.4. The corresponding
  radvd.conf option is called AdvLinkMTU. Note that the Neutron router
  is clearly aware of the reduced effective MTU, as it does use the
  corresponding DHCPv4 option to advertise it to instances/subnets using
  IPv4.

  I observe this problem on OpenStack Kilo.
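  The corresponding radvd.conf stanza the Neutron router would need looks
  roughly like this. The interface name, prefix, and MTU value are
  illustrative; 1450 assumes roughly 50 octets of overlay encapsulation
  overhead on a 1500-octet physical MTU:

  interface qr-1234abcd-56
  {
      AdvSendAdvert on;
      # Effective link MTU after subtracting encapsulation overhead
      AdvLinkMTU 1450;
      prefix 2001:db8::/64
      {
          AdvOnLink on;
          AdvAutonomous on;
      };
  };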

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495444/+subscriptions



[Yahoo-eng-team] [Bug 1544349] [NEW] neutron-vpnaas db migration breaks in MySQL 5.6

2016-02-10 Thread Al Miller
Public bug reported:

Starting in MySQL 5.6, it is illegal to alter a column that is subject
to a Foreign Key constraint, and any migration that does so will fail.
Other projects (I have found examples in Barbican and Trove) work around
this by removing the constraint, performing the ALTER, and reinstating
the constraint.
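The drop/alter/re-add pattern, sketched as an Alembic migration fragment. All table, column, and constraint names here are hypothetical placeholders, not the actual neutron-vpnaas schema:

```python
# Sketch only: runs inside an Alembic migration context, not standalone.
from alembic import op
import sqlalchemy as sa

def upgrade():
    # MySQL 5.6+ refuses to ALTER a column that participates in a
    # foreign key constraint, so remove the constraint around the ALTER.
    op.drop_constraint('example_fk', 'example_table', type_='foreignkey')
    op.alter_column('example_table', 'example_col',
                    type_=sa.String(255), existing_nullable=False)
    op.create_foreign_key('example_fk', 'example_table', 'parent_table',
                          ['parent_id'], ['id'])
```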

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544349

Title:
  neutron-vpnaas db migration breaks in MySQL 5.6

Status in neutron:
  New

Bug description:
  Starting in MySQL 5.6, it is illegal to alter a column that is subject
  to a Foreign Key constraint, and any migration that does so will fail.
  Other projects (I have found examples in Barbican and Trove) work
  around this by removing the constraint, performing the ALTER, and
  reinstating the constraint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544349/+subscriptions



[Yahoo-eng-team] [Bug 1508275] Re: modify project's subnet quota, value > used is successful

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/240493
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=02fb01b4b7b23fb7d32520e9293cfce3dc8d8b3f
Submitter: Jenkins
Branch: master

commit 02fb01b4b7b23fb7d32520e9293cfce3dc8d8b3f
Author: ZhengYue 
Date:   Thu Oct 8 11:55:19 2015 +0800

Fix bug at update quota of project's network item

Because the result keys do not match between neutron tenant_quota_get
and tenant_quota_usages, validation loses effect during a project's
quota update.
We should not change the result returned by the neutron API, and other
pages depend on tenant_quota_usages, so the key is instead changed in
the workflow clean step of the quota update.

Change-Id: Ib110bc41ae10a882a901b8853d15c31b313e62aa
Closes-bug: #1508275


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1508275

Title:
  modify project's subnet quota, value > used is successful

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Set a project's subnet quota to 10, then create 2 subnets; the usage of
  the subnet quota is {'available': 8, 'used': 2, 'quota': 10}.
  Then modify this project's subnet quota, setting the value to 1: this
  succeeds, but according to the code of "UpdateProjectQuotaAction",
  a quota value lower than the amount already used should not be allowed.
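The intended validation, reduced to its essence. This is an illustrative sketch, not Horizon's actual code; the `usage` dict mirrors the {'available': ..., 'used': ..., 'quota': ...} structure quoted above:

```python
def validate_quota_update(new_quota, usage):
    """Reject a new quota lower than what is already in use.

    By OpenStack convention, -1 means unlimited and is always accepted.
    """
    if new_quota == -1:
        return True
    return new_quota >= usage['used']
```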

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1508275/+subscriptions



[Yahoo-eng-team] [Bug 1543625] Re: nova in mitaka reports osapi_compute and metadata services as down

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277881
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d89c50f18bf3ce271baa92cdbb0e5efb242029cf
Submitter: Jenkins
Branch: master

commit d89c50f18bf3ce271baa92cdbb0e5efb242029cf
Author: Roman Podoliaka 
Date:   Tue Feb 9 16:48:17 2016 +0200

Filter APIs out from services list

API services records are a special case (unlike RPC services they do
not report their state regularly) and must not be exposed out of
Compute API.

Closes-Bug: #1543625

Change-Id: Icadd380ea1ff75f0cca433b68441ac5dad0ead53


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543625

Title:
  nova in mitaka reports osapi_compute and metadata services as down

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  nova service-list now reports status of all services defined with
  *_listen=$IP configs in nova.conf. These services are just APIs, and
  not RPC services, so they shouldn't be present. Moreover, they
  shouldn't report as down. The APIs are certainly fulfilling requests
  as usual.

  root@node-4:~# nova service-list
  
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
  | Id | Binary             | Host              | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+
  | 1  | nova-consoleauth   | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
  | 2  | nova-scheduler     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:22.00 | -               |
  | 3  | nova-cert          | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
  | 4  | nova-conductor     | node-4.domain.tld | internal | enabled | up    | 2016-01-28T14:08:23.00 | -               |
  | 5  | nova-osapi_compute | 192.168.0.3       | internal | enabled | down  | -                      | -               |
  | 7  | nova-metadata      | 0.0.0.0           | internal | enabled | down  | -                      | -               |
  | 8  | nova-compute       | node-6.domain.tld | nova     | enabled | up    | 2016-01-28T14:08:29.00 | -               |
  +----+--------------------+-------------------+----------+---------+-------+------------------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543625/+subscriptions



[Yahoo-eng-team] [Bug 1543288] Re: osinfo should not emit multiple error messages when module isn't loaded

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277565
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=556e4944e903a2742226308190fd34bc4ee9984d
Submitter: Jenkins
Branch: master

commit 556e4944e903a2742226308190fd34bc4ee9984d
Author: Vladik Romanovsky 
Date:   Mon Feb 8 16:06:41 2016 -0500

virt: osinfo will report once if libosinfo is not loaded

Currently osinfo module emits multiple error messages when libosinfo
module cannot be loaded. Since loading the libosinfo module is optional,
it should only report this once as an INFO log message.

Change-Id: If4b582d1ec39ba79b4f993543da11ec8c6bd023b
Closes-bug: #1543288


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543288

Title:
  osinfo should not emit multiple error messages when module isn't
  loaded

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently the osinfo module emits multiple error messages when the libosinfo
  module cannot be loaded:

  2016-02-08 12:44:15.270 2868 ERROR nova.virt.osinfo [req-cb9744f0
  -c5af-4bc7-a164-6e0ba06c021d tempest-
  VolumesV1SnapshotTestJSON-1106516754 tempest-
  VolumesV1SnapshotTestJSON-1593599156] Cannot find OS information -
  Reason: (Cannot load Libosinfo: (No module named
  gi.repository.Libosinfo))

  Since loading the libosinfo module is optional, it should only report
  this info once, and not as an error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543288/+subscriptions



[Yahoo-eng-team] [Bug 1544190] Re: Options missing from metadata_agent.ini

2016-02-10 Thread Matt Kassawara
The metadata agent apparently no longer requires these options. Digging
around led me to a couple of patches [1][2]. The first patch
significantly impacts documentation and contains the DocImpact flag. At
the time, the DocImpact flag generated bugs in the openstack-manuals
repository. For some reason, the bug generated by this patch [3] was
changed to "fix released" without any documentation patches. So, I'm
changing the bug in openstack-manuals to "triaged" and need to address
it.

[1] https://review.openstack.org/#/c/231065/
[2] https://review.openstack.org/#/c/231571/
[3] https://bugs.launchpad.net/openstack-manuals/+bug/1504529

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544190

Title:
  Options missing from metadata_agent.ini

Status in neutron:
  Invalid

Bug description:
  The metadata agent requires at least the following authentication
  options, but they are missing from the auto-generated
  metadata_agent.ini file:

  [DEFAULT]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  auth_region = RegionOne
  auth_type = password
  project_domain_id = default
  user_domain_id = default
  project_name = service
  username = neutron
  password = password

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544190/+subscriptions



[Yahoo-eng-team] [Bug 1537573] Re: Horizon Logo is not centered

2016-02-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271859
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=338b391beb729130c955326504ff7e3929e09693
Submitter: Jenkins
Branch: master

commit 338b391beb729130c955326504ff7e3929e09693
Author: Diana Whitten 
Date:   Sun Jan 24 17:46:09 2016 -0700

Logo on non-standard themes should be centered

When using a vanilla bootstrap theme (not 'default' or 'material')
the logo is not vertically centered.

Change-Id: Ie7bc98bae9087f25cfece09c64b2528a67855ed4
Closes-Bug: 1537573


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1537573

Title:
  Horizon Logo is not centered

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When using a vanilla bootstrap theme (not 'default' or 'material') the
  logo is not vertically centered.

  See:

  https://i.imgur.com/v0aSqKR.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1537573/+subscriptions



[Yahoo-eng-team] [Bug 1544383] [NEW] Add the ability to load a set of service plugins on startup

2016-02-10 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/273439
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0
Author: armando-migliaccio 
Date:   Thu Jan 28 01:39:00 2016 -0800

Add the ability to load a set of service plugins on startup

Service plugins are a great way of adding functionality in a
cohesive way. Some plugins (e.g. network ip availability or
auto_allocate) extend the capabilities of the Neutron server
by being completely orthogonal to the core plugin, and yet may
be considered an integral part of functionality available in
any Neutron deployment. For this reason, it makes sense to
include them seamlessly in the service plugin loading process.

This patch, in particular, introduces the 'auto_allocate' service
plugin for default loading, as we'd want this feature to be enabled
for Nova to use irrespective of the chosen underlying core plugin.

The feature requires subnetpools, external_net and router, while
the first is part of the core, the others can be plugin specific
and they must be explicitly advertised. That said, they all are
features that any deployment can hardly live without.

DocImpact: The "get-me-a-network" feature simplifies the process
for launching an instance with basic network connectivity (via an
externally connected private tenant network).

Once leveraged by Nova, a tenant/admin is no longer required to
provision networking resources ahead of boot process in order to
successfully launch an instance.

Implements: blueprint get-me-a-network

Change-Id: Ia35e8a946bf0ac0bb085cde46b675d17b0bb2f51

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544383

Title:
  Add the ability to load a set of service plugins on startup

Status in neutron:
  New

Bug description:
  https://review.openstack.org/273439
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0
  Author: armando-migliaccio 
  Date:   Thu Jan 28 01:39:00 2016 -0800

  Add the ability to load a set of service plugins on startup
  
  Service plugins are a great way of adding functionality in a
  cohesive way. Some plugins (e.g. network ip availability or
  auto_allocate) extend the capabilities of the Neutron server
  by being completely orthogonal to the core plugin, and yet may
  be considered an integral part of functionality available in
  any Neutron deployment. For this reason, it makes sense to
  include them seamlessly in the service plugin loading process.
  
  This patch, in particular, introduces the 'auto_allocate' service
  plugin for default loading, as we'd want this feature to be enabled
  for Nova to use irrespective of the chosen underlying core plugin.
  
  The feature requires subnetpools, external_net and router, while
  the first is part of the core, the others can be plugin specific
  and they must be explicitly advertised. That said, they all are
  features that any deployment can hardly live without.
  
  DocImpact: The "get-me-a-network" feature simplifies the process
  for launching an instance with basic network connectivity (via an
  externally connected private tenant network).
  
  Once leveraged by Nova, a tenant/admin is no longer required to
  provision networking resources ahead of boot process in order to
  successfully launch an instance.
  
  Implements: blueprint get-me-a-network
  
  Change-Id: Ia35e8a946bf0ac0bb085cde46b675d17b0bb2f51

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544383/+subscriptions
