[Yahoo-eng-team] [Bug 1766301] Re: ironic baremetal node ownership not checked with early vif plugging

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/563722
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e45c5ec819cfba3a45367bfe7dd853769c80d816
Submitter: Zuul
Branch: master

commit e45c5ec819cfba3a45367bfe7dd853769c80d816
Author: Jim Rollenhagen 
Date:   Mon Apr 23 13:53:00 2018 -0400

ironic: add instance_uuid before any other spawn activity

In the baremetal use case, it is necessary to allow a remote
source of truth to assert information about a physical node,
as opposed to a client asserting desired configuration.

As such, the prepare_networks_before_block_device_mapping virt
driver method was added; however, in doing so, network VIF
attaching was moved before the actual hardware reservation. As
ironic only allows as many VIFs as there are network interfaces
on the physical node, VIF attach actions cannot be performed
without first asserting control over the node.

Adding an "instance_uuid" up front lets other ironic API consumers
see that a node is now in use. It also lets nova discover, before
adding network information to the node, that the node may already
be in use by an external user.

Co-Authored-By: Julia Kreger 

Change-Id: I87f085589bb663c519650f307f25d087c88bbdb1
Closes-Bug: #1766301
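
For illustration, here is a rough Python sketch (not the actual nova ironic
driver code; the helper names and error handling are assumptions) of the
ordering the commit describes: write instance_uuid to the node first, and
only attach VIFs once that claim succeeds.

def claim_node_then_plug_vifs(ironic, node_uuid, instance_uuid, vif_ids):
    # Step 1: record ownership up front.  If instance_uuid is already set,
    # ironic rejects the patch and we stop before touching any ports.
    patch = [{'op': 'add', 'path': '/instance_uuid', 'value': instance_uuid}]
    try:
        ironic.node.update(node_uuid, patch)
    except Exception as exc:  # real code would catch ironicclient's Conflict
        raise RuntimeError('node %s already appears to be in use: %s'
                           % (node_uuid, exc))

    # Step 2: only now attach VIFs.  This can still fail if the node has
    # fewer free physical ports than requested VIFs, but no longer because
    # another instance raced us onto the same node.
    for vif_id in vif_ids:
        ironic.node.vif_attach(node_uuid, vif_id)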


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1766301

Title:
  ironic baremetal node ownership not checked with early vif plugging

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It is possible for scheduling to tell nova that a baremetal node can
  support multiple instances when in reality that is not the case. The
  issue is that the virt driver for ironic does not check nor assert
  that the node is in use, which is only a problem before the virt
  driver has claimed the node. Because the VIF plugging information must
  be completed before block device mapping,
https://github.com/openstack/nova/blob/stable/queens/nova/virt/ironic/driver.py#L1809
  can cause resource exhaustion without checking whether the node is
  locked.

  Depending on scheduling, we can end up having multiple vif plugging
  requests for the same node. All actions beyond the first vif plugging
  will fail if only one port is assigned to the node.

  This demonstrates itself as:

  Message: Build of instance c7c5191b-59ed-44a0-8b2a-0f68e48e9a52
  aborted: Failure prepping block device., Code: 500

  With logging from nova-compute:
  2018-04-19 19:49:06.832 18246 ERROR nova.virt.ironic.driver [req-90f1e5e7-1ee0-4f1d-af88-a42f74b0a8e0 e9b4e6ab60ae40cc84ee5689c38608ef f3dccd2210514e3695c4d087d81a65a7 - default default] Cannot attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253 to the node 64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8 due to error: Unable to attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253, not enough free physical ports. (HTTP 400)
  2018-04-19 19:49:06.833 18246 ERROR nova.virt.ironic.driver [req-90f1e5e7-1ee0-4f1d-af88-a42f74b0a8e0 e9b4e6ab60ae40cc84ee5689c38608ef f3dccd2210514e3695c4d087d81a65a7 - default default] [instance: 964f0d93-e9c4-4067-af8e-cd6e25fb6b59] Error preparing deploy for instance 964f0d93-e9c4-4067-af8e-cd6e25fb6b59 on baremetal node 64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8.: VirtualInterfacePlugException: Cannot attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253 to the node 64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8 due to error: Unable to attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253, not enough free physical ports. (HTTP 400)
  2018-04-19 19:49:06.833 18246 ERROR nova.compute.manager [req-90f1e5e7-1ee0-4f1d-af88-a42f74b0a8e0 e9b4e6ab60ae40cc84ee5689c38608ef f3dccd2210514e3695c4d087d81a65a7 - default default] [instance: 964f0d93-e9c4-4067-af8e-cd6e25fb6b59] Failure prepping block device: VirtualInterfacePlugException: Cannot attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253 to the node 64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8 due to error: Unable to attach VIF 7d0a6b40-50b3-489b-ae1e-0840e0608253, not enough free physical ports. (HTTP 400)

  HTTP logging:

  192.168.24.1 - - [19/Apr/2018:19:49:00 -0400] "POST /v1/nodes/64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8/vifs HTTP/1.1" 204 - "-" "python-ironicclient"
  192.168.24.1 - - [19/Apr/2018:19:49:04 -0400] "POST /v1/nodes/64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8/vifs HTTP/1.1" 400 177 "-" "python-ironicclient"
  192.168.24.1 - - [19/Apr/2018:19:49:05 -0400] "PATCH /v1/nodes/64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8 HTTP/1.1" 200 1239 "-" "python-ironicclient"
  192.168.24.1 - - [19/Apr/2018:19:49:06 -0400] "DELETE /v1/nodes/64eea9c8-b69f-4a30-a05f-2c7b09f5b1d8/vifs/7d0a6b40-50b3-489b-ae1e-0840e0608253 HTTP/1.1" 400 225 "-" "python-ironicclient"
  192.168.24.1 - - 

[Yahoo-eng-team] [Bug 1781710] Re: ServersOnMultiNodesTest.test_create_server_with_scheduler_hint_group_anti_affinity failing with "Servers are on the same host"

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/583347
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c22b53c2481bac518a6b32cdee7b7df23d91251e
Submitter: Zuul
Branch: master

commit c22b53c2481bac518a6b32cdee7b7df23d91251e
Author: Matt Riedemann 
Date:   Tue Jul 17 17:43:37 2018 -0400

Update RequestSpec.instance_uuid during scheduling

Before change I4b67ec9dd4ce846a704d0f75ad64c41e693de0fb
the ServerGroupAntiAffinityFilter did not rely on the
HostState.instances dict to determine **within the same
request** how many members of the same anti-affinity
group were on a given host. In fact, at that time, the
HostState.instances dict wasn't updated between filter
runs in the same multi-create request. That was fixed with
change Iacc636fa8a59a9e8670a8d683c10bdbb0dc8237b so that
as we select a host for each group member being created
within a single request, we also update the HostState.instances
dict so the ServerGroupAntiAffinityFilter would have
accurate tracking of which group members were on which
hosts.

However, that did not account for a wrinkle in the filter
added in change Ie016f59f5b98bb9c70b3e33556bd747f79fc77bd
which is needed to allow resizing a server to the same host
when that server is in an anti-affinity group. That wrinkle,
combined with the fact that the RequestSpec the filter is acting
upon for a given instance in a multi-create request might
not actually have the same instance_uuid, can cause the filter
to think it's in a resize situation and accept a host which
already has another member from the same anti-affinity group
on it, which breaks the anti-affinity policy.

For background, during a multi-create request, we create a
RequestSpec per instance being created, but conductor only
sends the first RequestSpec for the first instance to the
scheduler. This means RequestSpec.num_instances can be >1
and we can be processing the Nth instance in the list during
scheduling but working on a RequestSpec for the first instance.
That is what breaks the resize check in ServerGroupAntiAffinityFilter
with regard to multi-create.

To resolve this, we update the RequestSpec.instance_uuid when
filtering hosts for a given instance but we don't persist that
change to the RequestSpec.

This is a bit clunky, but it kind of comes with the territory of
how we hack scheduling requests together using a single RequestSpec
for multi-create requests. An alternative to this is found in change
I0dd1fa5a70ac169efd509a50b5d69ee5deb8deb7 where we rely on the
RequestSpec.num_instances field to determine if we're in a multi-create
situation, but that has its own flaws because num_instances is
persisted with the RequestSpec which might cause us to re-introduce
bug 1558532 if we're not careful. In that case we'd have to either
(1) stop persisting RequestSpec.num_instances or (2) temporarily
unset it like we do using RequestSpec.reset_forced_destinations()
during move operations.

Co-Authored-By: Sean Mooney 

Closes-Bug: #1781710

Change-Id: Icba22060cb379ebd5e906981ec283667350b8c5a
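
As a rough illustration of the approach (toy objects, not the real
scheduler code), the per-instance uuid is swapped into the shared spec in
memory only and never persisted:

class FakeRequestSpec(object):
    def __init__(self, instance_uuid, num_instances):
        self.instance_uuid = instance_uuid
        self.num_instances = num_instances


def select_hosts(spec, instance_uuids, filter_hosts):
    """filter_hosts(spec) -> chosen host, called once per group member."""
    chosen = []
    for uuid in instance_uuids:
        # In-memory only: ServerGroupAntiAffinityFilter's "is this a resize
        # of the same instance?" check now sees the uuid of the instance
        # actually being placed, and the spec is never save()d afterwards.
        spec.instance_uuid = uuid
        chosen.append(filter_hosts(spec))
    return chosen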


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781710

Title:
  
ServersOnMultiNodesTest.test_create_server_with_scheduler_hint_group_anti_affinity
  failing with "Servers are on the same host"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Started seeing this recently which looks like a regression:

  http://logs.openstack.org/44/56/14/check/neutron-tempest-multinode-full/dba40b9/job-output.txt.gz#_2018-07-13_19_53_15_275866

  2018-07-13 19:53:15.275866 | primary | {1} tempest.api.compute.admin.test_servers_on_multinodes.ServersOnMultiNodesTest.test_create_server_with_scheduler_hint_group_anti_affinity [7.164074s] ... FAILED
  2018-07-13 19:53:15.275944 | primary |
  2018-07-13 19:53:15.276012 | primary | Captured traceback:
  2018-07-13 19:53:15.276075 | primary | ~~~
  2018-07-13 19:53:15.276171 | primary | Traceback (most recent call last):
  2018-07-13 19:53:15.276452 | primary |   File "tempest/api/compute/admin/test_servers_on_multinodes.py", line 115, in test_create_server_with_scheduler_hint_group_anti_affinity
  2018-07-13 19:53:15.276598 | primary | 'Servers are on the same host: %s' % hosts)
  2018-07-13 19:53:15.276857 | primary |   File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/unittest2/case.py", line 845, in assertNotEqual
  2018-07-13 19:53:15.276965 | primary | raise self.failureException(msg)
  2018-07-13 19:53:15.277830 | primary | AssertionError: u'ubuntu-xenial-rax-dfw-714118' ==

[Yahoo-eng-team] [Bug 1782141] Re: QoS L3 agent extension functional tests fail on CentOS: "rate" or "avrate" MUST be specified.

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/584297
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7e0c1e9e23788fc60b2bf19659a7b513ba330dea
Submitter: Zuul
Branch: master

commit 7e0c1e9e23788fc60b2bf19659a7b513ba330dea
Author: Bernard Cafarelli 
Date:   Fri Jul 20 10:06:21 2018 +0200

[QoS] Clear rate limits when default null values are used

Previously in that case, tc was called with zero rate and burst values.
Newer iproute versions (>= 4.9) do not allow that anymore, so we send a
clear command instead.

This also fixes QoS L3 agent extension functional test runs with such an
iproute version (CentOS 7.5, Ubuntu 18.04).

Change-Id: Idf16e2460bd434f5eab79745558c55c6579e81e8
Closes-Bug: #1782141
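
A minimal sketch of the behaviour change, with assumed wrapper names (the
real change lives in neutron's L3 QoS extension code):

def apply_fip_rate_limit(tc, rate, burst):
    """tc is assumed to expose set_limit()/clear_limit() wrappers."""
    if not rate and not burst:
        # Default null values: remove any existing limit instead of asking
        # tc for a zero rate/burst filter, which iproute >= 4.9 rejects
        # with '"rate" or "avrate" MUST be specified.'
        tc.clear_limit()
    else:
        tc.set_limit(rate=rate, burst=burst)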


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1782141

Title:
  QoS L3 agent extension functional tests fail on CentOS: "rate" or
  "avrate" MUST be specified.

Status in neutron:
  Fix Released

Bug description:
  Running dsvm-functional tests on CentOS 7.5 and current master
  (e3e91eb44c20500999c6435203f22d805de7e3ac), these tests fail with the same
  error:
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_connection_from_diff_address_scope
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_connection_from_same_address_scope
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_direct_route_for_address_scope
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_gateway_move_does_not_remove_redirect_rules
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_ha_router_failover_with_gw_and_floatingip
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_ha_router_failover_with_gw
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_non_ha_router_update
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_gateway_redirect_cleanup_on_agent_restart
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_gateway_update_to_none
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_without_ha_with_snat_with_fips
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_rem_fips_on_restarted_agent
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_rule_and_route_table_cleared_when_fip_removed
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_snat_namespace_with_interface_remove
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_static_routes_in_fip_and_snat_namespace
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_static_routes_in_snat_namespace_and_router_namespace
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_update_on_restarted_agent_sets_rtr_fip_connect
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_with_ha_for_fip_disassociation
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_snat_namespace_has_ip_nonlocal_bind_disabled
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_unused_snat_ns_deleted_when_agent_restarts_after_move
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_floating_ip_migration_from_unbound_to_bound
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_prevent_snat_rule_exist_on_restarted_agent
  
neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_snat_bound_floating_ip

  
  2018-07-17 14:16:49.316 16568 ERROR neutron.agent.linux.utils 
[req-4b90c8c6-b3a3-4482-9e5f-6db77a806139 - - - - -] Exit code: 1; Stdin: ; 
Stdout: 

[Yahoo-eng-team] [Bug 1783198] [NEW] transient failures during lxc test during shutdown

2018-07-23 Thread Scott Moser
Public bug reported:

We have been seeing a lot of transient failures in
https://jenkins.ubuntu.com/server/job/cloud-init-integration-lxd-c/72/consoleFull
with a stack trace that looks like the one below.

I think that we might be attempting to delete an instance twice, or
shutting it down twice; not sure.


2018-07-20 12:20:30,781 - tests.cloud_tests - DEBUG - executing "collect: 
instance-id"
2018-07-20 12:20:46,612 - tests.cloud_tests - ERROR - stage: collect test data 
for cosmic encountered error: not found
2018-07-20 12:20:46,614 - tests.cloud_tests - ERROR - traceback:
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/stage.py",
 line 97, in run_stage
(call_res, call_failed) = call()
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/collect.py",
 line 111, in collect_test_data
instance.shutdown()
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/platforms/lxd/instance.py",
 line 171, in shutdown
self.pylxd_container.stop(wait=wait)
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/container.py",
 line 316, in stop
wait=wait)
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/container.py",
 line 291, in _set_state
response.json()['operation'])
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/operation.py",
 line 33, in wait_for_operation
return cls.get(client, operation.id)
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/operation.py",
 line 40, in get
response = client.api.operations[operation_id].get()
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/client.py",
 line 148, in get
is_api=is_api)
  File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/client.py",
 line 103, in _assert_response
raise exceptions.NotFound(response)
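
One way the harness could tolerate this, sketched below (illustrative only,
not a committed cloud-init fix), is to treat pylxd's NotFound during stop as
"already shut down":

from pylxd import exceptions as pylxd_exc


def shutdown_container(pylxd_container, wait=True):
    try:
        pylxd_container.stop(wait=wait)
    except pylxd_exc.NotFound:
        # The container (or its stop operation) is already gone, e.g.
        # because a shutdown/delete was issued elsewhere; that is the end
        # state we wanted, so ignore it.
        pass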

** Affects: cloud-init
 Importance: Medium
 Assignee: Scott Moser (smoser)
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
 Assignee: (unassigned) => Scott Moser (smoser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1783198

Title:
  transient failures during lxc test during shutdown

Status in cloud-init:
  Confirmed

Bug description:
  We have been seeing a lot of transient failures in
  https://jenkins.ubuntu.com/server/job/cloud-init-integration-lxd-c/72/consoleFull
  with a stack trace that looks like the one below.

  I think that we might be attempting to delete an instance twice, or
  shutting it down twice; not sure.

  
  2018-07-20 12:20:30,781 - tests.cloud_tests - DEBUG - executing "collect: 
instance-id"
  2018-07-20 12:20:46,612 - tests.cloud_tests - ERROR - stage: collect test 
data for cosmic encountered error: not found
  2018-07-20 12:20:46,614 - tests.cloud_tests - ERROR - traceback:
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/stage.py",
 line 97, in run_stage
  (call_res, call_failed) = call()
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/collect.py",
 line 111, in collect_test_data
  instance.shutdown()
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/tests/cloud_tests/platforms/lxd/instance.py",
 line 171, in shutdown
  self.pylxd_container.stop(wait=wait)
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/container.py",
 line 316, in stop
  wait=wait)
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/container.py",
 line 291, in _set_state
  response.json()['operation'])
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/operation.py",
 line 33, in wait_for_operation
  return cls.get(client, operation.id)
File 
"/var/lib/jenkins/slaves/torkoal/workspace/cloud-init-integration-lxd-c/cloud-init/.tox/citest/lib/python3.5/site-packages/pylxd/models/operation.py",
 line 40, in get
  response = 

[Yahoo-eng-team] [Bug 1779879] Re: No image description field

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/579886
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=08e0f38c843816660e2b2930c7c6ee5c0435b4a3
Submitter: Zuul
Branch: master

commit 08e0f38c843816660e2b2930c7c6ee5c0435b4a3
Author: Vladislav Kuzmin 
Date:   Tue Jul 3 18:30:09 2018 +0400

Fix image description field

Nothing changed when editing the image description on the Angularized panel.
This patch fixes it.

Change-Id: I29fb643bfa9b648ad24fcb9888c658a5a52e4bcc
Closes-Bug: #1779879


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1779879

Title:
  No image description field

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  No image description field with the Angularized panel.

  Steps to reproduce:
  1. Create image
  2. Try to set some description for it using horizon
  3. Check out the full information about this image

  Expected result:
  Description field has been updated

  Actual result:
  No description field at all

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1779879/+subscriptions



[Yahoo-eng-team] [Bug 1743596] Re: Users created with the angular user form can not login

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/534644
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=259648bcf75fae45cb3afc4de88f68921f036f68
Submitter: Zuul
Branch: master

commit 259648bcf75fae45cb3afc4de88f68921f036f68
Author: wei.ying 
Date:   Wed Jan 17 14:21:20 2018 +0800

Assign project role to the user when the user is created

In the angular create user form page, when a new user is created,
the role selected by the administrator is not assigned to the user,
which means the newly created user cannot log in normally.

This patch uses the existing service to assign the project role to
the new user when the user is created.

Change-Id: I52b9bb3bc986cc89439cf22deb1e250a9252938a
Closes-Bug: #1743596
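
A minimal sketch of the idea (Horizon goes through its own API wrappers
rather than calling keystoneclient directly; the names below are
illustrative):

def create_user_with_role(keystone, name, password, project, role):
    user = keystone.users.create(name=name, password=password,
                                 default_project=project)
    # Without this grant the new user holds no role on any project and
    # login fails with "You are not authorized for any projects or domains."
    keystone.roles.grant(role, user=user, project=project)
    return user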


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1743596

Title:
  Users created with the angular user form can not login

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Env: devstack master branch

  Steps to reproduce:
  1. Set '"users_panel": True' in openstack_dashboard/settings.py.
  2. Go to identity/Users panel and Click 'Create User' button.
  3. Fill in the user's basic information and Submit form.
  4. Log out and log in with the user created by 3.

  The results will prompt: "You are not authorized for any projects or
  domains."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1743596/+subscriptions



[Yahoo-eng-team] [Bug 1782340] Re: allocation schema does not set additionalProperties False in all the right places

2018-07-23 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/583907
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0fc4f95914dec7df68e53514771bf3afe1ee9c4f
Submitter: Zuul
Branch:master

commit 0fc4f95914dec7df68e53514771bf3afe1ee9c4f
Author: Chris Dent 
Date:   Thu Jul 19 10:40:58 2018 +0100

[placement] disallow additional fields in allocations

Back in microversion 1.12, when the allocations structure was extended
to allow project_id and user_id on PUT /allocations/{uuid},
"additionalProperties" was not set in the JSON schema, so it has been
possible since then to include unused fields in the input. The schema
was then reused in the creation of subsequent schema for new
microversions and for new URIs, such as POST /allocations and the
forthcoming /reshaper.

This change fixes it by fixing the old microversion. This is the "just
fix it" option from the discussion on the associated bug. The other
option is to create a new microversion that corrects the behavior. This
is more complex than it might initially sound because of the way in
which the original schema is used to compose new ones.

Change-Id: Ied464744803864e61a45e03c559760a8a2e2581f
Closes-Bug: #1782340
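
A toy jsonschema demonstration of the flaw (not the placement schema
itself): without additionalProperties at the level where the required keys
live, an unknown key such as the bad_field used in the gabbi test slides
through.

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "allocations": {"type": "object"},
        "project_id": {"type": "string"},
        "user_id": {"type": "string"},
    },
    "required": ["allocations", "project_id", "user_id"],
}

payload = {
    "allocations": {},
    "project_id": "p",
    "user_id": "u",
    "bad_field": "moo",   # silently accepted without additionalProperties
}

jsonschema.validate(payload, schema)       # passes: the flaw
schema["additionalProperties"] = False
try:
    jsonschema.validate(payload, schema)   # now rejected, as a 400 would be
except jsonschema.ValidationError as err:
    print(err.message)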


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782340

Title:
  allocation schema does not set additionalProperties False in all the
  right places

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In microversion 1.12 of placement, a schema for allocations was
  introduced that required the allocations, project_id and user_id
  fields. This schema is used for subsequent microversions, copied and
  manipulated as required.

  However, it has a flaw. It does not set additionalProperties: False
  for the object at which the required fields are set. This means you
  can add a field and it glides through. This flaw cascades all the way
  to the reshaper (where I had a test that refused to fail in the
  expected way).

  The diff below demonstrates the problem and a potential fix, but this
  fix may not be right as it is in the 1.12 microversion and we might
  want it on microversion 1.30 and beyond, only (which is a pain).

  I think we should just fix it, as below, but I'll let others chime in
  too.


  diff --git a/nova/api/openstack/placement/schemas/allocation.py b/nova/api/openstack/placement/schemas/allocation.py
  index e149ae3beb..796b7c5d01 100644
  --- a/nova/api/openstack/placement/schemas/allocation.py
  +++ b/nova/api/openstack/placement/schemas/allocation.py
  @@ -113,8 +113,9 @@ ALLOCATION_SCHEMA_V1_12 = {
   "type": "string",
   "minLength": 1,
   "maxLength": 255
  -}
  +}
   },
  +"additionalProperties": False,
   "required": [
   "allocations",
   "project_id",
  diff --git a/nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml b/nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml
  index df8fadd66b..04107a996b 100644
  --- a/nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml
  +++ b/nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml
  @@ -238,7 +238,9 @@ tests:
 project_id: $ENVIRON['PROJECT_ID']
 user_id: $ENVIRON['USER_ID']
 consumer_generation: null
  -  status: 204
  +  bad_field: moo
  +  #status: 204
  +  status: 400
   
   - name: put that allocation to existing consumer
 PUT: /allocations/----

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782340/+subscriptions



[Yahoo-eng-team] [Bug 1680305] Re: remote securitygroup address pairs update

2018-07-23 Thread Chengqian Liu
** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680305

Title:
  remote securitygroup address pairs update

Status in neutron:
  Fix Committed

Bug description:
  1. create two security groups
  sg-test-1:
id   523ea2a0-8b73-4a9d-b122-68030418f9a6
security_group_rules   egress, IPv4
   egress, IPv6
   ingress, IPv4, icmp, 
remote_group_id: 56dd2c05-fd80-4f1d-a17f-f1be73a42a82

  sg-test-2:
id   56dd2c05-fd80-4f1d-a17f-f1be73a42a82
security_group_rules   egress, IPv4
   egress, IPv6
  2. create two vms with security group
   vm1(10.20.10.12)   port id b11b8dde-69cb-4a1e-bd9c-20db51748c52   sg-test-1
   vm2(10.20.10.6)    port id ffcd8854-f4f6-4d66-84cd-ad29192ab778   sg-test-2

  3. in vm1's compute node

 #iptables -nvL  neutron-openvswi-ib11b8dde-6;
 
  0 0 RETURN icmp --  *  *   0.0.0.0/00.0.0.0/0 
   match-set NIPv456dd2c05-fd80-4f1d-a17f- src
 #ipset list NIPv456dd2c05-fd80-4f1d-a17f-
 Name: NIPv456dd2c05-fd80-4f1d-a17f-
 Type: hash:net
 Revision: 3
 Header: family inet hashsize 1024 maxelem 65536
 Size in memory: 19216
 References: 1
 Members:
 10.20.10.6

  4. update vm2's port
 #neutron port-update ffcd8854-f4f6-4d66-84cd-ad29192ab778 
--allowed-address-pairs type=dict list=true \
  ip_address=10.20.10.66,mac_address=fa:16:3e:02:70:85

  5. ipset list NIPv456dd2c05-fd80-4f1d-a17f- does not contain the address
  10.20.10.66

  release used: ocata
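
  For illustration (not neutron code), the ipset for a remote security group
  should contain both the fixed IPs and the allowed address pairs of every
  member port, so the port update in step 4 should add 10.20.10.66 to the
  set used by vm1's ingress rule:

  def expected_ipset_members(ports):
      members = set()
      for port in ports:
          members.update(ip['ip_address'] for ip in port.get('fixed_ips', []))
          members.update(pair['ip_address']
                         for pair in port.get('allowed_address_pairs', []))
      return members


  vm2_port = {
      'fixed_ips': [{'ip_address': '10.20.10.6'}],
      'allowed_address_pairs': [{'ip_address': '10.20.10.66',
                                 'mac_address': 'fa:16:3e:02:70:85'}],
  }
  print(sorted(expected_ipset_members([vm2_port])))
  # expected: ['10.20.10.6', '10.20.10.66'] -- but only 10.20.10.6 shows up
  # in the ipset after the port update, which is the bug.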

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680305/+subscriptions



[Yahoo-eng-team] [Bug 1783079] Re: Unable to attach interface to a VM

2018-07-23 Thread Iago Santos
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1783079

Title:
  Unable to attach interface to a VM

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Running with neutron network.

  Unable to attach interface to a VM doing:

  neutron port-create NETWORK --fixed-ip subnet_id=SUBNET
  nova interface-attach --port-id  

  ===

  I got:

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-5c9ae62d-fad0-4cc9-3452345)

  ===

  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
[req-5c9ae62d-fad0-4cc9-9496-23452345 user 29d90dc8-2558-41c9-a997-2134123 - 
default default] Exception during message handling: NotImplementedError
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in 
_process_incoming
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1004, in 
decorated_function
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 219, in 
decorated_function
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 207, in 
decorated_function
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5768, in 
attach_interface
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
bind_host_id=bind_host_id, tag=tag)
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 297, in 
allocate_port_for_instance
  2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server raise 
NotImplementedError()
  2018-07-23 10:28:23.633 110509 ERROR 

[Yahoo-eng-team] [Bug 1783130] [NEW] placement reshaper doesn't handle clearing all inventories for a resource provider

2018-07-23 Thread Chris Dent
Public bug reported:

The /reshaper API is willing to accept an empty dictionary for the
inventories attribute of a resource provider. This is intended to mean
"clear all the inventory".

However, the backend transformer code is not prepared to handle this:

  File "nova/api/openstack/placement/handlers/reshaper.py", line 103, in 
reshape
rp_obj.reshape(context, inventory_by_rp, allocation_objects)
  File 
"/mnt/share/cdentsrc/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 993, in wrapper
return fn(*args, **kwargs)
  File "nova/api/openstack/placement/objects/resource_provider.py", line 
4087, in reshape
rp = new_inv_list[0].resource_provider
  File 
"/mnt/share/cdentsrc/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 829, in __getitem__
return self.objects[index]
IndexError: list index out of range

This is happening because 'new_inv_list' can be empty at

for rp_uuid, new_inv_list in inventories.items():
    LOG.debug("reshaping: *interim* inventory replacement for provider %s",
              rp_uuid)
    rp = new_inv_list[0].resource_provider

If the length of new_inv_list is zero we need to do nothing for this
iteration through the loop.

Then a few lines later at

for rp_uuid, new_inv_list in inventories.items():
    LOG.debug("reshaping: *final* inventory replacement for provider %s",
              rp_uuid)
    # TODO(efried): If we wanted this to be more efficient, we could keep
    # track of providers for which all inventories are being deleted in the
    # above loop and just do those and skip the rest, since they're already
    # in their final form.
    new_inv_list[0].resource_provider.set_inventory(new_inv_list)

We have the same IndexError problem and need to behave differently.

A thing we might do, instead of using the resource_provider object on
the (maybe not there) inventory objects, is create a new object: we have
the rp_uuid.
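
A rough, self-contained sketch of that guard with toy objects (the real fix
would live in resource_provider.py and look the provider up properly by its
uuid):

class FakeProvider(object):
    def __init__(self, uuid):
        self.uuid = uuid
        self.inventory = []

    def set_inventory(self, inv_list):
        self.inventory = list(inv_list)


def reshape_inventories(inventories, providers_by_uuid):
    for rp_uuid, new_inv_list in inventories.items():
        if new_inv_list:
            rp = new_inv_list[0].resource_provider
        else:
            # Empty list means "clear all inventory"; we can still find the
            # provider because we hold its uuid as the dict key.
            rp = providers_by_uuid[rp_uuid]
        rp.set_inventory(new_inv_list)


rp = FakeProvider('rp-uuid-1')
reshape_inventories({rp.uuid: []}, {rp.uuid: rp})
print(rp.inventory)   # [] -- no IndexError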

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1783130

Title:
  placement reshaper doesn't handle clearing all inventories for a resource
  provider

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The /reshaper API is willing to accept an empty dictionary for the
  inventories attribute of a resource provider. This is intended to mean
  "clear all the inventory".

  However, the backend transformer code is not prepared to handle this:

File "nova/api/openstack/placement/handlers/reshaper.py", line 103, in 
reshape
  rp_obj.reshape(context, inventory_by_rp, allocation_objects)
File 
"/mnt/share/cdentsrc/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 993, in wrapper
  return fn(*args, **kwargs)
File "nova/api/openstack/placement/objects/resource_provider.py", line 
4087, in reshape
  rp = new_inv_list[0].resource_provider
File 
"/mnt/share/cdentsrc/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 829, in __getitem__
  return self.objects[index]
  IndexError: list index out of range

  This is happening because 'new_inv_list' can be empty at

  for rp_uuid, new_inv_list in inventories.items():
      LOG.debug("reshaping: *interim* inventory replacement for provider %s",
                rp_uuid)
      rp = new_inv_list[0].resource_provider

  If the length of new_inv_list is zero we need to do nothing for this
  iteration through the loop.

  Then a few lines later at

  for rp_uuid, new_inv_list in inventories.items():
      LOG.debug("reshaping: *final* inventory replacement for provider %s",
                rp_uuid)
      # TODO(efried): If we wanted this to be more efficient, we could keep
      # track of providers for which all inventories are being deleted in the
      # above loop and just do those and skip the rest, since they're already
      # in their final form.
      new_inv_list[0].resource_provider.set_inventory(new_inv_list)
  

  We have the same IndexError problem and need to behave differently.

  A thing we 

[Yahoo-eng-team] [Bug 1782604] Re: designate-bind9-py36 gate is failing due to python-zvm-sdk

2018-07-23 Thread Graham Hayes
** Changed in: python-zvm-sdk
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782604

Title:
  designate-bind9-py36 gate is failing due to python-zvm-sdk

Status in Designate:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-zvm-sdk:
  Fix Released

Bug description:
  http://logs.openstack.org/97/583297/1/check/designate-
  bind9-py36/ca6327e/job-output.txt.gz#_2018-07-17_16_40_10_110717

  It looks like zVMCloudConnector is being installed in a py36 env, and
  it errors out when that happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1782604/+subscriptions



[Yahoo-eng-team] [Bug 1782604] Re: designate-bind9-py36 gate is failing due to python-zvm-sdk

2018-07-23 Thread Graham Hayes
** Changed in: designate
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782604

Title:
  designate-bind9-py36 gate is failing due to python-zvm-sdk

Status in Designate:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-zvm-sdk:
  Fix Committed

Bug description:
  http://logs.openstack.org/97/583297/1/check/designate-
  bind9-py36/ca6327e/job-output.txt.gz#_2018-07-17_16_40_10_110717

  It looks like zVMCloudConnector is being installed in a py36 env, and
  it errors out when that happens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1782604/+subscriptions



[Yahoo-eng-team] [Bug 1783095] [NEW] Fullstack tests fail on python3 parsing MTU configuration

2018-07-23 Thread Bernard Cafarelli
Public bug reported:

On python3 gate master, fullstack tests fail to load configuration since
Ia838d2a661c5098f90b58b2cb31557f2ebf78868 was merged
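
The failure (sample traceback below) comes from Python 3's configparser
insisting on string option values; a minimal sketch, assuming an integer
MTU-style option in the fixture config (the option name here is
illustrative):

import configparser

cfg = {'DEFAULT': {'global_physnet_mtu': 1450}}   # int value, as after the MTU change

parser = configparser.ConfigParser()
for section, options in cfg.items():
    if section != 'DEFAULT':
        parser.add_section(section)
    for option, value in options.items():
        # parser.set(section, option, value) raises
        # "TypeError: option values must be strings" on Python 3,
        # so stringify the value first.
        parser.set(section, option, str(value))

print(parser['DEFAULT']['global_physnet_mtu'])    # '1450'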

Sample failure:
ft1.1: 
neutron.tests.fullstack.test_connectivity.TestConnectivitySameNetworkNoDhcp.test_connectivity(Open
 vSwitch agent)_StringException: traceback-1: {{{
Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 197, in setUp
self._setUp()
  File "/opt/stack/new/neutron/neutron/tests/fullstack/resources/config.py", 
line 103, in _setUp
super(NeutronConfigFixture, self)._setUp()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
55, in _setUp
self.write_config_to_configfile()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
58, in write_config_to_configfile
config_parser = self.dict_to_config_parser(self.config)
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
71, in dict_to_config_parser
config_parser.set(section, option, value)
  File "/usr/lib/python3.5/configparser.py", line 1189, in set
self._validate_value_types(option=option, value=value)
  File "/usr/lib/python3.5/configparser.py", line 1174, in _validate_value_types
raise TypeError("option values must be strings")
TypeError: option values must be strings

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 208, in setUp
raise SetupError(details)
fixtures.fixture.SetupError: {}
}}}

traceback-2: {{{
Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 197, in setUp
self._setUp()
  File "/opt/stack/new/neutron/neutron/tests/fullstack/resources/config.py", 
line 103, in _setUp
super(NeutronConfigFixture, self)._setUp()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
55, in _setUp
self.write_config_to_configfile()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
58, in write_config_to_configfile
config_parser = self.dict_to_config_parser(self.config)
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
71, in dict_to_config_parser
config_parser.set(section, option, value)
  File "/usr/lib/python3.5/configparser.py", line 1189, in set
self._validate_value_types(option=option, value=value)
  File "/usr/lib/python3.5/configparser.py", line 1174, in _validate_value_types
raise TypeError("option values must be strings")
TypeError: option values must be strings

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 197, in setUp
self._setUp()
  File 
"/opt/stack/new/neutron/neutron/tests/fullstack/resources/environment.py", line 
380, in _setUp
cfg.CONF.database.connection, self.rabbitmq_environment))
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 257, in useFixture
fixture.setUp()
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 212, in setUp
raise MultipleExceptions(*errors)
testtools.runtest.MultipleExceptions: ((<class 'TypeError'>, TypeError('option values must be strings',), <traceback object>), (<class 'fixtures.fixture.SetupError'>, SetupError({},), <traceback object>))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 208, in setUp
raise SetupError(details)
fixtures.fixture.SetupError: {}
}}}

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/fixtures/fixture.py",
 line 197, in setUp
self._setUp()
  File "/opt/stack/new/neutron/neutron/tests/fullstack/resources/config.py", 
line 103, in _setUp
super(NeutronConfigFixture, self)._setUp()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
55, in _setUp
self.write_config_to_configfile()
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
58, in write_config_to_configfile
config_parser = self.dict_to_config_parser(self.config)
  File "/opt/stack/new/neutron/neutron/tests/common/config_fixtures.py", line 
71, in dict_to_config_parser
config_parser.set(section, option, value)
  File "/usr/lib/python3.5/configparser.py", line 1189, in set
self._validate_value_types(option=option, value=value)
  File "/usr/lib/python3.5/configparser.py", line 1174, in _validate_value_types
raise TypeError("option values 

[Yahoo-eng-team] [Bug 1783079] [NEW] Unable to attach interface to a VM

2018-07-23 Thread Iago Santos
Public bug reported:

Running with neutron network.

Unable to attach interface to a VM doing:

neutron port-create NETWORK --fixed-ip subnet_id=SUBNET
nova interface-attach --port-id  

===

I got:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-5c9ae62d-fad0-4cc9-3452345)

===

2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
[req-5c9ae62d-fad0-4cc9-9496-23452345 user 29d90dc8-2558-41c9-a997-2134123 - 
default default] Exception during message handling: NotImplementedError
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in 
_process_incoming
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
function_name, call_dict, binary)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 1004, in 
decorated_function
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 219, in 
decorated_function
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 207, in 
decorated_function
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5768, in 
attach_interface
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
bind_host_id=bind_host_id, tag=tag)
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 297, in 
allocate_port_for_instance
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server raise 
NotImplementedError()
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server 
NotImplementedError
2018-07-23 10:28:23.633 110509 ERROR oslo_messaging.rpc.server

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1783079

Title:
  Unable to attach interface to a VM

Status in OpenStack