[Yahoo-eng-team] [Bug 1715545] [NEW] outdated URLs in docs

2017-09-06 Thread Andreas Jaeger
Public bug reported:

The nova repo contains links to the ocata config-reference like:

https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html

For pike, the config-reference has moved to the project repos as part of
the large reorg. All these links should be changed to the new location.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715545

Title:
  outdated URLs in docs

Status in OpenStack Compute (nova):
  New

Bug description:
  The nova repo contains links to the ocata config-reference like:

  https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html

  For pike, the config-reference has moved to the project repos as part
  of the large reorg. All these links should be changed to the new
  location.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715542] [NEW] Remote console doesn't work when using nova API V2.1

2017-09-06 Thread Sam Morrison
Public bug reported:

Upgraded horizon to Ocata and now remote consoles don't work.

This is due to horizon switching to the new server-remote-consoles API;
see:

https://developer.openstack.org/api-ref/compute/#server-remote-consoles

The deprecation of the old approach is documented at

https://developer.openstack.org/api-ref/compute/#get-vnc-console-os-getvncconsole-action-deprecated


The error is masked by a dangerous catch-all exception handler (removing
it shows the following):


  File "/opt/horizon/openstack_dashboard/dashboards/project/instances/views.py", line 312, in get_context_data
    context = super(DetailView, self).get_context_data(**kwargs)
  File "/opt/horizon/horizon/tabs/views.py", line 56, in get_context_data
    exceptions.handle(self.request)
  File "/opt/horizon/horizon/exceptions.py", line 354, in handle
    six.reraise(exc_type, exc_value, exc_traceback)
  File "/opt/horizon/horizon/tabs/views.py", line 54, in get_context_data
    context["tab_group"].load_tab_data()
  File "/opt/horizon/horizon/tabs/base.py", line 128, in load_tab_data
    exceptions.handle(self.request)
  File "/opt/horizon/horizon/exceptions.py", line 354, in handle
    six.reraise(exc_type, exc_value, exc_traceback)
  File "/opt/horizon/horizon/tabs/base.py", line 125, in load_tab_data
    tab._data = tab.get_context_data(self.request)
  File "/opt/horizon/openstack_dashboard/dashboards/project/instances/tabs.py", line 74, in get_context_data
    request, console_type, instance)
  File "/opt/horizon/openstack_dashboard/dashboards/project/instances/console.py", line 53, in get_console
    console = api_call(request, instance.id)
  File "/opt/horizon/openstack_dashboard/api/nova.py", line 504, in server_vnc_console
    instance_id, console_type)['console'])
KeyError: 'console'
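
For illustration, a minimal sketch of a defensive lookup on the client side, assuming the old os-getVNCConsole action returns its payload under 'console' while the newer server-remote-consoles API returns it under 'remote_console' (the helper is illustrative, not horizon's actual code):

    import collections

    Console = collections.namedtuple('Console', ['type', 'url'])

    def extract_console(response_body):
        # Accept either response shape rather than assuming one key.
        for key in ('console', 'remote_console'):
            if key in response_body:
                data = response_body[key]
                return Console(type=data.get('type'), url=data.get('url'))
        raise KeyError('no console data in %s' % sorted(response_body))

    print(extract_console({'remote_console': {'type': 'novnc',
                                              'url': 'http://example/vnc'}}))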

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1715542

Title:
  Remote console doesn't work when using nova API V2.1

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Upgraded horizon to Ocata and now remote consoles don't work.

  This is due to horizon switching to the new server-remote-consoles
  API; see:

  https://developer.openstack.org/api-ref/compute/#server-remote-consoles

  The deprecation of the old approach is documented at

  https://developer.openstack.org/api-ref/compute/#get-vnc-console-os-getvncconsole-action-deprecated

  
  The error is masked by a dangerous catch-all exception handler
  (removing it shows the following):

  
    File "/opt/horizon/openstack_dashboard/dashboards/project/instances/views.py", line 312, in get_context_data
      context = super(DetailView, self).get_context_data(**kwargs)
    File "/opt/horizon/horizon/tabs/views.py", line 56, in get_context_data
      exceptions.handle(self.request)
    File "/opt/horizon/horizon/exceptions.py", line 354, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/horizon/horizon/tabs/views.py", line 54, in get_context_data
      context["tab_group"].load_tab_data()
    File "/opt/horizon/horizon/tabs/base.py", line 128, in load_tab_data
      exceptions.handle(self.request)
    File "/opt/horizon/horizon/exceptions.py", line 354, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/opt/horizon/horizon/tabs/base.py", line 125, in load_tab_data
      tab._data = tab.get_context_data(self.request)
    File "/opt/horizon/openstack_dashboard/dashboards/project/instances/tabs.py", line 74, in get_context_data
      request, console_type, instance)
    File "/opt/horizon/openstack_dashboard/dashboards/project/instances/console.py", line 53, in get_console
      console = api_call(request, instance.id)
    File "/opt/horizon/openstack_dashboard/api/nova.py", line 504, in server_vnc_console
      instance_id, console_type)['console'])
  KeyError: 'console'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1715542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715533] [NEW] 'nova-manage cell_v2 map_instances' return argument error when --max-count option is used

2017-09-06 Thread Eunjin Baek
Public bug reported:

[Test steps]
1. Create cell1
2. Run the command "nova-manage cell_v2 map_instances --cell_uuid <cell_uuid> --max-count <max_count>";
an error occurs because of the 'max-count' argument, as shown below.

[root@aosc cmd(keystone_admin)]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
|  Name |                 UUID                 |
+-------+--------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |
| cell1 | b0fd1642-2836-4de7-9d2c-2b44d890bd89 |
+-------+--------------------------------------+

[root@osc ~(keystone_admin)]# nova-manage cell_v2 map_instances --cell_uuid b0fd1642-2836-4de7-9d2c-2b44d890bd89 --max-count 2000
An error has occurred:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1609, in main
    ret = fn(*fn_args, **fn_kwargs)
TypeError: map_instances() got an unexpected keyword argument 'max-count'
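
For illustration, a minimal sketch of the Python mechanics behind this error (names are illustrative, not nova's code): a kwargs dict keyed 'max-count' can never bind to a function parameter, because hyphens are not valid in identifiers, while argparse's default dest translation turns '--max-count' into 'max_count':

    import argparse

    def map_instances(cell_uuid, max_count=50):
        print('mapping up to %s instances in cell %s' % (max_count, cell_uuid))

    parser = argparse.ArgumentParser()
    parser.add_argument('--cell_uuid')
    parser.add_argument('--max-count', type=int)  # stored as dest='max_count'

    args = parser.parse_args(['--cell_uuid', 'b0fd1642', '--max-count', '2000'])
    map_instances(**vars(args))  # works: all keys are valid identifiers

    # Building kwargs keyed by the raw option string reproduces the bug:
    # map_instances(**{'cell_uuid': 'b0fd1642', 'max-count': 2000})
    # TypeError: map_instances() got an unexpected keyword argument 'max-count'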

** Affects: nova
 Importance: Undecided
 Assignee: Eunjin Baek (ej218-baek)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Eunjin Baek (ej218-baek)

** Summary changed:

- nova-manage cell_v2 map_instances return error when --max-count option is used
+ 'nova-manage cell_v2 map_instances' return error when --max-count option is used

** Summary changed:

- 'nova-manage cell_v2 map_instances' return error when --max-count option is used
+ 'nova-manage cell_v2 map_instances' return argument error when --max-count option is used

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715533

Title:
  'nova-manage cell_v2 map_instances' return argument error when
  --max-count option is used

Status in OpenStack Compute (nova):
  New

Bug description:
  [Test steps]
  1. Create cell1
  2. Run the command "nova-manage cell_v2 map_instances --cell_uuid <cell_uuid> --max-count <max_count>";
  an error occurs because of the 'max-count' argument, as shown below.

  [root@aosc cmd(keystone_admin)]# nova-manage cell_v2 list_cells
  +-------+--------------------------------------+
  |  Name |                 UUID                 |
  +-------+--------------------------------------+
  | cell0 | 00000000-0000-0000-0000-000000000000 |
  | cell1 | b0fd1642-2836-4de7-9d2c-2b44d890bd89 |
  +-------+--------------------------------------+

  [root@osc ~(keystone_admin)]# nova-manage cell_v2 map_instances --cell_uuid b0fd1642-2836-4de7-9d2c-2b44d890bd89 --max-count 2000
  An error has occurred:
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1609, in main
      ret = fn(*fn_args, **fn_kwargs)
  TypeError: map_instances() got an unexpected keyword argument 'max-count'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715451] Re: Castellan 0.13.0 doesn't work with ConfKeyManager due to missing list() abstract method

2017-09-06 Thread Jeremy Liu
This bug should affect nova similarly.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715451

Title:
  Castellan 0.13.0 doesn't work with ConfKeyManager due to missing
  list() abstract method

Status in castellan:
  New
Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  New
Status in OpenStack Global Requirements:
  New

Bug description:
  Seen here: https://review.openstack.org/#/c/500770/

  http://logs.openstack.org/70/500770/7/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b813494/logs/screen-c-api.txt.gz?level=TRACE#_Sep_06_17_25_08_182255

  This change in castellan 0.13.0 breaks cinder's ConfKeyManager:

  
https://github.com/openstack/castellan/commit/1a13c2b2030390e3c0a5d498da486d92ddd1152c

  This is because the Cinder ConfKeyManager extends the abstract KeyManager
  class in castellan but doesn't implement the list() method.
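
For illustration, a standalone sketch of the failure mode with plain abc (illustrative classes, not the castellan/cinder code): adding a new abstract method to a base class makes every subclass that doesn't implement it impossible to instantiate.

    import abc

    class KeyManager(abc.ABC):
        @abc.abstractmethod
        def create_key(self, context):
            """Pre-existing abstract API."""

        @abc.abstractmethod
        def list(self, context):
            """Newly added abstract method (as in castellan 0.13.0)."""

    class ConfKeyManager(KeyManager):
        def create_key(self, context):
            return 'fixed-key'
        # no list() implementation

    try:
        ConfKeyManager()
    except TypeError as exc:
        print(exc)  # Can't instantiate abstract class ConfKeyManager ...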

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1715451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1703059] Re: DHCP agent should not start metadata ns proxy with vrouter id

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/481785
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=13eea520b5c03010dc4b42a9dcf41b004fe40ed7
Submitter: Jenkins
Branch: master

commit 13eea520b5c03010dc4b42a9dcf41b004fe40ed7
Author: Liping Mao (limao) 
Date:   Sat Jul 8 10:22:28 2017 +0800

dhcp agent start md-proxy with vrouter id only when has metadata subnet

When a user creates a network with an isolated subnet, the dhcp agent
will create an md-proxy with the vrouter id. This will conflict with the
md-proxy created by the l3 agent. This patch updates the dhcp agent to
start the md-proxy with the vrouter id only when the network has a
metadata subnet.

Change-Id: I3288327bf9d0cdf759a6fdf365d1289e8b7442db
Closes-Bug: #1703059


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703059

Title:
  DHCP agent should not start metadata ns proxy with vrouter id

Status in neutron:
  Fix Released

Bug description:
  When a user creates a network with an isolated subnet, dhcp may create a
  metadata ns proxy with a router id.

  How to reproduce:
  1. create a router: R1
  2. create a network: Net1
  3. create two subnetworks: Sub1, Sub2
  4. attach Sub1 to R1. (do not attach Sub2)

  If you deploy the dhcp-agent and l3-agent on separate nodes, you will see:
     a) the dhcp-agent will start a metadata ns proxy with the router uuid
  on port 80 (this metadata ns proxy will not work in the data path)

  Here is a sample; you can see the metadata_port is 80 with the router_id:
  neutron  237535  0.0  0.2 295864 46284 ?  S  Jul07  0:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/160f2356-3bf5-4a1c-80e6-b9ef8b971047.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=160f2356-3bf5-4a1c-80e6-b9ef8b971047 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=995 --metadata_proxy_group=992 --debug --log-file=neutron-ns-metadata-proxy-160f2356-3bf5-4a1c-80e6-b9ef8b971047.log --log-dir=/var/log/neutron

     b) the l3-agent will start a metadata ns proxy with the router uuid on
  port 9697 (this metadata ns proxy works in the data path)

  If you deploy the dhcp-agent and l3-agent on the same node, the metadata
  ns proxy may be started by either the dhcp or the l3 agent, and the
  process managers in both agents will monitor it. This means that if you
  kill this metadata ns proxy, it may be started again by either agent,
  depending on which one starts it first; the port of the metadata ns proxy
  may be 80 or 9697. The behavior is unpredictable.

  The problem is that the DHCP agent should not manage a metadata ns proxy
  with a router_id; this kind of metadata ns proxy should be managed by the
  l3 agent.

  When the dhcp agent finds an isolated subnet, it should start the metadata
  ns proxy with the network_id, not the router_id; the following code should
  be removed:
  https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L508
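
For illustration, a minimal sketch of the behavior asked for here, with illustrative names (not neutron's agent code): key the proxy on the network id for isolated metadata subnets, and leave router-keyed proxies to the l3 agent.

    def metadata_proxy_uuid(network, enable_isolated_metadata=True):
        # network: {'id': ..., 'subnets': [{'id': ..., 'isolated': bool}, ...]}
        has_isolated = any(s.get('isolated') for s in network['subnets'])
        if enable_isolated_metadata and has_isolated:
            return network['id']   # dhcp agent keys its proxy by network id
        return None                # router-keyed proxy belongs to the l3 agent

    net = {'id': 'net-1', 'subnets': [{'id': 'sub-1', 'isolated': True}]}
    print(metadata_proxy_uuid(net))  # -> net-1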

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715525] [NEW] v3 openrc file is missing PROJECT_DOMAIN_NAME

2017-09-06 Thread Sam Morrison
Public bug reported:

The generated v3 openrc file doesn't specify either PROJECT_DOMAIN_ID or
PROJECT_DOMAIN_NAME.


If your project isn't in the default domain, this openrc file won't work
for you.
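
For illustration, the kind of line the generated file would need (a sketch; the actual value depends on the deployment, and the OS_ prefix follows the usual convention for these variables):

    export OS_PROJECT_DOMAIN_NAME="Default"
    # or, equivalently, by id:
    # export OS_PROJECT_DOMAIN_ID="default"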

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1715525

Title:
  v3 openrc file is missing PROJECT_DOMAIN_NAME

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The generated v3 openrc file doesn't specify either PROJECT_DOMAIN_ID
  or PROJECT_DOMAIN_NAME.

  
  If your project isn't in the default domain, this openrc file won't work
  for you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1715525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667055] Re: Fullstack: neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router fails

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500185
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=f998e8d96552d8c0e2d2421a4e2a8583ad2cfffb
Submitter: Jenkins
Branch: master

commit f998e8d96552d8c0e2d2421a4e2a8583ad2cfffb
Author: Ihar Hrachyshka 
Date:   Fri Sep 1 13:21:23 2017 -0700

test_ha_router: wait until two agents are scheduled

We need to give some time to neutron-server to schedule the router to
both agents. This reflects what other fullstack test cases do.

Change-Id: I3bce907262635c76b5444fab480f7157172e77a2
Closes-Bug: #1667055
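
For illustration, a minimal sketch of the polling pattern the fix describes, assuming neutron's neutron.common.utils.wait_until_true helper; the client call is illustrative, not the exact test code:

    from neutron.common import utils as common_utils

    def wait_for_ha_router_scheduled(client, router_id):
        # Give neutron-server time to schedule the router to both l3
        # agents instead of asserting the count immediately.
        common_utils.wait_until_true(
            lambda: len(client.list_l3_agent_hosting_routers(
                router_id)['agents']) == 2,
            timeout=60,
            exception=AssertionError(
                'HA router must be scheduled to both nodes'))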


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667055

Title:
  Fullstack:
  neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router
  fails

Status in neutron:
  Fix Released

Bug description:
  20 hits in 7 days

  logstash query:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%222%20!%3D%201%3A%20HA%20router%20must%20be%20scheduled%20to%20both%20nodes%5C%22%20AND%20tags%3Aconsole%20AND%20build_branch%3Amaster

  Test failure example:
  http://logs.openstack.org/41/437041/1/check/gate-neutron-dsvm-fullstack-ubuntu-xenial/2fe600c/console.html#_2017-02-22_17_46_50_795749

  
  2017-02-22 17:46:50.893183 | 2017-02-22 17:46:50.892 | Captured traceback:
  2017-02-22 17:46:50.896340 | 2017-02-22 17:46:50.895 | ~~~~~~~~~~~~~~~~~~~
  2017-02-22 17:46:50.899564 | 2017-02-22 17:46:50.898 |     Traceback (most recent call last):
  2017-02-22 17:46:50.902799 | 2017-02-22 17:46:50.902 |       File "neutron/tests/base.py", line 116, in func
  2017-02-22 17:46:50.906044 | 2017-02-22 17:46:50.905 |         return f(self, *args, **kwargs)
  2017-02-22 17:46:50.909335 | 2017-02-22 17:46:50.908 |       File "neutron/tests/fullstack/test_l3_agent.py", line 202, in test_ha_router
  2017-02-22 17:46:50.912626 | 2017-02-22 17:46:50.911 |         'HA router must be scheduled to both nodes')
  2017-02-22 17:46:50.915910 | 2017-02-22 17:46:50.915 |       File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2017-02-22 17:46:50.919219 | 2017-02-22 17:46:50.918 |         self.assertThat(observed, matcher, message)
  2017-02-22 17:46:50.922394 | 2017-02-22 17:46:50.921 |       File "/opt/stack/new/neutron/.tox/dsvm-fullstack/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2017-02-22 17:46:50.925527 | 2017-02-22 17:46:50.924 |         raise mismatch_error
  2017-02-22 17:46:50.928784 | 2017-02-22 17:46:50.927 |     testtools.matchers._impl.MismatchError: 2 != 1: HA router must be scheduled to both nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714641] Re: ML2: port deletion fails when dns extension is enabled

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500261
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=22d6a1540f6ab9029c85781d2a2cb8c7c39299c7
Submitter: Jenkins
Branch: master

commit 22d6a1540f6ab9029c85781d2a2cb8c7c39299c7
Author: Jens Harbott 
Date:   Sat Sep 2 08:00:51 2017 +0000

Fix port deletion when dns_integration is enabled

The records found in ip_allocations contain objects of type IPAddress,
but the external dns service expects them as strings, so we need to
insert a conversion.

Change-Id: I622993fc273121bfd051d2fd9c7811e2ae49a1d8
Closes-Bug: 1714641
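
For illustration, a minimal sketch of the conversion described, assuming the allocations carry netaddr.IPAddress objects (illustrative names, not the exact neutron code):

    import netaddr

    def to_dns_records(ip_allocations):
        # str(netaddr.IPAddress('10.0.0.5')) == '10.0.0.5', so consumers
        # that expect plain strings no longer fail on IPAddress items.
        return [str(alloc['ip_address']) for alloc in ip_allocations]

    print(to_dns_records([{'ip_address': netaddr.IPAddress('10.0.0.5')}]))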


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714641

Title:
  ML2: port deletion fails when dns extension is enabled

Status in neutron:
  Fix Released

Bug description:
  Setup: devstack from current master, ext-dns-integration enabled

  $ openstack port delete ex4
  Failed to delete port with name or ID 'ex4': HttpException: Internal Server Error (HTTP 500) (Request-ID: req-9a7fab16-5dba-4501-a999-88adc3766da9), Request Failed: internal server error while processing your request.
  1 of 1 ports failed to delete.

  Traceback from q-svc:

  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron.plugins.ml2.plugin [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] Deleting port 59ab0574-856d-4503-aa51-0ece9d01bf27 {{(pid=1047) _pre_delete_port /opt/stack/neutron/neutron/plugins/ml2/plugin.py:1480}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron_lib.callbacks.manager [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] Notify callbacks ['neutron.db.l3_db._prevent_l3_port_delete_callback-8786651305326', 'neutron.plugins.ml2.extensions.dns_integration._delete_port_in_external_dns_service-8786651000550'] for port, before_delete {{(pid=1047) _notify_loop /usr/local/lib/python2.7/dist-packages/neutron_lib/callbacks/manager.py:167}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron.services.externaldns.driver [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] Loading external dns driver: designate {{(pid=1047) get_instance /opt/stack/neutron/neutron/services/externaldns/driver.py:39}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron.plugins.ml2.extensions.dns_integration [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] External DNS driver loaded: designate {{(pid=1047) _get_dns_driver /opt/stack/neutron/neutron/plugins/ml2/extensions/dns_integration.py:411}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron_lib.callbacks.manager [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] Callback neutron.plugins.ml2.extensions.dns_integration._delete_port_in_external_dns_service-8786651000550 raised sequence item 0: expected string, IPAddress found {{(pid=1047) _notify_loop /usr/local/lib/python2.7/dist-packages/neutron_lib/callbacks/manager.py:184}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: DEBUG neutron_lib.callbacks.manager [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] Notify callbacks [] for port, abort_delete {{(pid=1047) _notify_loop /usr/local/lib/python2.7/dist-packages/neutron_lib/callbacks/manager.py:167}}
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation [None req-9a7fab16-5dba-4501-a999-88adc3766da9 admin admin] DELETE failed.: TypeError: sequence item 0: expected string, IPAddress found
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 683, in __call__
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 574, in invoke_controller
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     result = controller(*args, **kwargs)
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/db/api.py", line 93, in wrapped
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation     setattr(e, '_RETRY_EXCEEDED', True)
  Sep 02 07:35:40 jh-devstack-02 neutron-server[1033]: ERROR neutron.pecan_wsgi.hooks.translation   File

[Yahoo-eng-team] [Bug 1710203] Re: Add port dns_domain processing logic

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/495466
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=d5b4f242d4242206be7c7efc0cda1842b6d04c0b
Submitter: Jenkins
Branch: master

commit d5b4f242d4242206be7c7efc0cda1842b6d04c0b
Author: Miguel Lavalle 
Date:   Fri Aug 18 19:25:11 2017 -0500

Document dns_domain for ports attribute

This commit adds to the Networking Guide chapter for DNS integration the
workings of the dns_domain attribute for ports.

Change-Id: Ib775fe3d44b766c457875a04460ad90d1e377974
Closes-Bug: #1710203


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710203

Title:
  Add port dns_domain processing logic

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/468697
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4a7753325999ef1e5c77f6131cfe03b2cfa364a7
  Author: Miguel Lavalle 
  Date:   Sat May 27 18:27:34 2017 -0500

  Add port dns_domain processing logic
  
  This patchset adds logic to the ML2 DNS integration extension to process
  a new dns_domain attribute associated to ports.
  
  This patchset belongs to a series that adds dns_domain attribute
  functionality to ports.
  
  DocImpact: Ports have a new dns_domain attribute, that takes precedence
 over networks dns_domain when published to an external DNS
 service.
  
  APIImpact: Users can now specify a dns_domain attribute in port POST and
 PUT operations.
  
  Change-Id: I02d8587d3a1f9f3f6b8cbc79dbe8df4b4b99a893
  Partial-Bug: #1650678

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1710203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696417] Re: nova-manage db online_data_migrations can fail when upgrading to newton under certain conditions

2017-09-06 Thread Guang Yee
I also ran into this exact same issue. We have mysql as the backend, and
the datetime type is not timezone aware. Neither "--verbose" nor "--debug"
is effective for the online_data_migrations command, as they are not taken
into consideration. I ended up manually printing the traceback by adding
these two lines:

import sys, traceback
traceback.print_exc(file=sys.stdout)

here
https://github.com/openstack/nova/blob/stable/newton/nova/cmd/manage.py#L897

I was then able to see more information about the failure. Here's a
sample traceback:


Running batches of 50 until complete
-- 2017-05-19 23:34:43+00:00 -
Error attempting to run 
Traceback (most recent call last):
  File "/opt/stack/venv/nova-20170728T171245Z/lib/python2.7/site-packages/nova/cmd/manage.py", line 892, in _run_migration
    found, done = migration_meth(ctxt, count)
  File "/opt/stack/venv/nova-20170728T171245Z/lib/python2.7/site-packages/nova/objects/flavor.py", line 717, in migrate_flavors
    flavor._flavor_create(ctxt, flavor_values)
  File "/opt/stack/venv/nova-20170728T171245Z/lib/python2.7/site-packages/nova/objects/flavor.py", line 463, in _flavor_create
    return _flavor_create(context, updates)
  File "/opt/stack/venv/nova-20170728T171245Z/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 824, in wrapper
    return fn(*args, **kwargs)
  File "/opt/stack/venv/nova-20170728T171245Z/lib/python2.7/site-packages/nova/objects/flavor.py", line 166, in _flavor_create
    raise db_exc.DBError(e)
DBError: (_mysql_exceptions.OperationalError) (1292, "Incorrect datetime value: '2017-05-19 23:34:43+00:00' for column 'created_at' at row 1") [SQL: u'INSERT INTO flavors (created_at, updated_at, id, name, memory_mb, vcpus, root_gb, ephemeral_gb, flavorid, swap, rxtx_factor, vcpu_weight, disabled, is_public) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: (datetime.datetime(2017, 5, 19, 23, 34, 43, tzinfo=), None, 16, 'foo', 1024, 1, 2, 0, '50aae60f-ba2a-40d8-a0c3-2117ba0dd2a6', 0, 1.0, 0, 0, 0)]

It looks like the migration code is attempting to insert a timezone-aware
datetime field into mysql.
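
For illustration, a minimal sketch of the kind of normalization that avoids the error, assuming the goal is a tz-naive UTC value for the MySQL DATETIME column (an illustrative helper, not nova's actual fix):

    import datetime

    def to_naive_utc(dt):
        # Shift to UTC, then drop the tzinfo that MySQL chokes on.
        if dt.tzinfo is not None:
            dt = (dt - dt.utcoffset()).replace(tzinfo=None)
        return dt

    aware = datetime.datetime(2017, 5, 19, 23, 34, 43,
                              tzinfo=datetime.timezone.utc)
    print(to_naive_utc(aware))  # -> 2017-05-19 23:34:43 (no tzinfo)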


** Changed in: nova
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1696417

Title:
  nova-manage db online_data_migrations can fail when upgrading to
  newton under certain conditions

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Reproducible under these conditions:

  * Mitaka upgraded to Newton
  * online_data_migrations have not yet migrated flavors and/or aggregates
  from the nova database to the nova-api database
  * One or more of the datetime fields (created_at, updated_at, deleted_at)
  are set:
  ** because a custom flavor has been created
  ** because a flavor has been updated
  ** because a flavor has been deleted (deleted flavors are probably not
  relevant, as the new table has no deleted flag, it just removes them
  altogether)

  Steps to reproduce:

  * Run 'nova-manage db online_data_migrations'

  It throws an error message like:
  Error attempting to run 

  Workaround:

  * Set created_at, updated_at and deleted_at to NULL
  * Run the migration


  I have done quite a bit of troubleshooting, but haven't managed to
  write a patch so far.  As far as I can tell, inserting a flavor or
  aggregate into the new tables fails due to the datetime fields including
  a timezone.  There is code for stripping away the timezone in
  nova/db/sqlalchemy/api.py (convert_objects_related_datetimes) - but
  the timezone reappears in nova/objects/flavor.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1696417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715470] Re: test_rpc_consumer_isolation fails with oslo.messaging 5.31.0

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/501385
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=bd1e2fd60ff6c694b9daf2373b3f508b6274e15f
Submitter: Jenkins
Branch: master

commit bd1e2fd60ff6c694b9daf2373b3f508b6274e15f
Author: Matt Riedemann 
Date:   Wed Sep 6 15:12:45 2017 -0400

Fix test_rpc_consumer_isolation for oslo.messaging 5.31.0

With this change in oslo.messaging 5.31.0:

  I0bbf9fca0ecbe71efa87c9613ffd32eb718f2c0e

The endpoint passed in is going to be checked for a 'target'
attribute (and its type, if set), so the test endpoint has to let
that lookup through.

Change-Id: Ic022bdc0291ce1498abdfe1acd3cc51adf7db7c6
Closes-Bug: #1715470
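
For illustration, a minimal sketch of the test-double change based on the commit message (illustrative, not the exact cinder code): the endpoint must let the 'target' probe through and still fail loudly on anything else.

    class NeverCalled(object):
        def __getattribute__(self, name):
            if name == 'target':
                # oslo.messaging >= 5.31.0 probes endpoints for an
                # optional Target at dispatcher construction time.
                return None
            raise AssertionError('I should never get called: %s' % name)

    ep = NeverCalled()
    print(getattr(ep, 'target', None))  # -> None instead of blowing up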


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715470

Title:
  test_rpc_consumer_isolation fails with oslo.messaging 5.31.0

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Seen in this upper-constraints change:

  http://logs.openstack.org/70/500770/9/check/gate-cross-cinder-python27-ubuntu-xenial/1b40c85/console.html#_2017-09-06_18_33_51_290254

  2017-09-06 18:33:51.290254 | cinder.tests.unit.test_test.IsolationTestCase.test_rpc_consumer_isolation
  2017-09-06 18:33:51.290274 | -------------------------------------------------------------------------
  2017-09-06 18:33:51.290280 | 
  2017-09-06 18:33:51.290289 | Captured traceback:
  2017-09-06 18:33:51.290298 | ~~~~~~~~~~~~~~~~~~~
  2017-09-06 18:33:51.290311 |     Traceback (most recent call last):
  2017-09-06 18:33:51.290334 |       File "cinder/tests/unit/test_test.py", line 46, in test_rpc_consumer_isolation
  2017-09-06 18:33:51.290348 |         endpoints=[NeverCalled()])
  2017-09-06 18:33:51.290364 |       File "cinder/rpc.py", line 159, in get_server
  2017-09-06 18:33:51.290377 |         access_policy=access_policy)
  2017-09-06 18:33:51.290418 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/debtcollector/updating.py", line 64, in wrapper
  2017-09-06 18:33:51.290433 |         return wrapped(*args, **kwargs)
  2017-09-06 18:33:51.290475 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 214, in get_rpc_server
  2017-09-06 18:33:51.290486 |         access_policy)
  2017-09-06 18:33:51.290527 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 155, in __init__
  2017-09-06 18:33:51.290555 |         target = getattr(ep, 'target', None)
  2017-09-06 18:33:51.290576 |       File "cinder/tests/unit/test_test.py", line 42, in __getattribute__
  2017-09-06 18:33:51.290592 |         self.fail(msg="I should never get called.")
  2017-09-06 18:33:51.290648 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 690, in fail
  2017-09-06 18:33:51.290666 |         raise self.failureException(msg)
  2017-09-06 18:33:51.290681 | AssertionError: I should never get called.

  https://github.com/openstack/oslo.messaging/compare/5.30.0...5.31.0

  My guess is this change breaks the test:

  
https://github.com/openstack/oslo.messaging/commit/b7382d58d773e9be61dda3fac5b2e3cbddc22a22

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1715470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614478] Re: Synthetic_fields can contain any string

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/501185
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=a45b1f9feb3f0d2a427f5a25e256f0f7fbb59cd7
Submitter: Jenkins
Branch: master

commit a45b1f9feb3f0d2a427f5a25e256f0f7fbb59cd7
Author: Rodolfo Alonso Hernandez 
Date:   Wed Sep 6 10:36:58 2017 +0100

Add exception when a synthetic field is invalid

A check is introduced in the ``__init__`` method of the ``NeutronObject``
class [1]. This check ensures synthetic fields are valid in any oslo
versioned object.

[1] I33c41fbd4dd40ba292ebe67ac497372d4354f260

Change-Id: I12c7a330555966d30e44418cbd500958fe844462
Closes-Bug: #1614478
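
For illustration, a minimal sketch of the kind of check added (illustrative, not the exact neutron-lib code): reject any synthetic field name that is not a declared field.

    class NeutronObjectSketch(object):
        fields = {'id': None, 'name': None}
        synthetic_fields = ['ports']  # typo/unknown name, caught below

        def __init__(self):
            invalid = [f for f in self.synthetic_fields
                       if f not in self.fields]
            if invalid:
                raise TypeError('invalid synthetic fields: %s' % invalid)

    try:
        NeutronObjectSketch()
    except TypeError as exc:
        print(exc)  # invalid synthetic fields: ['ports']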


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614478

Title:
  Synthetic_fields can contain any string

Status in neutron:
  Fix Released

Bug description:
  The objects/base NeutronDbObject doesn't check synthetic_fields for
  validity, so typos and errors pass silently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715462] Re: Instances failing quota recheck end up with no assigned cell

2017-09-06 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715462

Title:
  Instances failing quota recheck end up with no assigned cell

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  When an instance fails the quota recheck, whose code is here:

  https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L992-L1006

  it raises an exception; however, the cell mapping is only saved much
  later (thanks to dansmith for finding this):

  https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L1037-L1043

  This results in an instance with an unassigned cell, where it should
  technically be the cell it was scheduled into.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715493] [NEW] Instances always get mapped into the last processed cell in conductor

2017-09-06 Thread Matt Riedemann
Public bug reported:

This was a regression introduced in Pike here:
https://review.openstack.org/#/c/416521/

The schedule_and_build_instances method in conductor was split into two
for loops, where the first loop creates the instance record in the cell
database and the cell is looked up via the host mapping for the chosen
host for that instance.

The problem is that the second for loop doesn't do the same cell lookup
based on the host:

https://github.com/openstack/nova/blob/b79492f70257754f960eaf38ad6a3f56f647cb3d/nova/conductor/manager.py#L1023

It just re-uses the last set cell variable from the first for loop, so
we could have a case where an instance is created in cell1, and then
another instance is created in cell2, and then when the 2nd loop maps
the first instance, it maps it to cell2 but it really lives in cell1.

Not to mention the BDMs and tags would be created in the wrong cell.
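
For illustration, a runnable sketch of the loop-variable reuse described above (names are illustrative, not nova's code):

    placements = [('inst-1', 'host-a'), ('inst-2', 'host-b')]
    host_to_cell = {'host-a': 'cell1', 'host-b': 'cell2'}

    for instance, host in placements:
        cell = host_to_cell[host]      # first loop rebinds 'cell' each time
        print('created %s in %s' % (instance, cell))

    for instance, host in placements:
        # BUG: 'cell' still holds the value from the last iteration above,
        # so inst-1 is mapped to cell2 although it lives in cell1.
        print('mapped %s to %s' % (instance, cell))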

** Affects: nova
 Importance: High
 Assignee: Dan Smith (danms)
 Status: In Progress

** Affects: nova/pike
 Importance: High
 Status: Triaged


** Tags: cells conductor

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Triaged

** Changed in: nova/pike
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715493

Title:
  Instances always get mapped into the last processed cell in conductor

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Triaged

Bug description:
  This was a regression introduced in Pike here:
  https://review.openstack.org/#/c/416521/

  The schedule_and_build_instances method in conductor was split into
  two for loops, where the first loop creates the instance record in the
  cell database and the cell is looked up via the host mapping for the
  chosen host for that instance.

  The problem is that the second for loop doesn't do the same cell lookup
  based on the host:

  
https://github.com/openstack/nova/blob/b79492f70257754f960eaf38ad6a3f56f647cb3d/nova/conductor/manager.py#L1023

  It just re-uses the last set cell variable from the first for loop, so
  we could have a case where an instance is created in cell1, and then
  another instance is created in cell2, and then when the 2nd loop maps
  the first instance, it maps it to cell2 but it really lives in cell1.

  Not to mention the BDMs and tags would be created in the wrong cell.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714285] Re: Hyper-V: leaked resources after failed spawn

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499690
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=761a3f4658336eba498844a6ff7586e6d2b6b7e3
Submitter: Jenkins
Branch: master

commit 761a3f4658336eba498844a6ff7586e6d2b6b7e3
Author: Lucian Petrut 
Date:   Thu Aug 31 18:41:19 2017 +0300

HyperV: Perform proper cleanup after failed instance spawns

This change ensures that vif ports as well as volume connections
are properly removed after an instance fails to spawn.

In order to avoid having similar issues in the future, the
'block_device_info' and 'network_info' arguments become mandatory
for the VMOps.destroy method.

Change-Id: I8d255658c4e45df855379738b120f0543b11027f
Closes-Bug: #1714285


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714285

Title:
  Hyper-V: leaked resources after failed spawn

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Volume connections as well as vif ports are not cleaned up after a
  failed instance spawn.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1714285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1710083] Re: [api-ref] Allow to set/modify network mtu

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499797
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=24bdf4009b217f34f95f800a0458a5cf4d0f8fde
Submitter: Jenkins
Branch: master

commit 24bdf4009b217f34f95f800a0458a5cf4d0f8fde
Author: Ihar Hrachyshka 
Date:   Thu Aug 31 13:36:09 2017 -0700

Document the new net-mtu-writable extension

The extension was added in Pike. This patch is intended for backport.

Closes-Bug: #1710083
Change-Id: Ie747df83e62b3c1d433441ab0a7c00b6b54ce5d4


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1710083

Title:
  [api-ref] Allow to set/modify network mtu

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/483518
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit f21c7e2851bc99b424bdc5322dcd0e3dee7ee5a3
  Author: Ihar Hrachyshka 
  Date:   Mon Aug 7 10:18:11 2017 -0700

  Allow to set/modify network mtu
  
  This patch adds ``net-mtu-writable`` API extension that allows to write
  to network ``mtu`` attribute.
  
  The patch also adds support for the extension to ml2, as well as covers
  the feature with unit and tempest tests. Agent side implementation of
  the feature is moved into a separate patch to ease review.
  
  DocImpact: neutron controller now supports ``net-mtu-writable`` API
 extension.
  APIImpact: new ``net-mtu-writable`` API extension was added.
  
  Related-Bug: #1671634
  Change-Id: Ib232796562edd8fa69ec06b0cc5cb752c1467add

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1710083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715470] Re: test_rpc_consumer_isolation fails with oslo.messaging 5.31.0

2017-09-06 Thread Matt Riedemann
Nova has the same test:

http://logs.openstack.org/70/500770/9/check/gate-cross-nova-python27-ubuntu-xenial/9e412b4/console.html#_2017-09-06_18_37_07_065637

** Changed in: cinder
   Importance: Undecided => Medium

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715470

Title:
  test_rpc_consumer_isolation fails with oslo.messaging 5.31.0

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen in this upper-constraints change:

  http://logs.openstack.org/70/500770/9/check/gate-cross-cinder-python27-ubuntu-xenial/1b40c85/console.html#_2017-09-06_18_33_51_290254

  2017-09-06 18:33:51.290254 | cinder.tests.unit.test_test.IsolationTestCase.test_rpc_consumer_isolation
  2017-09-06 18:33:51.290274 | -------------------------------------------------------------------------
  2017-09-06 18:33:51.290280 | 
  2017-09-06 18:33:51.290289 | Captured traceback:
  2017-09-06 18:33:51.290298 | ~~~~~~~~~~~~~~~~~~~
  2017-09-06 18:33:51.290311 |     Traceback (most recent call last):
  2017-09-06 18:33:51.290334 |       File "cinder/tests/unit/test_test.py", line 46, in test_rpc_consumer_isolation
  2017-09-06 18:33:51.290348 |         endpoints=[NeverCalled()])
  2017-09-06 18:33:51.290364 |       File "cinder/rpc.py", line 159, in get_server
  2017-09-06 18:33:51.290377 |         access_policy=access_policy)
  2017-09-06 18:33:51.290418 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/debtcollector/updating.py", line 64, in wrapper
  2017-09-06 18:33:51.290433 |         return wrapped(*args, **kwargs)
  2017-09-06 18:33:51.290475 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 214, in get_rpc_server
  2017-09-06 18:33:51.290486 |         access_policy)
  2017-09-06 18:33:51.290527 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 155, in __init__
  2017-09-06 18:33:51.290555 |         target = getattr(ep, 'target', None)
  2017-09-06 18:33:51.290576 |       File "cinder/tests/unit/test_test.py", line 42, in __getattribute__
  2017-09-06 18:33:51.290592 |         self.fail(msg="I should never get called.")
  2017-09-06 18:33:51.290648 |       File "/home/jenkins/workspace/gate-cross-cinder-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 690, in fail
  2017-09-06 18:33:51.290666 |         raise self.failureException(msg)
  2017-09-06 18:33:51.290681 | AssertionError: I should never get called.

  https://github.com/openstack/oslo.messaging/compare/5.30.0...5.31.0

  My guess is this change breaks the test:

  
https://github.com/openstack/oslo.messaging/commit/b7382d58d773e9be61dda3fac5b2e3cbddc22a22

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1715470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715462] [NEW] Instances failing quota recheck end up with no assigned cell

2017-09-06 Thread Mohammed Naser
Public bug reported:

When an instance fails the quota recheck, whose code is here:

https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L992-L1006

it raises an exception; however, the cell mapping is only saved much
later (thanks to dansmith for finding this):

https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L1037-L1043

This results in an instance with an unassigned cell, where it should
technically be the cell it was scheduled into.
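
For illustration, a minimal sketch of the ordering problem (illustrative names, not nova's code): the recheck raises before the mapping is saved, so nothing records which cell the instance landed in.

    class TooManyInstances(Exception):
        pass

    def schedule_and_build(instances, over_quota):
        cell = 'cell1'
        for inst in instances:
            print('created %s in %s' % (inst, cell))
        if over_quota:
            raise TooManyInstances('quota exceeded during recheck')
        for inst in instances:
            # never reached on failure: the mapping keeps a NULL cell
            print('saved cell mapping for %s -> %s' % (inst, cell))

    try:
        schedule_and_build(['inst-1'], over_quota=True)
    except TooManyInstances as exc:
        print(exc)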

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cells quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715462

Title:
  Instances failing quota recheck end up with no assigned cell

Status in OpenStack Compute (nova):
  New

Bug description:
  When an instance fails the quota recheck, whose code is here:

  https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L992-L1006

  it raises an exception; however, the cell mapping is only saved much
  later (thanks to dansmith for finding this):

  https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L1037-L1043

  This results in an instance with an unassigned cell, where it should
  technically be the cell it was scheduled into.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715463] [NEW] binary name gets confused under upgrades of osapi_compute and metadata

2017-09-06 Thread Ebbex
Public bug reported:

During an upgrade, you'll already have an entry in the nova.services
table for 'nova-osapi_compute'.

The new wsgi app has NAME='osapi_compute' and first queries for this name,
which yields 0 rows. Since there's no entry, it decides to create a new
entry with INSERT, where it prepends 'nova-' to this 'name'. The problem is
that there's already an entry for 'nova-osapi_compute', so the insert fails
because of duplicate entries.

So NAME has to be changed, or 'nova-' has to be prepended in both queries.


Also, the queries

SELECT
if exists UPDATE
if not exists INSERT

could really just boil down to

UPDATE
if fail INSERT

This way it's atomic as well.
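
For illustration, a minimal sketch of the suggested UPDATE-then-INSERT pattern over a DB-API cursor (illustrative SQL and names, not nova's code); a unique key on the binary column still guards the window between the two statements:

    def upsert_service(cursor, binary, topic):
        # Try the UPDATE first; rowcount tells us whether the row existed.
        cursor.execute(
            "UPDATE services SET topic = %s WHERE binary = %s",
            (topic, binary))
        if cursor.rowcount == 0:
            # The same form of 'binary' (e.g. 'nova-osapi_compute') must
            # be used in both statements -- the mismatch described above
            # is the actual bug.
            cursor.execute(
                "INSERT INTO services (binary, topic) VALUES (%s, %s)",
                (binary, topic))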

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715463

Title:
  binary name gets confused under upgrades of osapi_compute and metadata

Status in OpenStack Compute (nova):
  New

Bug description:
  During an upgrade, you'll already have an entry in the nova.services
  table for 'nova-osapi_compute'.

  The new wsgi app has NAME='osapi_compute' and first queries for this
  name, which yields 0 rows. Since there's no entry, it decides to create
  a new entry with INSERT, where it prepends 'nova-' to this 'name'. The
  problem is that there's already an entry for 'nova-osapi_compute', so
  the insert fails because of duplicate entries.

  So NAME has to be changed, or 'nova-' has to be prepended in both
  queries.


  Also, the queries

  SELECT
  if exists UPDATE
  if not exists INSERT

  could really just boil down to

  UPDATE
  if fail INSERT

  This way it's atomic as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715217] Re: nova xenapi unit tests fail with os-xenapi 0.3.0

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500968
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=81b99caae021593ba65f2fe6a2dbc0f9532d1c70
Submitter: Jenkins
Branch: master

commit 81b99caae021593ba65f2fe6a2dbc0f9532d1c70
Author: Matt Riedemann 
Date:   Tue Sep 5 16:06:31 2017 -0400

Make xen unit tests work with os-xenapi>=0.3.0

Change Ie1b49a206b57219083059871f326926cc4628142 in os-xenapi
0.3.0 requires that the URL passed into the XenAPISession is
an actual URL, which means we need to fix a bunch of unit tests.

Change-Id: Ida4b8c33e8b3bbd03548648f8e57d923b255f35c
Closes-Bug: #1715217


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715217

Title:
  nova xenapi unit tests fail with os-xenapi 0.3.0

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Global Requirements:
  Confirmed

Bug description:
  Noticed here: https://review.openstack.org/#/c/500770/

  Failures are here:
  http://logs.openstack.org/70/500770/3/check/gate-cross-nova-python27-ubuntu-xenial/1305547/testr_results.html.gz

  For example:

  
  nova.tests.unit.compute.test_compute_xen.ComputeXenTestCase.test_sync_power_states_instance_not_found_StringException:
  pythonlogging:'': {{{2017-09-05 17:33:49,276 INFO [nova.virt.driver] Loading compute driver 'xenapi.XenAPIDriver'}}}

  Traceback (most recent call last):
    File "nova/tests/unit/compute/test_compute_xen.py", line 40, in setUp
      self.compute = manager.ComputeManager()
    File "nova/compute/manager.py", line 541, in __init__
      self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
    File "nova/virt/driver.py", line 1609, in load_compute_driver
      virtapi)
    File "/home/jenkins/workspace/gate-cross-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in import_object
      return import_class(import_str)(*args, **kwargs)
    File "nova/virt/xenapi/driver.py", line 90, in __init__
      originator="nova")
    File "/home/jenkins/workspace/gate-cross-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/os_xenapi/client/session.py", line 90, in __init__
      self.ip = self._get_ip_from_url(url)
    File "/home/jenkins/workspace/gate-cross-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/os_xenapi/client/session.py", line 137, in _get_ip_from_url
      return socket.gethostbyname(url_parts.netloc)
    File "/home/jenkins/workspace/gate-cross-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/eventlet/support/greendns.py", line 477, in gethostbyname
      rrset = resolve(hostname)
    File "/home/jenkins/workspace/gate-cross-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/eventlet/support/greendns.py", line 364, in resolve
      raise EAI_NODATA_ERROR
  socket.gaierror: [Errno -5] No address associated with hostname

  It looks like this is due to this change that went into os-xenapi
  0.3.0:

  https://review.openstack.org/#/c/485933/

  I don't know if this is an issue in os-xenapi (regression) or if nova
  needs to now start stubbing out the XenAPISession init code.

  It looks like the unit tests within os-xenapi are mocking out the call
  to the socket module:

  https://github.com/openstack/os-xenapi/blob/0.3.0/os_xenapi/tests/client/test_session.py#L32

  It would be nice if there were a fixture class in the os-xenapi
  library that nova's xenapi unit tests could load and that would
  perform the proper stubs like this.
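
For illustration, a minimal sketch of what such a fixture could look like, built on the fixtures library (the class name is hypothetical; os-xenapi does not necessarily ship this):

    import fixtures

    class StubGethostbynameFixture(fixtures.Fixture):
        """Stub DNS resolution so XenAPISession accepts any URL in tests."""

        def _setUp(self):
            self.useFixture(fixtures.MonkeyPatch(
                'socket.gethostbyname', lambda host: '127.0.0.1'))

    # usage inside a test case:
    #     self.useFixture(StubGethostbynameFixture())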

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538740] Re: SIGTERM on neutron-l3 agent process causes stacktrace

2017-09-06 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538740

Title:
  SIGTERM on neutron-l3 agent process causes stacktrace

Status in neutron:
  Invalid

Bug description:
  vagrant@vagrant-ubuntu-wily-32:/opt/stack/logs$ pkill l3

  
  Log:

  2016-01-27 20:45:39.152 14651 DEBUG neutron.agent.linux.utils [-] Exit code: 0 execute /opt/stack/neutron/neutron/agent/linux/utils.py:142
  2016-01-27 20:45:50.012 DEBUG oslo_concurrency.lockutils [req-3c4051c2-ceb6-404b-9086-ccc17569bfdd None None] Acquired semaphore "singleton_lock" lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
  2016-01-27 20:45:50.014 DEBUG oslo_concurrency.lockutils [req-3c4051c2-ceb6-404b-9086-ccc17569bfdd None None] Releasing semaphore "singleton_lock" lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2016-01-27 20:45:50.015 14651 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to consume message from queue: 'NoneType' object has no attribute 'info'
  2016-01-27 20:45:50.015 14651 ERROR root [-] Unexpected exception occurred 1 time(s)... retrying.
  2016-01-27 20:45:50.015 14651 ERROR root Traceback (most recent call last):
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 234, in wrapper
  2016-01-27 20:45:50.015 14651 ERROR root     return infunc(*args, **kwargs)
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_executors/impl_pooledexecutor.py", line 98, in _runner
  2016-01-27 20:45:50.015 14651 ERROR root     prefetch_size=self.dispatcher.batch_size)
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/base.py", line 45, in wrapper
  2016-01-27 20:45:50.015 14651 ERROR root     msg = func(in_self, timeout=watch.leftover(return_none=True))
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 212, in poll
  2016-01-27 20:45:50.015 14651 ERROR root     self.conn.consume(timeout=timeout)
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 904, in consume
  2016-01-27 20:45:50.015 14651 ERROR root     error_callback=_error_callback)
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 689, in ensure
  2016-01-27 20:45:50.015 14651 ERROR root     ret, channel = autoretry_method()
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/kombu/connection.py", line 449, in _ensured
  2016-01-27 20:45:50.015 14651 ERROR root     errback and errback(exc, 0)
  2016-01-27 20:45:50.015 14651 ERROR root   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py", line 625, in on_error
  2016-01-27 20:45:50.015 14651 ERROR root     info.update(self.connection.info())
  2016-01-27 20:45:50.015 14651 ERROR root AttributeError: 'NoneType' object has no attribute 'info'
  2016-01-27 20:45:50.015 14651 ERROR root
  2016-01-27 20:45:50.021 INFO oslo_rootwrap.client [req-3c4051c2-ceb6-404b-9086-ccc17569bfdd None None] Stopping rootwrap daemon process with pid=14684

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677047] Re: glance download fsync raises EINVAL for FIFOs

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/451094
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=641798f75f50e7b4db3c7a8ccefcb8b590228893
Submitter: Jenkins
Branch: master

commit 641798f75f50e7b4db3c7a8ccefcb8b590228893
Author: Eric Fried 
Date:   Tue Mar 28 17:13:09 2017 -0500

Glance download: only fsync files

Recent changes [1][2] added fsync to the data file in
GlanceImageServiceV2.download.  This raises EINVAL if the file is a
pipe/FIFO or socket [3].

This change set adds a static _safe_fsync method to GlanceImageServiceV2
which conditions the fsync call not to run if the file handle represents
a pipe/FIFO or socket, and uses that call from the download method.

[1] https://review.openstack.org/#/c/441246/
[2] https://review.openstack.org/#/c/443583/
[3] http://man7.org/linux/man-pages/man2/fsync.2.html#ERRORS

Change-Id: Ied5788deadcf3d1336a48288cf49d8571db23659
Closes-Bug: #1677047
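
For illustration, a minimal sketch of the conditional-fsync behaviour the
commit describes (a sketch, not the exact nova code):

    import os
    import stat

    def _safe_fsync(fh):
        # fsync raises EINVAL for pipes/FIFOs and sockets, so only
        # sync handles that do not refer to one of those.
        fileno = fh.fileno()
        mode = os.fstat(fileno).st_mode
        if not (stat.S_ISFIFO(mode) or stat.S_ISSOCK(mode)):
            os.fsync(fileno)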


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1677047

Title:
  glance download fsync raises EINVAL for FIFOs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The nova.image.glance.GlanceImageServiceV2.download method recently added 
fsync [1][2] before closing the download file.

  Some hypervisors don't use regular files for download.  For example,
  PowerVM uses a FIFO pipe, the other end of which is read by a service
  that offloads the image data to a remote node.

  fsync on a pipe, FIFO, or socket errors with EINVAL [3].

  [1] https://review.openstack.org/#/c/441246/
  [2] https://review.openstack.org/#/c/443583/
  [3] http://man7.org/linux/man-pages/man2/fsync.2.html#ERRORS

  Steps to reproduce
  ==
  Invoke nova.image.glance.GlanceImageServiceV2.download with data=None, 
dst_path=path where path represents a FIFO or socket.

  Expected result
  ===
  Successful transfer of data through the FIFO/socket.

  Actual result
  =
  An exception similar to the following:

File 
"/usr/local/lib/python2.7/dist-packages/pypowervm/internal_utils/thread_utils.py",
 line 34, in future_func
  return func(*args, **kwargs)
File "/opt/stack/nova/nova/virt/powervm/disk/ssp.py", line 161, in upload
  IMAGE_API.download(context, image_meta.id, dest_path=path)
File "/opt/stack/nova/nova/image/api.py", line 184, in download
  dst_path=dest_path)
File "/opt/stack/nova/nova/image/glance.py", line 387, in download
  os.fsync(data.fileno())
  OSError: [Errno 22] Invalid argument

  Immutable reference to the offending fsync call:
  
https://github.com/openstack/nova/blob/640b152004fe3d9c43c26538809c3ac796f20eba/nova/image/glance.py#L375

  Environment
  ===
  devstack, pike, with the nova tree at this in-flight patch set: 
https://review.openstack.org/#/c/443189/15

  Ubuntu 16.04.1 LTS running on PowerVM NovaLink, using Shared Storage
  Pools through a single VIOS.

  No networking.

  Logs & Configs
  ==
  Available on request if needed.  This is a snap to reproduce.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1677047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715437] [NEW] No docs for 'emulator_threads_policy'

2017-09-06 Thread Stephen Finucane
Public bug reported:

There is no documentation for the 'emulator_threads_policy' flavor extra
spec. This should be included in [1].

[1] https://docs.openstack.org/nova/pike/admin/flavors.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715437

Title:
  No docs for 'emulator_threads_policy'

Status in OpenStack Compute (nova):
  New

Bug description:
  There is no documentation for the 'emulator_threads_policy' flavor
  extra spec. This should be included in [1].

  [1] https://docs.openstack.org/nova/pike/admin/flavors.html
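
  For context, the extra spec is set on a flavor like any other property
  (an illustrative command; 'isolate' is the value the libvirt driver
  supports at the time of writing, generally combined with pinned CPUs):

    openstack flavor set m1.large --property hw:emulator_threads_policy=isolate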

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705450] Re: Nova doesn't pass the conf object to oslo_reports

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/485575
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f1de38c26fcd88eb88b8792eaa1651c07fc40485
Submitter: Jenkins
Branch: master

commit f1de38c26fcd88eb88b8792eaa1651c07fc40485
Author: AlexMuresan 
Date:   Thu Jul 20 13:59:05 2017 +0300

Pass config object to oslo_reports

oslo_reports accepts a few config options that cannot be used at
the moment since nova does not pass the config object.

This change ensures that we properly set up oslo_reports when
starting the nova services.

Change-Id: Iacdca85402647861984405a4c7246f117eee
Closes-Bug: #1705450
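
For illustration, a hedged sketch of the kind of call the fix implies
(the exact call sites in nova are an assumption here):

    from oslo_reports import guru_meditation_report as gmr

    from nova import config
    from nova import version

    # Passing conf= lets operators use oslo_reports options such as the
    # file-based trigger instead of the default signal handlers.
    gmr.TextGuruMeditation.setup_autorun(version, conf=config.CONF)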


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705450

Title:
  Nova doesn't pass the conf object to oslo_reports

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  oslo_reports accepts a few config options that cannot be used at the
  moment since nova does not pass the config object.

  For example: one may want to use the file trigger feature, which has
  to be configured and is not possible at the moment. This especially
  affects Windows, in which case we cannot use the default signals.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1705450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688054] Re: Flavors in Administrator Guide - confusing description for rxtx factor

2017-09-06 Thread Stephen Finucane
These docs are now part of nova, so I'm retargeting this bug
accordingly.

** Project changed: openstack-manuals => nova

** Changed in: nova
 Assignee: (unassigned) => Stephen Finucane (stephenfinucane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1688054

Title:
  Flavors in Administrator Guide - confusing description for rxtx factor

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  - [x] This doc is inaccurate in this way: __

  The RXTX Factor description currently states:

  "Optional property allows created servers to have a different
  bandwidth cap than that defined in the network they are attached to.
  This factor is multiplied by the rxtx_base property of the network.
  Default value is 1.0. That is, the same as attached network. This
  parameter is only available for Xen or NSX based systems."

  The compute API reference has a better and more accurate description:

  https://developer.openstack.org/api-ref/compute/?expanded=create-
  flavor-detail#create-flavor

  "The receive / transmit factor (as a float) that will be set on ports
  if the network backend supports the QOS extension. Otherwise it will
  be ignored. It defaults to 1.0."

  The admin guide description is really talking about nova-network and
  the xen virt driver, which is not untrue, but is a bit confusing (I
  don't know where the NSX part comes from).

  But the way this is used with neutron in nova is on the port if the
  QOS extension is enabled. Nova will likely deprecate this field in the
  flavor resource since nova-network is deprecated and if you're doing
  QOS on ports you should be doing that via the networking service, not
  the compute service flavors.

  ---
  Release: 15.0.0 on 2017-05-03 11:19
  SHA: 991820bc90e3f08a7ddfd1a649bc78a12a9406ab
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/compute-flavors.rst
  URL: https://docs.openstack.org/admin-guide/compute-flavors.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1688054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714358] Re: ds-identify does not find CloudStack datasource

2017-09-06 Thread Swen Brueseke
We recreated the image with the newest version of cloud-init and now it
is working!

** Changed in: cloud-init
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1714358

Title:
  ds-identify does not find CloudStack datasource

Status in cloud-init:
  Fix Released

Bug description:
  We are using CloudStack with XenServer as the hypervisor and we are
  getting this:

  Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-81-generic x86_64)

   * Documentation:  https://help.ubuntu.com
   * Management: https://landscape.canonical.com
   * Support:https://ubuntu.com/advantage
  **
  # A new feature in cloud-init identified possible datasources for#
  # this system as:#
  #   []   #
  # However, the datasource used was: CloudStack   #
  ##
  # In the future, cloud-init will only attempt to use datasources that#
  # are identified or specifically configured. #
  # For more information see   #
  #   https://bugs.launchpad.net/bugs/1669675  #
  ##
  # If you are seeing this message, please file a bug against  #
  # cloud-init at  #
  #https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid  #
  # Make sure to include the cloud provider your instance is   #
  # running on.#
  ##
  # After you have filed a bug, you can disable this warning by launching  #
  # your instance with the cloud-config below, or putting that content #
  # into /etc/cloud/cloud.cfg.d/99-warnings.cfg#
  ##
  # #cloud-config  #
  # warnings:  #
  #   dsid_missing_source: off #
  **

  Disable the warnings above by:
touch /home/ubuntu/.cloud-warnings.skip
  or
touch /var/lib/cloud/instance/warnings/.skip

  
  This is our config in /etc/cloud/cloud.cfg.d/99_cloudstack.cfg:
  datasource:
CloudStack: {}
None: {}
  datasource_list: [ CloudStack ]

  this is the output of /run/cloud-init/ds-identify.log:
  [up 3.77s] ds-identify
  policy loaded: mode=report report=false found=all maybe=all notfound=enabled
  /etc/cloud/cloud.cfg.d/99_cloudstack.cfg set datasource_list: [ CloudStack ]
  DMI_PRODUCT_NAME=HVM domU
  DMI_SYS_VENDOR=Xen
  DMI_PRODUCT_SERIAL=75c58df9-e2b6-8139-c697-7d93c287a1e7
  DMI_PRODUCT_UUID=75C58DF9-E2B6-8139-C697-7D93C287A1E7
  PID_1_PRODUCT_NAME=unavailable
  DMI_CHASSIS_ASSET_TAG=
  FS_LABELS=
  KERNEL_CMDLINE=BOOT_IMAGE=/boot/vmlinuz-4.4.0-81-generic 
root=UUID=3f377544-33e0-4408-b498-72fca4233a00 ro vga=0x318 
console=ttyS0,115200n8 console=hvc0 consoleblank=0 elevator=deadline 
biosdevname=0 net.ifnames=0
  VIRT=xen
  UNAME_KERNEL_NAME=Linux
  UNAME_KERNEL_RELEASE=4.4.0-81-generic
  UNAME_KERNEL_VERSION=#104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017
  UNAME_MACHINE=x86_64
  UNAME_NODENAME=swen-test-ubuntu1604
  UNAME_OPERATING_SYSTEM=GNU/Linux
  DSNAME=
  DSLIST=CloudStack
  MODE=report
  ON_FOUND=all
  ON_MAYBE=all
  ON_NOTFOUND=enabled
  pid=197 ppid=188
  is_container=false
  single entry in datasource_list (CloudStack) use that.
  [up 3.83s] returning 0

  this is the output of /run/cloud-init/cloud.cfg:
  di_report:
datasource_list: [ CloudStack, None ]

  cloud-init version is: 0.7.9-153-g16a7302f-0ubuntu1~16.04.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1714358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688054] [NEW] Flavors in Administrator Guide - confusing description for rxtx factor

2017-09-06 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

- [x] This doc is inaccurate in this way: __

The RXTX Factor description currently states:

"Optional property allows created servers to have a different bandwidth
cap than that defined in the network they are attached to. This factor
is multiplied by the rxtx_base property of the network. Default value is
1.0. That is, the same as attached network. This parameter is only
available for Xen or NSX based systems."
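
As a concrete illustration (hypothetical numbers): with a network whose
rxtx_base is 1000 and a flavor whose rxtx_factor is 2.0, servers attached
to that network get a bandwidth cap of 2000, in whatever units the
backend applies.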

The compute API reference has a better and more accurate description:

https://developer.openstack.org/api-ref/compute/?expanded=create-flavor-
detail#create-flavor

"The receive / transmit factor (as a float) that will be set on ports if
the network backend supports the QOS extension. Otherwise it will be
ignored. It defaults to 1.0."

The admin guide description is really talking about nova-network and the
xen virt driver, which is not untrue, but is a bit confusing (I don't
know where the NSX part comes from).

But the way this is used with neutron in nova is on the port if the QOS
extension is enabled. Nova will likely deprecate this field in the
flavor resource since nova-network is deprecated and if you're doing QOS
on ports you should be doing that via the networking service, not the
compute service flavors.

---
Release: 15.0.0 on 2017-05-03 11:19
SHA: 991820bc90e3f08a7ddfd1a649bc78a12a9406ab
Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/admin-guide/source/compute-flavors.rst
URL: https://docs.openstack.org/admin-guide/compute-flavors.html

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: admin-guide compute flavors low-hanging-fruit
-- 
Flavors in Administrator Guide - confusing description for rxtx factor
https://bugs.launchpad.net/bugs/1688054
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714769] Re: quota_details is broken for CountableResource provided by plugins other than the core plugin

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500367
Committed: 
https://git.openstack.org/cgit/openstack/networking-ovn/commit/?id=dc11f5cedbd2b51f75c944e16a91bd94e053f53a
Submitter: Jenkins
Branch: master

commit dc11f5cedbd2b51f75c944e16a91bd94e053f53a
Author: Numan Siddique 
Date:   Sun Sep 3 19:20:19 2017 +0530

Track router and floatingip quota usage using TrackedResource

Presently these resources are created as CountableResource. The
newly added quota_details extension is not handling the countable
resources properly. See the bug description for more details.
It's in any case better to create them as tracked resources. Please see
[1] for more details on tracked resources.

[1] - 
https://docs.openstack.org/neutron/latest/contributor/internals/quota.html
Closes-bug: #1714769
Change-Id: I50dcb05d3d58ee6c23e59861d76da3c3ef83022b


** Changed in: networking-ovn
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714769

Title:
  quota_details is broken for CountableResource provided by plugins
  other than the core plugin

Status in networking-midonet:
  In Progress
Status in networking-ovn:
  Fix Released
Status in neutron:
  Confirmed

Bug description:
  The neutron tempest API test -
  neutron.tests.tempest.api.admin.test_quotas.QuotasTest.test_detail_quotas
  calls the API - ""GET /v2.0/quotas/{tenant_id}/details" which is
  failing with the below logs in the neutron server

  INFO neutron.pecan_wsgi.hooks.translation [None 
req-64308681-f568-4dea-961b-5c9de579ac7e admin admin] GET failed (client 
error): The resource could not be found.
  INFO neutron.wsgi [None req-64308681-f568-4dea-961b-5c9de579ac7e admin admin] 
10.0.0.7 "GET /v2.0/quotas/ff5c5121117348df94aa181d3504375b/detail HTTP/1.1" 
status: 404  len: 309 time: 0.0295429
  ERROR neutron.api.v2.resource [None req-b1b677cd-73b1-435d-bcc4-845dfa713046 
admin admin] details failed: No details.: AttributeError: 'Ml2Plugin' object 
has no attribute 'get_floatingips'
  ERROR neutron.api.v2.resource Traceback (most recent call last):
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 98, in resource
  ERROR neutron.api.v2.resource result = method(request=request, **args)
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/quotasv2_detail.py", line 56, in details
  ERROR neutron.api.v2.resource self._get_detailed_quotas(request, id)}
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/quotasv2_detail.py", line 46, in 
_get_detailed_quotas
  ERROR neutron.api.v2.resource resource_registry.get_all_resources(), 
tenant_id)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 163, in wrapped
  ERROR neutron.api.v2.resource return method(*args, **kwargs)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 93, in wrapped
  ERROR neutron.api.v2.resource setattr(e, '_RETRY_EXCEEDED', True)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 89, in wrapped
  ERROR neutron.api.v2.resource return f(*args, **kwargs)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 150, in wrapper
  ERROR neutron.api.v2.resource ectxt.value = e.inner_exc
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
  ERROR neutron.api.v2.resource return f(*args, **kwargs)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 128, in wrapped
  ERROR neutron.api.v2.resource LOG.debug("Retry wrapper got retriable 
exception: %s", e)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1668542] Re: nova.conf - az configuration options in Configuration Reference

2017-09-06 Thread Andreas Jaeger
** No longer affects: openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1668542

Title:
  nova.conf - az configuration options in Configuration Reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: __see below
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  

  The descriptions of default_availability_zone and
  default_schedule_zone are confusing, they seem to serve the same
  purpose and it is not clear how they differ.

  Looking at the code a bit, the text for default_schedule_zone is even
  wrong: it does not affect the scheduler (at least not directly), but
  is used in the "create server" API call in case the original request
  did not specify an availability_zone.

  The default_availability_zone, in contrast, seems to be used to
  determine what the az for a compute host will be if it is not set by
  other means.

  It would be nice if someone from Nova team could confirm this before
  we start updating the docs.
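
  For illustration, a sketch of the two options as this report interprets
  them (values and comments are illustrative, not authoritative):

    [DEFAULT]
    # az assigned to a compute host when not set by other means
    default_availability_zone = nova
    # az used for a new server when the create request omits one
    default_schedule_zone = nova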

  ---
  Release: 0.9 on 2017-02-28 05:45
  SHA: f8b8c1c2f797d927274c6b005dffb4acb18b3a6e
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/compute/config-options.rst
  URL: 
https://docs.openstack.org/draft/config-reference/compute/config-options.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1668542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668542] Re: nova.conf - az configuration options in Configuration Reference

2017-09-06 Thread Stephen Finucane
Removing openstack-manuals since the docs are maintained in the nova
tree as of Pike, and this is fixed there.

** Changed in: openstack-manuals
   Status: Triaged => Invalid

** Changed in: openstack-manuals
 Assignee: foundjem (foundjem-devops) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1668542

Title:
  nova.conf - az configuration options in Configuration Reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: __see below
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  

  The descriptions of default_availability_zone and
  default_schedule_zone are confusing, they seem to serve the same
  purpose and it is not clear how they differ.

  Looking at the code a bit, the text for default_schedule_zone is even
  wrong: it does not affect the scheduler (at least not directly), but
  is used in the "create server" API call in case the original request
  did not specify an availability_zone.

  The default_availability_zone, in contrast, seems to be used to
  determine what the az for a compute host will be if it is not set by
  other means.

  It would be nice if someone from Nova team could confirm this before
  we start updating the docs.

  ---
  Release: 0.9 on 2017-02-28 05:45
  SHA: f8b8c1c2f797d927274c6b005dffb4acb18b3a6e
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/compute/config-options.rst
  URL: 
https://docs.openstack.org/draft/config-reference/compute/config-options.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1668542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714416] Re: Incorrect response returned for invalid Accept header

2017-09-06 Thread Brian Rosmaita
Glance is behaving within acceptable parameters of RFC 7231 ("HTTP/1.1
Semantics and Content") on this [0].

If we suddenly begin enforcing this, we're likely to break currently
working clients who specify "Accept: aplication/json".  On one hand,
it's their fault for having a typo, but on the other hand, it's not
clear to me what we gain by enforcing the 406.  I think we want to err
on the side of backward compatibility.


[0] https://tools.ietf.org/html/rfc7231#section-5.3.2 (see the fifth paragraph 
counting backwards from the end of the section)

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714416

Title:
  Incorrect response returned for invalid Accept header

Status in Cinder:
  Won't Fix
Status in Glance:
  Invalid
Status in OpenStack Heat:
  New
Status in OpenStack Identity (keystone):
  New
Status in masakari:
  Won't Fix
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  As of now, when a user passes an 'Accept' header other than JSON or
  XML in a request (e.g. via the curl command), the API returns a 200 OK
  response with JSON-formatted data.

  The api-ref guide [1] also does not clearly state what response should
  be returned if an invalid value for the 'Accept' header is specified.
  IMO it should return an 'HTTP 406 Not Acceptable' response instead of
  'HTTP 200 OK'.

  Steps to reproduce:
   
  Request:
  curl -g -i -X GET 
http://controller/volume/v2/c72e66cc4f1341f381e0c2eb7b28b443/volumes/detail -H 
"User-Agent: python-cinderclient" -H "Accept: application/abc" -H 
"X-Auth-Token: cd85aff745ce4dc0a04f686b52cf7e4f"
   
   
  Response:
  HTTP/1.1 200 OK
  Date: Thu, 31 Aug 2017 07:12:18 GMT
  Server: Apache/2.4.18 (Ubuntu)
  x-compute-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Content-Type: application/json
  Content-Length: 2681
  x-openstack-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Connection: close
   
  [1] 
https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1714416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715395] [NEW] FWaaS: Firewall creation fails in case of distributed routers (Pike)

2017-09-06 Thread Jens Offenbach
Public bug reported:

I have manually set up a fresh OpenStack Pike HA environment based on
Ubuntu 16.04.3 in conjunction with DVR. Firewall creation works for
centralized routers, but when a firewall gets attached to a distributed
router, the firewall gets stuck in "PENDING_UPDATE". The log file
contains the following exception:

2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server 
[req-28e7a23e-fa55-4358-9977-c1db08435624 dddfba8e02f746799a6408a523e6cd25 
ed2d2efd86dd40e7a45491d8502318d3 - - -] Exception during message handling: 
AttributeError: 'DvrEdgeHaRouter' object has no attribute 'dist_fip_count'
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in 
_process_incoming
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, 
in dispatch
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _do_dispatch
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/oslo_log/helpers.py", line 67, in wrapper
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server return 
method(*args, **kwargs)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent.py",
 line 284, in create_firewall
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server firewall)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/drivers/linux/iptables_fwaas.py",
 line 89, in create_firewall
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server 
self._setup_firewall(agent_mode, apply_list, firewall)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/drivers/linux/iptables_fwaas.py",
 line 195, in _setup_firewall
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server agent_mode, 
router_info)
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python2.7/dist-packages/neutron_fwaas/services/firewall/drivers/linux/iptables_fwaas.py",
 line 119, in _get_ipt_mgrs_with_if_prefix
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server if 
router_info.dist_fip_count:
2017-09-06 13:58:29.572 22581 ERROR oslo_messaging.rpc.server AttributeError: 
'DvrEdgeHaRouter' object has no attribute 'dist_fip_count'
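
For illustration, a minimal defensive sketch of the failing access (a
hedged workaround idea, not the actual fix):

    def _has_dist_fips(router_info):
        # DvrEdgeHaRouter lacks the dist_fip_count attribute; a guarded
        # lookup avoids the AttributeError seen in the traceback above.
        return bool(getattr(router_info, 'dist_fip_count', 0))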

Some version information:
$ pip list | grep neutron
neutron (11.0.0)
neutron-fwaas (11.0.0)
neutron-fwaas-dashboard (1.0.1.dev1)
neutron-lbaas (11.0.0)
neutron-lbaas-dashboard (3.0.1)
neutron-lib (1.9.1)

##
l3_agent.ini
##

[DEFAULT]
agent_mode = dvr_snat
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

[agent]
extensions = fwaas

[fwaas]
agent_version = v1
driver = iptables
enabled = true

##
neutron.conf
##

[DEFAULT]
allow_overlapping_ips = true
auth_strategy = keystone
base_mac = 02:05:69:00:00:00
bind_host = 10.30.200.101
bind_port = 9696
core_plugin = ml2
debug = false
default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=WARN,oslo.messaging=WARN,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=WARN,dogpile.core.dogpile=WARN,oslo_service=WARN,neutron=WARN
dhcp_agents_per_network = 2
dns_domain = openstack.mycompany.com.
dvr_base_mac = 0A:05:69:00:00:00
endpoint_type = internalURL
host = os-network01
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
l3_ha = true
l3_ha_net_cidr = 169.254.192.0/18
log_dir = /var/log/neutron
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2
notify_nova_on_port_data_changes = true
notify_nova_on_port_status_changes = true
router_distributed = true
service_plugins = router,firewall,qos,lbaasv2
state_path = /var/lib/neutron
transport_url = 
rabbit://neutron:neutronpass@os-rabbit01:5672,neutron:neutronpass@os-rabbit02:5672/openstack

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap 

[Yahoo-eng-team] [Bug 1705683] Re: Leaked resources after cold migration

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/486955
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=20196b74dea4a1af4ea643a45fdb96b03f8ea96d
Submitter: Jenkins
Branch: master

commit 20196b74dea4a1af4ea643a45fdb96b03f8ea96d
Author: Alexandru Muresan 
Date:   Tue Jul 25 12:18:27 2017 +0300

Hyper-V: Perform proper cleanup after cold migration

At the moment, vif ports and volume connections are not cleaned up
on the source node after a cold migration.

This change addresses this issue by passing the network and block
device info objects when destroying the instance.

Change-Id: I4fd61a6ac09f194ad8be61e6dda092bfd402b806
Closes-Bug: #1705683


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1705683

Title:
  Leaked resources after cold migration

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Fix Committed

Bug description:
  At the moment, the vif ports are not unplugged after cold migration on
  the source node.

  This affects ovs ports, which have to be unplugged by nova.

  At the same time, attached volumes are not disconnected from the
  source node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1705683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715386] [NEW] [RFE]Support policy routing based on subnet

2017-09-06 Thread zhaobo
Public bug reported:

In a real data center there may be several external gateways for
reaching external networks, and VM instances can access all of them at
L3. Each gateway may come from a different network provider, with
different network performance (such as stability, bandwidth, speed). A
cloud admin may want the VMs in a specific subnet of a network to route
their traffic to the gateway with the best connection, while all other
traffic keeps passing through the original gateway. So we need to route
the specified traffic to another gateway.

For OpenStack/neutron, that means policy routing based on subnet: each
subnet gets an individual route table and may have a different nexthop,
allowing more flexible routing. A tenant user may then have one network
with several subnets in it, where some subnets only need to reach the
internal company network while the others need to reach the Internet.
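
For illustration, this maps onto Linux policy routing roughly as follows
(addresses and table numbers are hypothetical):

    # route traffic sourced from one subnet through its own table/nexthop
    ip rule add from 10.0.1.0/24 table 101
    ip route add default via 192.0.2.1 table 101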

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715386

Title:
  [RFE]Support policy routing based on subnet

Status in neutron:
  New

Bug description:
  In a real data center there may be several external gateways for
  reaching external networks, and VM instances can access all of them
  at L3. Each gateway may come from a different network provider, with
  different network performance (such as stability, bandwidth, speed).
  A cloud admin may want the VMs in a specific subnet of a network to
  route their traffic to the gateway with the best connection, while
  all other traffic keeps passing through the original gateway. So we
  need to route the specified traffic to another gateway.

  For OpenStack/neutron, that means policy routing based on subnet:
  each subnet gets an individual route table and may have a different
  nexthop, allowing more flexible routing. A tenant user may then have
  one network with several subnets in it, where some subnets only need
  to reach the internal company network while the others need to reach
  the Internet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715380] [NEW] [RFE] Need QoS function based on public IPs

2017-09-06 Thread zhaobo
Public bug reported:

We use public IP addresses for SNAT, VPN service, and DNAT. SNAT/VPN now
need to support rate limiting for access to the external network or for
connections across OpenStack deployments; I think [1] will meet this
requirement. DNAT/port forwarding [2] also needs QoS for the same reason,
which is saving bandwidth so that other users are not affected.

[1]http://specs.openstack.org/openstack/neutron-specs/specs/pike/layer-3-rate-limit.html
[2]https://review.openstack.org/#/c/470596/4/specs/pike/port-forwarding.rst

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715380

Title:
  [RFE] Need QoS function based on public IPs

Status in neutron:
  New

Bug description:
  We use public IP addresses for SNAT, VPN service, and DNAT. SNAT/VPN
  now need to support rate limiting for access to the external network
  or for connections across OpenStack deployments; I think [1] will
  meet this requirement. DNAT/port forwarding [2] also needs QoS for
  the same reason, which is saving bandwidth so that other users are
  not affected.

  [1]http://specs.openstack.org/openstack/neutron-specs/specs/pike/layer-3-rate-limit.html
  [2]https://review.openstack.org/#/c/470596/4/specs/pike/port-forwarding.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715374] [NEW] Reloading compute with SIGHUP prevents instances from booting

2017-09-06 Thread sahid
Public bug reported:

Booting a new instance on a compute node where nova-compute has received
SIGHUP (the SIGHUP is used as a trigger for reloading mutable options)
always fails.

  == nova/compute/manager.py ==
    def cancel_all_events(self):
        if self._events is None:
            LOG.debug('Unexpected attempt to cancel events during shutdown.')
            return
        our_events = self._events
        # NOTE(danms): Block new events
        self._events = None  <--- Set self._events to "None"
        ...
=

  This causes a NovaException when prepare_for_instance_event() is
  called, which is why network allocation fails.

== nova/compute/manager.py ==
def prepare_for_instance_event(self, instance, event_name):
    ...
    if self._events is None:
        # NOTE(danms): We really should have a more specific error
        # here, but this is what we use for our default error case
        raise exception.NovaException('In shutdown, no new events '
                                      'can be scheduled')
=
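
For illustration, a hedged sketch of the kind of re-initialisation that
would avoid this (hypothetical; the actual fix may differ):

    def reset(self):
        # After a SIGHUP-triggered shutdown, recreate the event dict so
        # prepare_for_instance_event() can schedule new events again.
        if self._events is None:
            self._events = {}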

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715374

Title:
  Reloading compute with SIGHUP prevents instances from booting

Status in OpenStack Compute (nova):
  New

Bug description:
  Booting a new instance on a compute node where nova-compute has
  received SIGHUP (the SIGHUP is used as a trigger for reloading mutable
  options) always fails.

== nova/compute/manager.py ==
    def cancel_all_events(self):
        if self._events is None:
            LOG.debug('Unexpected attempt to cancel events during shutdown.')
            return
        our_events = self._events
        # NOTE(danms): Block new events
        self._events = None  <--- Set self._events to "None"
        ...
  =

    This causes a NovaException when prepare_for_instance_event() is
    called, which is why network allocation fails.

  == nova/compute/manager.py ==
  def prepare_for_instance_event(self, instance, event_name):
      ...
      if self._events is None:
          # NOTE(danms): We really should have a more specific error
          # here, but this is what we use for our default error case
          raise exception.NovaException('In shutdown, no new events '
                                        'can be scheduled')
  =

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715370] [NEW] Migration between DVR+HA and HA creating redundant "network:router_ha_interface" ports

2017-09-06 Thread venkata anil
Public bug reported:

When a router is migrated between DVR+HA and HA (i.e. DVR+HA->HA and
HA->DVR+HA), redundant "network:router_ha_interface" ports are created.

To reproduce the issue (a 2-node setup with "dvr" and "dvr-snat" modes is
sufficient), create a router dr1 in DVR+HA mode, then repeatedly flip this
router's DVR+HA and HA flags. You can see redundant
"network:router_ha_interface" ports.

I have a 2-node devstack setup, the 1st l3-agent in "dvr" mode and the
2nd in "dvr-snat" mode. Whenever the HA flag is set on a router, a port
with device_owner "network:router_ha_interface" should be created only
for the 2nd node, i.e. the l3 agent in "dvr-snat" mode.

Steps to reproduce:
1) create a network n1, and subnet on this network with name sn1
2) create a DVR+HA router(with name 'dr1'), attach it to sn1 through router 
interface add and set gateway(router-gateway-set public)
3) boot a vm on n1 and associate a floating ip
4) set admin-state to False i.e neutron router-update --admin-state-up False dr1
5) Now update the router to HA  i.e
   neutron router-update --distributed=False --ha=True 
   set admin-state to True
6) There will be two "network:router_ha_interface" ports, though one will be 
used by qrouter-xx namespace
7) Again update the router to DVR+HA
8) There will be three "network:router_ha_interface" ports, though one will be 
used by snat-xx namespace

I observed that the "network:router_ha_interface" port created first will
always be used by qrouter-xx (when the router is HA) or snat-xx (when the
router is DVR+HA), and the later-created ports are never used.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715370

Title:
  Migration between DVR+HA and HA creating redundant
  "network:router_ha_interface" ports

Status in neutron:
  New

Bug description:
  When a router is migrated between DVR+HA and HA (i.e. DVR+HA->HA and
  HA->DVR+HA), redundant "network:router_ha_interface" ports are
  created.

  To reproduce the issue (a 2-node setup with "dvr" and "dvr-snat" modes
  is sufficient), create a router dr1 in DVR+HA mode, then repeatedly
  flip this router's DVR+HA and HA flags. You can see redundant
  "network:router_ha_interface" ports.

  I have a 2-node devstack setup, the 1st l3-agent in "dvr" mode and the
  2nd in "dvr-snat" mode. Whenever the HA flag is set on a router, a port
  with device_owner "network:router_ha_interface" should be created only
  for the 2nd node, i.e. the l3 agent in "dvr-snat" mode.

  Steps to reproduce:
  1) create a network n1, and subnet on this network with name sn1
  2) create a DVR+HA router(with name 'dr1'), attach it to sn1 through router 
interface add and set gateway(router-gateway-set public)
  3) boot a vm on n1 and associate a floating ip
  4) set admin-state to False i.e neutron router-update --admin-state-up False 
dr1
  5) Now update the router to HA  i.e
 neutron router-update --distributed=False --ha=True 
 set admin-state to True
  6) There will be two "network:router_ha_interface" ports, though one will be 
used by qrouter-xx namespace
  7) Again update the router to DVR+HA
  8) There will be three "network:router_ha_interface" ports, though one will 
be used by snat-xx namespace

  I observed that the "network:router_ha_interface" port created first
  will always be used by qrouter-xx (when the router is HA) or snat-xx
  (when the router is DVR+HA), and the later-created ports are never
  used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715371] [NEW] _ensure_vr_id_and_network is not used anywhere in the code

2017-09-06 Thread venkata anil
Public bug reported:

_ensure_vr_id_and_network is not used anywhere in the code, hence can be
removed.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715371

Title:
  _ensure_vr_id_and_network is not used anywhere in the code

Status in neutron:
  New

Bug description:
  _ensure_vr_id_and_network is not used anywhere in the code, hence can
  be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715194] Re: floating ips not reachable with linuxbridge agent

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/500927
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f1b43395e787e8f6d91436bec42f79a6ea0858bd
Submitter: Jenkins
Branch: master

commit f1b43395e787e8f6d91436bec42f79a6ea0858bd
Author: Stefan Nica 
Date:   Tue Sep 5 18:55:43 2017 +0200

linuxbridge-agent: add missing sysctl rootwrap entry

Sysctl was missing from the linuxbridge plugin rootwrap
configuration file. This was causing failures in the
linuxbridge agent when networks are created:

Rootwrap error running command: ['sysctl', '-w', 
'net.ipv6.conf.eth0/557.disable_ipv6=1']:

NOTE: this bug was hidden by the fact that sysctl was
covered by the iptables-firewall.filters until recently,
when it was removed (see https://review.openstack.org/#/c/436315/).

Change-Id: Id20175df30d4d6039fb42e722d03f39521f6a499
Closes-Bug: #1715194
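
For reference, the kind of rootwrap filter entry the fix adds looks like
this (a sketch; the exact entry lives in the linuxbridge plugin's
rootwrap filters file):

    [Filters]
    sysctl: CommandFilter, sysctl, root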


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715194

Title:
  floating ips not reachable with linuxbridge agent

Status in neutron:
  Fix Released

Bug description:
  The floating IPs of instances are not reachable when linuxbridge is
  used as the ML2 mechanism driver. The vlan subinterfaces on compute
  nodes are DOWN and the linuxbridge-agent logs exhibit errors such as:

  2017-09-05 13:58:56.625 30355 ERROR neutron.agent.linux.utils 
[req-f34b20bf-9c8c-41dd-a8f7-cf04379af6c3 - - - - -] Rootwrap error running 
command: ['sysctl', '-w', 'net.ipv6.conf.eth0/557.disable_ipv6=1']: 
RemoteError: 
  ---
  Unserializable message: Traceback (most recent call last):
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 288, in 
serve_client
  send(msg)
File "/usr/lib/python2.7/site-packages/oslo_rootwrap/jsonrpc.py", line 128, 
in send
  s = self.dumps(obj)
File "/usr/lib/python2.7/site-packages/oslo_rootwrap/jsonrpc.py", line 170, 
in dumps
  return json.dumps(obj, cls=RpcJSONEncoder).encode('utf-8')
File "/usr/lib64/python2.7/json/__init__.py", line 251, in dumps
  sort_keys=sort_keys, **kw).encode(obj)
File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
  chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
  return _iterencode(o, 0)
File "/usr/lib/python2.7/site-packages/oslo_rootwrap/jsonrpc.py", line 43, 
in default
  return super(RpcJSONEncoder, self).default(o)
File "/usr/lib64/python2.7/json/encoder.py", line 184, in default
  raise TypeError(repr(o) + " is not JSON serializable")
  TypeError: ValueError('I/O operation on closed file',) is not JSON 
serializable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714378] Re: Pecan is missing logic to add project_id to fields when tenant_id is specified

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499429
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bf36f8c934b49310103a1b1831c9f7c8a0d14adf
Submitter: Jenkins
Branch: master

commit bf36f8c934b49310103a1b1831c9f7c8a0d14adf
Author: Kevin Benton 
Date:   Wed Aug 30 20:05:30 2017 -0700

Pecan: set tenant_id field when project_id set

Add logic to pecan to add 'project_id' as a mandatory policy
field whenever 'tenant_id' is specified as a policy field.
This logic was added to the previous controller code to ensure
that 'project_id' was queried in the database whenever 'tenant_id'
was required by the policy engine and the user had fields set.

Closes-Bug: #1714378
Change-Id: I3652fbc50ce0c9a7cd1cc193e0933cf0373ecb54


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714378

Title:
  Pecan is missing logic to add project_id to fields when tenant_id is
  specified

Status in neutron:
  Fix Released

Bug description:
  Pecan is missing this logic from the old controller code, which adds
  'project_id' to the fields required by the policy engine when the
  'tenant_id' field is specified:
  
https://github.com/openstack/neutron/blob/8d2c1bd88b14eefbea74c72f384cb9952e7ee62e/neutron/api/v2/base.py#L96

  This is necessary when tenants request that only the tenant_id field
  is returned and we have a new class of resource that has a project_id
  field only.
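
  For illustration, a minimal sketch of the field-mangling logic
  described above (hypothetical helper name):

    def add_project_id(policy_fields):
        # When policy enforcement needs tenant_id, also fetch project_id,
        # since newer resources may only expose a project_id field.
        if 'tenant_id' in policy_fields and 'project_id' not in policy_fields:
            policy_fields.append('project_id')
        return policy_fields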

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714378/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715340] [NEW] [RFE] reduce the duration of network interruption during live migration in DVR scenario

2017-09-06 Thread zhaobo
Public bug reported:

Nova live migration has 3 stages:
1. pre_live_migration
2. migrating
3. post_live_migration
In the current implementation, nova plugs a new vif on the target host.
The ovs-agent on the target host processes this new vif and tries to
bring the port up there, but the port's host_id still points to the
source host, so the agent sends an RPC to the server and gets nothing
back.
Nova then performs the real migration in stage 2; if the flavor of the
instance is small, this can be very short. In stage 3, nova calls neutron
to update the port's host_id. This is when the network interruption
begins: during the whole live migration the VM status is always ACTIVE,
but users cannot log in to the VM, and the applications running in it go
offline for a while. The reason is that neutron switches the traffic over
too late. When nova migrates the instance to the target host and sets it
up via libvirt, the network plumbing provided by neutron is not ready
yet, which means we need to verify that both the L2 and L3 connections
are ready.

We tested in our production environment, which runs the old Mitaka
release (I still think the same issue exists on master). The interruption
time depends on the number of ports in the router's subnets, and on
whether the port is associated with a floating IP. With fewer than 20
ports, the interruption lasts <= 8 seconds, increasing by about 5 seconds
if the port is associated with a floating IP. With more than 20 ports,
the interruption lasts <= 30 seconds, again about 5 seconds longer with a
floating IP.

This is not acceptable in NFV scenarios or for some telecommunications
companies. Even though the spec [1] wants to pre-configure the network
during live migration, letting the migration and the network
configuration proceed asynchronously, the key issue is not solved: we
also need a mechanism like provisioning_blocks so that L2 and L3 are
processed in a synchronized way, and a way to let nova know when the work
is done in neutron, so nova can do the next step of the live migration.

[1]http://specs.openstack.org/openstack/neutron-
specs/specs/pike/portbinding_information_for_nova.html

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1714131] Re: pecan hooks break on pagination + ID filter

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/499426
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=41e6f02bf253e82fcc4baf4b77795ad831b58b43
Submitter: Jenkins
Branch:master

commit 41e6f02bf253e82fcc4baf4b77795ad831b58b43
Author: Kevin Benton 
Date:   Wed Aug 30 19:46:39 2017 -0700

Pecan: process filters at end of hook pipeline

Separate user-applied filter processing from policy-enforcement
filtering and move it to the end of the pipeline so users can't put
filters on the API request that impact the fields available to the
hooks.

This prevents a filter excluding the ID field from breaking
pagination.

Change-Id: I05f4582fb1e8809740d473e24fa54483e040a6c8
Closes-Bug: #1714131
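
A minimal sketch of the idea behind this fix (the function and its
parameters are illustrative, not neutron's actual code): policy enforcement
and pagination run while objects still carry all their fields, and the
user's field selection is applied only at the very end.

    def apply_hooks(objects, policy_filter, user_fields,
                    marker=None, limit=None):
        # Policy enforcement sees full objects.
        visible = [o for o in objects if policy_filter(o)]

        # Pagination needs 'id' to build the next-page link, so it must
        # run before the user's field selection can strip that key.
        if marker is not None:
            ids = [o['id'] for o in visible]
            visible = visible[ids.index(marker) + 1:]
        page = visible[:limit] if limit else visible
        next_marker = page[-1]['id'] if page else None

        # User-requested field selection happens last.
        if user_fields:
            page = [{k: o[k] for k in user_fields} for o in page]
        return page, next_marker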


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714131

Title:
  pecan hooks break on pagination + ID filter

Status in neutron:
  Fix Released

Bug description:
  The user filters are applied to the results before pagination in the
  pecan hook pipeline so if the user filters out the ID field, the
  pagination code will throw an exception trying to build the next link.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715317] [NEW] Hybrid bridge should permanently keep MAC entries

2017-09-06 Thread sahid
Public bug reported:

The linux bridge installed for the ovs-hybrid vif type should be configured
to persistently keep the MAC addresses learned from the RARP packets sent by
QEMU when it starts on the destination node. This avoids any break in the
datapath during a live migration.

This issue can be seen when using the opflex plugin:

  https://github.com/noironetworks/python-opflex-agent/commit/3163b9a2668f29dd1e52e9757b8c25ef48822765
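
A hedged sketch of one way to do this, assuming the qbr-prefixed hybrid
bridge naming and the brctl tool are available (the function name and bridge
name are illustrative, not nova's actual code): setting the bridge ageing
time to 0 makes learned MAC entries permanent.

    import subprocess

    def make_bridge_macs_permanent(bridge_name):
        # An ageing time of 0 tells the kernel bridge to keep learned MAC
        # entries permanently instead of expiring them, so the entry
        # learned from QEMU's RARP packets survives.
        subprocess.check_call(['brctl', 'setageing', bridge_name, '0'])

    # Hypothetical hybrid bridge name; real names are qbr plus a port-id
    # prefix.
    make_bridge_macs_permanent('qbr0123456789a')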

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715317

Title:
  Hybrid bridge should permanently keep MAC entries

Status in OpenStack Compute (nova):
  New

Bug description:
  The linux bridge installed for the particular vif type ovs-hybrid
  should be configured to persistently keep the MAC learned from the
  RARP packets sent by QEMU when starting on destination node. That to
  avoid any break of the datapath during a live-migration.

  That issue can be saying when using the opflex plugin.

https://github.com/noironetworks/python-opflex-
  agent/commit/3163b9a2668f29dd1e52e9757b8c25ef48822765

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715320] [NEW] Documentation link broken on API Extensions in neutron-lib

2017-09-06 Thread Emmanuel Zhao
Public bug reported:

An external hyperlink on a doc page is broken, on the following page:
https://docs.openstack.org/neutron-lib/latest/contributor/api_extensions.html#using-neutron-lib-s-base-extension-classes

Clicking the hyperlink (neutron api extension dev-ref) returns a 404 Not
Found error.
Please fix the hyperlink.
Thanks.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715320

Title:
  Documentation link broken on API Extensions in neutron-lib

Status in neutron:
  New

Bug description:
  An external hyperlink on a doc page is broken, on the following page:
  https://docs.openstack.org/neutron-lib/latest/contributor/api_extensions.html#using-neutron-lib-s-base-extension-classes

  Clicking the hyperlink (neutron api extension dev-ref) returns a 404 Not
  Found error.
  Please fix the hyperlink.
  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1712185] Re: iptables-restore calls fail acquiring 'xlock' with iptables from master

2017-09-06 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/495974
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a521bf0393d33d6e69f59900942404c2b5c84d83
Submitter: Jenkins
Branch:master

commit a521bf0393d33d6e69f59900942404c2b5c84d83
Author: Ihar Hrachyshka 
Date:   Mon Aug 21 12:15:25 2017 -0700

Make use of -w argument for iptables calls

Upstream iptables added support for the -w ('wait') argument to
iptables-restore. It makes the command grab an 'xlock' that guarantees
that no two iptables calls will mess up a table if called in parallel.
[This somewhat resembles what we try to achieve with a file lock we
grab in iptables manager's _apply_synchronized.]

If two processes call to iptables-restore or iptables in parallel, the
second call risks failing, returning error code = 4, and also printing
the following error:

Another app is currently holding the xtables lock. Perhaps you want
to use the -w option?

If we call to iptables / iptables-restore with -w though, it will wait
for the xlock release before proceeding, and won't fail.

Though the feature was added in iptables/master only and is not yet part
of an official iptables release, it was already backported to the RHEL 7.x
iptables package, and so we need to adapt to it. At the same time, we
can't expect every underlying platform to support the argument.

A solution here is to call iptables-restore with -w when a regular call
fails. Also, the patch adds -w to all iptables calls, in the iptables
manager as well as in ipset-cleanup.

Since we don't want to block the agent in case the current xlock owner
doesn't release it in reasonable time, we limit the time we wait to ~1/3 of
report_interval, to give the agent some time to recover without
triggering an expensive fullsync.

In the future, we may be able to get rid of our custom synchronization
lock that we use in iptables manager. But this will require all
supported platforms to get the feature in and will take some time.

Closes-Bug: #1712185
Change-Id: I94e54935df7c6caa2480eca19e851cb4882c0f8b
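
A minimal sketch of the fallback this commit describes (the function name
and timeout value are illustrative, not neutron's actual code): retry with
-w only when the first call fails with exit code 4, i.e. when another
process holds the xtables lock.

    import subprocess

    XLOCK_WAIT_SECONDS = 10  # illustrative; the patch derives its wait
                             # time from ~1/3 of report_interval

    def iptables_restore(rules_text):
        cmd = ['iptables-restore']
        try:
            subprocess.run(cmd, input=rules_text.encode(), check=True)
        except subprocess.CalledProcessError as e:
            # Exit code 4 signals the xtables lock is held; retry with -w
            # so iptables-restore waits for the lock instead of failing.
            if e.returncode != 4:
                raise
            subprocess.run(cmd + ['-w', str(XLOCK_WAIT_SECONDS)],
                           input=rules_text.encode(), check=True)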


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1712185

Title:
  iptables-restore calls fail acquiring 'xlock' with iptables from
  master

Status in neutron:
  Fix Released

Bug description:
  This happens when you use iptables that includes
  
https://git.netfilter.org/iptables/commit/?id=999eaa241212d3952ddff39a99d0d55a74e3639e
  (e.g. the one from the latest RHEL repos)

  
  neutron.tests.functional.agent.test_firewall.FirewallTestCase.test_established_connection_is_cut(IptablesFirewallDriver,without ipset)
  ----------------------------------------------------------------------

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "neutron/tests/functional/agent/test_firewall.py", line 113, in setUp
      self.firewall.prepare_port_filter(self.src_port_desc)
    File "neutron/agent/linux/iptables_firewall.py", line 204, in prepare_port_filter
      return self.iptables.apply()
    File "neutron/agent/linux/iptables_manager.py", line 432, in apply
      return self._apply()
    File "neutron/agent/linux/iptables_manager.py", line 440, in _apply
      first = self._apply_synchronized()
    File "neutron/agent/linux/iptables_manager.py", line 539, in _apply_synchronized
      '\n'.join(log_lines))
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "neutron/agent/linux/iptables_manager.py", line 518, in _apply_synchronized
      run_as_root=True)
    File "neutron/agent/linux/utils.py", line 156, in execute
      raise ProcessExecutionError(msg, returncode=returncode)
  neutron.agent.linux.utils.ProcessExecutionError: Exit code: 4; Stdin: # Generated by iptables_manager
  *filter
  :neutron-filter-top - [0:0]
  :run.py-FORWARD - [0:0]
  :run.py-INPUT - [0:0]
  :run.py-OUTPUT - [0:0]
  :run.py-it-veth0bc5 - [0:0]
  :run.py-local - [0:0]
  :run.py-ot-veth0bc5 - [0:0]
  :run.py-sg-chain - [0:0]
  :run.py-sg-fallback - [0:0]
  -I FORWARD 1 -j neutron-filter-top
  -I FORWARD 2 -j run.py-FORWARD
  -I INPUT 1 -j run.py-INPUT
  -I OUTPUT 1 -j neutron-filter-top
  -I OUTPUT 2 -j run.py-OUTPUT
  -I neutron-filter-top 1 -j run.py-local
  -I run.py-FORWARD 1 -m physdev --physdev-out test-veth0bc5b8