[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread zhangjialong
** Also affects: mistral
   Importance: Undecided
   Status: New

** Changed in: mistral
 Assignee: (unassigned) => zhangjialong (zhangjl)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  New
Status in Ironic:
  In Progress
Status in ironic-python-agent:
  In Progress
Status in OpenStack Identity (keystone):
  New
Status in Mistral:
  New
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  New

Bug description:
  OpenStack common (oslo) has a wrapper for generating UUIDs.

  For consistency, we should use only that wrapper function when
  generating UUIDs.
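As a self-contained sketch of what the common wrapper boils down to (the real helper is oslo's uuidutils.generate_uuid(), not imported here so the example runs standalone):

```python
import uuid

def generate_uuid():
    """Hedged stand-in for oslo's uuidutils.generate_uuid():
    one canonical string form for every caller."""
    return str(uuid.uuid4())

# Callers get a single canonical representation instead of scattering
# str(uuid.uuid4()) / uuid.uuid4().hex variants across projects.
port_id = generate_uuid()
assert len(port_id) == 36 and port_id.count('-') == 4
```

The point of routing all call sites through one function is that the canonical form (hyphenated, lowercase, string) is decided in exactly one place.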

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread guoshan
** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => guoshan (guoshan)

** Also affects: senlin
   Importance: Undecided
   Status: New

** Changed in: senlin
 Assignee: (unassigned) => guoshan (guoshan)



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread Xu Ao
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Xu Ao (xuao)



[Yahoo-eng-team] [Bug 1640059] [NEW] Placeholder not set on new glance image upload

2016-11-07 Thread Maxime
Public bug reported:

Back in Juno the "Create An Image" form used to have a placeholder
attribute on the "Image location" field. It is not there anymore in
trunk (probably since Kilo).

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Juno-Mitaka comparaison"
   
https://bugs.launchpad.net/bugs/1640059/+attachment/4774321/+files/Screen%20Shot%202016-11-08%20at%2008.26.38.png

** Description changed:

- Back in Juno the "Create An Image" form use to have a placeholder
- attribute on the "Image location". It is not there anymore in trunk
- (probably since Kilo)
+ Back in Juno the "Create An Image" form used to have a placeholder
+ attribute on the "Image location" field. It is not there anymore in
+ trunk (probably since Kilo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1640059

Title:
  Placeholder not set on new glance image upload

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1640059/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread Tuan
** Also affects: ironic-python-agent
   Importance: Undecided
   Status: New

** Changed in: ironic-python-agent
 Assignee: (unassigned) => Tuan (tuanla)



[Yahoo-eng-team] [Bug 1640049] [NEW] Action services retain state, and should not

2016-11-07 Thread Richard Jones
Public bug reported:

AngularJS services are singletons, so storing state on them is dangerous
(using the same service twice in a single context will result in that
state data being indeterminate).
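The hazard is language-independent. As an analogy only (in Python rather than AngularJS; the class and method names are illustrative, not Horizon code), a shared service instance that stores per-call state lets one consumer clobber another's data:

```python
class ActionService:
    """A service used as a singleton (one shared instance) that wrongly
    keeps per-invocation state on itself -- analogous to the AngularJS
    services described above."""

    def __init__(self):
        self.scope = None  # per-call state stored on the shared object

    def begin(self, scope):
        self.scope = scope  # each caller overwrites the previous one

# Dependency injection hands out the same instance everywhere;
# simulate that with a single shared object.
service = ActionService()

service.begin("panel-A")   # first consumer starts an action
service.begin("panel-B")   # second consumer in the same context
assert service.scope == "panel-B"  # panel-A's state is silently lost
```

The fix is for the service to hand each caller its own state object (or take the state as an argument) instead of keeping it on the singleton.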

** Affects: horizon
 Importance: High
 Assignee: Richard Jones (r1chardj0n3s)
 Status: In Progress

** Changed in: horizon
Milestone: None => ocata-1

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
   Status: New => Triaged

** Changed in: horizon
 Assignee: (unassigned) => Richard Jones (r1chardj0n3s)

-- 
https://bugs.launchpad.net/bugs/1640049

Title:
  Action services retain state, and should not

Status in OpenStack Dashboard (Horizon):
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1640049/+subscriptions



[Yahoo-eng-team] [Bug 1640052] [NEW] LBaaS: Only one pool member gets deleted when multiple pool members are selected for deletion

2016-11-07 Thread Koteswara Rao Kelam
Public bug reported:

In Mitaka:
Only one pool member gets deleted when multiple pool members are selected for
deletion. This issue is seen when the Horizon UI is used. Sometimes, when
adding multiple members at a time through the UI, some members end up missing
from the member list (neutron lbaas-member-list).
1. Create a load balancer.
2. Create a listener.
3. Create a pool.
4. Add members to the pool.
5. Attach a health monitor.
6. The load balancer is created successfully.
7. Go to Load Balancer >> Listeners >> Default Pool ID >> Members >> Add/Remove
Pool Members.
8. Remove all the members.
9. Click on the Add/Remove Pool Members tab.
10. Go back to the default pool ID and check the members.
11. A few members remain; only one member gets deleted at a time.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: mitaka-backport-potential

-- 
https://bugs.launchpad.net/bugs/1640052

Title:
  LBaaS: Only one pool member gets deleted when multiple pool members
  are selected for deletion

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1640052/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread Tuan
** Also affects: ironic
   Importance: Undecided
   Status: New



[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2016-11-07 Thread Tuan
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in gantt:
  New
Status in Glance:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in oslo.utils:
  New
Status in python-keystoneclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  set_time_override() was written as a helper function for mocking
  utcnow() in unit tests.

  However, we now use mock or fixtures to mock such objects, so
  set_time_override() has become obsolete.

  We should first remove all usage of set_time_override() from
  downstream projects before deleting it from oslo.

  List of attributes and functions to be removed from timeutils:
  * override_time
  * set_time_override()
  * clear_time_override()
  * advance_time_delta()
  * advance_time_seconds()
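As the description notes, mock now covers this use case. A minimal self-contained sketch of the replacement pattern (the `timeutils` class below is an illustrative stand-in, not the real oslo module):

```python
import datetime
from unittest import mock

class timeutils:
    """Illustrative stand-in for oslo's timeutils module."""
    @staticmethod
    def utcnow():
        return datetime.datetime.utcnow()

frozen = datetime.datetime(2016, 11, 7, 12, 0, 0)

# This replaces the old set_time_override()/clear_time_override() pair:
with mock.patch.object(timeutils, 'utcnow', return_value=frozen):
    assert timeutils.utcnow() == frozen

# Outside the context manager the patch is gone -- there is no global
# override state that a test can forget to clear.
assert timeutils.utcnow() != frozen
```

Scoping the override to a context manager (or a fixture) is exactly why the module-level override_time attribute becomes unnecessary.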

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions



[Yahoo-eng-team] [Bug 1640034] [NEW] Creating/Updating port returns 500 error when specifying list which includes string that contains "ip_address" or "mac_address" as 'allowed_address_pairs'

2016-11-07 Thread Kengo Hobo
Public bug reported:

The error should be returned as 400 (Bad Request), because the cause is
an invalid request format, not an internal server error.
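For reference, the API expects each allowed address pair to be a dict, not a bare string. A sketch contrasting the malformed body from this report with a well-formed one (the IP and MAC values are illustrative only), plus the kind of up-front check that would turn the 500 into a 400:

```python
# Malformed: a list of bare strings -- this is what triggered the 500.
bad = {"port": {"allowed_address_pairs": ["ip_address"]}}

# Well-formed: a list of dicts keyed by ip_address (mac_address is
# optional). The addresses here are illustrative values.
good = {"port": {"allowed_address_pairs": [
    {"ip_address": "192.0.2.10", "mac_address": "fa:16:3e:00:00:01"},
]}}

def pairs_status(body):
    """Validate entries before touching the DB: reject malformed input
    with a 400 instead of failing deep in the plugin with a 500."""
    pairs = body["port"]["allowed_address_pairs"]
    ok = all(isinstance(p, dict) and "ip_address" in p for p in pairs)
    return 200 if ok else 400

assert pairs_status(bad) == 400
assert pairs_status(good) == 200
```

This mirrors the general rule that anything recoverable from the request body alone should be rejected during attribute validation, not during the database transaction.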

* How to reproduce
-ip_address
ubuntu@neutron-ml2:/opt/stack/neutron$ curl -si -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" http://172.16.1.29:9696/v2.0/ports/517eeaa9-238a-4c95-96d3-c6ed6b289ffb -d '{"port":{"allowed_address_pairs":["ip_address"]}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-b68113ea-6323-4087-b789-b9f8a9a7f260
Date: Tue, 08 Nov 2016 06:33:19 GMT

{"NeutronError": {"message": "Request Failed: internal server error
while processing your request.", "type": "HTTPInternalServerError",
"detail": ""}}


-mac_address
ubuntu@neutron-ml2:/opt/stack/neutron$ curl -si -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" http://172.16.1.29:9696/v2.0/ports/517eeaa9-238a-4c95-96d3-c6ed6b289ffb -d '{"port":{"allowed_address_pairs":["mac_address"]}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-1e62dafa-dc0b-4986-a0cf-7a1ec84c2eee
Date: Tue, 08 Nov 2016 06:32:18 GMT

{"NeutronError": {"message": "Request Failed: internal server error
while processing your request.", "type": "HTTPInternalServerError",
"detail": ""}}

* trace in neutron-server
-ip_address
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
[req-b68113ea-6323-4087-b789-b9f8a9a7f260 6759f544889746448631792bb12bd2ea 
d713c7d4c02541d8b239d6d9761768e5 - - -] upd
ate failed: No details.
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 604, in update
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 83, in wrapped
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 79, in wrapped
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 119, in wrapped
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
traceback.format_exc())
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 114, in wrapped
2016-11-08 06:33:19.441 7216 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1640029] [NEW] [stable/newton] Deleting heat stack failed due to error "QueuePool limit of size 50 overflow 50 reached, connection timed out, timeout 30"

2016-11-07 Thread Sujai
Public bug reported:

In my CH + stable/newton setup, I brought up 5 heat stacks, each with 100 nova
instances in the same /16 network.
Deleting those heat stacks failed with the error below.

"
2016-11-03 17:27:34.146 2399 ERROR nova.api.openstack.extensions TimeoutError: 
QueuePool limit of size 50 overflow 50 reached, connection timed out, timeout 30
"

Because of this error, deletion of about 67 of the 500 instances failed.
With the default parameters in neutron.conf, I get the neutron error below.

2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
[req-a0022887-cc01-4f2e-980d-490136524363 admin -] delete failed: No details.
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 555, in delete
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 84, in wrapped
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
traceback.format_exc())
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 119, in wrapped
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 577, in _delete
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1814, in 
delete_port
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource 
self.disassociate_floatingips(context, port_id)
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 
"/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 2804, in 
disassociate_floatingips
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource for fip_db in 
fip_dbs:
2016-11-02 20:42:06.557 18058 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1637972] Re: VPNaaS: report_state fails by key error 'tenant_id'

2016-11-07 Thread Darek Smigiel
I'm not able to reproduce this issue on the master branch.
For VPNaaS I used the setup described in [1]. To verify the correctness of the
VPN, I ran the script in [2], which is included on the config site. There is a
small error in its third line: WEST_SUBNET should be on a new line.

[1] https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall
[2] http://paste.openstack.org/raw/44702/

Hiroyuki, could you retry this and verify if you still see this error?

** Changed in: neutron
   Status: New => Invalid

-- 
https://bugs.launchpad.net/bugs/1637972

Title:
  VPNaaS: report_state fails by key error 'tenant_id'

Status in neutron:
  Invalid

Bug description:
  When creating ipsec-site-connection, the error KeyError: 'tenant_id'
  occurred in vpn agent.

  
  Operation:

  $ neutron ipsec-site-connection-create --peer-cidr 192.168.91.0/24 --peer-id 192.168.7.4 --peer-address 192.168.7.4 --psk ps --vpnservice-id service1 --ikepolicy-id ike1 --ipsecpolicy-id ipsec1 --name test1 --dpd action=disabled
  Created a new ipsec_site_connection:
  +-------------------+--------------------------------------------------------+
  | Field             | Value                                                  |
  +-------------------+--------------------------------------------------------+
  | admin_state_up    | True                                                   |
  | auth_mode         | psk                                                    |
  | description       |                                                        |
  | dpd               | {"action": "disabled", "interval": 30, "timeout": 120} |
  | id                | 298a689b-428b-45fd-a868-2d4738d59eb1                   |
  | ikepolicy_id      | be1f92ab-8064-4328-8862-777ae6878691                   |
  | initiator         | bi-directional                                         |
  | ipsecpolicy_id    | 09c67ae8-6ede-47ca-a15b-c52be1d7feaf                   |
  | local_ep_group_id |                                                        |
  | local_id          |                                                        |
  | mtu               | 1500                                                   |
  | name              | test1                                                  |
  | peer_address      | 192.168.7.4                                            |
  | peer_cidrs        | 192.168.91.0/24                                        |
  | peer_ep_group_id  |                                                        |
  | peer_id           | 192.168.7.4                                            |
  | project_id        | 068a47c758ae4b5d9fab059539e57740                       |
  | psk               | ps                                                     |
  | route_mode        | static                                                 |
  | status            | PENDING_CREATE                                         |
  | tenant_id         | 068a47c758ae4b5d9fab059539e57740                       |
  | vpnservice_id     | 4f82612c-5e3a-4699-aafa-bdfa5ede31fe                   |
  +-------------------+--------------------------------------------------------+

  Error log in vpn agent:

  2016-10-31 19:24:15.591 ERROR oslo_messaging.rpc.server 
[req-169503b5-edbc-46a9-8ded-03b5b5d278ea demo 
068a47c758ae4b5d9fab059539e57740] Exception during message handling
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
225, in dispatch
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
195, in _do_dispatch
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 884, in vpnservice_updated
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server 
self.sync(context, [router] if router else [])
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server return f(*args, 
**kwargs)
  2016-10-31 19:24:15.591 TRACE oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1640019] [NEW] table text should be placed over progress image

2016-11-07 Thread chenyujie
Public bug reported:

An excerpt of the relevant Horizon table code follows:

if ($new_row.hasClass('warning')) {
  var $container = $(document.createElement('div'))
.addClass('progress-text horizon-loading-bar');

  var $progress = $(document.createElement('div'))
.addClass('progress progress-striped active')
.appendTo($container);

  $(document.createElement('div'))
.addClass('progress-bar')
.appendTo($progress);

  // if action/confirm is required, show progress-bar with "?"
  // icon to indicate user action is required
  if ($new_row.find('.btn-action-required').length > 0) {
$(document.createElement('span'))
  .addClass('fa fa-question-circle progress-bar-text')
  .appendTo($container);
  }
  $new_row.find("td.warning:last").prepend($container);
}

When the progress bar is displayed, an image is placed before the
table text. The result is a long progress bar shown before a short
text, and the column wraps to two lines. The visual effect is poor.
However, if we change 'prepend' to 'wrapInner', the text is placed
over the progress bar and the column stays on a single line.

  $new_row.find("td.warning:last").prepend($container);
=>  $new_row.find("td.warning:last").wrapInner($container);

** Affects: horizon
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1460720] Re: [RFE] Add API to set ipv6 gateway

2016-11-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
https://bugs.launchpad.net/bugs/1460720

Title:
  [RFE] Add API to set ipv6 gateway

Status in neutron:
  Expired

Bug description:
  Currently the ipv6 external gateway is an admin configuration item. We
  want instead to have an API to set the ipv6 gateway.

  Background:

  The ipv6 router BP
  (https://blueprints.launchpad.net/neutron/+spec/ipv6-router) added a
  new L3 agent config called ipv6_gateway wherein an admin can configure
  the IPv6 LLA of the upstream physical router, so that the neutron
  virtual router has a default V6 gateway route to the upstream router.

  This solution is however not scalable when there are multiple external 
routers per L3 agent.
  Per review comments - 
https://review.openstack.org/#/c/156283/42/etc/l3_agent.ini
  It is better to move this config to the CLI.

  As discussed in the L3 weekly meeting -
  
http://eavesdrop.openstack.org/meetings/neutron_l3/2015/neutron_l3.2015-06-11-15.04.log.html,
  this change aims to do exactly that by updating the neutron net-create
  CLI to have a new option to set an ipv6_gateway (for the external
  router).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460720/+subscriptions



[Yahoo-eng-team] [Bug 1521783] Re: [RFE] Cascading delete for LBaaS Objects

2016-11-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521783

Title:
  [RFE] Cascading delete for LBaaS Objects

Status in neutron:
  Expired

Bug description:
  The LBaaS-Horizon Dashboard people requested a cascading delete in the
  LBaaS V2 REST API, so that if, say, you use an additional parameter
  (let's call it force=True) when deleting a load balancer, it will also
  delete its listeners, pools, and members. The same should be true for
  listeners, pools, etc.

  As a first step we likely should just add that to the API, and then
  in a next step add it to the CLI.

  As a side effect that might help operators cleaning out accounts
  efficiently...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521783/+subscriptions



[Yahoo-eng-team] [Bug 1640008] [NEW] Getting routing table by parsing 'ip route' command fails on xenial

2016-11-07 Thread Omer Anson
Public bug reported:

Getting routing table by parsing 'ip route' command fails on Ubuntu
Xenial.

The code assumes that the ip route command returns the network, and then
key-value pairs of data for the routing record. In Xenial, it can be seen
that ip also adds some flags at the end, e.g. linkdown.

A reproduction of this failure can be seen on the gate, via 'check
experimental'.
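
A tolerant parser can treat tokens like 'linkdown' as bare flags instead of
assuming strict key-value pairs after the network. A minimal illustrative
sketch (the helper name and KNOWN_FLAGS set are assumptions, not the actual
neutron code):

```python
# Tokens 'ip route' may append without a value (seen on Xenial), e.g. linkdown.
KNOWN_FLAGS = {'linkdown', 'onlink', 'pervasive', 'offload', 'notify', 'dead'}

def parse_route_line(line):
    """Split one 'ip route' line into (network, key-value attrs, bare flags)."""
    tokens = line.split()
    network, attrs, flags = tokens[0], {}, []
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        # A known flag, or a token with no following value, is not a key.
        if tok in KNOWN_FLAGS or i + 1 >= len(tokens):
            flags.append(tok)
            i += 1
        else:
            attrs[tok] = tokens[i + 1]
            i += 2
    return network, attrs, flags
```

With this, a Xenial line such as
'192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.5 linkdown'
parses cleanly instead of shifting every subsequent key-value pair.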

** Affects: neutron
 Importance: Undecided
 Assignee: Omer Anson (omer-anson)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Omer Anson (omer-anson)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640008

Title:
  Getting routing table by parsing 'ip route' command fails on xenial

Status in neutron:
  New

Bug description:
  Getting routing table by parsing 'ip route' command fails on Ubuntu
  Xenial.

  The code assumes that the ip route command returns the network, and then
  key-value pairs of data for the routing record. In Xenial, it can be
  seen that ip also adds some flags at the end, e.g. linkdown.

  A reproduction of this failure can be seen on the gate, via 'check
  experimental'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640008/+subscriptions



[Yahoo-eng-team] [Bug 1639930] Re: initramfs network configuration ignored if only ip6= on kernel command line

2016-11-07 Thread LaMont Jones
** Also affects: maas
   Importance: Undecided
   Status: New

** Tags added: maas-ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639930

Title:
  initramfs network configuration ignored if only ip6= on kernel command
  line

Status in cloud-init:
  Confirmed
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  In changes made under bug 1621615 (specifically a1cdebdea), we now
  expect that there may be an 'ip6=' argument on the kernel command line.
  The changes made did not test the case where there is 'ip6=' and no
  'ip='.

  The code currently will return with no network configuration found if
  there is only ip6=...


  Related bugs:
   * bug 1621615: network not configured when ipv6 netbooted into cloud-init 
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639930/+subscriptions



[Yahoo-eng-team] [Bug 1639958] [NEW] Behavior of down arrow key in dropdowns is inconsistent across Horizon

2016-11-07 Thread Eddie Ramirez
Public bug reported:

Bootstrap dropdowns let you cycle through their items using the "Down"
arrow key, but this behavior is not consistent between Django and NG
panes - it actually seems not to work at all on NG panes, a focus area
for next releases.

How to reproduce:
1. Go to Project->Images/Admin->Images.
2. Expand any dropdown inside the resource panel - could be the one that shows 
"row actions".
3. Press the "Down" arrow key.
4. Go to a Django Pane and repeat last steps, see that the event does cycle the 
items in the dropdown.

Actual result:
The "event" won't "activate" the next item in the dropdown. It does not cycle 
(this is on NG panels)

Expected result:
There's a keydown event attached to the element containing all items. This 
event should let you cycle through all the elements in the dropdown when the 
user hits the "Down" arrow key.

Learn more: http://getbootstrap.com/components/#dropdowns and in the
"Bootstrap Theme Preview" "http://yourhorizon/developer/#/buttons" pane.

Note: Fixing this issue could drastically improve the UX of Magicsearch
since all Facets and options are displayed using dropdowns - you cannot
select any of those options using ONLY the keyboard (you need to click
them).

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: accessibility dropdown ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1639958

Title:
  Behavior of down arrow key in dropdowns is inconsistent across Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Bootstrap dropdowns let you cycle through their items using the
  "Down" arrow key, but this behavior is not consistent between Django
  and NG panes - it actually seems not to work at all on NG panes, a
  focus area for next releases.

  How to reproduce:
  1. Go to Project->Images/Admin->Images.
  2. Expand any dropdown inside the resource panel - could be the one that 
shows "row actions".
  3. Press the "Down" arrow key.
  4. Go to a Django Pane and repeat last steps, see that the event does cycle 
the items in the dropdown.

  Actual result:
  The "event" won't "activate" the next item in the dropdown. It does not cycle 
(this is on NG panels)

  Expected result:
  There's a keydown event attached to the element containing all items. 
This event should let you cycle through all the elements in the dropdown when 
the user hits the "Down" arrow key.

  Learn more: http://getbootstrap.com/components/#dropdowns and in the
  "Bootstrap Theme Preview" "http://yourhorizon/developer/#/buttons"
  pane.

  Note: Fixing this issue could drastically improve the UX of
  Magicsearch since all Facets and options are displayed using dropdowns
  - you cannot select any of those options using ONLY the keyboard (you
  need to click them).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1639958/+subscriptions



[Yahoo-eng-team] [Bug 1537936] Re: Pecan: put does not return resource names in responses

2016-11-07 Thread Brandon Logan
This is no longer valid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537936

Title:
  Pecan: put does not return resource names in responses

Status in neutron:
  Invalid

Bug description:
  clients are hardly going to work with an issue like this

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537936/+subscriptions



[Yahoo-eng-team] [Bug 1611074] Re: Reformatting of ephemeral drive fails on resize of Azure VM

2016-11-07 Thread Scott Moser
** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1611074

Title:
  Reformatting of ephemeral drive fails on resize of Azure VM

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact]
  In some cases, cloud-init writes entries to /etc/fstab, and on azure it will
  even format a disk for mounting and then write the entry for that 'ephemeral'
  disk there.

  A supported operation on Azure is to "resize" the system.  When you do this
  the system is shut down, resized (given larger/faster disks and more CPU) and
  then brought back up.  In that process, the "ephemeral" disk is re-initialized
  to its original NTFS format.  The design goal is for cloud-init to recognize
  this situation and re-format the disk to ext4.

  The problem is that the mount of that disk happens before cloud-init can
  reformat it. That's because the entry in fstab has 'auto' and is automatically
  mounted.  The end result is that after the resize operation the user will be
  left with the ephemeral disk mounted at /mnt and having an NTFS filesystem
  rather than ext4.

  [Test Case]
  The text in comment 3 describes how to recreate by the original reporter.
  Another way to do this is to just re-format the ephemeral disk as
  ntfs and then reboot.  The result *should* be that after reboot it
  comes back up and has an ext4 filesystem on it.

  1.) boot system on azure
    (for this, i use https://gist.github.com/smoser/5806147, but you can
     use web ui or any other way).

  2.) unmount the ephemeral disk
     $ umount /mnt

  3.) repartition it so that mkfs.ntfs does less and is faster
     This is not strictly necessary, but mkfs.ntfs can take upwards of
     20 minutes.  shrinking /dev/sdb2 to be 200M means it will finish
     in < 1 minute.

     $ disk=/dev/disk/cloud/azure_resource
     $ part=/dev/disk/cloud/azure_resource-part1
     $ echo "2048,$((2*1024*100)),7" | sudo sfdisk "$disk"
     $ time mkfs.ntfs --quick "$part"

  4.) reboot
  5.) expect that /proc/mounts has /dev/disk/cloud/azure_resource-part1 as ext4
  and that fstab has x-systemd.requires in it.

  $ awk '$2 == "/mnt" { print $0 }' /proc/mounts
  /dev/sdb1 /mnt ext4 rw,relatime,data=ordered 0 0

  $ awk '$2 == "/mnt" { print $0 }' /etc/fstab
  /dev/sdb1 /mnt auto 
defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
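
  The fstab line above is the crux of the fix: 'x-systemd.requires=cloud-init.service'
  defers the mount until cloud-init has had a chance to reformat the disk. A
  hedged sketch of building such an entry (a simplified stand-in, not
  cloud-init's actual mounts code):

```python
# Illustrative helper (not cloud-init's actual implementation): build an
# fstab entry whose mount waits for cloud-init.service, so the ephemeral
# disk can be reformatted from NTFS to ext4 before systemd mounts it.
def fstab_entry(device, mountpoint, fstype='auto',
                opts=('defaults', 'nofail',
                      'x-systemd.requires=cloud-init.service',
                      'comment=cloudconfig')):
    # Final "0 2": no dump, fsck pass 2 (after the root filesystem).
    return '%s %s %s %s 0 2' % (device, mountpoint, fstype, ','.join(opts))
```

  This reproduces the expected /etc/fstab line from the test case above for
  fstab_entry('/dev/sdb1', '/mnt').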

  [Regression Potential]
  Regression is unlikely.  The likely failure case is just that the problem is
  not correctly fixed, and the user ends up with either an NTFS-formatted disk
  mounted at /mnt or nothing mounted at /mnt.

  === End SRU Template ===

  After resizing a 16.04 VM on Azure, the VM is presented with a new
  ephemeral drive (of a different size), which initially is NTFS
  formatted. Cloud-init tries to format the appropriate partition ext4,
  but fails because it is mounted. Cloud-init has unmount logic for
  exactly this case in the get_data call on the Azure data source, but
  this is never called because a fresh cache is found.

  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] handlers.py[DEBUG]: start: 
init-network/check-cache: attempting to read from cache [trust]
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Reading from 
/var/lib/cloud/instance/obj.pkl (quiet=False)
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Read 5950 bytes 
from /var/lib/cloud/instance/obj.pkl
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] stages.py[DEBUG]: restored from 
cache: DataSourceAzureNet [seed=/dev/sr0]
  Jun 27 19:07:47 azubuntu1604arm [CLOUDINIT] handlers.py[DEBUG]: finish: 
init-network/check-cache: SUCCESS: restored from cache: DataSourceAzureNet 
[seed=/dev/sr0]
  ...
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] cc_disk_setup.py[DEBUG]: Creating 
file system None on /dev/sdb1
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] cc_disk_setup.py[DEBUG]:  
Using cmd: /sbin/mkfs.ext4 /dev/sdb1
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Running command 
['/sbin/mkfs.ext4', '/dev/sdb1'] with allowed return codes [0] (shell=False, 
capture=True)
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] util.py[DEBUG]: Creating fs for 
/dev/disk/cloud/azure_resource took 0.052 seconds
  Jun 27 19:07:48 azubuntu1604arm [CLOUDINIT] util.py[WARNING]: Failed during 
filesystem operation#012Failed to exec of '['/sbin/mkfs.ext4', 
'/dev/sdb1']':#012Unexpected error while running command.#012Command: 
['/sbin/mkfs.ext4', '/dev/sdb1']#012Exit code: 1#012Reason: 

[Yahoo-eng-team] [Bug 1629797] Re: resolve service in nsswitch.conf adds 25 seconds to failed lookups before systemd-resolved is up

2016-11-07 Thread Scott Moser
** Also affects: dbus (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: dbus (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: Confirmed => Fix Released

** Changed in: dbus (Ubuntu Xenial)
   Status: New => Invalid

** Changed in: dbus (Ubuntu Yakkety)
   Status: New => Invalid

** No longer affects: dbus (Ubuntu Xenial)

** No longer affects: dbus (Ubuntu Yakkety)

** Description changed:

- During boot, cloud-init does DNS resolution checks to if particular
- metadata services are available (in order to determine which cloud it is
- running on).  These checks happen before systemd-resolved is up[0] and
- if they resolve unsuccessfully they take 25 seconds to complete.
+ === Begin SRU Template ===
+ [Impact] 
+ In cases where cloud-init used dns during early boot and system was
+ configured in nsswitch.conf to use systemd-resolvd, the system would
+ timeout on dns attempts making system boot terribly slow.
+ 
+ [Test Case]
+ Boot a system on GCE.
+ check for WARN in /var/log/messages
+ check that time to boot is reasonable (<30 seconds).  In failure case the
+ times would be minutes.
+ 
+ [Regression Potential]
+ Changing boot order can be dangerous.  There is a real chance of 
+ regression here, but it should be fairly small as xenial does not include
+ systemd-resolved usage.  This was first noticed on yakkety where it did.
+ 
+ [Other Info]
+ It seems useful to SRU this in the event that systemd-resolvd is used
+ on 16.04 or the case where user upgrades components (admittedly small use
+ case).
+ 
+ === End SRU Template ===
+ 
+ 
+ 
+ During boot, cloud-init does DNS resolution checks to see if particular 
metadata services are available (in order to determine which cloud it is 
running on).  These checks happen before systemd-resolved is up[0] and if they 
resolve unsuccessfully they take 25 seconds to complete.
  
  This has substantial impact on boot time in all contexts, because cloud-
  init attempts to resolve three known-invalid addresses ("does-not-
  exist.example.com.", "example.invalid." and a random string) to enable
  it to detect when it's running in an environment where a DNS server will
  always return some sort of redirect.  As such, we're talking a minimum
  impact of 75 seconds in all environments.  This increases when cloud-
  init is configured to check for multiple environments.
  
  This means that yakkety is consistently taking 2-3 minutes to boot on
  EC2 and GCE, compared to the ~30 seconds of the first boot and ~10
  seconds thereafter in xenial.
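
  The known-invalid-address probe described above reduces to a small check:
  if any name that cannot legitimately exist still resolves, the resolver is
  redirecting and its answers cannot be trusted. A hedged sketch (function
  names are illustrative, not cloud-init's actual code):

```python
import random
import string

def invalid_probe_names():
    # Names that should never resolve on an honest resolver; the random
    # label guards against someone registering one of the fixed names.
    rand = ''.join(random.choice(string.ascii_lowercase) for _ in range(10))
    return ['does-not-exist.example.com.', 'example.invalid.',
            rand + '.invalid.']

def resolver_wildcards(resolve):
    """resolve(name) -> list of addresses; raises OSError on NXDOMAIN."""
    for name in invalid_probe_names():
        try:
            if resolve(name):
                return True  # a known-invalid name resolved: redirecting DNS
        except OSError:
            continue  # NXDOMAIN is the honest answer
    return False
```

  Each probe that has to time out (rather than fail fast with NXDOMAIN)
  is where the 25-second penalty per lookup comes from.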

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1629797

Title:
  resolve service in nsswitch.conf adds 25 seconds to failed lookups
  before systemd-resolved is up

Status in cloud-init:
  Fix Committed
Status in D-Bus:
  Unknown
Status in cloud-init package in Ubuntu:
  Fix Released
Status in dbus package in Ubuntu:
  Won't Fix
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact] 
  In cases where cloud-init used dns during early boot and system was
  configured in nsswitch.conf to use systemd-resolvd, the system would
  timeout on dns attempts making system boot terribly slow.

  [Test Case]
  Boot a system on GCE.
  check for WARN in /var/log/messages
  check that time to boot is reasonable (<30 seconds).  In failure case the
  times would be minutes.

  [Regression Potential]
  Changing boot order can be dangerous.  There is a real chance of
  regression here, but it should be fairly small as xenial does not include
  systemd-resolved usage.  This was first noticed on yakkety where it did.

  [Other Info]
  It seems useful to SRU this in the event that systemd-resolvd is used
  on 16.04 or the case where user upgrades components (admittedly small use
  case).

  === End SRU Template ===


  
  During boot, cloud-init does DNS resolution checks to see if particular 
metadata services are available (in order to determine which cloud it is 
running on).  These checks happen before systemd-resolved is up[0] and if they 
resolve unsuccessfully they take 25 seconds to complete.

  This has substantial impact on boot time in all contexts, because
  cloud-init attempts to resolve three known-invalid addresses ("does-
  

[Yahoo-eng-team] [Bug 1635350] Re: unit tests fail as non-root on maas deployed system

2016-11-07 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Low

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Low

** Description changed:

+ === Begin SRU Template ===
+ [Impact] 
+ Running cloud-init's unit test cases on a system deployed by MAAS would
+ fail.  The reason is that the non-root user would not be able to read 
+ files with MAAS node credentials in /etc/cloud/cloud.cfg.d 
+   
+ [Test Case]
+ Run unit tests on a system deployed by maas, or even just with:
+   f=/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg
+   sh -c 'mkdir -p "${1%/*}" && touch "$1" && chmod ugo-r "$1"' -- "$f"
+   tox -e py3
+ 
+ [Regression Potential] 
+ This was just to fix a build break or unit tests being run.
+ Changes are only to unit tests.
+ === End SRU Template ===
+ 
+ 
  Observed Behavior:
  
  On a system deployed by MAAS I checked out master and then tried to 
immediately build it:
  > git clone https://git.launchpad.net/cloud-init
  > cd cloud-init
  > ./packages/bddeb
  
  I get a number of errors around permission issues around this file:
  PermissionError: [Errno 13] Permission denied: 
\'/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg\'
  
  See: https://paste.ubuntu.com/23354559/
-  or formatted better: http://paste.ubuntu.com/23374383/
+  or formatted better: http://paste.ubuntu.com/23374383/
  
  If I run as root however, it builds as expected.
  
  Expected Behavior:
  Running bddeb works as a non-root user.

** Description changed:

  === Begin SRU Template ===
- [Impact] 
+ [Impact]
  Running cloud-init's unit test cases on a system deployed by MAAS would
- fail.  The reason is that the non-root user would not be able to read 
- files with MAAS node credentials in /etc/cloud/cloud.cfg.d 
-   
+ fail.  The reason is that the non-root user would not be able to read
+ files with MAAS node credentials in /etc/cloud/cloud.cfg.d
+ 
+ We want this change SRU'd so that an attempt to build and run tests on a
+ system deployed by maas will work rather than fail due to unit test failure.
+ 
  [Test Case]
  Run unit tests on a system deployed by maas, or even just with:
-   f=/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg
-   sh -c 'mkdir -p "${1%/*}" && touch "$1" && chmod ugo-r "$1"' -- "$f"
-   tox -e py3
- 
- [Regression Potential] 
+   f=/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg
+   sh -c 'mkdir -p "${1%/*}" && touch "$1" && chmod ugo-r "$1"' -- "$f"
+   tox -e py3
+ 
+ [Regression Potential]
  This was just to fix a build break or unit tests being run.
  Changes are only to unit tests.
  === End SRU Template ===
- 
  
  Observed Behavior:
  
  On a system deployed by MAAS I checked out master and then tried to 
immediately build it:
  > git clone https://git.launchpad.net/cloud-init
  > cd cloud-init
  > ./packages/bddeb
  
  I get a number of errors around permission issues around this file:
  PermissionError: [Errno 13] Permission denied: 
\'/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg\'
  
  See: https://paste.ubuntu.com/23354559/
   or formatted better: http://paste.ubuntu.com/23374383/
  
  If I run as root however, it builds as expected.
  
  Expected Behavior:
  Running bddeb works as a non-root user.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1635350

Title:
  unit tests fail as non-root on maas deployed system

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact]
  Running cloud-init's unit test cases on a system deployed by MAAS would
  fail.  The reason is that the non-root user would not be able to read
  files with MAAS node credentials in /etc/cloud/cloud.cfg.d

  We want this change SRU'd so that an attempt to build and run tests on a
  system deployed by maas will work rather than fail due to unit test failure.

  [Test Case]
  Run unit tests on a system deployed by maas, or even just with:
    f=/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg
    sh -c 'mkdir -p "${1%/*}" && touch "$1" && chmod ugo-r "$1"' -- "$f"
    tox -e py3

  [Regression Potential]
  This was just to fix a build break or unit tests being run.
  Changes are only to unit tests.
  === End SRU Template ===

  Observed Behavior:

  On a system deployed by MAAS I checked out master and then tried to 
immediately build it:

[Yahoo-eng-team] [Bug 1639930] Re: initramfs kernel configuration ignored if only ip6= on kernel command line

2016-11-07 Thread Scott Moser
The initial fix is easy, but the bug 1621615 changes sneaked in without a
unit test.

I'd really like a unit test of read_kernel_cmdline_config to accompany this 
change.
Also, a change to 'DHCP6_CONTENT_1' in tests/unittests/test_net.py to contain 
what we're currently expecting.


diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index 4075a27..a077730 100644
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -199,7 +199,7 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, 
cmdline=None):
 if data64:
 return util.load_yaml(_b64dgz(data64))
 
-if 'ip=' not in cmdline:
+if 'ip=' not in cmdline and 'ip6=' not in cmdline:
 return None
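
The predicate in the diff can be captured as a small standalone function,
which is also the kind of thing the requested unit test would exercise
(an illustrative sketch, not the real cloudinit.net.cmdline module):

```python
def has_initramfs_net_config(cmdline):
    # Network config should be read when either an IPv4 'ip=' or an
    # IPv6 'ip6=' argument is present on the kernel command line.
    return any(tok.startswith(('ip=', 'ip6=')) for tok in cmdline.split())
```

Matching on whole tokens also avoids false positives from substrings of
unrelated parameters.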


** Summary changed:

- initramfs kernel configuration ignored if only ip6= on kernel command line
+ initramfs network configuration ignored if only ip6= on kernel command line

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639930

Title:
  initramfs network configuration ignored if only ip6= on kernel command
  line

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  In changes made under bug 1621615 (specifically a1cdebdea), we now
  expect that there may be an 'ip6=' argument on the kernel command line.
  The changes made did not test the case where there is 'ip6=' and no
  'ip='.

  The code currently will return with no network configuration found if
  there is only ip6=...


  Related bugs:
   * bug 1621615: network not configured when ipv6 netbooted into cloud-init 
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639930/+subscriptions



[Yahoo-eng-team] [Bug 1639930] [NEW] initramfs network configuration ignored if only ip6= on kernel command line

2016-11-07 Thread Scott Moser
Public bug reported:

In changes made under bug 1621615 (specifically a1cdebdea), we now
expect that there may be an 'ip6=' argument on the kernel command line.
The changes made did not test the case where there is 'ip6=' and no
'ip='.

The code currently will return with no network configuration found if
there is only ip6=...


Related bugs:
 * bug 1621615: network not configured when ipv6 netbooted into cloud-init 
 * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
 * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Affects: cloud-init (Ubuntu)
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639930

Title:
  initramfs network configuration ignored if only ip6= on kernel command
  line

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  In changes made under bug 1621615 (specifically a1cdebdea), we now
  expect that there may be an 'ip6=' argument on the kernel command line.
  The changes made did not test the case where there is 'ip6=' and no
  'ip='.

  The code currently will return with no network configuration found if
  there is only ip6=...


  Related bugs:
   * bug 1621615: network not configured when ipv6 netbooted into cloud-init 
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639930/+subscriptions



[Yahoo-eng-team] [Bug 1621615] Re: network not configured when ipv6 netbooted into cloud-init

2016-11-07 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.8-35-gc24187e-
0ubuntu1

---
cloud-init (0.7.8-35-gc24187e-0ubuntu1) zesty; urgency=medium

  * New upstream snapshot.
- pyflakes: fix issue with pyflakes 1.3 found in ubuntu zesty-proposed.

 -- Scott Moser   Mon, 07 Nov 2016 13:31:30 -0500

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1621615

Title:
  network not configured when ipv6 netbooted into cloud-init

Status in cloud-init:
  Fix Committed
Status in MAAS:
  In Progress
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-initramfs-tools package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-initramfs-tools source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-initramfs-tools source package in Yakkety:
  Confirmed

Bug description:
  https://bugs.launchpad.net/ubuntu/+source/klibc/+bug/1621507 talks of
  how IPv6 netboot with iscsi root disk doesn't work, blocking IPv6-only
  MAAS.

  After I hand-walked busybox through getting an IPv6 address,
  everything worked just fine until cloud-init couldn't fetch the
  instance data, because it insisted on bringing up the interface in
  IPv4, and there is no IPv4 DHCP on that vlan.

  Please work with initramfs and friends on getting IPv6 netboot to
  actually configure the interface.  This may be as simple as teaching
  it about "inet6 dhcp" interfaces, and bolting the pieces together.
  Note that "use radvd" is not really an option for our use case.

  Related bugs:
   * bug 1621507: initramfs-tools configure_networking() fails to dhcp IPv6 
addresses
   * bug 1635716: Can't bring up a machine on a dual network (ipv4 and ipv6) 

  [Impact]

  It is not possible to enlist, commission, or deploy with MAAS in an
  IPv6-only environment. Anyone wanting to netboot with a network root
  filesystem in an IPv6-only environment is affected.

  This upload addresses this by accepting, using, and forwarding any
  IPV6* variables from the initramfs boot.  (See
  https://launchpad.net/bugs/1621507)

  [Test Case]

  See Bug 1229458. Configure radvd, dhcpd, and tftpd for your IPv6-only
  netbooting world. Pass the boot process an IPv6 address to fetch
  instance-data from, and see it fail to configure the network.

  [Regression Potential]

  1) If the booting host is in a dual-stack environment, and the
  instance-data URL uses a hostname that has both A and AAAA RRsets, the
  booting host may try to talk IPv6 to get instance data.  If the
  instance-data providing host only allows that over IPv4, it will fail.
  (It also represents a configuration issue on the providing host...)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1621615/+subscriptions



[Yahoo-eng-team] [Bug 1626243] Re: Cloud-init fails to write ext4 filesystem to Azure Ephemeral Drive

2016-11-07 Thread Scott Moser
** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Zesty)
   Importance: Medium
   Status: Fix Released

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1626243

Title:
  Cloud-init fails to write ext4 filesystem to Azure Ephemeral Drive

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Fix Released

Bug description:
  The symptom is similar to bug 1611074 but the cause is different. In
  this case it seems there is an error accessing /dev/sdb1 when lsblk is
  run, possibly because sgdisk isn't done creating the partition. The
  specific error message is "/dev/sdb1: not a block device." A simple
  wait and retry here may resolve the issue.
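The "wait and retry" idea can be sketched as follows (an illustrative helper only, not cloud-init's actual fix; the function name, retry counts, and delays are assumptions):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=5, delay=0.5):
    """Run cmd, retrying on failure to ride out transient races such as
    a partition device node that the kernel has not yet created."""
    for attempt in range(1, attempts + 1):
        try:
            return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay *= 2  # back off between attempts

# e.g. run_with_retry(['/bin/lsblk', '--pairs', '--nodeps', '/dev/sdb1'])
```

A bounded retry like this avoids hanging forever if the device genuinely never appears.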

  util.py[DEBUG]: Running command ['/sbin/sgdisk', '-p', '/dev/sdb'] with 
allowed return codes [0] (shell=False, capture=True)
  cc_disk_setup.py[DEBUG]: Device partitioning layout matches
  util.py[DEBUG]: Creating partition on /dev/disk/cloud/azure_resource took 
0.056 seconds
  cc_disk_setup.py[DEBUG]: setting up filesystems: [{'filesystem': 'ext4', 
'device': 'ephemeral0.1', 'replace_fs': 'ntfs'}]
  cc_disk_setup.py[DEBUG]: ephemeral0.1 is mapped to 
disk=/dev/disk/cloud/azure_resource part=1
  cc_disk_setup.py[DEBUG]: Creating new filesystem.
  cc_disk_setup.py[DEBUG]: Checking /dev/sdb against default devices
  cc_disk_setup.py[DEBUG]: Manual request of partition 1 for /dev/sdb1
  cc_disk_setup.py[DEBUG]: Checking device /dev/sdb1
  util.py[DEBUG]: Running command ['/sbin/blkid', '-c', '/dev/null', 
'/dev/sdb1'] with allowed return codes [0, 2] (shell=False, capture=True)
  cc_disk_setup.py[DEBUG]: Device /dev/sdb1 has None None
  cc_disk_setup.py[DEBUG]: Device /dev/sdb1 is cleared for formating
  cc_disk_setup.py[DEBUG]: File system None will be created on /dev/sdb1
  util.py[DEBUG]: Running command ['/bin/lsblk', '--pairs', '--output', 
'NAME,TYPE,FSTYPE,LABEL', '/dev/sdb1', '--nodeps'] with allowed return codes 
[0] (shell=False, capture=True)
  util.py[DEBUG]: Creating fs for /dev/disk/cloud/azure_resource took 0.008 
seconds
  util.py[WARNING]: Failed during filesystem operation#012Failed during disk 
check for /dev/sdb1#012Unexpected error while running command.#012Command: 
['/bin/lsblk', '--pairs', '--output', 'NAME,TYPE,FSTYPE,LABEL', '/dev/sdb1', 
'--nodeps']#012Exit code: 32#012Reason: -#012Stdout: ''#012Stderr: 'lsblk: 
/dev/sdb1: not a block device\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1626243/+subscriptions



[Yahoo-eng-team] [Bug 1635350] Re: unit tests fail as non-root on maas deployed system

2016-11-07 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1635350

Title:
  unit tests fail as non-root on maas deployed system

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  New

Bug description:
  Observed Behavior:

  On a system deployed by MAAS I checked out master and then tried to 
immediately build it:
  > git clone https://git.launchpad.net/cloud-init
  > cd cloud-init
  > ./packages/bddeb

  I get a number of errors about permission issues with this file:
  PermissionError: [Errno 13] Permission denied: 
\'/etc/cloud/cloud.cfg.d/90_dpkg_maas.cfg\'

  See: https://paste.ubuntu.com/23354559/
   or formatted better: http://paste.ubuntu.com/23374383/

  If I run as root, however, it builds as expected.

  Expected Behavior:
  Running bddeb works as a non-root user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1635350/+subscriptions



[Yahoo-eng-team] [Bug 1639914] [NEW] Race condition in nova compute during snapshot

2016-11-07 Thread Srinivas Sakhamuri
Public bug reported:

On Liberty nova with Ceph storage, creating a snapshot and then
immediately deleting the instance seems to cause a race condition.

This can be reproduced with the following commands.

1. nova boot --flavor m1.large --image 6d4259ce-5873-42cb-8cbe-
9873f069c149 testinstance

id   | bef22f9b-ade4-48a1-86c4-b9a007897eb3

2. nova image-create bef22f9b-ade4-48a1-86c4-b9a007897eb3 testinstance-snap ; 
nova delete bef22f9b-ade4-48a1-86c4-b9a007897eb3
Request to delete server bef22f9b-ade4-48a1-86c4-b9a007897eb3 has been accepted.
3. nova image-list doesn't show the snapshot

4. nova list doesn't show the instance
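From the client side, the race can be sidestepped by polling the snapshot until it leaves its transient state before issuing the delete (a generic sketch, not a nova fix; `get_status` stands in for whatever client call reports the image status):

```python
import time

def wait_for_status(get_status, target='active', timeout=300, interval=2):
    """Poll get_status() until it returns target, or raise on timeout.
    Deleting the instance only after the snapshot is 'active' avoids
    racing the in-progress snapshot with the instance teardown."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == 'error':
            raise RuntimeError('snapshot failed')
        time.sleep(interval)
    raise TimeoutError('did not reach %r within %ss' % (target, timeout))
```

This is a workaround only; the underlying server-side race still deserves a fix.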

The nova-compute log indicates a race condition while executing the CLI
commands in step 2 above:

<182>1 2016-10-28T14:46:41.830208+00:00 hyper1 nova-compute 30056 - [40521 
levelname="INFO" component="nova-compute" funcname="nova.compute.manager" 
request_id="req-e9e4e899-e2a7-4bf8-bdf1-c26f5634cfda" 
user="51fa0172fbdf495e89132f7f4574e750" 
tenant="00ead348c5f9475f8940ab29cd767c5e" instance="[instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] " 
lineno="/usr/lib/python2.7/site-packages/nova/compute/manager.py:2249"] 
nova.compute.manager Terminating instance
<183>1 2016-10-28T14:46:42.057653+00:00 hyper1 nova-compute 30056 - [40521 
levelname="DEBUG" component="nova-compute" funcname="nova.compute.manager" 
request_id="req-1c4cf749-a6a8-46af-b331-f70dc1e9f364" 
user="51fa0172fbdf495e89132f7f4574e750" 
tenant="00ead348c5f9475f8940ab29cd767c5e" instance="[instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] " 
lineno="/usr/lib/python2.7/site-packages/nova/compute/manager.py:420"] 
nova.compute.manager Cleaning up image ae9ebf4b-7dd6-4615-816f-c2f3c7c08530 
decorated_function /usr/lib/python2.7/site-packages/nova/compute/manager.py:420
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3] Traceback (most recent call last):
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 416, in decorated_function
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     *args, **kwargs)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3038, in snapshot_instance
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     task_states.IMAGE_SNAPSHOT)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3068, in _snapshot_instance
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     update_task_state)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1447, in snapshot
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     guest.save_memory_state()
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 363, in save_memory_state
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     self._domain.managedSave(0)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     result = proxy_call(self._autowrap, f, *args, **kwargs)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     rv = execute(f, *args, **kwargs)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     six.reraise(c, e, tb)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     rv = meth(*args, **kwargs)
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1397, in managedSave
30056 TRACE nova.compute.manager [instance: bef22f9b-ade4-48a1-86c4-b9a007897eb3]     if ret == -1: raise

[Yahoo-eng-team] [Bug 1626205] Re: increase token validation performance relating to revoked tokens

2016-11-07 Thread Richard
https://review.openstack.org/#/c/382107/

** Changed in: keystone
 Assignee: (unassigned) => Richard (csravelar)

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1626205

Title:
  increase token validation performance relating to revoked tokens

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, there are two methods, is_revoke and matches, that
  iterate over all revoked events one by one and then further iterate
  over every field, one by one, until they either short-circuit by
  failing to match one value in the event against the passed-in token,
  or have matched all non-empty fields in the revocation event against
  the corresponding fields in the given token.

  In most cases, the token is not revoked and it will iterate over the
  entire list of revocations. As the list gets longer, validation
  becomes slower. You start to see big performance issues around 1500+
  revocation entries. It would be nice to directly query the database
  using sql instead of pulling all the revocation events down,
  deserializing them, and then iterating over each one in python.
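The suggested direction can be illustrated with plain SQL (a deliberately simplified, hypothetical schema; keystone's real revocation_event table and matching rules carry many more fields): push the field matching into a WHERE clause so the database returns at most one row instead of the whole event list.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE revocation_event '
             '(user_id TEXT, project_id TEXT, issued_before REAL)')
conn.execute("INSERT INTO revocation_event VALUES ('u1', 'p1', 200.0)")

def is_revoked(conn, token):
    # NULL columns act as wildcards: a column only constrains the match
    # when the revocation event recorded a value for it.
    row = conn.execute(
        'SELECT 1 FROM revocation_event WHERE '
        '(user_id IS NULL OR user_id = :user_id) AND '
        '(project_id IS NULL OR project_id = :project_id) AND '
        'issued_before >= :issued_at LIMIT 1', token).fetchone()
    return row is not None

print(is_revoked(conn, {'user_id': 'u1', 'project_id': 'p1',
                        'issued_at': 100.0}))  # True: token predates the event
```

With an index over the matched columns, the cost no longer grows linearly with the number of revocation events deserialized in Python.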

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1626205/+subscriptions



[Yahoo-eng-team] [Bug 1639894] [NEW] TestInstanceNotificationSample.test_volume_swap_server_with_error is racy

2016-11-07 Thread Matt Riedemann
Public bug reported:

This failed on an unrelated change today:

http://logs.openstack.org/24/394524/1/check/gate-nova-tox-db-functional-
ubuntu-xenial/01a5cce/console.html#_2016-11-07_17_27_14_569696

https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/tests/functional/notification_sample_tests/test_instance.py#L547

That says it expects 3 notifications, but it really only cares about 2.
Based on when the compute.exception happens, and when
self._wait_until_swap_volume_error() returns True, the 3rd
compute.exception notification might not have happened.

The swap_error flag is set in the cinder fixture here:

https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/tests/fixtures.py#L868

That happens here:

https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/compute/manager.py#L4936

Which is after the swap-volume error notification is sent.

The compute.exception comes from the instance fault handler here:

https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/compute/manager.py#L4961

Which is after cinder.swap_error is set to true.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: functional notifications testing volume

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639894

Title:
  TestInstanceNotificationSample.test_volume_swap_server_with_error is
  racy

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This failed on an unrelated change today:

  http://logs.openstack.org/24/394524/1/check/gate-nova-tox-db-
  functional-ubuntu-
  xenial/01a5cce/console.html#_2016-11-07_17_27_14_569696

  
https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/tests/functional/notification_sample_tests/test_instance.py#L547

  That says it expects 3 notifications, but it really only cares about
  2. Based on when the compute.exception happens, and when
  self._wait_until_swap_volume_error() returns True, the 3rd
  compute.exception notification might not have happened.

  The swap_error flag is set in the cinder fixture here:

  
https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/tests/fixtures.py#L868

  That happens here:

  
https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/compute/manager.py#L4936

  Which is after the swap-volume error notification is sent.

  The compute.exception comes from the instance fault handler here:

  
https://github.com/openstack/nova/blob/0132cc8c2663843a891e054d9185e6ba2fd589ad/nova/compute/manager.py#L4961

  Which is after cinder.swap_error is set to true.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1639894/+subscriptions



[Yahoo-eng-team] [Bug 1639879] [NEW] Deprecate and remove send_arp_for_ha option

2016-11-07 Thread Ihar Hrachyshka
Public bug reported:

It puzzles me why we would want to have it configurable. Having it = 0
is just plain bad (it breaks a floating IP roaming around HA routers),
having it = 1 may be unsafe if clients miss the update, having it more
than 3 (the default) is probably wasteful. That makes me think that
maybe we should not have it in the first place.

The patch that introduced the option also introduced the feature itself,
and does not provide any clue around why we would need it:
https://review.openstack.org/#/c/12037/

Maybe the option is in the tree because, in Assaf's words, "we're a
bunch of lazy developers that like to shift the responsibility to our
poor users that have to deal with thousands of configuration options".

I suggest we just move with deprecation and removal here.
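For context, the option controls how many gratuitous ARPs the L3 agent sends when a floating IP moves to a new router. The mechanism amounts to something like this (illustrative only; neutron drives arping through its own ip_lib wrappers, and the exact flags vary by arping implementation):

```python
import subprocess

def send_gratuitous_arp(device, ip_address, count=3):
    """Announce ip_address on device so peers refresh their ARP caches
    after a floating IP fails over to this router."""
    # iputils arping: -U = unsolicited (gratuitous) ARP, -c = packet count
    cmd = ['arping', '-U', '-I', device, '-c', str(count), ip_address]
    return subprocess.call(cmd)
```

With count hard-coded to a sane value like 3, there is nothing left for the option to configure.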

** Affects: neutron
 Importance: Low
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: deprecation l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: deprecation l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639879

Title:
  Deprecate and remove send_arp_for_ha option

Status in neutron:
  Confirmed

Bug description:
  It puzzles me why we would want to have it configurable. Having it = 0
  is just plain bad (it breaks a floating IP roaming around HA routers),
  having it = 1 may be unsafe if clients miss the update, having it more
  than 3 (the default) is probably wasteful. That makes me think that
  maybe we should not have it in the first place.

  The patch that introduced the option also introduced the feature
  itself, and does not provide any clue around why we would need it:
  https://review.openstack.org/#/c/12037/

  Maybe the option is in the tree because, in Assaf's words, "we're a
  bunch of lazy developers that like to shift the responsibility to our
  poor users that have to deal with thousands of configuration options".

  I suggest we just move with deprecation and removal here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639879/+subscriptions



[Yahoo-eng-team] [Bug 1639880] [NEW] Deprecate and remove send_arp_for_ha option

2016-11-07 Thread Ihar Hrachyshka
*** This bug is a duplicate of bug 1639879 ***
https://bugs.launchpad.net/bugs/1639879

Public bug reported:

It puzzles me why we would want to have it configurable. Having it = 0
is just plain bad (it breaks a floating IP roaming around HA routers),
having it = 1 may be unsafe if clients miss the update, having it more
than 3 (the default) is probably wasteful. That makes me think that
maybe we should not have it in the first place.

The patch that introduced the option also introduced the feature itself,
and does not provide any clue around why we would need it:
https://review.openstack.org/#/c/12037/

Maybe the option is in the tree because, in Assaf's words, "we're a
bunch of lazy developers that like to shift the responsibility to our
poor users that have to deal with thousands of configuration options".

I suggest we just move with deprecation and removal here.

** Affects: neutron
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1639879
   Deprecate and remove send_arp_for_ha option

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639880

Title:
  Deprecate and remove send_arp_for_ha option

Status in neutron:
  New

Bug description:
  It puzzles me why we would want to have it configurable. Having it = 0
  is just plain bad (it breaks a floating IP roaming around HA routers),
  having it = 1 may be unsafe if clients miss the update, having it more
  than 3 (the default) is probably wasteful. That makes me think that
  maybe we should not have it in the first place.

  The patch that introduced the option also introduced the feature
  itself, and does not provide any clue around why we would need it:
  https://review.openstack.org/#/c/12037/

  Maybe the option is in the tree because, in Assaf's words, "we're a
  bunch of lazy developers that like to shift the responsibility to our
  poor users that have to deal with thousands of configuration options".

  I suggest we just move with deprecation and removal here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639880/+subscriptions



[Yahoo-eng-team] [Bug 1639877] [NEW] unittests failing on stable/mitaka with run_tests.sh

2016-11-07 Thread Yves-Gwenael Bourhis
Public bug reported:

On stable/mitaka branch, unittests fail with run_tests.sh with the
following error: http://paste.openstack.org/show/588283/

However tests succeed when launched with tox.

** Affects: horizon
 Importance: High
 Status: Confirmed


** Tags: mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1639877

Title:
  unittests failing on stable/mitaka with run_tests.sh

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  On stable/mitaka branch, unittests fail with run_tests.sh with the
  following error: http://paste.openstack.org/show/588283/

  However tests succeed when launched with tox.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1639877/+subscriptions



[Yahoo-eng-team] [Bug 1639876] [NEW] OpenStack Dashboard test settings should not import local enabled files

2016-11-07 Thread Rob Cresswell
Public bug reported:

openstack_dashboard/test/settings.py is importing from local/enabled/,
which it shouldn't do. Plugins should be tested in their own
infrastructure.

** Affects: horizon
 Importance: High
 Assignee: Rob Cresswell (robcresswell)
 Status: New


** Tags: newton-backport-potential

** Changed in: horizon
Milestone: None => ocata-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1639876

Title:
  OpenStack Dashboard test settings should not import local enabled
  files

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  openstack_dashboard/test/settings.py is importing from local/enabled/,
  which it shouldn't do. Plugins should be tested in their own
  infrastructure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1639876/+subscriptions



[Yahoo-eng-team] [Bug 1627560] Re: The name of log on button flicker

2016-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/376064
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ce3a0fbdd94ffb954ca390c5e93fb1140af984e4
Submitter: Jenkins
Branch:master

commit ce3a0fbdd94ffb954ca390c5e93fb1140af984e4
Author: Kenji Ishii 
Date:   Mon Sep 26 10:13:38 2016 +0900

Fix the flicker of the log on button name

A log on button name in horizon depends on auth_type.
This patch will fix it so as to be displayed without a flicker
after evaluation is finished.

Change-Id: I174716651ea2f4ac894c7dd5d52e5f57fe8be06a
Closes-Bug: #1627560


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1627560

Title:
  The name of log on button flicker

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  A log on button name in horizon depends on auth_type.
  If auth_type is 'credentials', the button will be shown as 'Sign in';
  if not, it will be shown as 'Connect'.

  Currently, 'Sign in' is shown first, whatever auth_type a user has.
  Then, if auth_type is not 'credentials', the name is changed.

  It should only be displayed after evaluation is finished.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1627560/+subscriptions



[Yahoo-eng-team] [Bug 1625570] Re: fullstack : should add test of ensure traffic is using DSCP marks outbound

2016-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/390803
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ea7b51655f5504404cc7c416547608c24fbd45ab
Submitter: Jenkins
Branch:master

commit ea7b51655f5504404cc7c416547608c24fbd45ab
Author: Sławek Kapłoński 
Date:   Wed Oct 26 11:46:46 2016 +0200

Add fullstack test for check DSCP marks outbounds

New fullstack test is added to check if packets which
should be sent from port with DSCP mark set are really
sent to another port with this DSCP mark.

This test uses IPv4 only.

Change-Id: I4b26c3c644eb6f2f7813658c99d16fbc3cc61e06
Closes-Bug: #1625570


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625570

Title:
  fullstack : should add test of ensure traffic is using DSCP marks
  outbound

Status in neutron:
  Fix Released

Bug description:
  https://review.openstack.org/#/c/190285/57/specs/newton/ml2-qos-with-
  dscp.rst

  we should add in fullstack a test that ensure traffic is using DSCP
  marks outbound

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625570/+subscriptions



[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread Tuan
** Also affects: sahara
   Importance: Undecided
   Status: New

** Changed in: sahara
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in Sahara:
  Fix Released

Bug description:
  OpenStack common has a wrapper for generating UUIDs.

  For consistency, we should use only that function when generating
  UUIDs.
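  The wrapper in question lives in oslo_utils.uuidutils; its behaviour can be
  mirrored with the stdlib for illustration (a sketch of the semantics, not
  the oslo code itself):

```python
import uuid

def generate_uuid():
    """Mirror of oslo_utils.uuidutils.generate_uuid(): a canonical
    dashed-hex UUID4 string rather than a uuid.UUID object."""
    return str(uuid.uuid4())

def is_uuid_like(val):
    """Mirror of uuidutils.is_uuid_like(): True when val parses as a
    UUID and normalizes to the same hex digits."""
    try:
        return str(uuid.UUID(val)).replace('-', '') == \
            str(val).replace('-', '').lower()
    except (TypeError, ValueError, AttributeError):
        return False

print(is_uuid_like(generate_uuid()))  # True
```

  Projects then call uuidutils.generate_uuid() instead of str(uuid.uuid4()),
  and is_uuid_like() gives a single place to validate identifiers.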

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1574113] Re: curtin/maas don't support multiple (derived) archives/repositories with custom keys

2016-11-07 Thread Blake Rouse
** Also affects: maas/1.9
   Importance: Undecided
   Status: New

** Changed in: maas/1.9
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1574113

Title:
  curtin/maas don't support multiple (derived) archives/repositories
  with custom keys

Status in cloud-init:
  Fix Released
Status in curtin:
  Fix Committed
Status in MAAS:
  Fix Released
Status in MAAS 1.9 series:
  Won't Fix
Status in cloud-init package in Ubuntu:
  Fix Released
Status in curtin package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Committed
Status in curtin source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * Curtin doesn't support multiple derived archive/repositories with
 custom keys as typically deployed in an offline Landscape deployment.
 Adding the custom key resulted in an error when processing the
 apt_source configuration as provided in this setup.

 Curtin has been updated to support the updated apt-source model
 implemented in cloud-init as well.  Together the existing Landscape
 deployments for offline users can now supply an apt-source config
 that updates curtin to use the specified derived repository with a
 custom key.
 
  [Test Case]

   * Install proposed curtin package and deploy a system behind a
 Landscape Offline configuration with a derived repo.

PASS: Curtin will successfully accept the derived repo and install the
  system from the specified apt repository.

FAIL: Curtin will fail to install the OS with an error like:

W: GPG error: http://100.107.231.166 trusty InRelease:
The following signatures couldn't be verified because the public key
is not available: NO_PUBKEY 2C6F2731D2B38BD3
E: There are problems and -y was used without --force-yes

Unexpected error while running command.
Command: ['chroot', '/tmp/tmpcEfTLw/target', 'eatmydata', 'apt-get',
  '--quiet', '--assume-yes',
  '--option=Dpkg::options::=--force-unsafe-io',
  '--option=Dpkg::Options::=--force-confold', 'install',
  'lvm2', 'ifenslave']
Exit code: 100


  [Regression Potential]

   * Previous curtin 'apt_source' configurations may not continue to work
 without being re-formatted to the new apt_source model.

  
  [Original Description]

  In a customer environment I have to deploy using offline resources (no
  internet connection at all), so I created an apt mirror and a MAAS
  images mirror. I configured MAAS to use the local mirrors and I'm able
  to commission the nodes, but I'm not able to deploy them because there
  is no way to add the gpg key of the local repo in the target before
  the 'late' stage.

  Using curtin I'm able to add the key, but too late: according to
  http://bazaar.launchpad.net/~curtin-dev/curtin/trunk/view/head:/curtin/commands/install.py#L52,
  the "late" stage is executed after "curthooks", which prevents adding
  the key.

  I also checked the apt_config function in curthooks.py and didn't see
  code that adds the key for each mirror.

  It should be possible to add the gpg public key of the repository in
  MAAS.

  --
  configs/config-000.cfg
  --

  #cloud-config
  debconf_selections:
   maas: |
    cloud-init   cloud-init/datasources  multiselect MAAS
    cloud-init   cloud-init/maas-metadata-url  string 
http://100.107.231.164/MAAS/metadata/
    cloud-init   cloud-init/maas-metadata-credentials  string 
oauth_token_key=8eZmzQWSSQzsUkaLnE_token_secret=LKmn8sHgzEXfvzSZePAa9jUXvTMRrFNP_consumer_key=htwDZJFtmv2YvQXhUW
    cloud-init   cloud-init/local-cloud-config  string 
apt_preserve_sources_list: true\nmanage_etc_hosts: false\nmanual_cache_clean: 
true\nreporting:\n  maas: {consumer_key: htwDZJFtmv2YvQXhUW, endpoint: 
'http://100.107.231.164/MAAS/metadata/status/node-61b6987c-07a7-11e6-9d23-5254003d2515',\n
token_key: 8eZmzQWSSQzsUkaLnE, token_secret: 
LKmn8sHgzEXfvzSZePAa9jUXvTMRrFNP,\ntype: webhook}\nsystem_info:\n  
package_mirrors:\n  - arches: [i386, amd64]\nfailsafe: {primary: 
'http://archive.ubuntu.com/ubuntu', security: 
'http://security.ubuntu.com/ubuntu'}\nsearch:\n  primary: 
['http://100.107.231.166/']\n  security: ['http://100.107.231.166/']\n  - 
arches: [default]\nfailsafe: {primary: 
'http://ports.ubuntu.com/ubuntu-ports', security: 
'http://ports.ubuntu.com/ubuntu-ports'}\nsearch:\n  primary: 
['http://ports.ubuntu.com/ubuntu-ports']\n  security: 
['http://ports.ubuntu.com/ubuntu-ports']\n
  late_commands:
    maas: [wget, '--no-proxy', 
'http://100.107.231.164/MAAS/metadata/latest/by-id/node-61b6987c-07a7-11e6-9d23-5254003d2515/',
 '--post-data', 'op=netboot_off', '-O', '/dev/null']
    apt_key: ["curtin", "in-target", "--", "sh", "-c", "/usr/bin/wget 

[Yahoo-eng-team] [Bug 1637641] Re: fixed_ips missing from compute.update.end notifications

2016-11-07 Thread Balazs Gibizer
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1637641

Title:
  fixed_ips missing from compute.update.end notifications

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is part of a series of reports around discrepancies between the
  nova servers API response and notification payloads as affects
  Searchlight; see https://bugs.launchpad.net/nova/+bug/1637634 for
  background.

  An example API response to retrieve a server's information is at
  http://paste.openstack.org/show/xbv2CwtHnhhl1nLLiJeN/

  An example compute.create.end notification is at
  http://paste.openstack.org/show/zG5aJeUpC3LAGr0J0P2T/

  An example compute.update.end notification is at
  http://paste.openstack.org/show/uwh1izVsaW5eg7zDrgFm/

  fixed_ips is present in compute.create.end notifications and contains
  IP/MAC/network information related to neutron ports added during
  instance creation. This field is missing from compute.update.end
  notification payloads, and this causes us a problem because it's much
  better from Searchlight's perspective if notification payloads are
  consistent and complete representations of a resource's state.

  Searchlight currently has an optimization to detect whether
  compute.update.end notifications represent scheduler state changes
  like suspending, resuming, etc, and for those does a partial update.
  For other events it currently has to go to the nova API to get the
  current representation of the affected server.
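
  That optimization can be sketched roughly as follows; the payload fields,
  field set, and callables here are illustrative only, not actual
  Searchlight code.

```python
# Illustrative sketch (not Searchlight code): route a compute.update.end
# payload either to a cheap partial re-index or to a full refresh via the
# nova API, since the payload may lack fields such as fixed_ips.
STATE_CHANGE_FIELDS = {'state', 'state_description', 'progress'}

def handle_update_end(payload, partial_update, full_refresh):
    """partial_update and full_refresh are caller-supplied callables."""
    changed = {k: v for k, v in payload.items() if v is not None}
    if set(changed) - STATE_CHANGE_FIELDS <= {'instance_id'}:
        # Pure scheduler state change (suspend/resume/...): patch in place.
        partial_update(payload['instance_id'], changed)
    else:
        # Anything else: the payload is not a complete representation
        # (e.g. fixed_ips is missing), so fetch the server from nova.
        full_refresh(payload['instance_id'])
```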

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1637641/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-07 Thread Vitaly Gridnev
I think that we really don't need a bug for this. Just a minor
improvement.

** Changed in: sahara
   Status: New => Invalid

** No longer affects: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
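
  For reference, the wrapper behaves like this stdlib-only sketch (modelled
  on oslo_utils.uuidutils; the re-implementation below is for illustration,
  not the actual oslo code):

```python
import uuid

def generate_uuid(dashed=True):
    # Mirrors the oslo wrapper: always return a *string* UUID so callers
    # never mix uuid.UUID objects and plain strings.
    if dashed:
        return str(uuid.uuid4())
    return uuid.uuid4().hex

def is_uuid_like(val):
    # Loose check used throughout OpenStack: accept any string form
    # that the uuid module can parse.
    try:
        uuid.UUID(str(val))
        return True
    except (TypeError, ValueError, AttributeError):
        return False
```

  The consistency point is that every service then produces dashed string
  UUIDs, which is why the bug asks to replace direct uuid.uuid4() calls.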

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1082248/+subscriptions



[Yahoo-eng-team] [Bug 1630912] Re: stable/newton: unable to spin up a kvm instance

2016-11-07 Thread Sujai
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630912

Title:
  stable/newton: unable to spin up a kvm instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Happens randomly:

  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager 
[req-50d2dc6d-072e-423e-9de0-096ba307411f admin demo] [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] Instance failed to spawn
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] Traceback (most recent call last):
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2078, in _build_resources
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] yield resources
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1920, in _build_and_run_instance
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] block_device_info=block_device_info)
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2571, in spawn
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] admin_pass=admin_password)
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2960, in _create_image
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] image_id=disk_images['kernel_id'])
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 218, in cache
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] *args, **kwargs)
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 504, in create_image
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] prepare_template(target=base, *args, 
**kwargs)
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
264, in inner
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] do_log=False, semaphores=semaphores, 
delay=delay):
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] return self.gen.next()
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
216, in lock
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] ext_lock.acquire(delay=delay)
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/fasteners/process_lock.py", line 151, 
in acquire
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] self._do_open()
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/fasteners/process_lock.py", line 123, 
in _do_open
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] self.lockfile = open(self.path, 'a')
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] IOError: [Errno 13] Permission denied: 
'/opt/stack/data/nova/instances/locks/nova-273d5645757056cdd056de4cfe9f121b9eee6ae3'
  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630912/+subscriptions


[Yahoo-eng-team] [Bug 1562878] Re: L3 HA: Unable to complete operation on subnet

2016-11-07 Thread John Schwarz
I found the bug, and it's in rally. Patch
Ieab53624dc34dc687a0e8eebd84778f7fc95dd77 added a new type of router
interface value for "device_owner", called
"network:ha_router_replicated_interface". However, rally was not made
aware of it so it thinks this interface is a normal port, trying to
delete it with a normal 'neutron port-delete' (and not 'neutron router-
interface-remove').

I'll adjust the bug report and will submit a fix for rally.
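
The fix amounts to teaching the cleanup code which device_owner values mark
router interfaces; a minimal sketch, assuming a python-neutronclient-style
client (the constant set and function name are illustrative, not the actual
rally patch):

```python
# Ports owned by a router must be detached with router-interface-remove;
# deleting them directly fails or leaves the subnet "in use".
ROUTER_INTERFACE_OWNERS = {
    'network:router_interface',
    'network:ha_router_replicated_interface',  # value added by the patch above
}

def cleanup_port(client, port):
    if port['device_owner'] in ROUTER_INTERFACE_OWNERS:
        # device_id of a router interface port is the owning router's id.
        client.remove_interface_router(port['device_id'],
                                       {'port_id': port['id']})
    else:
        client.delete_port(port['id'])
```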

** Also affects: rally
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: rally
 Assignee: (unassigned) => John Schwarz (jschwarz)

** Changed in: rally
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562878

Title:
  L3 HA: Unable to complete operation on subnet

Status in neutron:
  Invalid
Status in Rally:
  In Progress

Bug description:
  Environment: 3 controllers, 46 computes, Liberty, L3 HA. During several
  executions of NeutronNetworks.create_and_delete_routers the test failed with
  "Unable to complete operation on subnet . One or more ports have an IP
  allocation from this subnet." Trace in neutron-server logs:
  http://paste.openstack.org/show/491557/
  Rally report attached.

  Current problem is with HA subnet. The side effect of this problem is
  bug  https://bugs.launchpad.net/neutron/+bug/1562892

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562878/+subscriptions



[Yahoo-eng-team] [Bug 1459065] Re: Unable to update the user - unable to retrieve user list

2016-11-07 Thread Kuldeep Khandelwal
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459065

Title:
  Unable to update the user - unable to retrieve user list

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The steps to reproduce the bug:
   1/ Log in to OpenStack with user name: admin
   2/ Go to Identity -> Users -> Edit --> to update the "admin" user
   3/ Choose the primary project for the admin user as admin --> update user
successful
   4/ Go to Edit of the admin user again and choose the primary project as
demo --> update user --> the following errors appear:
  Error: Unable to update the user.
  Error: Unauthorized: Unable to retrieve user list.

  But after signing out of OpenStack and signing in again as the admin user,
  the user list is updated normally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459065/+subscriptions



[Yahoo-eng-team] [Bug 1638662] Re: "openstack_dashboard.api.keystone: Unable to retrieve Domain: default" incessant warning logging when switching Projects while being on the Identity>Project panel

2016-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/392944
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=55baf9254d78de2c6e409156e6649875cb7797e3
Submitter: Jenkins
Branch: master

commit 55baf9254d78de2c6e409156e6649875cb7797e3
Author: Kam Nasim 
Date:   Wed Nov 2 19:34:40 2016 +

"Unable to retrieve Domain" incessant warning logs

"openstack_dashboard.api.keystone: Unable to retrieve Domain: default"
incessant warning logging when switching Projects while being on the
Identity>Project panel.

Retrieving domain information is a Keystone admin URL operation. As a
pre-check, such operations would be Forbidden if the logon user does not
have an 'admin' role on the current project.

Since this is a common occurrence, and can cause incessant warning
logging in the horizon logs, we recognize this condition and return the
user's domain information instead.

Signed-off-by: Kam Nasim 

Closes-Bug: #1638662
Change-Id: Iadd5184a16a73da1da5a7230c89e996248f1eba7


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1638662

Title:
  "openstack_dashboard.api.keystone: Unable to retrieve Domain: default"
  incessant warning logging when switching Projects while being on the
  Identity>Project panel

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
   REPRODUCTION 
  One scenario in horizon where we get the "Unable to retrieve Domain:
default" incessant logging:
  1. Log in to horizon as admin
  2. Select the Identity - Projects panel and switch Projects, then Cancel
  3. Remain on the Identity - Projects panel

  Result: the horizon log shows the following warnings, logged at a cadence
of 6 seconds...

  
  2016-08-17 20:23:06,128 [INFO] openstack_auth.views: Project switch 
successful for user "admin" "128.224.141.74". 
  2016-08-17 20:23:06,156 [INFO] openstack_auth.views: Deleted token 
db95cc356ca54ea5b3a7bd39a6ec6806 
  2016-08-17 20:23:06,416 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:11,917 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:17,153 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:22,430 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:27,670 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:32,993 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 
  2016-08-17 20:23:38,248 [WARNING] openstack_dashboard.api.keystone: Unable to 
retrieve Domain: default 

  
   ANALYSIS 
  Further investigation reveals that the horizon log error (unable to
retrieve domain) when switching Projects occurs because the admin user does
NOT have an admin role on the new project (tenant1):

  {'username': u'admin', 'token': , 'project_name': u'tenant1', 'user_id':
  u'c118176de885401c97314e0d6da8e786', 'roles': [u'_member_'],
  'is_admin': False, 'project_id': u'fe71d23184764a25b10d367fd4ed18a1',
  'domain_id': u'default'}

  In Identity V3, all Keystone operations can be done over the
  internalURL with the exception of domain specific operations, which
  still go over the adminURL. Therefore Horizon calls Keystone's RBAC
  policy to ensure that this logged in user has the "admin" role on this
  project, and if so then use the adminURL. This is not true and
  therefore we get that incessant log error. When I disable RBAC policy
  enforcement at Horizon, and Horizon makes the call out to Keystone
  server, for domain information, it does so using the internalURL which
  Keystone server rejects.

  Therefore the Horizon code that re-renders the Identity > Project
  panel needs to account for this scenario, i.e. "if the Horizon
  session does NOT have a domain context, and if the logged in user does
  NOT have an admin role on the current project, then DO NOT attempt to
  get the domain from Keystone, but instead use the logged in user's
  domain and assume it to be the same as the project domain"

  A new debug log will be added to indicate this scenario: 
  2016-10-31 21:31:20,267 [DEBUG] openstack_dashboard.api.keystone: Cannot 
retrieve domain information for user (admin) that does not have an admin role 
on project (tenant2)
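
  The fallback described above can be sketched as follows (the names and the
  role check are illustrative; the real change lives in
  openstack_dashboard.api.keystone):

```python
from collections import namedtuple

# Minimal stand-in for the logged-in user object.
User = namedtuple('User', 'domain_id roles')

def domain_for_display(user, domain_context, fetch_domain_from_keystone):
    """Pick the domain to show on the Identity > Projects panel.

    fetch_domain_from_keystone is a callable that goes over the Keystone
    adminURL and would be Forbidden (logging a warning) for users without
    an admin role on the current project.
    """
    if domain_context is not None:
        return domain_context
    if 'admin' not in user.roles:
        # No admin role on the current project: assume the user's own
        # domain matches the project domain instead of calling Keystone.
        return user.domain_id
    return fetch_domain_from_keystone()
```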

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1638662/+subscriptions
