[Yahoo-eng-team] [Bug 1746144] [NEW] Next steps in Horizon, spelling mistake "derails"

2018-01-29 Thread zhuxuanpeng
Public bug reported:


- [X] This doc is inaccurate in this way: Spelling mistake: "Set up session 
storage. For derails, see..."

---
Release: 13.0.0.0b4.dev25 on 2018-01-29 19:11
SHA: ee72d89bdae7e09da618707f1a5f67a131e57f72
Source: 
https://git.openstack.org/cgit/openstack/horizon/tree/doc/source/install/next-steps.rst
URL: https://docs.openstack.org/horizon/latest/install/next-steps.html

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1746144

Title:
  Next steps in Horizon, spelling mistake "derails"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  - [X] This doc is inaccurate in this way: Spelling mistake: "Set up session 
storage. For derails, see..."

  ---
  Release: 13.0.0.0b4.dev25 on 2018-01-29 19:11
  SHA: ee72d89bdae7e09da618707f1a5f67a131e57f72
  Source: 
https://git.openstack.org/cgit/openstack/horizon/tree/doc/source/install/next-steps.rst
  URL: https://docs.openstack.org/horizon/latest/install/next-steps.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1746144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729621] Re: Inconsistent value for vcpu_used

2018-01-29 Thread Matt Riedemann
There are more details in duplicate bug 1739349.

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1729621

Title:
  Inconsistent value for vcpu_used

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  Description
  ===========

  Nova updates hypervisor resources using a function called
  ./nova/compute/resource_tracker.py:update_available_resource().

  In the case of *shut down* instances this can produce inconsistent
  values for resources like vcpu_used.

  Resources are taken from the function self.driver.get_available_resource():
  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/compute/resource_tracker.py#L617
  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/virt/libvirt/driver.py#L5766

  This function calculates allocated vCPUs based on _get_vcpu_total().
  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/virt/libvirt/driver.py#L5352

  As we can see, _get_vcpu_total() calls *self._host.list_guests()*
  without the "only_running=False" parameter, so it does not take
  shut-down instances into account.
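The counting difference can be sketched with stand-in classes (illustrative only, not nova's actual code; the names mirror the libvirt driver):

```python
class Guest:
    def __init__(self, vcpus, running):
        self.vcpus = vcpus
        self.running = running

class Host:
    """Stand-in for nova's libvirt Host wrapper."""
    def __init__(self, guests):
        self._guests = guests

    def list_guests(self, only_running=True):
        # Like the real API, only running domains are returned by default,
        # which is the root of the inconsistency described above.
        if only_running:
            return [g for g in self._guests if g.running]
        return list(self._guests)

def get_vcpu_used(host, include_stopped=False):
    guests = host.list_guests(only_running=not include_stopped)
    return sum(g.vcpus for g in guests)

host = Host([Guest(4, True), Guest(2, False), Guest(2, False)])
print(get_vcpu_used(host))                        # 4: stopped guests ignored
print(get_vcpu_used(host, include_stopped=True))  # 8: what the DB later converges to
```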

  At the end of the resource update process the function _update_available_resource() is being called:
  > /opt/stack/nova/nova/compute/resource_tracker.py(733)

   677 @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
   678 def _update_available_resource(self, context, resources):
   679
   681 # initialize the compute node object, creating it
   682 # if it does not already exist.
   683 self._init_compute_node(context, resources)

  It initializes the compute node object with resources that were
  calculated without shut-down instances. If the compute node object
  already exists it *UPDATES* its fields - *so for a while nova-api
  reports different resource values than the real ones.*

   731 # update the compute_node
   732 self._update(context, cn)

  The inconsistency is automatically corrected later in the update process:
  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/compute/resource_tracker.py#L709

  But for heavily loaded hypervisors (e.g. 100 active instances and 30
  shut-down instances) it leaves wrong information in the nova database
  for about 4-5 seconds (in my use case) - which can trigger other
  issues, such as scheduling onto an already full hypervisor (because
  the scheduler has wrong information about hypervisor usage).

  Steps to reproduce
  ==================

  1) Start devstack
  2) Create 120 instances
  3) Stop some instances
  4) Watch blinking values in nova hypervisor-show
  nova hypervisor-show e6dfc16b-7914-48fb-a235-6fe3a41bb6db

  Expected result
  ===============
  Returned values should be the same throughout the test.

  Actual result
  =============
  while true; do echo -n "$(date) "; echo "select hypervisor_hostname, vcpus_used from compute_nodes where hypervisor_hostname='example.compute.node.com';" | mysql nova_cell1; sleep 0.3; done

  Thu Nov  2 14:50:09 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:10 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:11 UTC 2017 example.compute.node.com  120
  Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:12 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:13 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:14 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:15 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:16 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  117
  Thu Nov  2 14:50:17 UTC 2017 example.compute.node.com  117
  Thu Nov  2 

[Yahoo-eng-team] [Bug 1713420] Re: event: subnet.create.* lacks detailed information

2018-01-29 Thread gordon chung
Closing, as I assume nothing was done on the neutron end. If there was,
this is a trivial addition to event_definitions.

** Changed in: ceilometer
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1713420

Title:
  event: subnet.create.* lacks detailed information

Status in Ceilometer:
  Invalid
Status in neutron:
  New

Bug description:
  This is actually not a bug report but more of a feature request.

  I have found out that subnet.create.* events have only a few traits recorded:
  project_id, user_id, request_id, resource_id, service.

  Information such as allocation_pools, cidr, dns_nameservers, etc., is not
  recorded, but it is quite essential in my opinion.

  I have done a little research and found that neutron doesn't store these
  traits either when it generates notifications. It might be that I have
  missed something; I would be glad if you could tell me so.

  I'm wondering if anyone else thinks that this is quite an important
  feature to have. And I'm guessing there is not much work to be done if we
  decide to implement this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1713420/+subscriptions



[Yahoo-eng-team] [Bug 1737708] Re: create instance failed when the userdata size is larger than 64k

2018-01-29 Thread Matt Riedemann
** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737708

Title:
  create instance failed when the userdata size is larger than 64k

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When I create a VM with user data larger than 64 KiB, I get a log
  message like the one below:

  
  INFO nova.api.openstack.wsgi [req-0e6ac583-e663-4041-a66b-3bff219eb8e1 1811a3f9748a44188af880ee02a2f9e0 0e511185e2cc43deb3c04c13107e8eac - - -] HTTP exception thrown: User data too large. User data must be no larger than 65535 bytes once base64 encoded. Your data is 69504 bytes

  
  I think we should enlarge the allowed size of the user data.
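  For reference, the 65535-byte limit applies to the base64-encoded form,
  which inflates the raw data by 4/3. A small sketch (the helper name is
  made up, not nova's actual code):

```python
import base64

MAX_USERDATA_BYTES = 65535  # the API limit quoted in the log above

def check_userdata(raw):
    # Hypothetical helper mirroring the API-side check.
    encoded = base64.b64encode(raw)
    if len(encoded) > MAX_USERDATA_BYTES:
        raise ValueError(
            "User data must be no larger than %d bytes once base64 "
            "encoded. Your data is %d bytes"
            % (MAX_USERDATA_BYTES, len(encoded)))
    return encoded

# base64 expands 3 raw bytes into 4, so ~48 KiB of raw data is the
# practical ceiling; 52128 raw bytes encode to the 69504 seen in the log:
print(len(base64.b64encode(b"x" * 52128)))  # 69504
```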

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737708/+subscriptions



[Yahoo-eng-team] [Bug 1746082] [NEW] Hard to navigate from trunk panel to the parent or subports details in network panel

2018-01-29 Thread Lajos Katona
Public bug reported:

On the trunk panel only the uuid of the parent port is visible, and on the trunk 
details page only the uuids of the subports are shown. To find out the details 
of these ports the user has to navigate to the networks panel.
A more user-friendly approach would be to provide a direct link from the trunk 
panel to the port details page.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1746082

Title:
  Hard to navigate from trunk panel to the parent or subports details in
  network panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the trunk panel only the uuid of the parent port is visible, and on the trunk 
details page only the uuids of the subports are shown. To find out the details 
of these ports the user has to navigate to the networks panel.
  A more user-friendly approach would be to provide a direct link from the trunk 
panel to the port details page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1746082/+subscriptions



[Yahoo-eng-team] [Bug 1412483] Re: Horizon UI behavior when browser cookies disabled

2018-01-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/534581
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=c9a143fab4127bc65927285c88bf496e0da37a24
Submitter: Zuul
Branch: master

commit c9a143fab4127bc65927285c88bf496e0da37a24
Author: Ola Khalifa 
Date:   Wed Jan 17 14:11:59 2018 +1300

Horizon UI message when browser cookies disabled

Used Django's CSRF_FAILURE_VIEW setting to create
a view indicating the reason the request was rejected.
This information is passed on to the login page so it
can render the error.

Change-Id: I61c7195c9bafb269816fde12b058e19ebc69953c
Closes-Bug: #1412483


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412483

Title:
  Horizon UI behavior when browser cookies disabled

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:

  1) Disable cookies in browser settings.

  2) Navigate to Horizon.

  3) See login page.

  4) Fill username and password fields, then click "Sign In" button

  5)
  Actual: Forbidden (403) CSRF verification failed. Request aborted. (see 
screenshot)

  Expected: to see a clear message that cookies must be enabled for
  logging in.

  Usually good services provide a human-friendly warning for the user. Examples:
  Google: "Oops! Your browser seems to have cookies disabled. Make sure cookies 
are enabled or try opening a new browser window."
  Microsoft: "Cookies must be allowed. Your browser is currently set to block 
cookies. Your browser must allow cookies before you can use a Microsoft 
account."

  Environment: 
  {"build_id": "2014-12-26_14-25-46", "ostf_sha": 
"a9afb68710d809570460c29d6c3293219d3624d4", "build_number": "58", 
"auth_required": true, "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "release_versions": {"2014.2-6.0": {"VERSION": {"build_id": 
"2014-12-26_14-25-46", "ostf_sha": "a9afb68710d809570460c29d6c3293219d3624d4", 
"build_number": "58", "api": "1.0", "nailgun_sha": 
"5f91157daa6798ff522ca9f6d34e7e135f150a90", "production": "docker", 
"fuelmain_sha": "81d38d6f2903b5a8b4bee79ca45a54b76c1361b8", "astute_sha": 
"16b252d93be6aaa73030b8100cf8c5ca6a970a91", "feature_groups": ["mirantis"], 
"release": "6.0", "fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}}}, 
"fuellib_sha": "fde8ba5e11a1acaf819d402c645c731af450aff0"}

  Browsers: 
  Chrome: Version 39.0.2171.99 (64-bit) on Ubuntu 14.04
  Firefox: 35.0  on Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1412483/+subscriptions



[Yahoo-eng-team] [Bug 1746075] [NEW] Report client placement cache consistency is broken

2018-01-29 Thread Eric Fried
Public bug reported:

Today the report client makes assumptions about how resource provider
generation is calculated by the placement service.  Specifically, that
it starts at zero [1], and that it increases by 1 when the provider's
inventory is deleted [2].

While these assumptions happen to be true today [3], they are not a
documented part of the placement API.  Which either means we need to
document this behavior; or clients should not be relying on it.

[1] 
https://github.com/openstack/nova/blob/b214dfc41928d9e05199263301f8e5b23555c170/nova/scheduler/client/report.py#L552
[2] 
https://github.com/openstack/nova/blob/b214dfc41928d9e05199263301f8e5b23555c170/nova/scheduler/client/report.py#L927
[3] The latter more broadly stated as "increases by 1 when anything about the 
provider changes" - except we have a known hole for aggregates (see 
https://blueprints.launchpad.net/nova/+spec/placement-aggregate-generation)
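A minimal illustration of the two assumptions (this is a stand-in class, not 
the report client itself; the placement call is elided):

```python
class CachedProvider:
    """Stand-in for the report client's cached resource provider view."""

    def __init__(self, uuid):
        self.uuid = uuid
        self.generation = 0  # assumption [1]: a new provider starts at zero

    def delete_inventory(self):
        # ...a PUT of an empty inventory to placement would happen here...
        self.generation += 1  # assumption [2]: exactly +1 per change

# If the placement service ever changes how generations advance, this
# cached value silently diverges from the server's, and later writes
# fail with generation conflicts.
p = CachedProvider("a-provider-uuid")
p.delete_inventory()
print(p.generation)  # 1 under the client's assumption
```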

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746075

Title:
  Report client placement cache consistency is broken

Status in OpenStack Compute (nova):
  New

Bug description:
  Today the report client makes assumptions about how resource provider
  generation is calculated by the placement service.  Specifically, that
  it starts at zero [1], and that it increases by 1 when the provider's
  inventory is deleted [2].

  While these assumptions happen to be true today [3], they are not a
  documented part of the placement API.  Which either means we need to
  document this behavior; or clients should not be relying on it.

  [1] 
https://github.com/openstack/nova/blob/b214dfc41928d9e05199263301f8e5b23555c170/nova/scheduler/client/report.py#L552
  [2] 
https://github.com/openstack/nova/blob/b214dfc41928d9e05199263301f8e5b23555c170/nova/scheduler/client/report.py#L927
  [3] The latter more broadly stated as "increases by 1 when anything about the 
provider changes" - except we have a known hole for aggregates (see 
https://blueprints.launchpad.net/nova/+spec/placement-aggregate-generation)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746075/+subscriptions



[Yahoo-eng-team] [Bug 1746032] Re: By rebuilding twice with the same "forbidden" image one can circumvent scheduler rebuild restrictions

2018-01-29 Thread Matt Riedemann
This will also be an issue in newton, but we're waiting to end-of-life
newton, so we won't fix this upstream there.

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/pike
   Importance: Undecided => High

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746032

Title:
  By rebuilding twice with the same "forbidden" image one can circumvent
  scheduler rebuild restrictions

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Won't Fix
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  Description
  ===========

  Since CVE-2017-16239, we call to the scheduler when doing a rebuild
  with a new image. If the scheduler refuses a rebuild because a filter
  forbids the new image on the instance's host (for example,
  IsolatedHostsFilter), at first there was no indication of this in the
  API (bug 1744325). Currently, with the fix for bug 1744325 merged [1],
  the instance goes to ERROR to indicate the refused rebuild. However,
  by rebuilding again with the same "forbidden" image it is possible to
  circumvent scheduler restrictions.

  Steps to reproduce
  ==================

  1. Configure IsolatedHostsFilter:

 [filter_scheduler]
 enabled_filters = [...],IsolatedHostsFilter
 isolated_images = 41d3e5ca-14cf-436c-9413-4826b5c8bdb1
 isolated_hosts = ubuntu
 restrict_isolated_hosts_to_isolated_images = true

  2. Have two images, one isolated and one not:

 $ openstack image list

   8d0581a5-ed9d-4b98-a766-a41efbc99929 | centos | active
   41d3e5ca-14cf-436c-9413-4826b5c8bdb1 | cirros-0.3.5-x86_64-disk | active

   cirros is the isolated one

  3. Have only one hypervisor (the isolated one):

 $ openstack hypervisor list

   ubuntu | QEMU | 192.168.100.194 | up

  5. Boot a cirros (isolated) image:

 $ openstack server create \
   --image 41d3e5ca-14cf-436c-9413-4826b5c8bdb1 \
   --flavor m1.nano \
   cirros-test-expect-success

 $ openstack server list

   cirros-test-expect-success | ACTIVE | [...] |
  cirros-0.3.5-x86_64-disk | m1.nano

  6. Rebuild the cirros instance with centos (this should be refused by
  the scheduler):

 $ nova --debug rebuild cirros-test-expect-success centos

   DEBUG (session:722) POST call to compute for
   
http://192.168.100.194/compute/v2.1/servers/d9d98bf7-623e-4587-b82c-06f36abf59cb/action
   used request id req-c234346a-6e05-47cf-a0cd-45f89d11e15d

  8. Observe the instance going to ERROR,
 but still showing the new centos image :

 $ nova show cirros-test-expect-success

   [...]
   status | ERROR
   image  | centos (8d0581a5-ed9d-4b98-a766-a41efbc99929)
   [...]

  9. Rebuild again with the same centos image:

 $ nova rebuild cirros-test-expect-success centos

  10. The rebuild goes through.

  
  Expected result
  ===============

  At step 10, the rebuild should still be refused.

  Actual result
  =============

  The rebuild is allowed.

  Environment
  ===========

  1. Exact version of OpenStack you are running. See the following

 Was reported in Red Hat OpenStack 12, affects newton through
  master.

  2. Which hypervisor did you use?

 libvirt+kvm

  [1] https://review.openstack.org/#/c/536268/
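  The control flow the report implies can be sketched as follows (hedged
  pseudologic, not nova's actual code: the key point is that the scheduler
  check is keyed on the image changing, and the failed rebuild has already
  recorded the forbidden image on the instance):

```python
def rebuild(instance, requested_image, scheduler_allows):
    # Re-run the scheduler only when the requested image differs from
    # the one already recorded on the instance.
    if requested_image != instance["image_ref"]:
        instance["image_ref"] = requested_image  # recorded even on failure
        if not scheduler_allows(requested_image):
            instance["status"] = "ERROR"
            return "refused"
    instance["status"] = "ACTIVE"
    return "rebuilt"

inst = {"image_ref": "cirros", "status": "ACTIVE"}
deny_centos = lambda image: image != "centos"

print(rebuild(inst, "centos", deny_centos))  # refused (instance now records centos)
print(rebuild(inst, "centos", deny_centos))  # rebuilt (scheduler check skipped)
```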

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746032/+subscriptions



[Yahoo-eng-team] [Bug 1746032] Re: By rebuilding twice with the same "forbidden" image one can circumvent scheduler rebuild restrictions

2018-01-29 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746032

Title:
  By rebuilding twice with the same "forbidden" image one can circumvent
  scheduler rebuild restrictions

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  Description
  ===

  Since CVE-2017-16239, we call to the scheduler when doing a rebuild
  with a new image. If the scheduler refuses a rebuild because a filter
  forbids the new image on the instance's host (for example,
  IsolatedHostsFilter), at first there was no indication of this in the
  API (bug 1744325). Currently, with the fix for bug 1744325 merged [1],
  the instance goes to ERROR to indicate the refused rebuild. However,
  by rebuilding again with the same "forbidden" image it is possible to
  circumvent scheduler restrictions.

  Steps to reproduce
  ==================

  1. Configure IsolatedHostsFilter:

 [filter_scheduler]
 enabled_filters = [...],IsolatedHostsFilter
 isolated_images = 41d3e5ca-14cf-436c-9413-4826b5c8bdb1
 isolated_hosts = ubuntu
 restrict_isolated_hosts_to_isolated_images = true

  2. Have two images, one isolated and one not:

 $ openstack image list

   8d0581a5-ed9d-4b98-a766-a41efbc99929 | centos | active
   41d3e5ca-14cf-436c-9413-4826b5c8bdb1 | cirros-0.3.5-x86_64-disk | active

   cirros is the isolated one

  3. Have only one hypervisor (the isolated one):

 $ openstack hypervisor list

   ubuntu | QEMU | 192.168.100.194 | up

  5. Boot a cirros (isolated) image:

 $ openstack server create \
   --image 41d3e5ca-14cf-436c-9413-4826b5c8bdb1 \
   --flavor m1.nano \
   cirros-test-expect-success

 $ openstack server list

   cirros-test-expect-success | ACTIVE | [...] |
  cirros-0.3.5-x86_64-disk | m1.nano

  6. Rebuild the cirros instance with centos (this should be refused by
  the scheduler):

 $ nova --debug rebuild cirros-test-expect-success centos

   DEBUG (session:722) POST call to compute for
   
http://192.168.100.194/compute/v2.1/servers/d9d98bf7-623e-4587-b82c-06f36abf59cb/action
   used request id req-c234346a-6e05-47cf-a0cd-45f89d11e15d

  8. Observe the instance going to ERROR,
 but still showing the new centos image :

 $ nova show cirros-test-expect-success

   [...]
   status | ERROR
   image  | centos (8d0581a5-ed9d-4b98-a766-a41efbc99929)
   [...]

  9. Rebuild again with the same centos image:

 $ nova rebuild cirros-test-expect-success centos

  10. The rebuild goes through.

  
  Expected result
  ===============

  At step 10, the rebuild should still be refused.

  Actual result
  =============

  The rebuild is allowed.

  Environment
  ===========

  1. Exact version of OpenStack you are running. See the following

 Was reported in Red Hat OpenStack 12, affects newton through
  master.

  2. Which hypervisor did you use?

 libvirt+kvm

  [1] https://review.openstack.org/#/c/536268/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746032/+subscriptions



[Yahoo-eng-team] [Bug 1746029] Re: Unclear libvirt error message when attaching multiattach volume

2018-01-29 Thread Matt Riedemann
There could be any number of reasons really why the instance can't do
multiattach. I don't think exposing the underlying versions of the
libvirt/qemu packages on the system is worthwhile, nor does it help the
user. To the user, it just means, you can't do this thing.

If the operator is getting flooded with issues about failed multiattach
attempts because their computes aren't setup to handle it yet, they
should disable the ability to do multiattach in cinder, via policy, or
setup host aggregates and special flavors for the ones that can.

I'm going to mark this as invalid as I don't think it's something we
need to fix.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746029

Title:
  Unclear libvirt error message when attaching multiattach volume

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  There is a known problem using certain versions of qemu and libvirt
  with multiattach. Nova will not let you attach a multiattach enabled
  volume if the version of libvirt is < 3.10 and qemu is >= 2.10. This bug
  is only about the user error message that is displayed when you hit
  this error.

  Ubuntu 16.04:

  $ virsh --version
  3.6.0

  $ kvm --version
  QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.1~cloud0)

  $ nova volume-attach vm1 a7de1d48-fe81-402b-a8ed-656e27ff0d08

  ERROR (Conflict): Volume a7de1d48-fe81-402b-a8ed-656e27ff0d08 has
  'multiattach' set, which is not supported for this instance. (HTTP
  409) (Request-ID: req-f34d2c90-559f-4646-903c-60661b082c85)

  This CLI message does not give the user any indication that there is a
  version issue with libvirt/qemu.

  The n-cpu log has a clearer debug message:

  ... DEBUG nova.virt.libvirt.driver ... Volume multiattach is not
  supported based on current versions of QEMU and libvirt. QEMU must be
  less than 2.10 or libvirt must be greater than or equal to 3.10.

  
  It would be better if the message returned by the CLI command were more
  specific about the reason why the attach failed.
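  The version gate described in the debug message above can be expressed
  as follows (the tuples and names are illustrative; nova's real constants
  may differ):

```python
MIN_LIBVIRT_MULTIATTACH = (3, 10, 0)   # libvirt >= 3.10 has the fix
QEMU_WITH_REGRESSION = (2, 10, 0)      # QEMU >= 2.10 needs that fix

def multiattach_supported(libvirt_version, qemu_version):
    # Supported when QEMU predates 2.10, or libvirt is new enough.
    return (qemu_version < QEMU_WITH_REGRESSION
            or libvirt_version >= MIN_LIBVIRT_MULTIATTACH)

# The environment from this report: libvirt 3.6.0 with QEMU 2.10.1.
print(multiattach_supported((3, 6, 0), (2, 10, 1)))   # False
print(multiattach_supported((3, 10, 0), (2, 10, 1)))  # True
```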

  nova$ git show
  commit 87ea686f9f2cc706205d188922bb14272625e7be
  Merge: dc63965 8ec0b43
  Author: Zuul 
  Date:   Wed Jan 24 13:53:24 2018 +

  Merge "Transform instance.resize_confirm notification"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746029/+subscriptions



[Yahoo-eng-team] [Bug 1746032] [NEW] By rebuilding twice with the same "forbidden" image one can circumvent scheduler rebuild restrictions

2018-01-29 Thread Artom Lifshitz
Public bug reported:

Description
===========

Since CVE-2017-16239, we call to the scheduler when doing a rebuild with
a new image. If the scheduler refuses a rebuild because a filter forbids
the new image on the instance's host (for example, IsolatedHostsFilter),
at first there was no indication of this in the API (bug 1744325).
Currently, with the fix for bug 1744325 merged [1], the instance goes to
ERROR to indicate the refused rebuild. However, by rebuilding again with
the same "forbidden" image it is possible to circumvent scheduler
restrictions.

Steps to reproduce
==================

1. Configure IsolatedHostsFilter:

   [filter_scheduler]
   enabled_filters = [...],IsolatedHostsFilter
   isolated_images = 41d3e5ca-14cf-436c-9413-4826b5c8bdb1
   isolated_hosts = ubuntu
   restrict_isolated_hosts_to_isolated_images = true

2. Have two images, one isolated and one not:

   $ openstack image list

 8d0581a5-ed9d-4b98-a766-a41efbc99929 | centos | active
 41d3e5ca-14cf-436c-9413-4826b5c8bdb1 | cirros-0.3.5-x86_64-disk | active

 cirros is the isolated one

3. Have only one hypervisor (the isolated one):

   $ openstack hypervisor list

 ubuntu | QEMU | 192.168.100.194 | up

5. Boot a cirros (isolated) image:

   $ openstack server create \
 --image 41d3e5ca-14cf-436c-9413-4826b5c8bdb1 \
 --flavor m1.nano \
 cirros-test-expect-success

   $ openstack server list

 cirros-test-expect-success | ACTIVE | [...] |
cirros-0.3.5-x86_64-disk | m1.nano

6. Rebuild the cirros instance with centos (this should be refused by
the scheduler):

   $ nova --debug rebuild cirros-test-expect-success centos

 DEBUG (session:722) POST call to compute for
 
http://192.168.100.194/compute/v2.1/servers/d9d98bf7-623e-4587-b82c-06f36abf59cb/action
 used request id req-c234346a-6e05-47cf-a0cd-45f89d11e15d

8. Observe the instance going to ERROR,
   but still showing the new centos image :

   $ nova show cirros-test-expect-success

 [...]
 status | ERROR
 image  | centos (8d0581a5-ed9d-4b98-a766-a41efbc99929)
 [...]

9. Rebuild again with the same centos image:

   $ nova rebuild cirros-test-expect-success centos

10. The rebuild goes through.


Expected result
===============

At step 10, the rebuild should still be refused.

Actual result
=============

The rebuild is allowed.

Environment
===========

1. Exact version of OpenStack you are running. See the following

   Was reported in Red Hat OpenStack 12, affects newton through master.

2. Which hypervisor did you use?

   libvirt+kvm

[1] https://review.openstack.org/#/c/536268/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746032

Title:
  By rebuilding twice with the same "forbidden" image one can circumvent
  scheduler rebuild restrictions

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========

  Since CVE-2017-16239, we call to the scheduler when doing a rebuild
  with a new image. If the scheduler refuses a rebuild because a filter
  forbids the new image on the instance's host (for example,
  IsolatedHostsFilter), at first there was no indication of this in the
  API (bug 1744325). Currently, with the fix for bug 1744325 merged [1],
  the instance goes to ERROR to indicate the refused rebuild. However,
  by rebuilding again with the same "forbidden" image it is possible to
  circumvent scheduler restrictions.

  Steps to reproduce
  ==================

  1. Configure IsolatedHostsFilter:

 [filter_scheduler]
 enabled_filters = [...],IsolatedHostsFilter
 isolated_images = 41d3e5ca-14cf-436c-9413-4826b5c8bdb1
 isolated_hosts = ubuntu
 restrict_isolated_hosts_to_isolated_images = true

  2. Have two images, one isolated and one not:

 $ openstack image list

   8d0581a5-ed9d-4b98-a766-a41efbc99929 | centos | active
   41d3e5ca-14cf-436c-9413-4826b5c8bdb1 | cirros-0.3.5-x86_64-disk | active

   cirros is the isolated one

  3. Have only one hypervisor (the isolated one):

 $ openstack hypervisor list

   ubuntu | QEMU | 192.168.100.194 | up

  5. Boot a cirros (isolated) image:

 $ openstack server create \
   --image 41d3e5ca-14cf-436c-9413-4826b5c8bdb1 \
   --flavor m1.nano \
   cirros-test-expect-success

 $ openstack server list

   cirros-test-expect-success | ACTIVE | [...] |
  cirros-0.3.5-x86_64-disk | m1.nano

  6. Rebuild the cirros instance with centos (this should be refused by
  the scheduler):

 $ nova --debug rebuild cirros-test-expect-success centos

   DEBUG (session:722) POST call to compute for
   
http://192.168.100.194/compute/v2.1/servers/d9d98bf7-623e-4587-b82c-06f36abf59cb/action
   used request id req-c234346a-6e05-47cf-a0cd-45f89d11e15d

  8. Observe the instance going to ERROR,
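  The scheduling decision these steps exercise can be sketched as follows
  (an illustrative model of the IsolatedHostsFilter pass/fail logic, not
  nova's actual code):

```python
def host_passes(host, image, isolated_images, isolated_hosts,
                restrict_isolated_hosts_to_isolated_images=True):
    """Model of the IsolatedHostsFilter decision.

    Isolated images may only run on isolated hosts; with the restrict
    flag set, isolated hosts additionally accept only isolated images.
    """
    image_isolated = image in isolated_images
    host_isolated = host in isolated_hosts
    if image_isolated:
        return host_isolated
    if restrict_isolated_hosts_to_isolated_images:
        return not host_isolated
    return True

ISOLATED_IMAGES = {"41d3e5ca-14cf-436c-9413-4826b5c8bdb1"}  # cirros
ISOLATED_HOSTS = {"ubuntu"}

# Booting cirros (isolated) on the isolated host passes ...
print(host_passes("ubuntu", "41d3e5ca-14cf-436c-9413-4826b5c8bdb1",
                  ISOLATED_IMAGES, ISOLATED_HOSTS))  # True
# ... but rebuilding to centos (non-isolated) on that host is refused.
print(host_passes("ubuntu", "8d0581a5-ed9d-4b98-a766-a41efbc99929",
                  ISOLATED_IMAGES, ISOLATED_HOSTS))  # False
```

  Under this model the rebuild to centos is correctly refused; the bug is
  that repeating the same rebuild request slips past the refusal.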
  

[Yahoo-eng-team] [Bug 1746029] [NEW] Unclear libvirt error message when attaching multiattach volume

2018-01-29 Thread Steve Noyes
Public bug reported:

There is a known problem using certain versions of qemu and libvirt with
multiattach. Nova will not let you attach a multiattach-enabled volume
if the version of qemu is >= 2.10 and libvirt is < 3.10. This bug is only
about the user-facing error message that is displayed when you hit this error.

Ubuntu 16.04:

$ virsh --version
3.6.0

$ kvm --version
QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.1~cloud0)

$ nova volume-attach vm1 a7de1d48-fe81-402b-a8ed-656e27ff0d08

ERROR (Conflict): Volume a7de1d48-fe81-402b-a8ed-656e27ff0d08 has
'multiattach' set, which is not supported for this instance. (HTTP 409)
(Request-ID: req-f34d2c90-559f-4646-903c-60661b082c85)

This CLI message does not give the user any indication that there is a
version issue with libvirt/qemu.

The n-cpu log has a clearer debug message:

... DEBUG nova.virt.libvirt.driver ... Volume multiattach is not
supported based on current versions of QEMU and libvirt. QEMU must be
less than 2.10 or libvirt must be greater than or equal to 3.10.


It would be better if the message returned by the CLI command were more
specific about why the attach failed.
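For illustration, the gate being tripped here reduces to a version-tuple
comparison (a sketch with made-up names, not nova's actual code):

```python
def multiattach_supported(qemu_version, libvirt_version):
    """True when multiattach can work, per the n-cpu debug message:
    QEMU must be < 2.10, or libvirt must be >= 3.10."""
    return qemu_version < (2, 10) or libvirt_version >= (3, 10)

# The reporter's environment: QEMU 2.10.1 with libvirt 3.6.0.
print(multiattach_supported((2, 10, 1), (3, 6, 0)))  # False -> attach refused
```

A Conflict response that included these two version numbers (as the n-cpu
debug log already does) would make the failure self-explanatory.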

nova$ git show
commit 87ea686f9f2cc706205d188922bb14272625e7be
Merge: dc63965 8ec0b43
Author: Zuul 
Date:   Wed Jan 24 13:53:24 2018 +

Merge "Transform instance.resize_confirm notification"

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1746029

Title:
  Unclear libvirt error message when attaching multiattach volume

Status in OpenStack Compute (nova):
  New

Bug description:
  There is a known problem using certain versions of qemu and libvirt
  with multiattach. Nova will not let you attach a multiattach-enabled
  volume if the version of qemu is >= 2.10 and libvirt is < 3.10. This
  bug is only about the user-facing error message that is displayed when
  you hit this error.

  Ubuntu 16.04:

  $ virsh --version
  3.6.0

  $ kvm --version
  QEMU emulator version 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.1~cloud0)

  $ nova volume-attach vm1 a7de1d48-fe81-402b-a8ed-656e27ff0d08

  ERROR (Conflict): Volume a7de1d48-fe81-402b-a8ed-656e27ff0d08 has
  'multiattach' set, which is not supported for this instance. (HTTP
  409) (Request-ID: req-f34d2c90-559f-4646-903c-60661b082c85)

  This CLI message does not give the user any indication that there is a
  version issue with libvirt/qemu.

  The n-cpu log has a clearer debug message:

  ... DEBUG nova.virt.libvirt.driver ... Volume multiattach is not
  supported based on current versions of QEMU and libvirt. QEMU must be
  less than 2.10 or libvirt must be greater than or equal to 3.10.

  
  It would be better if the message returned by the CLI command were more
  specific about why the attach failed.

  nova$ git show
  commit 87ea686f9f2cc706205d188922bb14272625e7be
  Merge: dc63965 8ec0b43
  Author: Zuul 
  Date:   Wed Jan 24 13:53:24 2018 +

  Merge "Transform instance.resize_confirm notification"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1746029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746016] [NEW] unit test jobs sometimes time out

2018-01-29 Thread Colleen Murphy
Public bug reported:

Both the py27 and py35 unit test jobs sometimes time out, with no
obvious reason. For example:

http://logs.openstack.org/33/530133/13/check/openstack-tox-py27/d6560ad/
http://logs.openstack.org/23/524423/44/gate/openstack-tox-py35/0f8083f/
http://logs.openstack.org/23/524423/44/check/openstack-tox-py35/ea6fcfe/

There's an empty space of time between the output of a test and the
message that the job has timed out, so it's unclear if it's hanging on a
particular unknown test or if an infrastructure issue has caused the
test runner to stop for some reason.

** Affects: keystone
 Importance: High
 Status: New

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
Milestone: None => queens-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1746016

Title:
  unit test jobs sometimes time out

Status in OpenStack Identity (keystone):
  New

Bug description:
  Both the py27 and py35 unit test jobs sometimes time out, with no
  obvious reason. For example:

  http://logs.openstack.org/33/530133/13/check/openstack-tox-py27/d6560ad/
  http://logs.openstack.org/23/524423/44/gate/openstack-tox-py35/0f8083f/
  http://logs.openstack.org/23/524423/44/check/openstack-tox-py35/ea6fcfe/

  There's an empty space of time between the output of a test and the
  message that the job has timed out, so it's unclear if it's hanging on
  a particular unknown test or if an infrastructure issue has caused the
  test runner to stop for some reason.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1746016/+subscriptions



[Yahoo-eng-team] [Bug 1746000] [NEW] dnsmasq does not fallback on SERVFAIL

2018-01-29 Thread Bernhard M. Wiedemann
Public bug reported:

In our cloud deployment, we configured neutron-dhcp-agent
to have multiple external DNS servers

However, the last server on the list happened to be misconfigured (by others). 
So it returned SERVFAIL instead of correct responses and strace showed that 
dnsmasq only ever asked the last server for a name.
My testing shows that dropping the --strict-order parameter helped this problem.
It was introduced in commit 43960ee448 without reason given.
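The fallback behavior the reporter expects from dnsmasq's default
(non-strict-order) mode can be modeled roughly like this; it is a
simplified sketch, not dnsmasq's implementation:

```python
class ServFail(Exception):
    """Stand-in for an upstream answering with rcode SERVFAIL."""

def resolve(name, servers, query):
    """Try each upstream in turn, skipping ones that SERVFAIL.

    With --strict-order, dnsmasq instead sticks to the servers in
    listed order, so one broken upstream can shadow the others.
    """
    last_error = None
    for server in servers:
        try:
            return query(server, name)
        except ServFail as exc:
            last_error = exc  # fall through to the next upstream
    raise last_error

def query(server, name):
    # Simulated upstreams: the first one is misconfigured.
    if server == "10.0.0.1":
        raise ServFail(server)
    return "192.0.2.7"

print(resolve("example.org", ["10.0.0.1", "10.0.0.2"], query))  # 192.0.2.7
```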

** Affects: neutron
 Importance: Undecided
 Assignee: Dirk Mueller (dmllr)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1746000

Title:
  dnsmasq does not fallback on SERVFAIL

Status in neutron:
  In Progress

Bug description:
  In our cloud deployment, we configured neutron-dhcp-agent
  to have multiple external DNS servers.

  However, the last server on the list happened to be misconfigured (by 
others). So it returned SERVFAIL instead of correct responses and strace showed 
that dnsmasq only ever asked the last server for a name.
  My testing shows that dropping the --strict-order parameter helped this 
problem.
  It was introduced in commit 43960ee448 without reason given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1746000/+subscriptions



[Yahoo-eng-team] [Bug 1745978] [NEW] write_files recursively sets ownership to root:root, ignores owner directive.

2018-01-29 Thread Chris Glass
Public bug reported:

The "write_files" cloud init directive tramples on folder permissions
while ignoring the "owner" directive, resulting in wrong ownership of
"root:root" being set all along the file path.

Provider: LXD (maybe/probably others too)

Cloud-init version:
17.1-46-g7acc9e68-0ubuntu1~16.04.1

Sample cloud config:

#cloud-config
write_files:
- content: |
Example content.
  path: /home/ubuntu/example
  owner: ubuntu:ubuntu
  permissions: '0600'

Expected behavior:

A "/home/ubuntu/example" file is created, with "ubuntu:ubuntu" as owner
and a permission of 600. Permissions of the parent /home/ubuntu folder
do not change.

What actually happens:

A "/home/ubuntu/example" file is created, with an owner of "root:root" and a 
permission of 600.
The "/home/ubuntu" folder now *also* has "root:root" as owner, resulting in a 
non-writable home for the "ubuntu" user.

The permissions should:

1) Honor the chosen user:group pair
2) Only set permission on parent folders if they do not already exist.
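A sketch of the expected behavior (a hypothetical helper, not cloud-init's
actual code): apply owner and mode to the target file only, and create,
never re-own, parent directories:

```python
import grp
import os
import pwd

def write_file(path, content, owner="root:root", mode=0o644):
    """Write `content` to `path`, applying owner/mode to the file only."""
    parent = os.path.dirname(path)
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)  # only creates directories that are missing
    with open(path, "w") as f:
        f.write(content)
    os.chmod(path, mode)
    user, _, group = owner.partition(":")
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group or user).gr_gid
    # Crucially, chown the file itself -- never walk up the parents.
    os.chown(path, uid, gid)
```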

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  The "write_files" cloud init directive tramples on folder permissions
  while ignoring the "owner" directive, resulting in wrong ownership of
  "root:root" being set all along the file path.
  
- Provider: LXD
+ Provider: LXD (maybe/probably others too)
  
  Cloud-init version:
  17.1-46-g7acc9e68-0ubuntu1~16.04.1
  
  Sample cloud config:
  
  #cloud-config
  write_files:
  - content: |
- Example content.
-   path: /home/ubuntu/example
-   owner: ubuntu:ubuntu
-   permissions: '0600'
+ Example content.
+   path: /home/ubuntu/example
+   owner: ubuntu:ubuntu
+   permissions: '0600'
  
  Expected behavior:
  
  A "/home/ubuntu/example" file is created, with "ubuntu:ubuntu" as owner
  and a permission of 600. Permissions of the parent /home/ubuntu folder
  do not change.
  
  What actually happens:
  
- A "/home/ubuntu/example" file is created, with an owner of "root:root" a 
permission of 600.
+ A "/home/ubuntu/example" file is created, with an owner of "root:root" and a 
permission of 600.
  The "/home/ubuntu" folder now *also* has "root:root" as owner, resulting in a 
non-writable home for the "ubuntu" user.
  
  The permissions should:
  
  1) Honor the chosen user:group pair
  2) Only set permission on parent folders if they do not already exist.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1745978

Title:
  write_files recursively sets ownership to root:root, ignores owner
  directive.

Status in cloud-init:
  New

Bug description:
  The "write_files" cloud init directive tramples on folder permissions
  while ignoring the "owner" directive, resulting in wrong ownership of
  "root:root" being set all along the file path.

  Provider: LXD (maybe/probably others too)

  Cloud-init version:
  17.1-46-g7acc9e68-0ubuntu1~16.04.1

  Sample cloud config:

  #cloud-config
  write_files:
  - content: |
  Example content.
    path: /home/ubuntu/example
    owner: ubuntu:ubuntu
    permissions: '0600'

  Expected behavior:

  A "/home/ubuntu/example" file is created, with "ubuntu:ubuntu" as
  owner and a permission of 600. Permissions of the parent /home/ubuntu
  folder do not change.

  What actually happens:

  A "/home/ubuntu/example" file is created, with an owner of "root:root" and a 
permission of 600.
  The "/home/ubuntu" folder now *also* has "root:root" as owner, resulting in a 
non-writable home for the "ubuntu" user.

  The permissions should:

  1) Honor the chosen user:group pair
  2) Only set permission on parent folders if they do not already exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1745978/+subscriptions



[Yahoo-eng-team] [Bug 1745977] [NEW] When the source compute service comes back up, it does not destroy and clean up instances that were evacuated and then deleted.

2018-01-29 Thread jiangyuhao
Public bug reported:

Description
===========
When an instance is evacuated to the destination host successfully and then
deleted, the source host fails to clean it up when it comes back up.

Steps to reproduce
==================
1. Deploy an instance with local storage on the source host.
2. Power off the source host.
3. Evacuate the instance to the destination host.
4. Delete the instance.
5. Power on the source host.

Expected result
===============
The source host's nova-compute service cleans up the evacuated and deleted
instance.

Actual result
=============
The instance still exists on the source host.

Environment
===========
OpenStack Pike
Libvirt + KVM
OVS networking


Logs & Configs
==============
source host nova-compute log:

2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
[req-7bdfe28f-0464-4af8-bdd0-2d433b25d84a - - - - -] Error starting thread.: 
InstanceNotFound_Remote: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could 
not be found.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 125, 
in _object_dispatch
return getattr(target, method)(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
result = fn(cls, context, *args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 474, 
in get_by_uuid
use_slave=use_slave)

  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 235, 
in wrapper
return f(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 466, 
in _db_instance_get_by_uuid
columns_to_join=columns_to_join)

  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 744, in 
instance_get_by_uuid
return IMPL.instance_get_by_uuid(context, uuid, columns_to_join)

  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 179, 
in wrapper
return f(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 280, 
in wrapped
return f(context, *args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 1911, 
in instance_get_by_uuid
columns_to_join=columns_to_join)

  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 1920, 
in _instance_get_by_uuid
raise exception.InstanceNotFound(instance_id=uuid)

InstanceNotFound: Instance 19022200-7abc-423d-90bd-e9dcd0887679 could not be 
found.
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service Traceback (most recent 
call last):
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 721, in 
run_service
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service service.start()
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 156, in start
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.manager.init_host()
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1173, in 
init_host
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self._destroy_evacuated_instances(context)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 691, in 
_destroy_evacuated_instances
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service bdi, destroy_disks)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 909, in 
destroy
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service destroy_disks)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1032, in 
cleanup
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service attempts = 
int(instance.system_metadata.get('clean_attempts',
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 67, in 
getter
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self.obj_load_attr(name)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 1131, in 
obj_load_attr
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
self._load_generic(attrname)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/objects/instance.py", line 858, in 
_load_generic
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service 
expected_attrs=[attrname])
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 177, in 
wrapper
2018-01-29 10:28:48.664 9364 ERROR oslo_service.service args, kwargs)
2018-01-29 10:28:48.664 9364 ERROR oslo_service.servic
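
The traceback shows init_host dying while lazy-loading system_metadata for an
instance whose DB record is already gone. A defensive pattern (a sketch under
assumed names, not nova's actual fix) is to tolerate the missing record and
still remove the local artifacts:

```python
class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""

def destroy_evacuated_instances(local_uuids, lookup, destroy_local):
    """For each instance left on this host after evacuation, destroy its
    local artifacts even if the DB record was deleted in the meantime."""
    for uuid in local_uuids:
        try:
            instance = lookup(uuid)  # may raise InstanceNotFound
        except InstanceNotFound:
            instance = None  # deleted after evacuation: clean up anyway
        destroy_local(uuid, instance)
```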

[Yahoo-eng-team] [Bug 1743351] Re: keystone error apt-get upgrade packages version

2018-01-29 Thread Colleen Murphy
In that case I'm going to close this as invalid. Feel free to reopen if
you see the issue again and can show us the keystone logs.

The error "failed to discover available identity versions" comes from
keystoneauth and usually happens when it can't reach keystone, in this
case it looks like keystone crashed for some reason. But the only way to
know why is to see the logs.

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1743351

Title:
  keystone error apt-get upgrade packages version

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Good Morning,

  I wanted to comment on a problem that has happened to me several times
  ...

  I was recently running the OpenStack "Newton" release and upgraded to
  "Pike". Everything worked until keystone on the controller node stopped
  working; Horizon showed:

  "An error occurred authenticating. Please try again later."

  In the CLI: Failed to discover available identity versions when contacting
  http://controller:35357/v3. Attempting to parse version from URL.
  Internal Server Error (HTTP 500)

  In the end I had to do a clean installation of the controller node.

  But just a week ago I decided to update the packages because there
  were updates and bug fixes in some of them and the same error was
  shown again.

  Horizon:

  "An error occurred authenticating. Please try again later."

  In the CLI:

  Failed to discover available identity versions when contacting
  http://controller:35357/v3. Attempting to parse version from URL.
  Internal Server Error (HTTP 500)

  Does anyone know why something like this happens every time an apt-get
  upgrade is done?

  Greetings and thanks in advance,

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1743351/+subscriptions
