[Yahoo-eng-team] [Bug 1824059] Re: On image panel click the Create Volume button and the form prompts an error message.

2019-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1824059

Title:
  On image panel click the Create Volume button and the form prompts an
  error message.

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  On the image panel, clicking the Create Volume button causes the form to
prompt an error message.
  The error message: Unable to retrieve the default volume type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1824059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833125] Re: Remaining neutron-lbaas relevant code and documentation

2019-09-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/665838
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d1d0a04c37740aefe26d553df1f1fbeaf631151b
Submitter: Zuul
Branch: master

commit d1d0a04c37740aefe26d553df1f1fbeaf631151b
Author: Michael Johnson 
Date:   Mon Jun 17 18:36:34 2019 -0700

Remove Neutron LBaaS

Neutron-LBaaS has now been retired and there will be no Train
release[1]. This patch removes neutron-lbaas references from
neutron.

[1] https://review.opendev.org/658494

Closes-Bug: #1833125
Change-Id: I0fe3fbaf4adf7fb104632fd94cd093e701e12289


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833125

Title:
  Remaining neutron-lbaas relevant code and documentation

Status in neutron:
  Fix Released

Bug description:
  neutron-lbaas was deprecated for some time and is now completely
  retired in the Train cycle [0].

  From a quick grep in the neutron repository, we still have references to
  it as of June 17.

  Some examples:
  * Admin guide page [1] on configuration and usage
  * LBaaS related policies in neutron/conf/policies/agent.py
  * L3 DVR checking device_owner names DEVICE_OWNER_LOADBALANCER and 
DEVICE_OWNER_LOADBALANCERV2
  * Relevant unit tests (mostly related to previous feature)

  We should drop all of these from the neutron repository.

  [0] 
http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006142.html
  [1] https://docs.openstack.org/neutron/latest/admin/config-lbaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833125/+subscriptions



[Yahoo-eng-team] [Bug 1844583] Re: tox -e docs fails with "WARNING: RSVG converter command 'rsvg-convert' cannot be run. Check the rsvg_converter_bin setting"

2019-09-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/683003
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9ae92e3f9841bd76a8327309580c19280f3d52b8
Submitter: Zuul
Branch: master

commit 9ae92e3f9841bd76a8327309580c19280f3d52b8
Author: Matt Riedemann 
Date:   Wed Sep 18 17:25:49 2019 -0400

Add librsvg2* to bindep

I3aaea1d15a357f550f529beaa84fb1a1a7748358 added the docs
build requirement on sphinxcontrib-svg2pdfconverter which
needs the native rsvg-convert command. This change adds
the native package that provides that command to bindep.txt.

Change-Id: I064a1f33902405c3db699a46feeb93397fc3b038
Closes-Bug: #1844583


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1844583

Title:
  tox -e docs fails with "WARNING: RSVG converter command 'rsvg-convert'
  cannot be run. Check the rsvg_converter_bin setting"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Since this change:

  
https://github.com/openstack/nova/commit/16b9486bf7e91bfd5dc48297cee9f54b49156c93

  Local docs builds fail if you don't have librsvg2-bin installed for
  the sphinxcontrib-svg2pdfconverter dependency (I'm on Ubuntu 18.04).
  We should include that in bindep.txt.
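
  The fix adds the missing native package to nova's bindep.txt. A sketch of
  what such an entry looks like (the package names and profile selectors shown
  here are assumptions; check the merged change for the authoritative lines):

```
# bindep.txt (sketch; "doc" profile and rpm package name are assumptions)
librsvg2-bin [doc platform:dpkg]
librsvg2-tools [doc platform:rpm]
```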

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1844583/+subscriptions



[Yahoo-eng-team] [Bug 1694844] Re: Boot from volume fails when cross_az_attach=False and volume is provided to nova without an AZ for the instance

2019-09-23 Thread Matt Riedemann
** No longer affects: nova/ocata

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694844

Title:
  Boot from volume fails when cross_az_attach=False and volume is
  provided to nova without an AZ for the instance

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This was recreated with a devstack change:

  http://logs.openstack.org/74/467674/4/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/3dbd6e9/logs/screen-n-api.txt.gz#_May_26_02_41_54_584798

  In this failing test, Tempest creates a volume:

  {"volume": {"status": "creating", "user_id":
  "2256bb66db8741aab58a20367b00bfa2", "attachments": [], "links":
  [{"href":
  "https://10.39.38.35:8776/v2/272882ba896341d483982dbcb1fde0f4/volumes
  /55a7c64a-f7b2-4b77-8f60-c1ccda8e0c30", "rel": "self"}, {"href":
  "https://10.39.38.35:8776/272882ba896341d483982dbcb1fde0f4/volumes
  /55a7c64a-f7b2-4b77-8f60-c1ccda8e0c30", "rel": "bookmark"}],
  "availability_zone": "nova", "bootable": "false", "encrypted": false,
  "created_at": "2017-05-26T02:41:45.617286", "description": null,
  "updated_at": null, "volume_type": "lvmdriver-1", "name": "tempest-
  TestVolumeBootPattern-volume-origin-1984626538", "replication_status":
  null, "consistencygroup_id": null, "source_volid": null,
  "snapshot_id": null, "multiattach": false, "metadata": {}, "id":
  "55a7c64a-f7b2-4b77-8f60-c1ccda8e0c30", "size": 1}}

  And the AZ on the volume defaults to 'nova' because that's the default
  AZ in cinder.conf.

  That volume ID is then passed to create the server:

  {"server": {"block_device_mapping_v2": [{"source_type": "volume",
  "boot_index": 0, "destination_type": "volume", "uuid": "55a7c64a-
  f7b2-4b77-8f60-c1ccda8e0c30", "delete_on_termination": true}],
  "networks": [{"uuid": "da48954d-1f66-427b-892c-a7f2eb1b54a3"}],
  "imageRef": "", "name": "tempest-TestVolumeBootPattern-
  server-1371698056", "flavorRef": "42"}}

  Which fails with the 400 InvalidVolume error because of this check in
  the API:

  
https://github.com/openstack/nova/blob/f112dc686dadd643410575cc3487cf1632e4f689/nova/volume/cinder.py#L286

  The instance is not associated with a host yet so it's not in an
  aggregate, and since an AZ wasn't specified when creating an instance
  (and I don't think we want people passing 'nova' as the AZ), it fails
  when comparing None to 'nova'.

  This is separate from bug 1497253 and change
  https://review.openstack.org/#/c/366724/ because in that case Nova is
  creating the volume during boot from volume and can specify the AZ for
  the volume. In this bug, the volume already exists and is provided to
  Nova.

  We might need to be able to distinguish if the API or compute service
  is calling check_availability_zone and if so, pass a default AZ in the
  case of the API if one isn't defined.
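
  A simplified sketch of the comparison that trips here (a hypothetical
  helper, not nova's actual code; nova resolves the instance AZ from its host
  aggregate, which is None before the instance is scheduled):

```python
# Hypothetical, simplified version of the cross_az_attach check in
# nova/volume/cinder.py. The point: with cross_az_attach=False, an
# unscheduled instance (instance_az=None) can never match the
# volume's default 'nova' AZ, so the API returns 400 InvalidVolume.

class InvalidVolume(Exception):
    pass

def check_availability_zone(instance_az, volume_az, cross_az_attach=False):
    """Mimic the failure mode: AZs must match unless cross-AZ attach is on."""
    if cross_az_attach:
        return  # attaching across AZs is allowed, no check needed
    if instance_az != volume_az:
        raise InvalidVolume(
            "Instance in AZ %s, volume in AZ %s" % (instance_az, volume_az))

# Boot-from-volume with a pre-existing volume: instance not yet on a host.
try:
    check_availability_zone(None, "nova")
except InvalidVolume as exc:
    print("400 InvalidVolume:", exc)
```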

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1694844/+subscriptions



[Yahoo-eng-team] [Bug 1837106] Re: "socket.getaddrinfo" of "metadata.google.internal" fails on GCE

2019-09-23 Thread Ryan Harper
** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: Expired => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1837106

Title:
  "socket.getaddrinfo" of "metadata.google.internal" fails on GCE

Status in cloud-init:
  Triaged

Bug description:
  When booting an Ubuntu 18.04 based image on GCE, we see the following
  messages in the log:

  2019-05-30 00:05:27,818 - util.py[DEBUG]: Resolving URL: 
http://metadata.google.internal/computeMetadata/v1/ took 0.001 seconds
  2019-05-30 00:05:27,818 - DataSourceGCE.py[DEBUG]: 
http://metadata.google.internal/computeMetadata/v1/ is not resolvable
  2019-05-30 00:05:27,818 - util.py[DEBUG]: Crawl of GCE metadata service took 
0.013 seconds
  2019-05-30 00:05:27,818 - DataSourceGCE.py[WARNING]: address 
"http://metadata.google.internal/computeMetadata/v1/; is not resolvable 

  Further, the contents of "/run/cloud-init/instance-data.json" don't
  have any meaningful data.

  What I've found is that, read_md() in DataSourceGCE.py will call
  util.is_resolvable_url() on the address
  "http://metadata.google.internal/computeMetadata/v1/;, which results
  is calling socket.getaddrinfo() for "metadata.google.internal", and
  it's this socket.getaddrinfo() call that fails.

  This failure appears to be due to the fact that "cloud-init.service"
  does not ensure it waits for DNS (i.e. "systemd-resolved.service") to
  be working before it runs. I say this because:

  1. If I add "After=systemd-resolved.service" to the "cloud-
  init.service" definition, this failures goes away.

  2. If I run "cloud-init init" after the system has booted up (i.e.
  after enough time has passed, such that DNS is working), the failure
  doesn't occur.
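
  Point 1 above corresponds to a systemd drop-in along these lines (a sketch
  only; the drop-in path is an assumption, and whether ordering after
  systemd-resolved.service is the right long-term fix is exactly what this
  bug is triaging):

```
# /etc/systemd/system/cloud-init.service.d/10-wait-for-dns.conf (assumed path)
[Unit]
After=systemd-resolved.service
```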

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1837106/+subscriptions



[Yahoo-eng-team] [Bug 1833721] Re: ip_lib synchronized decorator should wrap the privileged one

2019-09-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/683109
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2a7030a6b7ae2ab2e24648727bbde91b05de82cc
Submitter: Zuul
Branch: master

commit 2a7030a6b7ae2ab2e24648727bbde91b05de82cc
Author: Rodolfo Alonso Hernandez 
Date:   Thu Sep 19 10:58:32 2019 +

Change ip_lib decorators order

Change the execution order of:
- @privileged.default.entrypoint
- @lockutils.synchronized("privileged-ip-lib")

"synchronized" decorator holds the execution of the function until
the lock is released. Using the current decorator ordering, this
active wait is done inside the privsep context. This can exhaust
the number of execution threads reserved for the privsep daemon.

Closes-Bug: #1833721

Change-Id: Ifcce954003e360f620f9131a36a08ab84cbe6193


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833721

Title:
  ip_lib synchronized decorator should wrap the privileged one

Status in neutron:
  Fix Released

Bug description:
  In the ip_lib library, the methods calling Pyroute2 commands are decorated 
with two functions (in this order):
  - @privileged.default.entrypoint
  - @lockutils.synchronized("privileged-ip-lib")

  "synchronized" decorator holds the execution of the function until the
  lock is released. Using the current decorator ordering, this active
  wait is done inside the privsep context. This can exhaust the number
  of execution threads reserved for the privsep daemon.
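
  The ordering problem can be illustrated with stand-in decorators
  (hypothetical: `entrypoint` and `synchronized` below only mimic the
  ordering effects of `@privileged.default.entrypoint` and
  `@lockutils.synchronized`, not the real oslo.privsep/oslo.concurrency
  behaviour). With the lock outermost, as in the fix, the wait happens
  before the privsep context is entered:

```python
import functools
import threading

events = []  # records the order in which the decorator effects fire

def entrypoint(func):
    """Stand-in for @privileged.default.entrypoint: 'enters' privsep."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        events.append("enter privsep")
        try:
            return func(*args, **kwargs)
        finally:
            events.append("leave privsep")
    return wrapper

lock = threading.Lock()

def synchronized(func):
    """Stand-in for @lockutils.synchronized: waits for a shared lock."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        events.append("wait for lock")
        with lock:
            return func(*args, **kwargs)
    return wrapper

# Fixed ordering: synchronized wraps entrypoint, so the lock is taken
# BEFORE entering the privsep context and a blocked caller does not
# occupy one of the privsep daemon's worker threads.
@synchronized
@entrypoint
def ip_command():
    events.append("run pyroute2 call")

ip_command()
print(events)
# ['wait for lock', 'enter privsep', 'run pyroute2 call', 'leave privsep']
# With the old (reversed) ordering, "wait for lock" would occur after
# "enter privsep", i.e. inside the privsep context.
```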

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833721/+subscriptions



[Yahoo-eng-team] [Bug 1560961] Re: [RFE] Allow instance-ingress bandwidth limiting

2019-09-23 Thread Corey Bryant
** No longer affects: neutron (Ubuntu)

** No longer affects: neutron (Ubuntu Xenial)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560961

Title:
  [RFE] Allow instance-ingress bandwidth limiting

Status in neutron:
  Fix Released

Bug description:
  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.

  Use cases
  =========
  There are cases where ingress bandwidth limiting is more important than
  egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.

  Another example is CSPs, which need to plan and allocate the bandwidth
  provided to customers, or provide different levels of network service.

  API/Model impact
  ================
  The BandwidthLimiting rules will gain a direction field (egress/ingress),
  which by default will be egress to match the current behaviour and therefore
  be backward compatible.

  Combining egress/ingress would be achieved by including an egress
  bandwidth limit and an ingress bandwidth limit.

  Additional information
  ======================
  The CLI and SDK modifications are addressed in 
https://bugs.launchpad.net/python-openstackclient/+bug/1614121

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560961/+subscriptions



[Yahoo-eng-team] [Bug 1842930] Re: Deleted user still can delete volumes in Horizon

2019-09-23 Thread Morgan Fainberg
Added Keystonemiddleware and documentation tags. Marked as "medium"
importance as it requires documentation changes but is not
critical/RC/otherwise impacting. Clear communication of expected
behavior is important and should be found in Horizon and
Keystonemiddleware's documentation.

I am marking this invalid for Keystone itself, as keystone will invalidate
its internal cache (barring cases such as the in-memory [not production
quality] dict-based cache).


** Tags added: documentation

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Confirmed

** Changed in: keystonemiddleware
   Status: New => Triaged

** Changed in: keystone
   Status: Confirmed => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystonemiddleware
   Importance: Undecided => Medium

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1842930

Title:
  Deleted user still can delete volumes in Horizon

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  Triaged
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  ==Problem==
  The user session in a second browser is not terminated after this user is 
deleted by the admin from another browser. The user is still able to manage 
some objects (delete volumes, for example) in a project after being deleted 
by the admin.

  ==Steps to reproduce==
  Install OpenStack following the official docs for Stein.
  Log in as admin to Horizon in one browser.
  Create a user with the 'member' role and assign it to a project.
  Open another browser and log in as the created user.
  As the admin user, delete the created user from the "first" browser.
  Switch to the "second" browser and try to browse through different sections 
in the dashboard as the deleted user -> instances are not shown, but the 
deleted user can still list images, volumes, and networks, and can also 
delete a volume.

  ==Expected result==
  The user session in the current browser is closed after the user is deleted 
in another browser.
  I tried this in the Newton release and it works as expected (for a short 
time before the session ends, the deleted user can't list objects in 
Instances or Volumes).

  ==Environment==
  OpenStack Stein
  rpm -qa | grep -i stein
  centos-release-openstack-stein-1-1.el7.centos.noarch

  cat /etc/redhat-release
  CentOS Linux release 7.6.1810 (Core)

   rpm -qa | grep -i horizon
  python2-django-horizon-15.1.0-1.el7.noarch

  rpm -qa | grep -i dashboard
  openstack-dashboard-15.1.0-1.el7.noarch
  openstack-dashboard-theme-15.1.0-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1842930/+subscriptions



[Yahoo-eng-team] [Bug 1841700] Re: instance ingress bandwidth limiting doesn't works in ocata.

2019-09-23 Thread Corey Bryant
** Changed in: neutron (Ubuntu Xenial)
   Status: New => Invalid

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841700

Title:
  instance ingress bandwidth limiting doesn't works in ocata.

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive ocata series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Invalid

Bug description:
  [Environment]

  Xenial-Ocata deployment

  [Description]

  The instance ingress bandwidth limit implementation was targeted for
  Ocata [0], but the full ingress/egress implementation was done during
  the Pike [1] cycle.

  However, it is not reported or made explicit that the ingress direction
  isn't supported in Ocata, which causes the following exception when
  --ingress is specified.

  $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 
--max-burst-kbits 300 --ingress bw-limiter
  Failed to create Network QoS rule: BadRequestException: 400: Client Error for 
url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, 
Unrecognized attribute(s) 'direction'

  It would be desirable for this feature to be available on Ocata so that
  ingress/egress bandwidth limits can be set on the ports.

  [0] https://blueprints.launchpad.net/neutron/+spec/instance-ingress-bw-limit
  [1] https://bugs.launchpad.net/neutron/+bug/1560961

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1841700/+subscriptions



[Yahoo-eng-team] [Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-09-23 Thread Corey Bryant
This bug was fixed in the package keystone - 2:14.1.0-0ubuntu1.1~cloud0
---

 keystone (2:14.1.0-0ubuntu1.1~cloud0) bionic; urgency=medium
 .
   * d/p/token-consistently-decode-binary-types.patch: Ensure binary
 types are consistently decoded under Python 3 (LP: #1832265).


** Changed in: cloud-archive/rocky
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1832265

Title:
  py3: inconsistent encoding of token fields

Status in OpenStack Keystone LDAP integration:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Cosmic:
  Won't Fix
Status in keystone source package in Disco:
  Fix Released

Bug description:
  When using an LDAP domain user on a bionic-rocky cloud within horizon,
  we are unable to see the projects listed in the project selection
  drop-down, and are unable to query resources from any projects to
  which we are assigned the role Member.

  It appears that the following log entries in keystone may be helpful
  to troubleshooting this issue:

  (keystone.middleware.auth): 2019-06-10 19:47:02,700 DEBUG RBAC: auth_context: 
{'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_id': None, 
'domain_name': None, 'group_ids': [], 'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
Dispatching request to legacy mapper: /v3/users
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
SCRIPT_NAME: `/v3`, PATH_INFO: 
`/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects`
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Matched GET 
/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Route path: 
'/users/{user_id}/projects', defaults: {'action': 'list_user_projects', 
'controller': }
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Match dict: {'user_id': 
'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 'action': 
'list_user_projects', 'controller': 
}
  (keystone.common.wsgi): 2019-06-10 19:47:02,700 INFO GET 
https://keystone.mysite:5000/v3/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (keystone.common.controller): 2019-06-10 19:47:02,700 DEBUG RBAC: Adding 
query filter params ()
  (keystone.common.authorization): 2019-06-10 19:47:02,700 DEBUG RBAC: 
Authorizing 
identity:list_user_projects(user_id=d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4)
  (keystone.policy.backends.rules): 2019-06-10 19:47:02,701 DEBUG enforce 
identity:list_user_projects: {'trust_id': None, 'trustor_id': None, 
'trustee_id': None, 'domain_id': None, 'domain_name': None, 'group_ids': [], 
'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.common.wsgi): 2019-06-10 19:47:02,702 WARNING You are not 
authorized to perform the requested action: identity:list_user_projects.

  
  It actually appears elsewhere in the keystone.log that there is a string 
which has encapsulated bytecode data in it (or vice versa).

  (keystone.common.wsgi): 2019-06-10 19:46:59,019 INFO POST 
https://keystone.mysite:5000/v3/auth/tokens
  (sqlalchemy.orm.path_registry): 2019-06-10 19:46:59,021 DEBUG set 
'memoized_setups' on path 'EntityRegistry((,))' to '{}'
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,021 DEBUG Connection 
 checked out from pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 being returned to pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 rollback-on-return, 
via agent
  (keystone.auth.core): 2019-06-10 19:46:59,025 DEBUG MFA Rules not processed 
for user `b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4'`. 
Rule list: `[]` (Enabled: `True`).
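
  The mixed types are visible above: user_id is a bytes literal (b'…') while
  user_domain_id is a plain str. A minimal sketch of the kind of
  normalisation such a fix applies (a hypothetical helper, not keystone's
  actual patch):

```python
def to_text(value, encoding="utf-8"):
    """Decode bytes to str so token fields compare and serialise
    consistently under Python 3; non-bytes values pass through."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

# Values taken from the auth_context logged above.
auth_context = {
    "user_id": b"d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4",
    "user_domain_id": "997b3e91271140feb1635eefba7c65a1",
}
normalised = {k: to_text(v) for k, v in auth_context.items()}
assert all(isinstance(v, str) for v in normalised.values())
```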
  

[Yahoo-eng-team] [Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-09-23 Thread Corey Bryant
This bug was fixed in the package keystone - 2:15.0.0-0ubuntu1.1~cloud0
---

 keystone (2:15.0.0-0ubuntu1.1~cloud0) bionic-stein; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 keystone (2:15.0.0-0ubuntu1.1) disco; urgency=medium
 .
   [ Corey Bryant ]
   * d/gbp.conf: Create stable/stein branch.
 .
   [ James Page ]
   * d/p/token-consistently-decode-binary-types.patch: Ensure binary
 types are consistently decoded under Python 3 (LP: #1832265).


** Changed in: cloud-archive/stein
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1832265

Title:
  py3: inconsistent encoding of token fields

Status in OpenStack Keystone LDAP integration:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Cosmic:
  Won't Fix
Status in keystone source package in Disco:
  Fix Released

Bug description:
  When using an LDAP domain user on a bionic-rocky cloud within horizon,
  we are unable to see the projects listed in the project selection
  drop-down, and are unable to query resources from any projects to
  which we are assigned the role Member.

  It appears that the following log entries in keystone may be helpful
  to troubleshooting this issue:

  (keystone.middleware.auth): 2019-06-10 19:47:02,700 DEBUG RBAC: auth_context: 
{'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_id': None, 
'domain_name': None, 'group_ids': [], 'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
Dispatching request to legacy mapper: /v3/users
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
SCRIPT_NAME: `/v3`, PATH_INFO: 
`/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects`
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Matched GET 
/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Route path: 
'/users/{user_id}/projects', defaults: {'action': 'list_user_projects', 
'controller': }
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Match dict: {'user_id': 
'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 'action': 
'list_user_projects', 'controller': 
}
  (keystone.common.wsgi): 2019-06-10 19:47:02,700 INFO GET 
https://keystone.mysite:5000/v3/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (keystone.common.controller): 2019-06-10 19:47:02,700 DEBUG RBAC: Adding 
query filter params ()
  (keystone.common.authorization): 2019-06-10 19:47:02,700 DEBUG RBAC: 
Authorizing 
identity:list_user_projects(user_id=d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4)
  (keystone.policy.backends.rules): 2019-06-10 19:47:02,701 DEBUG enforce 
identity:list_user_projects: {'trust_id': None, 'trustor_id': None, 
'trustee_id': None, 'domain_id': None, 'domain_name': None, 'group_ids': [], 
'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.common.wsgi): 2019-06-10 19:47:02,702 WARNING You are not 
authorized to perform the requested action: identity:list_user_projects.

  
  It actually appears elsewhere in the keystone.log that there is a string 
which has encapsulated bytecode data in it (or vice versa).

  (keystone.common.wsgi): 2019-06-10 19:46:59,019 INFO POST 
https://keystone.mysite:5000/v3/auth/tokens
  (sqlalchemy.orm.path_registry): 2019-06-10 19:46:59,021 DEBUG set 
'memoized_setups' on path 'EntityRegistry((,))' to '{}'
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,021 DEBUG Connection 
 checked out from pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 being returned to pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 rollback-on-return, 
via agent
  

[Yahoo-eng-team] [Bug 1844516] Re: [neutron-tempest-plugin] SSH timeout exceptions when executing remote commands

2019-09-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/682864
Committed: 
https://git.openstack.org/cgit/openstack/neutron-tempest-plugin/commit/?id=aa65dfb5265536eca40dbaf9b1826f8bf5148f80
Submitter: Zuul
Branch: master

commit aa65dfb5265536eca40dbaf9b1826f8bf5148f80
Author: Rodolfo Alonso Hernandez 
Date:   Wed Sep 18 11:30:04 2019 +

Add retry decorator to SSH "execute" method

In case of SSH timeout (TimeoutException, TimeoutError), the
tenacity.retry decorator retries the execution of the SSH
"execute" method up to 10 times.

Some SSH execute calls, related to QoS scenario tests, have been
enhanced by setting a relatively small timeout value. The commands
executed should be quick enough to be executed in this amount of time.
In case of timeout (due to communication problems), the retry decorator
will send again the command to be executed.

Change-Id: Idc0d55b776f499a4bc5d8c9d9a549f0af8f3fac0
Closes-Bug: #1844516
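
A plain-Python sketch of the approach (an assumption-laden stand-in: the
real change decorates the tempest SSH client's execute method with
tenacity.retry and only retries the SSH timeout exception types):

```python
import functools

class SSHTimeout(Exception):
    """Stand-in for the tempest/paramiko SSH timeout exceptions."""

def retry_on_timeout(max_attempts=10):
    """Retry the wrapped call when it raises SSHTimeout, mimicking
    tenacity.retry with stop_after_attempt(10)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except SSHTimeout as exc:
                    last_exc = exc  # command timed out, send it again
            raise last_exc
        return wrapper
    return decorator

attempts = {"n": 0}

@retry_on_timeout()
def execute(cmd):
    # Simulate two timeouts (e.g. flaky tenant network) before success.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SSHTimeout("channel timeout running: %s" % cmd)
    return "ok"

print(execute("killall nc"))  # succeeds on the third attempt
```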


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1844516

Title:
  [neutron-tempest-plugin] SSH timeout exceptions when executing remote
  commands

Status in neutron:
  Fix Released

Bug description:
  Those SSH timeout exceptions have been detected during the execution
  of the scenario QoS tests, in particular "test_qos_basic_and_update".

  Those SSH timeouts happened during the execution of the following commands:
  - "_kill_nc_process", killing any existing "nc" command being executed on 
the remote host.
  - creating the "nc" server on the remote host.

  Logs:
  
[1]https://9a0240ca9f61a595b570-86672578d4e6ceb498f2d932b0da6815.ssl.cf1.rackcdn.com/633871/20/check/neutron-tempest-plugin-scenario-openvswitch/772f7a4/testr_results.html.gz
  
[2]https://1fd93ff32a555bc48a73-5fe9d093373d887f2b09d5c4b981e1db.ssl.cf2.rackcdn.com/652099/34/check/neutron-tempest-plugin-scenario-openvswitch-rocky/4897a52/testr_results.html.gz
  
[3]https://openstack.fortnebula.com:13808/v1/AUTH_e8fd161dc34c421a979a9e6421f823e9/zuul_opendev_logs_cf0/679510/1/check/neutron-tempest-plugin-scenario-openvswitch/cf055c6/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1844516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1827973] Re: IPv6 is not supported when adding interfaces to the same network with IPv6 address

2019-09-23 Thread Abhinay
Hi Akhirio, 
I tried to reproduce the issue by adding IPv6 subnets to the router interface. 
As you mentioned, Horizon does not show a clear message here, but the error 
comes from Neutron and reflects normal Neutron behaviour, so it does not 
relate to Horizon. At the L2 level we can assign a single port to multiple 
interfaces, so this issue should be marked invalid.


** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1827973

Title:
  IPv6 is not supported when adding interfaces to the same network with
  IPv6 address

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When creating a new interface using an IPv6 subnet, if you input an IP 
address and the router already has an interface on a subnet under the same 
network, an error appears.
  1. Create a network named 'net1', then create two IPv6 subnets in it, named 
'testSubnet1' and 'testSubnet2'.
  2. Create a router named 'router1', then add an interface to it, selecting 
'testSubnet1'.
  3. Add another interface to this router using 'testSubnet2'; in the IP 
Address input box enter an address, then click Submit.
     The following error appears:
  'Error: Failed to add interface: Bad router request: Cannot have multiple 
router ports with the same network id if both contain IPv6 subnets. Existing 
port ae3239e1-7ca8-49c8-9055-7d4a41db3045 has IPv6 subnet(s) and network id 
fb030ad6-de1d-4e98-8bf0-3b04ce9c0c11. Neutron server returns request_ids: 
['req-0ca563fd-a6de-41ff-8143-f1c09be807b0']
  '

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1827973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1844461] Re: Role assignment list for subtree is only project scoped

2019-09-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/682762
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=05ea390c67da8056bd0cb4445f4f030d8181aaf6
Submitter: Zuul
Branch:master

commit 05ea390c67da8056bd0cb4445f4f030d8181aaf6
Author: Colleen Murphy 
Date:   Tue Sep 17 15:47:35 2019 -0700

Allow system/domain scope for assignment tree list

The comment regarding the scope_types setting for
identity:list_role_assignments_for_tree was incorrect: the project ID
for this request comes from a query parameter, not the token context,
and therefore it makes sense to allow system users and domain users to
call this API to get information about a project they have access to.
This change updates the default policy for this API and adds tests for
it.

For project scope, the admin role is still required, as project members
and project readers are typically not allowed rights to view the project
hierarchy.

Change-Id: If246298092940884a7b90e47cc9ce2f30da3e9e5
Closes-bug: #1844461
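Under the change the commit describes, the default rule for this policy
broadens to admit system and domain readers alongside project admins. The
policy.yaml fragment below is an illustrative sketch of that shape, not
keystone's verbatim shipped default:

```yaml
# Illustrative policy.yaml fragment only -- an assumption sketching the
# scope broadening described above, not keystone's exact default string.
"identity:list_role_assignments_for_tree": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.project.domain_id)s) or (role:admin and project_id:%(project_id)s)"
```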


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1844461

Title:
  Role assignment list for subtree is only project scoped

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The identity:list_role_assignment_for_subtree is limited to the
  'project' scope type, but this means that system readers and domain
  readers can't list role assignments for the subtree of a project they
  would otherwise have access to. Since the project ID is specified as a
  query parameter and is not taken directly from the token context, it
  makes sense to allow system readers and domain readers to make this
  query.

  Project members and readers should still be forbidden from getting
  role assignment information on their own project or its subprojects,
  but project admins should remain allowed to get this information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1844461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1844993] [NEW] migrate a server with qos port with compute RPC pinned to 5.0 fails and leaves the qos port in an inconsistent state

2019-09-23 Thread Balazs Gibizer
Public bug reported:

Steps to reproduce
==

1) Set the [upgrade_levels]/compute config parameter to '5.0'
2) Boot an instance with a neutron port having a resource request (e.g. a QoS 
min bandwidth policy)
3) Migrate the instance
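Step 1 corresponds to a nova.conf fragment like the following (illustrative;
with this pin, the compute RPC methods that carry the qos port allocation
data, introduced in RPC 5.2, cannot be called):

```ini
# Illustrative nova.conf fragment for step 1: pin the compute RPC API.
# Migration code paths that need compute RPC >= 5.2 (qos port handling)
# are unavailable under this pin.
[upgrade_levels]
compute = 5.0
```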

Expected result
===

Migration is rejected, as qos port handling in migration needs at least
compute RPC 5.2. The instance remains on the source host.

Actual result
=

Migration fails at finish_resize step. Server goes to ERROR state on the
destination host. The allocation key in the qos port binding profile
still points to the source host.

** Affects: nova
 Importance: Undecided
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1844993

Title:
  migrate a server with qos port with compute RPC pinned to 5.0 fails
  and leaves the qos port in an inconsistent state

Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce
  ==

  1) Set the [upgrade_levels]/compute config parameter to '5.0'
  2) Boot an instance with a neutron port having a resource request (e.g. a QoS 
min bandwidth policy)
  3) Migrate the instance

  Expected result
  ===

  Migration is rejected, as qos port handling in migration needs at least
  compute RPC 5.2. The instance remains on the source host.

  Actual result
  =

  Migration fails at finish_resize step. Server goes to ERROR state on
  the destination host. The allocation key in the qos port binding
  profile still points to the source host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1844993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832265] Re: py3: inconsistent encoding of token fields

2019-09-23 Thread Launchpad Bug Tracker
This bug was fixed in the package keystone - 2:15.0.0-0ubuntu1.1

---
keystone (2:15.0.0-0ubuntu1.1) disco; urgency=medium

  [ Corey Bryant ]
  * d/gbp.conf: Create stable/stein branch.

  [ James Page ]
  * d/p/token-consistently-decode-binary-types.patch: Ensure binary
types are consistently decoded under Python 3 (LP: #1832265).

 -- Corey Bryant   Fri, 12 Jul 2019 14:54:08
+0100

** Changed in: keystone (Ubuntu Disco)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1832265

Title:
  py3: inconsistent encoding of token fields

Status in OpenStack Keystone LDAP integration:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in Ubuntu Cloud Archive stein series:
  Fix Committed
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in keystone source package in Cosmic:
  Won't Fix
Status in keystone source package in Disco:
  Fix Released

Bug description:
  When using an LDAP domain user on a bionic-rocky cloud within horizon,
  we are unable to see the projects listed in the project selection
  drop-down, and are unable to query resources from any projects to
  which we are assigned the role Member.

  It appears that the following log entries in keystone may be helpful
  to troubleshooting this issue:

  (keystone.middleware.auth): 2019-06-10 19:47:02,700 DEBUG RBAC: auth_context: 
{'trust_id': None, 'trustor_id': None, 'trustee_id': None, 'domain_id': None, 
'domain_name': None, 'group_ids': [], 'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
Dispatching request to legacy mapper: /v3/users
  (keystone.server.flask.application): 2019-06-10 19:47:02,700 DEBUG 
SCRIPT_NAME: `/v3`, PATH_INFO: 
`/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects`
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Matched GET 
/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Route path: 
'/users/{user_id}/projects', defaults: {'action': 'list_user_projects', 
'controller': }
  (routes.middleware): 2019-06-10 19:47:02,700 DEBUG Match dict: {'user_id': 
'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 'action': 
'list_user_projects', 'controller': 
}
  (keystone.common.wsgi): 2019-06-10 19:47:02,700 INFO GET 
https://keystone.mysite:5000/v3/users/d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4/projects
  (keystone.common.controller): 2019-06-10 19:47:02,700 DEBUG RBAC: Adding 
query filter params ()
  (keystone.common.authorization): 2019-06-10 19:47:02,700 DEBUG RBAC: 
Authorizing 
identity:list_user_projects(user_id=d4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4)
  (keystone.policy.backends.rules): 2019-06-10 19:47:02,701 DEBUG enforce 
identity:list_user_projects: {'trust_id': None, 'trustor_id': None, 
'trustee_id': None, 'domain_id': None, 'domain_name': None, 'group_ids': [], 
'token': , 'user_id': 
b'd4fb94cfa3ce0f7829d76fe44697488e7765d88e29f5a896f57d43caadb0fad4', 
'user_domain_id': '997b3e91271140feb1635eefba7c65a1', 'system_scope': None, 
'project_id': None, 'project_domain_id': None, 'roles': [], 'is_admin_project': 
True, 'service_user_id': None, 'service_user_domain_id': None, 
'service_project_id': None, 'service_project_domain_id': None, 'service_roles': 
[]}
  (keystone.common.wsgi): 2019-06-10 19:47:02,702 WARNING You are not 
authorized to perform the requested action: identity:list_user_projects.

  
  It actually appears elsewhere in the keystone.log that there is a str 
value with bytes data encapsulated in it (or vice versa).
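The mismatch visible in the auth_context above (a bytes user_id compared
against str identifiers elsewhere) can be reproduced in isolation; the IDs
below are made up for illustration:

```python
# Minimal sketch of the py3 inconsistency described above: the same user
# ID held once as bytes and once as str never compares equal, so
# ownership/scope checks quietly fail. IDs are illustrative.
token_user_id = b"abc123"    # inconsistently decoded: bytes under py3
request_user_id = "abc123"   # str taken from the URL path

mismatch = token_user_id != request_user_id
print(mismatch)  # True: bytes != str, even for the "same" ID

# The fix referenced in the changelog entry is to decode binary types
# consistently before they are compared:
if isinstance(token_user_id, bytes):
    token_user_id = token_user_id.decode("utf-8")
print(token_user_id == request_user_id)  # True
```

This is why the RBAC check logs "You are not authorized": the b'...'-prefixed
user_id from the token never matches the str form used in the policy target.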

  (keystone.common.wsgi): 2019-06-10 19:46:59,019 INFO POST 
https://keystone.mysite:5000/v3/auth/tokens
  (sqlalchemy.orm.path_registry): 2019-06-10 19:46:59,021 DEBUG set 
'memoized_setups' on path 'EntityRegistry((,))' to '{}'
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,021 DEBUG Connection 
 checked out from pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 being returned to pool
  (sqlalchemy.pool.QueuePool): 2019-06-10 19:46:59,024 DEBUG Connection 
 rollback-on-return, 
via agent
  (keystone.auth.core): 2019-06-10 19:46:59,025 DEBUG MFA Rules not processed 
for user 

[Yahoo-eng-team] [Bug 1844983] [NEW] Create log file should not explicitly set file mode - it should use the OS umask

2019-09-23 Thread ChrisA
Public bug reported:

In the _initialize_filesystem call (cloudinit/stages.py#L149-L153) to
create the log file via util.ensure_file(log_file) the file mode is
explicitly set to 0o644. This is poor for the security of the system as
the file is world readable and thus fails the CIS benchmarks for the OS.

A suggested remedy is within cloudinit/util.py#L1879 to not call
chmod(filename, mode) and rely on the OS value of umask when creating
log files.

Alternatively the mode for log files could be exposed via the config.
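The difference between the two behaviours can be sketched in plain Python
(the paths, umask value, and helper names are illustrative, not cloud-init's
actual code):

```python
import os
import stat
import tempfile

os.umask(0o027)  # e.g. a restrictive, CIS-style umask


def ensure_file_explicit(path):
    """Create the file, then force the mode -- umask is ignored."""
    open(path, "a").close()
    os.chmod(path, 0o644)  # world-readable regardless of umask


def ensure_file_umask(path):
    """Create the file with a permissive mode and let umask mask it."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)


with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "explicit.log")
    b = os.path.join(d, "umasked.log")
    ensure_file_explicit(a)
    ensure_file_umask(b)
    mode_a = stat.S_IMODE(os.stat(a).st_mode)
    mode_b = stat.S_IMODE(os.stat(b).st_mode)

print(oct(mode_a))  # 0o644: world-readable despite the umask
print(oct(mode_b))  # 0o640: 0o666 masked by umask 0o027
```

The suggested remedy amounts to using the second pattern (or simply not
calling chmod after creation) so the administrator's umask is honoured.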

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1844983

Title:
  Create log file should not explicitly set file mode - it should use
  the OS umask

Status in cloud-init:
  New

Bug description:
  In the _initialize_filesystem call (cloudinit/stages.py#L149-L153) to
  create the log file via util.ensure_file(log_file) the file mode is
  explicitly set to 0o644. This is poor for the security of the system
  as the file is world readable and thus fails the CIS benchmarks for
  the OS.

  A suggested remedy is within cloudinit/util.py#L1879 to not call
  chmod(filename, mode) and rely on the OS value of umask when creating
  log files.

  Alternatively the mode for log files could be exposed via the config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1844983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp