[Yahoo-eng-team] [Bug 1887617] [NEW] Install and configure in keystone

2020-07-14 Thread ad...@zhangqi.net
Public bug reported:


This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [x] I have a fix to the document that I can paste below including example: input and output.

There is no RPM package named mod_wsgi in CentOS 8.x.
It seems we should use python3-mod_wsgi instead.
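
For reference, a minimal sketch of the corrected install step on CentOS 8,
assuming the RDO packages (the exact package set depends on the deployment):

  # CentOS 8 / RDO: mod_wsgi for Python 3 is packaged as python3-mod_wsgi
  dnf install openstack-keystone httpd python3-mod_wsgi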


If you have a troubleshooting or support issue, use the following  resources:

 - Ask OpenStack: https://ask.openstack.org
 - The mailing list: https://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2019-09-18 18:54:05
SHA: 6b8948f6f497f1c79151fb88c733667682f0d6ea
Source: 
https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-rdo.rst
URL: 
https://docs.openstack.org/keystone/ussuri/install/keystone-install-rdo.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1887617

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: input and output.

  There is no RPM package named mod_wsgi in CentOS 8.x.
  It seems we should use python3-mod_wsgi instead.

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: https://ask.openstack.org
   - The mailing list: https://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-09-18 18:54:05
  SHA: 6b8948f6f497f1c79151fb88c733667682f0d6ea
  Source: 
https://opendev.org/openstack/keystone/src/doc/source/install/keystone-install-rdo.rst
  URL: 
https://docs.openstack.org/keystone/ussuri/install/keystone-install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1887617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887588] [NEW] Should add user's domain when using cinder as store backend

2020-07-14 Thread Eric Xie
Public bug reported:

When using cinder as the store backend, there are several configuration options
for the user that calls cinder's API:
cinder_store_auth_address
cinder_store_user_name
cinder_store_password
cinder_store_project_name
cinder_os_region_name
cinder_catalog_info = volumev3:cinderv3:internalURL

In a multi-domain deployment, the user may not belong to the 'Default'
domain, which results in an authentication error.
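
For illustration, a hypothetical glance-api.conf sketch of what is being
requested; the two *_domain_name option names are placeholders to show the
idea, not confirmed glance_store settings:

  cinder_store_auth_address = http://controller:5000/v3
  cinder_store_user_name = glance
  cinder_store_password = secret
  cinder_store_project_name = service
  cinder_os_region_name = RegionOne
  cinder_catalog_info = volumev3:cinderv3:internalURL
  # proposed additions: name the domain explicitly instead of implicitly
  # authenticating against the 'Default' domain
  cinder_store_user_domain_name = service_domain
  cinder_store_project_domain_name = service_domain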

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1887588

Title:
  Should add user's domain when using cinder as store backend

Status in Glance:
  New

Bug description:
  When using cinder as the store backend, there are several configuration
  options for the user that calls cinder's API:
  cinder_store_auth_address
  cinder_store_user_name
  cinder_store_password
  cinder_store_project_name
  cinder_os_region_name
  cinder_catalog_info = volumev3:cinderv3:internalURL

  In a multi-domain deployment, the user may not belong to the 'Default'
  domain, which results in an authentication error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1887588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1886374] Re: Improve lazy loading mechanism for multiple stores

2020-07-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/739423
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=ab0e5268a9c2614572659d763b3c0b6fc36dd0cf
Submitter: Zuul
Branch: master

commit ab0e5268a9c2614572659d763b3c0b6fc36dd0cf
Author: Abhishek Kekane 
Date:   Mon Jul 6 07:49:31 2020 +

Improve lazy loading mechanism for multiple stores

Glance has a lazy loading facility for legacy images which is invoked on
get/list API calls to add store information to an image's location metadata,
based on the image's location URL. If the admin decides to change the store
names in glance-api.conf, the same change will also be reflected in the
location metadata for all images related to that particular store. The
current implementation performs this operation on every get/list call,
because the location metadata is never updated in the database and no check
is made against the store names defined in glance-api.conf.

Improvements done:
1. Save the updated location metadata information in the database permanently.
2. Perform lazy loading only if store information is not present in the
location metadata, or if the store recorded in the location metadata is not
defined in glance's enabled_backends configuration option.

Change-Id: I789fa7adfb459e7861c90a51f418a635c0c22244
Closes-Bug: #1886374


** Changed in: glance
   Status: In Progress => Fix Released

** Changed in: glance/ussuri
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1886374

Title:
  Improve lazy loading mechanism for multiple stores

Status in Glance:
  Fix Released
Status in Glance train series:
  New
Status in Glance ussuri series:
  In Progress

Bug description:
  Glance has a lazy loading facility for legacy images which is invoked on
  get/list API calls to add store information to an image's location
  metadata, based on the image's location URL. If the admin decides to change
  the store names in glance-api.conf, the same change will also be reflected
  in the location metadata for all images related to that particular store.
  The current implementation performs this operation on every get/list call,
  because the location metadata is never updated in the database and no check
  is made against the store names defined in glance-api.conf.

  Proposed fix for improvements:
  1. Save the updated location metadata information in the database permanently.
  2. Perform lazy loading only if store information is not present in the
  location metadata, or if the store recorded in the location metadata is not
  defined in glance's enabled_backends configuration option (see the example
  configuration below).
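
  For context, a minimal sketch of the multi-store configuration this refers
  to (the store IDs and drivers are placeholders, not values from the bug):

      [DEFAULT]
      enabled_backends = fast:file, reliable:rbd

      [glance_store]
      default_backend = fast

  With the change, lazy loading would only run for images whose location
  metadata names no store at all, or names a store that is not listed in
  enabled_backends.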

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1886374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887497] Re: Cleanup stale flows by cookie and table_id instead of just by cookie

2020-07-14 Thread Lajos Katona
Hi, thanks for the bug report, and thanks Liu for checking.
I feel this is more of an opinion at the moment; with some more detail it could
become an RFE, which can be discussed at the drivers meeting with the rest of
the team. What do you think?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887497

Title:
  Cleanup stale flows by cookie and table_id instead of just by cookie

Status in neutron:
  Opinion

Bug description:
  Pre-conditions: after restarting neutron-ovs-agent.

  After I restart neutron-ovs-agent, I found that neutron cleans up stale
  flows only by cookie, and the cookies in different tables are always the
  same; that means flows in table 20 can be removed using cookies from
  table 0! I think the safer way is to clean up stale flows by cookie and
  table_id instead of just by cookie.
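
  For illustration, a minimal ovs-ofctl sketch of the difference (the bridge
  name and cookie value are placeholders):

    # current behaviour: stale flows are matched by cookie only, in every table
    ovs-ofctl del-flows br-int "cookie=0xabc0123/-1"

    # proposed: restrict the cleanup to the table actually being processed
    ovs-ofctl del-flows br-int "cookie=0xabc0123/-1,table=20"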

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1884764] Re: Some overcloud nodes are missing in the ansible inventory file generated for migration to ML2OVN

2020-07-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/738212
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=aa6491a9d91bb5aab6bfaba81cf3a82279a551f7
Submitter: Zuul
Branch: master

commit aa6491a9d91bb5aab6bfaba81cf3a82279a551f7
Author: Oliver Walsh 
Date:   Fri Jun 26 15:04:56 2020 +0100

migration: Use ansible-inventory to parse tripleo inventory

Instead of adapting to changes in the tripleo inventory structure, let
ansible parse it for us using ansible-inventory.

Change-Id: I34ad0fd5feed65dd1266993a77f6ebc69fecfdfb
Closes-bug: #1884764


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1884764

Title:
  Some overcloud nodes are missing in the ansible inventory file
  generated for migration to ML2OVN

Status in neutron:
  Fix Released

Bug description:
  When trying to perform a migration from ml2ovs to ml2ovn using
  migration tool, ovn_migration.sh script creates a file
  hosts_for_migration which includes only single controller and single
  compute node even on environments with more than 1 compute and
  controller nodes.

  The problem started to happen because the output of
  "/usr/bin/tripleo-ansible-inventory --list" changed after a recent tripleo
  change.

  When running the get_role_hosts function from
  tools/ovn_migration/tripleo_environment/ovn_migration.sh:L143
  (get_role_hosts /tmp/ansible-inventory.txt neutron_api) on an
  environment with 3 controller nodes, we now get the following:

  jq: error (at /tmp/ansible-inventory.txt:1): Cannot iterate over null (null)
  controller-0

  In the past the output was correct:
  controller-0 controller-1 controller-2

  Similarly for tools/ovn_migration/tripleo_environment/ovn_migration.sh:L158
  (get_role_hosts /tmp/ansible-inventory.txt neutron_ovs_agent):

  jq: error (at ansible-inventory_osp16.1_ovs:1): Cannot iterate over null (null)
  compute-0 controller-0

  while the correct output should be:
  controller-0 controller-1 controller-2 compute-0 compute-1


  A possible solution is to replace L93 in
  tools/ovn_migration/tripleo_environment/ovn_migration.sh
  from
  roles=`jq -r \.$role_name\.children\[\] $inventory_file`
  to
  roles=`jq -r \.overcloud_$role_name\.children\[\] $inventory_file || jq -r \.$role_name\.children\[\] $inventory_file`

  In this case the function returns the proper list of nodes for both the old
  and the new ansible-inventory file formats.

  
  Some details:

  Output of the jq command from
  tools/ovn_migration/tripleo_environment/ovn_migration.sh:L93

  old inventory format:
  [stack@undercloud-0 ~]$ jq -r \.neutron_api\.children\[\] /tmp/ansible-inventory.txt
  Controller

  new inventory format:
  (overcloud) [stack@undercloud-0 ~]$ jq -r \.neutron_api\.children\[\] /tmp/ansible-inventory.txt
  overcloud_neutron_api

  
  related snippet from the old tripleo-ansible-inventory format:
  ...
  "neutron_api": {
      "children": [
          "Controller"
      ],
      "vars": {
          "ansible_ssh_user": "heat-admin"
      }
  },
  ...

  related snippet from the new tripleo-ansible-inventory format:
  ...
  "neutron_api": {
      "children": [
          "overcloud_neutron_api"
      ]
  },
  "overcloud_neutron_api": {
      "children": [
          "overcloud_Controller"
      ]
  },

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1884764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887099] Re: Invalid metadefs for watchdog

2020-07-14 Thread Erno Kuvaja
** Also affects: glance/ussuri
   Importance: Undecided
   Status: New

** Also affects: glance/train
   Importance: Undecided
   Status: New

** Also affects: glance/victoria
   Importance: Undecided
 Assignee: Cyril Roelandt (cyril-roelandt)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1887099

Title:
  Invalid metadefs for watchdog

Status in Glance:
  In Progress
Status in Glance train series:
  New
Status in Glance ussuri series:
  New
Status in Glance victoria series:
  In Progress

Bug description:
  The metadefs for the watchdog features, located in
  etc/metadefs/compute-watchdog.json, seem to be invalid and should read:

      "resource_type_associations": [
          {
              "name": "OS::Glance::Image",
              "prefix": "hw_"
          },
          {
              "name": "OS::Nova::Flavor",
              "prefix": "hw:"
          }
      ],

  The "hw" prefix should also be removed from the property.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1887099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887523] [NEW] Deadlock detection code can be stale

2020-07-14 Thread Mohammed Naser
Public bug reported:

oslo.db has plenty of infrastructure for detecting deadlocks; however, it
seems that at the moment neutron has its own implementation, which misses a
number of deadlock cases and causes issues when doing work at scale.

This bug is to track the work of refactoring all of this to use the native
oslo retry mechanism.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887523

Title:
  Deadlock detection code can be stale

Status in neutron:
  New

Bug description:
  oslo.db has plenty of infrastructure for detecting deadlocks; however, it
  seems that at the moment neutron has its own implementation, which misses a
  number of deadlock cases and causes issues when doing work at scale.

  This bug is to track the work of refactoring all of this to use the native
  oslo retry mechanism.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887363] Re: [ovn-octavia-provider] Functional tests job fails

2020-07-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/726917
Committed: 
https://git.openstack.org/cgit/openstack/ovn-octavia-provider/commit/?id=773daf59de78617046e7881d6f2c55e2ff6e595c
Submitter: Zuul
Branch: master

commit 773daf59de78617046e7881d6f2c55e2ff6e595c
Author: Maciej Józefczyk 
Date:   Mon Jul 13 09:24:59 2020 +0200

Fix pep8 and functional jobs

This change squashes 3 different patches that we need
in order to pass the gate.

1) New versions of isort broke pylint. This patch pins isort to 4.3.21.

2) The functional job fails because of bugfixes in the neutron
devstack lib. We need to update our functional jobs as well.

3) Add functional release and master jobs that build OVN

In order to test the latest code in the core OVN repository,
add a job that builds it from source from the master branch.

Also define a second job that will run the code with the latest
OVN/OVS release.

Closes-Bug: #1887363
Change-Id: Ic013e5a0605c28453d3ee1b64031022f6f75f8f6


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887363

Title:
  [ovn-octavia-provider] Functional tests job fails

Status in neutron:
  Fix Released

Bug description:
  Functional tests job fails on:

  2020-07-13 08:22:50.145117 | controller | + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_base_deps:113
 :   source 
/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/devstack/lib/ovs
  2020-07-13 08:22:50.145252 | controller | 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:
 line 113: 
/home/zuul/src/opendev.org/openstack/ovn-octavia-provider/devstack/lib/ovs: No 
such file or directory

  
https://9ce43a75e3387ceb8909-2b4f2fa211fea8445ec0f4a568f6056b.ssl.cf2.rackcdn.com/740625/1/check
  /ovn-octavia-provider-functional/714ba02/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887515] [NEW] [RFE] Keystone to honor the "domain" attribute mapping rules

2020-07-14 Thread Rafael Weingartner
Public bug reported:

Problem Description
===================
Currently, the Keystone identity provider (IdP) attribute mapping schema only
uses the "domain" attribute mapping as a default configuration for the domain
of the groups being mapped; groups can override the default attribute mapping
domain by setting their own specific domain. However, there are other
"elements", such as users and projects, that can also have a domain defining
their location in OpenStack.

An operator, when reading the attribute mapping section and seeing the schema
for the attribute mapping definition, can be led to think that the domain
defined in the mapping will also apply to users and projects. However, that is
not what happens.

Proposed Change
===============
First of all, to facilitate the development and extension of attribute
mappings for IdPs, we changed the way the attribute mapping schema is handled.
We introduce a new configuration option,
`federation_attribute_mapping_schema_version`, which defaults to "1.0". This
attribute mapping schema version is then used to control the validation of the
attribute mapping and the rule processors used to process the attributes that
come from the IdP. So far, with this PR, we introduce attribute mapping schema
"1.1", which enables operators to also define a domain for the projects to
which they want to assign users. If no domain is defined either in the project
or in the global domain definition of the attribute mapping, we take the IdP
domain as the default.

Moreover, we propose to extend the Keystone identity provider (IdP) attribute
mapping schema to make Keystone honor the `domain` configuration defined in
it. Currently, that configuration is only used to define a default domain for
groups (and each group there can override it). It is interesting to expand
this configuration (as long as it is in the root of the attribute mapping) to
also apply to users and projects.
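
To make the intent concrete, a hypothetical mapping sketch; the remote
attribute, domain name, group, and project entries are illustrative only, and
the exact "1.1" schema syntax is an assumption, not the final format. Under
the proposal, the root-level "domain" below would act as the default domain
for the user and the listed projects as well, not only for the group:

  [
      {
          "local": [
              {
                  "user": {"name": "{0}"},
                  "group": {"name": "federated_users"},
                  "domain": {"name": "federated_domain"},
                  "projects": [
                      {"name": "Project-A", "roles": [{"name": "member"}]}
                  ]
              }
          ],
          "remote": [
              {"type": "OIDC-preferred_username"}
          ]
      }
  ]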

** Affects: keystone
 Importance: Undecided
 Assignee: Rafael Weingartner (rafaelweingartner)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Rafael Weingartner (rafaelweingartner)

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1887515

Title:
  [RFE] Keystone to honor the "domain" attribute mapping rules

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Problem Description
  ===================
  Currently, the Keystone identity provider (IdP) attribute mapping schema
  only uses the "domain" attribute mapping as a default configuration for the
  domain of the groups being mapped; groups can override the default
  attribute mapping domain by setting their own specific domain. However,
  there are other "elements", such as users and projects, that can also have
  a domain defining their location in OpenStack.

  An operator, when reading the attribute mapping section and seeing the
  schema for the attribute mapping definition, can be led to think that the
  domain defined in the mapping will also apply to users and projects.
  However, that is not what happens.

  Proposed Change
  ===============
  First of all, to facilitate the development and extension of attribute
  mappings for IdPs, we changed the way the attribute mapping schema is
  handled. We introduce a new configuration option,
  `federation_attribute_mapping_schema_version`, which defaults to "1.0".
  This attribute mapping schema version is then used to control the
  validation of the attribute mapping and the rule processors used to process
  the attributes that come from the IdP. So far, with this PR, we introduce
  attribute mapping schema "1.1", which enables operators to also define a
  domain for the projects to which they want to assign users. If no domain is
  defined either in the project or in the global domain definition of the
  attribute mapping, we take the IdP domain as the default.

  Moreover, we propose to extend the Keystone identity provider (IdP)
  attribute mapping schema to make Keystone honor the `domain` configuration
  defined in it. Currently, that configuration is only used to define a
  default domain for groups (and each group there can override it). It is
  interesting to expand this configuration (as long as it is in the root of
  the attribute mapping) to also apply to users and projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1887515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887283] Re: Touchpad not working with 5.3.0-59-generic but working with 5.3.0-53-generic

2020-07-14 Thread Digvijay Singh
Hi Lajos,

Corrected application name.

** Project changed: neutron => kernel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887283

Title:
  Touchpad not working with 5.3.0-59-generic but working with
  5.3.0-53-generic

Status in linux:
  New

Bug description:
  I have installed Ubuntu on my Dell Inspiron laptop. After a recent
  upgrade to 18.04.1-Ubuntu with kernel 5.3.0-59-generic my touchpad
  stopped working. When I revert to the 5.3.0-53-generic kernel it works
  fine. Please find the details below:


  $uname -a 
  Linux Renegade 5.3.0-53-generic #47~18.04.1-Ubuntu SMP Thu May 7 13:10:50 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux
  $xinput 
  ⎡ Virtual core pointerid=2[master pointer  (3)]
  ⎜   ↳ Virtual core XTEST pointer  id=4[slave  pointer 
 (2)]
  ⎜   ↳ **SynPS/2 Synaptics TouchPad**  id=12   [slave  pointer 
 (2)]
  ⎣ Virtual core keyboard   id=3[master keyboard (2)]
  ↳ Virtual core XTEST keyboard id=5[slave  
keyboard (3)]
  ↳ Power Buttonid=6[slave  
keyboard (3)]
  ↳ Video Bus   id=7[slave  
keyboard (3)]
  ↳ Power Buttonid=8[slave  
keyboard (3)]
  ↳ Laptop_Integrated_Webcam_HD: In id=9[slave  
keyboard (3)]
  ↳ Dell WMI hotkeysid=10   [slave  
keyboard (3)]
  ↳ AT Translated Set 2 keyboardid=11   [slave  
keyboard (3)]
  $

  With the new kernel.

  
  uname -a
  Linux Renegade 5.3.0-59-generic #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 
2020 x86_64 x86_64 x86_64 GNU/Linux

  $xinput
  ⎡ Virtual core pointerid=2[master pointer  (3)]
  ⎜   ↳ Virtual core XTEST pointer  id=4[slave  pointer 
 (2)]
  ⎣ Virtual core keyboard   id=3[master keyboard (2)]
  ↳ Virtual core XTEST keyboard id=5[slave  
keyboard (3)]
  ↳ Power Buttonid=6[slave  
keyboard (3)]
  ↳ Video Bus   id=7[slave  
keyboard (3)]
  ↳ Power Buttonid=8[slave  
keyboard (3)]
  ↳ Laptop_Integrated_Webcam_HD: In id=9[slave  
keyboard (3)]
  ↳ Dell WMI hotkeysid=10   [slave  
keyboard (3)]
  ↳ AT Translated Set 2 keyboardid=11   [slave  
keyboard (3)]


  
  Version: Intel(R) Core(TM) i5-3337U CPU @ 1.80GHz

  I have tried using the xserver-xorg-input-synaptics package, but it does
  not work either.
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.9-0ubuntu7.15
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  renegade   1378 F pulseaudio
   /dev/snd/pcmC0D0p:   renegade   1378 F...m pulseaudio
  CurrentDesktop: ubuntu:GNOME
  DistroRelease: Ubuntu 18.04
  InstallationDate: Installed on 2020-01-05 (189 days ago)
  InstallationMedia: Ubuntu 18.04.3 LTS "Bionic Beaver" - Release amd64 
(20190805)
  MachineType: Dell Inc. Inspiron 3521
  Package: linux (not installed)
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.3.0-53-generic 
root=/dev/mapper/ubuntu--vg-root ro quiet splash vt.handoff=1
  ProcVersionSignature: Ubuntu 5.3.0-53.47~18.04.1-generic 5.3.18
  RelatedPackageVersions:
   linux-restricted-modules-5.3.0-53-generic N/A
   linux-backports-modules-5.3.0-53-generic  N/A
   linux-firmware1.173.18
  Tags:  bionic
  Uname: Linux 5.3.0-53-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 04/18/2013
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: A07
  dmi.board.name: 0010T1
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A02
  dmi.chassis.type: 8
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: A07
  dmi.modalias: 
dmi:bvnDellInc.:bvrA07:bd04/18/2013:svnDellInc.:pnInspiron3521:pvrA07:rvnDellInc.:rn0010T1:rvrA02:cvnDellInc.:ct8:cvrA07:
  dmi.product.family: 103C_5335KV
  dmi.product.name: Inspiron 3521
  dmi.product.sku: xxx123x#ABA
  dmi.product.version: A07
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kernel/+bug/1887283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887497] [NEW] Cleanup stale flows by cookie and table_id instead of just by cookie

2020-07-14 Thread gao yu
Public bug reported:

Pre-conditions: after restarting neutron-ovs-agent.

After I restart neutron-ovs-agent, I found that neutron cleans up stale flows
only by cookie, and the cookies in different tables are always the same; that
means flows in table 20 can be removed using cookies from table 0! I think the
safer way is to clean up stale flows by cookie and table_id instead of just by
cookie.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887497

Title:
  Cleanup stale flows by cookie and table_id instead of just by cookie

Status in neutron:
  New

Bug description:
  Pre-conditions: after restarting neutron-ovs-agent.

  After I restart neutron-ovs-agent, I found that neutron cleans up stale
  flows only by cookie, and the cookies in different tables are always the
  same; that means flows in table 20 can be removed using cookies from
  table 0! I think the safer way is to clean up stale flows by cookie and
  table_id instead of just by cookie.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828768] Re: dashboard can't be accessed; httpd error.log: KeyError: 'used'

2020-07-14 Thread Akihiro Motoki
As commented in #2 above, this happens only when a new horizon release is
used together with a backend that is a couple of releases older.
Considering this, we won't fix it.

** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1828768

Title:
  dashboard can't be accessed; httpd error.log: KeyError: 'used'

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  os system: centos7.4
  openstack (horizon, nova, glance): rocky
  keystone: ocata
  dashboard: http://$ip/dashboardweb is ok
  region: there are two regions

  less /var/log/httpd/error_log

  
  [Fri May 10 06:02:19.989590 2019] [auth_digest:notice] [pid 118545] AH01757: 
generating secret for digest authentication ...
  [Fri May 10 06:02:19.990169 2019] [lbmethod_heartbeat:notice] [pid 118545] 
AH02282: No slotmem from mod_heartmonitor
  [Fri May 10 06:02:19.995029 2019] [mpm_prefork:notice] [pid 118545] AH00163: 
Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal 
operations
  [Fri May 10 06:02:19.995050 2019] [core:notice] [pid 118545] AH00094: Command 
line: '/usr/sbin/httpd -D FOREGROUND'
  [Fri May 10 16:22:33.626411 2019] [:error] [pid 118549] WARNING:root:Use of 
this 'djano.wsgi' file has been deprecated since the Rocky release in favor of 
'wsgi.py' in the 'openstack_dashboard' module. This fi
  le is a legacy naming from before Django 1.4 and an importable 'wsgi.py' is 
now the default. This file will be removed in the T release cycle.
  [Fri May 10 08:22:56.809565 2019] [:error] [pid 118549] INFO 
openstack_auth.plugin.base Attempted scope to domain default failed, will 
attempt to scope to another domain.
  [Fri May 10 08:22:57.492431 2019] [:error] [pid 118549] INFO 
openstack_auth.forms Login successful for user "admin" using domain "default", 
remote address 10.4.10.101.
  [Fri May 10 08:23:04.971785 2019] [:error] [pid 118549] ERROR django.request 
Internal Server Error: /dashboard/project/
  [Fri May 10 08:23:04.971810 2019] [:error] [pid 118549] Traceback (most 
recent call last):
  [Fri May 10 08:23:04.971814 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, 
in inner
  [Fri May 10 08:23:04.971817 2019] [:error] [pid 118549] response = 
get_response(request)
  [Fri May 10 08:23:04.971820 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in 
_get_response
  [Fri May 10 08:23:04.971823 2019] [:error] [pid 118549] response = 
self.process_exception_by_middleware(e, request)
  [Fri May 10 08:23:04.971826 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in 
_get_response
  [Fri May 10 08:23:04.971829 2019] [:error] [pid 118549] response = 
wrapped_callback(request, *callback_args, **callback_kwargs)
  [Fri May 10 08:23:04.971832 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Fri May 10 08:23:04.971835 2019] [:error] [pid 118549] return 
view_func(request, *args, **kwargs)
  [Fri May 10 08:23:04.971837 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
  [Fri May 10 08:23:04.971840 2019] [:error] [pid 118549] return 
view_func(request, *args, **kwargs)
  [Fri May 10 08:23:04.971842 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
  [Fri May 10 08:23:04.971845 2019] [:error] [pid 118549] return 
view_func(request, *args, **kwargs)
  [Fri May 10 08:23:04.971859 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/decorators.py", line 113, in dec
  [Fri May 10 08:23:04.971862 2019] [:error] [pid 118549] return 
view_func(request, *args, **kwargs)
  [Fri May 10 08:23:04.971865 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
  [Fri May 10 08:23:04.971868 2019] [:error] [pid 118549] return 
view_func(request, *args, **kwargs)
  [Fri May 10 08:23:04.971870 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in 
view
  [Fri May 10 08:23:04.971873 2019] [:error] [pid 118549] return 
self.dispatch(request, *args, **kwargs)
  [Fri May 10 08:23:04.971876 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in 
dispatch
  [Fri May 10 08:23:04.971878 2019] [:error] [pid 118549] return 
handler(request, *args, **kwargs)
  [Fri May 10 08:23:04.971881 2019] [:error] [pid 118549]   File 
"/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 226, in get
  [Fri May 10 08:23:04.971884 2019] [:error] [pid 118549] context = 
s