[Yahoo-eng-team] [Bug 1779251] [NEW] Unable to return list of all images (e.g. public + community) in a single request

2018-06-28 Thread Andy Botting
Public bug reported:

I just came across a use-case where I needed to list all images
available to me, including community ones.

It would be useful to have a new value for the visibility filter,
'all', to return all available images in a single request.
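
As a hedged illustration (GLANCE_URL and TOKEN are placeholders, and
'all' is the proposal, not current behavior), today each visibility
value needs its own request against the documented Images v2 API:

  # today: one request per visibility value, merged client-side
  curl -s -H "X-Auth-Token: $TOKEN" "$GLANCE_URL/v2/images?visibility=public"
  curl -s -H "X-Auth-Token: $TOKEN" "$GLANCE_URL/v2/images?visibility=community"
  # proposed: one request returning every image visible to the caller
  curl -s -H "X-Auth-Token: $TOKEN" "$GLANCE_URL/v2/images?visibility=all"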

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1779251

Title:
  Unable to return list of all images (e.g. public + community) in a
  single request

Status in Glance:
  New

Bug description:
  I just came across a use-case where I needed to list all images
  available to me, including community ones.

  It would be useful to have a new value for the visibility filter,
  'all', to return all available images in a single request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1779251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779250] [NEW] Launch instance workflow doesn't allow for community/shared images filter

2018-06-28 Thread Andy Botting
Public bug reported:

When launching an instance, it is not possible to filter the source
image list by the visibility values 'Shared With Project' or 'Community'.
These options are not available because support for them appears to be
missing in the frontend JavaScript code.

The Project->Images table doesn't suffer from the same limitation,
because its Visibility filter is applied as a new dynamic API request
to Glance instead. The Launch Instance->Source tab appears to make one
API request to Glance and filter the list locally, and Glance currently
cannot return all images in one request (e.g. it can't list public +
community together).
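
A hedged sketch of the client-side merge the frontend would need until
Glance can return everything at once (GLANCE_URL and TOKEN are
placeholders; jq is used only for illustration):

  for vis in public private shared community; do
    curl -s -H "X-Auth-Token: $TOKEN" \
      "$GLANCE_URL/v2/images?visibility=$vis"
  done | jq -s '[.[].images[]] | unique_by(.id)'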

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1779250

Title:
  Launch instance workflow doesn't allow for community/shared images
  filter

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching an instance, it is not possible to filter the source
  image list by the visibility values 'Shared With Project' or
  'Community'. These options are not available because support for them
  appears to be missing in the frontend JavaScript code.

  The Project->Images table doesn't suffer from the same limitation,
  because its Visibility filter is applied as a new dynamic API request
  to Glance instead. The Launch Instance->Source tab appears to make one
  API request to Glance and filter the list locally, and Glance
  currently cannot return all images in one request (e.g. it can't list
  public + community together).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1779250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779225] [NEW] Incorrect policy check for update/create port fixed_ips ip_address attribute

2018-06-28 Thread Cliff Parsons
Public bug reported:

The two Patrole test cases below have helped me identify that Neutron is
incorrectly performing the policy check for creating/updating the
fixed ip_address on a port. 

patrole_tempest_plugin.tests.api.network.test_ports_rbac.PortsRbacTest.
test_create_port_fixed_ips_ip_address 
patrole_tempest_plugin.tests.api.network.test_ports_rbac.PortsRbacTest.
test_update_port_fixed_ips_ip_address

The policy.json file has two rules for the fixed IP addresses:
"create_port:fixed_ips:ip_address": "rule:context_is_advsvc or \
 rule:admin_or_network_owner",
"update_port:fixed_ips:ip_address": "rule:context_is_advsvc or \
 rule:admin_or_network_owner",

The problem is that these two rules are not enforced within the Neutron
code. Instead, the older "create_port:fixed_ips" and "update_port:fixed_ips"
rules are enforced; these older rules are no longer in the policy.json file.
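
For illustration, a hedged reproduction of the operation those rules
are meant to guard (NEUTRON_URL, TOKEN and PORT_ID are placeholders); a
request like this from a non-admin, non-network-owner tenant should be
denied by the ip_address rule:

  curl -s -X PUT \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"port": {"fixed_ips": [{"ip_address": "10.0.0.150"}]}}' \
    "$NEUTRON_URL/v2.0/ports/$PORT_ID"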

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779225

Title:
  Incorrect policy check for update/create port fixed_ips ip_address
  attribute

Status in neutron:
  New

Bug description:
  The two Patrole test cases below have helped me identify that Neutron is
  incorrectly performing the policy check for creating/updating the
  fixed ip_address on a port. 

  patrole_tempest_plugin.tests.api.network.test_ports_rbac.PortsRbacTest.
  test_create_port_fixed_ips_ip_address 
  patrole_tempest_plugin.tests.api.network.test_ports_rbac.PortsRbacTest.
  test_update_port_fixed_ips_ip_address

  The policy.json file has two rules for the fixed IP addresses:
  "create_port:fixed_ips:ip_address": "rule:context_is_advsvc or \
   rule:admin_or_network_owner",
  "update_port:fixed_ips:ip_address": "rule:context_is_advsvc or \
   rule:admin_or_network_owner",

  The problem is that these two rules are not enforced within the Neutron
  code. Instead, the older "create_port:fixed_ips" and "update_port:fixed_ips"
  rules are enforced; these older rules are no longer in the policy.json file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762688] Re: rescued instance doesn't have attached vGPUs

2018-06-28 Thread Matt Riedemann
** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Sylvain Bauza (sylvain-bauza)

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762688

Title:
  rescued instance doesn't have attached vGPUs

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  With the libvirt driver, rescuing an instance means that the attached
  mediated devices for virtual GPUs will be released. When unrescuing
  the instance, the driver will use the existing guest XML so it will
  attach again the mediated devices, but it could be a race condition in
  case the related vGPUs are now attached to a separate instance.

  We should attach the mediated devices to the rescued instance too.
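
  A hedged way to observe this on a compute node (the instance name is
  an example): list the host's mediated devices and check whether the
  rescued guest's XML still references one:

    virsh nodedev-list --cap mdev
    virsh dumpxml instance-0000abcd | grep -B1 -A3 "type='mdev'"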

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1768980] Re: Wrong Port in "Create OpenStack client environment scripts in keystone" document

2018-06-28 Thread Lance Bragstad
Moving this back to Fix Committed since we haven't released the fix yet.

** Changed in: keystone
 Assignee: (unassigned) => Colleen Murphy (krinkle)

** Changed in: keystone
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1768980

Title:
  Wrong Port in "Create OpenStack client environment scripts in
  keystone" document

Status in OpenStack Identity (keystone):
  Fix Committed

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __

  On the admin auth URL, it was supposed to be port 35357 instead of
  5000, as mentioned on the previous page. Even though it works on 5000
  too, the script does not do the same as the previous page.
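
  For reference, a hedged sketch of the line in question ('controller'
  is the placeholder host used by the install guide); the preceding
  page configures the admin endpoint on port 35357:

    # admin-openrc, as the previous page of the guide sets it up
    export OS_AUTH_URL=http://controller:35357/v3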

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 13.0.1.dev8 on 2018-05-02 17:02
  SHA: 56d108858a2284516e1cba66a86883ea969755d4
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-openrc-rdo.rst
  URL: 
https://docs.openstack.org/keystone/queens/install/keystone-openrc-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1768980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761140] Re: dpkg error processing package nova-compute

2018-06-28 Thread Corey Bryant
** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: nova (Ubuntu Bionic)
   Importance: Undecided => Medium

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/rocky
   Status: New => Triaged

** Changed in: cloud-archive/queens
   Status: New => Triaged

** Changed in: cloud-archive/queens
   Importance: Undecided => Medium

** Changed in: cloud-archive/rocky
   Importance: Undecided => Medium

** Also affects: cloud-archive/mitaka
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/pike
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/pike
   Status: New => Triaged

** Changed in: cloud-archive/ocata
   Status: New => Triaged

** Changed in: cloud-archive/pike
   Importance: Undecided => Medium

** Changed in: cloud-archive/mitaka
   Status: New => Triaged

** Changed in: cloud-archive/ocata
   Importance: Undecided => Medium

** Changed in: cloud-archive/mitaka
   Importance: Undecided => Medium

** Also affects: nova (Ubuntu Cosmic)
   Importance: Medium
   Status: Triaged

** Also affects: nova (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: nova (Ubuntu Xenial)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1761140

Title:
  dpkg error processing package nova-compute

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive mitaka series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in Ubuntu Cloud Archive pike series:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Xenial:
  Triaged
Status in nova source package in Bionic:
  Triaged
Status in nova source package in Cosmic:
  Triaged

Bug description:
  Hello! 
  I've encountered the bug while installing Nova on compute nodes:

  ...
  Setting up qemu-system-x86 (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-kvm (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up qemu-utils (1:2.11+dfsg-1ubuntu5~cloud0) ...
  Setting up python-keystone (2:13.0.0-0ubuntu1~cloud0) ...
  Processing triggers for initramfs-tools (0.122ubuntu8.11) ...
  update-initramfs: Generating /boot/initrd.img-4.4.0-116-generic
  Setting up nova-compute-libvirt (2:17.0.1-0ubuntu1~cloud0) ...
  adduser: The user `nova' does not exist.
  dpkg: error processing package nova-compute-libvirt (--configure):
   subprocess installed post-installation script returned error exit status 1
  dpkg: dependency problems prevent configuration of nova-compute-kvm:
   nova-compute-kvm depends on nova-compute-libvirt (= 
2:17.0.1-0ubuntu1~cloud0); however:
Package nova-compute-libvirt is not configured yet.

  dpkg: error processing package nova-compute-kvm (--configure):
   dependency problems - leaving unconfigured
  Setting up python-os-brick (2.3.0-0ubuntu1~cloud0) ...
  No apport report written because the error message indicates its a followup 
error from a previous failure.
  Setting up python-nova (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-common (2:17.0.1-0ubuntu1~cloud0) ...
  Setting up nova-compute (2:17.0.1-0ubuntu1~cloud0) ...
  Processing triggers for libc-bin (2.23-0ubuntu10) ...
  Processing triggers for systemd (229-4ubuntu21.2) ...
  Processing triggers for ureadahead (0.100.0-19) ...
  Processing triggers for dbus (1.10.6-1ubuntu3.3) ...
  Errors were encountered while processing:
   nova-compute-libvirt
   nova-compute-kvm
  ...

  Installation failed when executing the post-installation script.
  After some investigation I found out that if I create the 'nova' user
  BEFORE running the package installation, it succeeds.

  Steps to reproduce
  --
  1. Prepare the node for installing nova-compute packages
  2. Run 'apt-get install nova-compute'

  Expected result
  --
  Nova-compute installed successfully without errors

  Actual result
  --
  Installation failed with dpkg error

  Workaround
  --
  1. Create system user: add to /etc/passwd
 nova:x:64060:64060::/var/lib/nova:/bin/false
  2. Create system group: add to /etc/group
 nova:x:64060:
  3. Run 'apt-get install nova-compute'
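
  Equivalently, a hedged sketch using standard tools instead of editing
  /etc/passwd and /etc/group by hand (UID/GID values copied from the
  steps above):

    groupadd --system --gid 64060 nova
    useradd --system --uid 64060 --gid nova \
      --home-dir /var/lib/nova --shell /bin/false nova
    apt-get install nova-compute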
 
  My Environment
  --
  

[Yahoo-eng-team] [Bug 1779207] [NEW] Failed mount of '/dev/sdb1' with Swap File cloud-init config

2018-06-28 Thread Daniel Sol
Public bug reported:

Environment : RHEL 7.5 on Azure, using cloud-init 18.2 rebase with these
patches: https://git.launchpad.net/cloud-init/commit/?id=aa4eeb80

Scenario:
I provision an image with the above on Azure; in addition, I apply this
cloud-init config:
#cloud-config
disk_setup:
  ephemeral0:
    table_type: gpt
    layout: [66, [33, 82]]
    overwrite: True
fs_setup:
- device: ephemeral0.1
  filesystem: ext4
- device: ephemeral0.2
  filesystem: swap
mounts:
- ["ephemeral0.1", "/mnt"]
- ["ephemeral0.2", "none", "swap", "sw", "0", "0"]

The VM provisions successfully; /dev/sdb1 gets mounted and the swap
file config succeeds.

If I deallocate the VM and start it, /dev/sdb1 does not get mounted,
and you see the errors below.

2018-06-20 19:50:38,138 - util.py[WARNING]: Failed reading the partition table 
Unexpected error while running command.
Command: ['/usr/sbin/blockdev', '--rereadpt', '/dev/sdb']
Exit code: 1
Reason: -
Stdout:
Stderr: blockdev: ioctl error on BLKRRPART: Device or resource busy
2018-06-20 19:50:38,138 - util.py[DEBUG]: Failed reading the partition table 
Unexpected error while running command.
Command: ['/usr/sbin/blockdev', '--rereadpt', '/dev/sdb']
Exit code: 1
Reason: -
Stdout:
Stderr: blockdev: ioctl error on BLKRRPART: Device or resource busy
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/config/cc_disk_setup.py", 
line 685, in read_parttbl
util.subp(blkdev_cmd)
  File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 1958, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.

Running multiple deallocate/start cycles does not resolve the issue.

I check fstab:
UUID=6e4681f0-1ec3-4c27-9c16-afb90a4c4513  /      xfs  defaults  0 0
UUID=f2116236-4be6-4b72-b107-c2d7a68688c4  /boot  xfs  defaults  0 0
/dev/disk/cloud/azure_resource-part1  /mnt  auto  defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig  0 2
/dev/disk/cloud/azure_resource-part2  none  swap  sw,comment=cloudconfig  0 0

As a test, I changed the fstab entry for the swap to add a dependency on
cloud-init:
/dev/disk/cloud/azure_resource-part2  none  swap  sw,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig  0 0

I deallocated and started the VM; sdb mounts correctly, but the fstab
entry for the swap is overwritten with the previous config. I modified
fstab again with the addition, then rebooted, and again everything
works.

Does azure_resource-part2 need a dependency like azure_resource-part1?

You can also test this using the existing Azure cloud-init preview images 
(cloud-init 0.7.9.x)
az vm create \
  --resource-group rgName \
  --name vmName \
  --image RedHat:RHEL:7-RAW-CI:latest \
  --admin-username anotherusr \
  --custom-data /../swapconfig.txt \
  --ssh-key-value /.../my.pub

az vm deallocate --resource-group rgName --name vmName
az vm start --resource-group rgName --name vmName


Thanks,

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1779207

Title:
  Failed mount of '/dev/sdb1' with Swap File cloud-init config

Status in cloud-init:
  New

Bug description:
  Environment : RHEL 7.5 on Azure, using cloud-init 18.2 rebase with
  these patches: https://git.launchpad.net/cloud-
  init/commit/?id=aa4eeb80

  Scenario:
  I provision an image with the above on Azure; in addition, I apply
  this cloud-init config:
  #cloud-config
  disk_setup:
    ephemeral0:
      table_type: gpt
      layout: [66, [33, 82]]
      overwrite: True
  fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
  mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]

  The VM provisions successfully; /dev/sdb1 gets mounted and the swap
  file config succeeds.

  If I deallocate the VM and start it, /dev/sdb1 does not get mounted,
  and you see the errors below.

  2018-06-20 19:50:38,138 - util.py[WARNING]: Failed reading the partition 
table Unexpected error while running command.
  Command: ['/usr/sbin/blockdev', '--rereadpt', '/dev/sdb']
  Exit code: 1
  Reason: -
  Stdout:
  Stderr: blockdev: ioctl error on BLKRRPART: Device or resource busy
  2018-06-20 19:50:38,138 - util.py[DEBUG]: Failed reading the partition table 
Unexpected error while running command.
  Command: ['/usr/sbin/blockdev', '--rereadpt', '/dev/sdb']
  Exit code: 1
  Reason: -
  Stdout:
  Stderr: blockdev: ioctl error on BLKRRPART: Device or resource busy
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/config/cc_disk_setup.py", 
line 685, in read_parttbl
  util.subp(blkdev_cmd)
File 

[Yahoo-eng-team] [Bug 1779194] [NEW] neutron-lbaas haproxy agent, when configured with allow_automatic_lbaas_agent_failover = True, after failover, when the failed agent restarts or reconnects to RabbitMQ, it tries to unplug the vif port without checking if it is used by other agent

2018-06-28 Thread Swaminathan Vasudevan
Public bug reported:

When we configure two or more LBaaS haproxy agents for high
availability by setting allow_automatic_lbaas_agent_failover to True,
the LBaaS fails over to an available active agent whenever its agent
becomes unresponsive or loses its connection to RabbitMQ.

This works exactly as expected.

But when the dead agent comes back up and tries to re-sync its state
with the server, it finds the LBaaS previously associated with it is
now an 'orphan' and tries to clean up the orphan LBaaS.

In the process of cleaning it up, it tries to unplug the VIF port, which
affects the other agent that is hosting the LBaaS.

When the VIF port is unplugged, the port device_owner changes and it
causes other issues.

So there should be a check before the VIF port is removed, to make sure
no active agent is still using the port; if one is, the VIF port
should not be unplugged.
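
A hedged sketch of the kind of check that could be made before cleanup
(IDs are placeholders); if the port is still bound to a host running an
active agent, it should be left alone:

  neutron lbaas-loadbalancer-show $LB_ID
  neutron port-show $PORT_ID -c device_owner -c binding:host_id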

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779194

Title:
  neutron-lbaas haproxy agent, when configured with
  allow_automatic_lbaas_agent_failover = True,  after failover, when the
  failed agent restarts or reconnects to RabbitMQ, it tries to unplug
  the vif port without checking if it is used by other agent

Status in neutron:
  New

Bug description:
  When we configure two or more LBaaS haproxy agents for high
  availability by setting allow_automatic_lbaas_agent_failover to True,
  the LBaaS fails over to an available active agent whenever its agent
  becomes unresponsive or loses its connection to RabbitMQ.

  This works exactly as expected.

  But when the dead agent comes back up and tries to re-sync its state
  with the server, it finds the LBaaS previously associated with it is
  now an 'orphan' and tries to clean up the orphan LBaaS.

  In the process of cleaning it up, it tries to unplug the VIF port,
  which affects the other agent that is hosting the LBaaS.

  When the VIF port is unplugged, the port device_owner changes and it
  causes other issues.

  So there should be a check before the VIF port is removed, to make
  sure no active agent is still using the port; if one is, the VIF port
  should not be unplugged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779170] [NEW] neutron.tests.functional.agent.linux.test_netlink_lib.NetlinkLibTestCase.test_delete_* fail with python 3

2018-06-28 Thread Bernard Cafarelli
Public bug reported:

Sample run: http://logs.openstack.org/83/577383/1/experimental/neutron-
functional-python35/e2cbc92/logs/testr_results.html.gz

ft1.2: 
neutron.tests.functional.agent.linux.test_netlink_lib.NetlinkLibTestCase.test_delete_tcp_entry_StringException:
 pythonlogging:'': {{{
DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kafka = 
oslo_messaging._drivers.impl_kafka:KafkaDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('zmq = 
oslo_messaging._drivers.impl_zmq:ZmqDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('fake = 
oslo_messaging._drivers.impl_fake:FakeDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('kombu = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('rabbit = 
oslo_messaging._drivers.impl_rabbit:RabbitDriver')
   DEBUG [stevedore.extension] found extension EntryPoint.parse('amqp = 
oslo_messaging._drivers.impl_amqp1:ProtonDriver')
   DEBUG [oslo_policy._cache_handler] Reloading cached file 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
   DEBUG [oslo_policy.policy] Reloaded policy file: 
/opt/stack/new/neutron/neutron/tests/etc/policy.json
}}}

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/base.py", line 140, in func
return f(self, *args, **kwargs)
  File 
"/opt/stack/new/neutron/neutron/tests/functional/agent/linux/test_netlink_lib.py",
 line 106, in test_delete_tcp_entry
self._delete_entry(tcp_entry, remain_entries, _zone)
  File 
"/opt/stack/new/neutron/neutron/tests/functional/agent/linux/test_netlink_lib.py",
 line 58, in _delete_entry
self.assertEqual(remain_entries, entries_list)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python35/lib/python3.5/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional-python35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = ((4, 'icmp', 8, 0, '1.1.1.1', '2.2.2.2', , 51),
 (4, 'udp', 4, 5, '1.1.1.1', '2.2.2.2', 51))
actual= ((4, 'icmp', 8, 0, '1.1.1.1', '2.2.2.2', , 51),
 (4, 'tcp', 1, 2, '1.1.1.1', '2.2.2.2', 51),
 (4, 'udp', 4, 5, '1.1.1.1', '2.2.2.2', 51))

** Affects: neutron
 Importance: Undecided
 Assignee: Bernard Cafarelli (bcafarel)
 Status: In Progress


** Tags: py34

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed 

[Yahoo-eng-team] [Bug 1779164] [NEW] openvswitch agent failed when adding esp security rule

2018-06-28 Thread Dmitry Kudyukin
Public bug reported:

When you add an ESP rule with a port range, the openvswitch agent fails
while syncing iptables rules with this error:
ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ;
Stdout: ; Stderr: iptables-restore v1.4.21: multiport only works with TCP,
UDP, UDPLITE, SCTP and DCCP
The rule looks like:
 -s 10.10.10.10/32 -p esp -m multiport --dports 1:65535 -j

I suggest adding a port filter for ESP, like the one added for ICMP in commit
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9c64da0a642148750d7e930d77278aa0977edf81
to prevent such behavior in the agent.
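
A hedged CLI reproduction (the security group name is a placeholder);
combining protocol esp with a destination port range produces the
multiport error above:

  openstack security group rule create --protocol esp \
    --dst-port 1:65535 --remote-ip 10.10.10.10/32 my-secgroup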

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779164

Title:
  openvswitch agent failed when adding esp security rule

Status in neutron:
  New

Bug description:
  When you add an ESP rule with a port range, the openvswitch agent
  fails while syncing iptables rules with this error:
  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ;
  Stdout: ; Stderr: iptables-restore v1.4.21: multiport only works with
  TCP, UDP, UDPLITE, SCTP and DCCP
  The rule looks like:
   -s 10.10.10.10/32 -p esp -m multiport --dports 1:65535 -j

  I suggest adding a port filter for ESP, like the one added for ICMP in
  commit
  https://git.openstack.org/cgit/openstack/neutron/commit/?id=9c64da0a642148750d7e930d77278aa0977edf81
  to prevent such behavior in the agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779159] [NEW] Store metadata on a configuration drive in nova - missing powervm

2018-06-28 Thread Matt Riedemann
Public bug reported:

- [x] This is a doc addition request.

The powervm driver also supports config drive since queens:
https://review.openstack.org/#/c/409404/

---
Release: 18.0.0.0b3.dev225 on 2018-06-28 13:03
SHA: de7055bfa937a0b3d26e5a02e9fc38650a0bfdb1
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/config-drive.rst
URL: https://docs.openstack.org/nova/latest/user/config-drive.html

** Affects: nova
 Importance: Low
 Assignee: Eric Fried (efried)
 Status: In Progress


** Tags: docs low-hanging-fruit powervm

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1779159

Title:
  Store metadata on a configuration drive in nova - missing powervm

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  - [x] This is a doc addition request.

  The powervm driver also supports config drive since queens:
  https://review.openstack.org/#/c/409404/

  ---
  Release: 18.0.0.0b3.dev225 on 2018-06-28 13:03
  SHA: de7055bfa937a0b3d26e5a02e9fc38650a0bfdb1
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/config-drive.rst
  URL: https://docs.openstack.org/nova/latest/user/config-drive.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1779159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774754] Re: [api_database] config options are duplicated in config reference

2018-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/577023
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5b72eeec54e9287f6638ab10d2b3d80ad4bfb1ee
Submitter: Zuul
Branch: master

commit 5b72eeec54e9287f6638ab10d2b3d80ad4bfb1ee
Author: chenxing 
Date:   Thu Jun 21 10:36:27 2018 +0800

Fix the duplicated config options of api_database and placement_database

The "api_database" and "placement_database" config options have
their help text duplicated in the configuration reference:


https://docs.openstack.org/nova/latest/configuration/config.html#api-database

https://docs.openstack.org/nova/latest/configuration/config.html#placement-database

Co-Authored-By: Eric Fried 
Change-Id: I6a20c13d570b58f2467ad0e7ba595f199e7c7c41
Closes-Bug: #1774754


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1774754

Title:
  [api_database] config options are duplicated in config reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The [api_database] config options have their help text duplicated in
  the configuration reference:

  https://docs.openstack.org/nova/latest/configuration/config.html#api-
  database

  For example:

  
  connection¶
  Type: string
  Default:  

  The SQLAlchemy connection string to use to connect to the
  database.The SQLAlchemy connection string to use to connect to the
  database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1774754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1770636] Re: placement API not translating CannotDeleteParentResourceProvider to 409 Conflict

2018-06-28 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => In Progress

** Changed in: nova/queens
   Importance: Undecided => Low

** Changed in: nova/queens
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1770636

Title:
  placement API not translating CannotDeleteParentResourceProvider to
  409 Conflict

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Something I noticed while reviewing
  
https://review.openstack.org/#/c/546675/14/osc_placement/resources/resource_provider.py
  is that the code that prevents parent providers from being deleted:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/objects/resource_provider.py#L970

  raises a CannotDeleteParentResourceProvider exception, but we're not
  catching that in the handler:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/resource_provider.py#L130-L140

  Need to add a gabbit func test and catch that exception properly,
  converting it to an HTTP 409 Conflict.
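
  A hedged check of the intended behavior (PLACEMENT_URL, TOKEN and the
  UUID are placeholders); deleting a provider that still has children
  should return 409 rather than an unhandled error:

    curl -s -o /dev/null -w "%{http_code}\n" -X DELETE \
      -H "X-Auth-Token: $TOKEN" \
      "$PLACEMENT_URL/resource_providers/$PARENT_PROVIDER_UUID"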

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1770636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779139] [NEW] metadata api detection runs before network-online

2018-06-28 Thread Bernhard M. Wiedemann
Public bug reported:

Version: 18.2

originally filed at
https://bugzilla.opensuse.org/show_bug.cgi?id=1097388

I found that cloud-init did not import authorized keys: DataSourceNone
was used because the metadata API detection ran before the VM's network
was up.
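
A hedged sketch of the ordering fix the attached patch aims at (the
exact unit name varies by distro packaging): make the relevant
cloud-init unit wait for network-online.target, e.g. via a systemd
drop-in:

  mkdir -p /etc/systemd/system/cloud-init.service.d
  printf '[Unit]\nAfter=network-online.target\nWants=network-online.target\n' \
    > /etc/systemd/system/cloud-init.service.d/network-online.conf
  systemctl daemon-reload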

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "0001-Run-metadata-detection-after-network-online.patch"
   
https://bugs.launchpad.net/bugs/1779139/+attachment/5157420/+files/0001-Run-metadata-detection-after-network-online.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1779139

Title:
  metadata api detection runs before network-online

Status in cloud-init:
  New

Bug description:
  Version: 18.2

  originally filed at
  https://bugzilla.opensuse.org/show_bug.cgi?id=1097388

  I found that cloud-init did not import authorized keys: DataSourceNone
  was used because the metadata API detection ran before the VM's
  network was up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1779139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775665] Re: api-ref: rebuild server does not mention pre-conditions

2018-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/576438
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1a079a6394b567ea38cc892146ad13829c1c0256
Submitter: Zuul
Branch: master

commit 1a079a6394b567ea38cc892146ad13829c1c0256
Author: jichenjc 
Date:   Fri Jun 8 14:59:05 2018 +0800

Mention server status in api-ref when rebuild

Add description about server status in active, shutoff,
error can accept a rebuild action.

Closes-Bug: 1775665

Change-Id: Id52acb9fdb264b337a6a9748049aeecd22901bf4


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775665

Title:
  api-ref: rebuild server does not mention pre-conditions

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The API reference for the rebuild server action does not mention the
  preconditions required to rebuild a server, which are pretty basic:

  https://developer.openstack.org/api-ref/compute/#rebuild-server-
  rebuild-action

  
https://github.com/openstack/nova/blob/54c9a944c618dc173bd1214be4de9a44479c8959/nova/compute/api.py#L2972

  The server "status" must be ACTIVE, ERROR or SHUTOFF and "OS-SRV-
  USG:launched_at" must not be null.

  The server must also not be locked, but that's generally true of all
  server actions, so it's probably not worth mentioning explicitly.
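
  For reference, a hedged sketch of the rebuild action those
  preconditions apply to (COMPUTE_URL, TOKEN and the UUIDs are
  placeholders), per the documented servers action API:

    curl -s -X POST \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d "{\"rebuild\": {\"imageRef\": \"$IMAGE_UUID\"}}" \
      "$COMPUTE_URL/servers/$SERVER_UUID/action"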

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779075] [NEW] Tempest jobs fail because of timeout

2018-06-28 Thread Slawek Kaplonski
Public bug reported:

It now happens quite often that tempest-related tests in neutron fail
because they reach the global job timeout.
Example of such failure: 
http://logs.openstack.org/61/566961/4/check/neutron-tempest-iptables_hybrid/c70896b/job-output.txt.gz

We need to investigate why those timeouts are reached and fix it somehow

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779075

Title:
  Tempest jobs fail because of timeout

Status in neutron:
  Confirmed

Bug description:
  It now happens quite often that tempest-related tests in neutron fail
  because they reach the global job timeout.
  Example of such failure: 
http://logs.openstack.org/61/566961/4/check/neutron-tempest-iptables_hybrid/c70896b/job-output.txt.gz

  We need to investigate why those timeouts are reached and fix it
  somehow

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779077] [NEW] Unit test jobs (py35) fail with timeout often

2018-06-28 Thread Slawek Kaplonski
Public bug reported:

It happens quite often recently.
Examples of failures:
http://logs.openstack.org/58/565358/14/check/openstack-tox-py35/aa30b12/job-output.txt.gz
 or 
http://logs.openstack.org/03/563803/9/check/openstack-tox-py35/a50de4a/job-output.txt.gz

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779077

Title:
  Unit test jobs (py35) fail with timeout often

Status in neutron:
  Confirmed

Bug description:
  It happens quite often recently.
  Examples of failures:
  
http://logs.openstack.org/58/565358/14/check/openstack-tox-py35/aa30b12/job-output.txt.gz
 or 
http://logs.openstack.org/03/563803/9/check/openstack-tox-py35/a50de4a/job-output.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779054] Re: QoS – cannot associate policy with “minimum bandwidth” rule to port

2018-06-28 Thread Slawek Kaplonski
I think this issue is related more to how the OpenStack client handles
such errors returned from the API, so you should open it against OSC in
StoryBoard. Neutron is correctly reporting that this rule type is not
supported by this backend.
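
A hedged way for users to discover which rule types the loaded backends
support before creating a rule (available in recent OSC releases):

  openstack network qos rule type list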

** Changed in: neutron
   Status: New => Invalid

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779054

Title:
  QoS – cannot associate policy with “minimum bandwidth” rule to port

Status in neutron:
  Invalid

Bug description:
  ### Scenario: ###
  1) Create a new QoS policy with:
  openstack network qos policy create bandwidth-control

  2) Create a new rule of “minimum bandwidth” type and associate this
  rule to the previously created policy with:
  openstack network qos rule create --type minimum-bandwidth --min-kbps 1
  --egress bandwidth-control

  3) Set VM port to use QoS policy with:
  openstack port set --qos-policy bandwidth-control vm1_port

  ### Actual Result: ###
  The CLI returns “Unknown error”, even though the reason for the
  failure is known, at least in neutron’s server.log (REST API
  messages), where it is logged as: “Rule type minimum_bandwidth is not
  supported by openvswitch”
  See attached server.log file

  ### Expected Result: ###
  The CLI output should contain the same info as the API, so instead of
  “Unknown error” it should print “Rule type minimum_bandwidth is not
  supported by openvswitch”.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp