[Yahoo-eng-team] [Bug 1551836] Re: CORS middleware's latent configuration options need to change

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288074
Committed: 
https://git.openstack.org/cgit/openstack/cue/commit/?id=98bd25c5849cc6ffab8df36bab89187ef3052716
Submitter: Jenkins
Branch:master

commit 98bd25c5849cc6ffab8df36bab89187ef3052716
Author: Michael Krotscheck 
Date:   Thu Mar 3 11:23:21 2016 -0800

Moved CORS middleware configuration into oslo-config-generator

The default values needed for cue's implementation of the CORS
middleware have been moved from paste.ini into the configuration
hooks provided by oslo.config. Furthermore, these values have been
added to the default initialization procedure. This ensures
that if a value remains unset in the configuration file, it will
fall back to sane defaults. It also ensures that an operator
modifying the configuration will be presented with that same
set of defaults.

Change-Id: Ia179bbd7489ca128186990439a161903b7b4c28d
Closes-Bug: 1551836


** Changed in: cue
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551836

Title:
  CORS middleware's latent configuration options need to change

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in cloudkitty:
  In Progress
Status in congress:
  Fix Released
Status in Cue:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.config:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Trove:
  Fix Released

Bug description:
  It was pointed out in
  http://lists.openstack.org/pipermail/openstack-dev/2016-February/086746.html
  that configuration options included in paste.ini are less than optimal,
  because they impose an upgrade burden on both operators and engineers.
  The ensuing discussion expanded to all projects (not just those using
  paste), and the following conclusions were reached:

  A) All generated configuration files should contain any headers which
     the API needs to operate. This is currently supported in
     oslo.config's generate-config script, as of 3.7.0.
  B) These same configuration headers should be set as defaults for the
     given API, using cfg.set_defaults. This permits an operator to simply
     activate a domain, and not have to worry about tweaking additional
     settings.
  C) All hardcoded headers should be detached from the CORS middleware.
  D) Configuration and activation of CORS should be consistent across all
     projects.

  It was also agreed that this is a blocking bug for mitaka. A reference
  patch has already been approved for keystone, available here:
  https://review.openstack.org/#/c/285308/
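
  A minimal sketch of points A and B, assuming oslo.config >= 3.7.0 and
  oslo.middleware's cors module (CORS_OPTS); the header and method values
  below are illustrative placeholders, not any one project's actual list:

  from oslo_config import cfg
  from oslo_middleware import cors

  def set_middleware_defaults():
      # Override the oslo.middleware [cors] option defaults so that a
      # generated config file shows them, and so an unset value falls
      # back to them at runtime. Header values here are placeholders.
      cfg.set_defaults(
          cors.CORS_OPTS,
          allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
          expose_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
          allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])

  With this in place, an operator only has to set allowed_origin in the
  [cors] group to activate a domain.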

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1551836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558825] [NEW] Some code changes are missing release notes

2016-03-19 Thread Rob Cresswell
Public bug reported:

Several large features in Horizon Mitaka are missing release notes.

** Affects: horizon
 Importance: High
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

** Changed in: horizon
Milestone: None => mitaka-rc1

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558825

Title:
  Some code changes are missing release notes

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Several large features in Horizon Mitaka are missing release notes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558758] [NEW] no unit test coverage for Flavors related code in dashboard/api/nova.py

2016-03-19 Thread Daniel Castellanos
Public bug reported:

When running the "./run_tests.sh -c" command and looking at the coverage
reports, not all of the code is covered by the unit tests. The code related
to Flavors in openstack_dashboard/api/nova.py (get flavor list,
create flavor, delete flavor, etc.) is not covered.

** Affects: horizon
 Importance: Undecided
 Assignee: Daniel Castellanos (luis-daniel-castellanos)
 Status: New


** Tags: horizon unittest

** Changed in: horizon
 Assignee: (unassigned) => Daniel Castellanos (luis-daniel-castellanos)

** Description changed:

  When runing the "./run_tests.sh -c" command and after looking at the
- reports not all the code is covered by the unit tests. The Flavors
- related code in the openstack_dashboard/api/nova.py (get flavor list,
+ reports not all the code is covered by the unit tests. The code related
+ to Flavors in the openstack_dashboard/api/nova.py (get flavor list,
  create flavor, delete flavor, etc) is not covered.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558758

Title:
  no unit test coverage for Flavors related code in
  dashboard/api/nova.py

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When runing the "./run_tests.sh -c" command and after looking at the
  reports not all the code is covered by the unit tests. The code
  related to Flavors in the openstack_dashboard/api/nova.py (get flavor
  list, create flavor, delete flavor, etc) is not covered.
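
  A minimal sketch of the kind of test the report asks for; the
  FlavorApiTests class and the mocked call flow are assumptions, the
  exact flavor_list() signature may differ between releases, and running
  it requires a configured Horizon test environment:

  import unittest
  import mock

  from openstack_dashboard.api import nova as nova_api

  class FlavorApiTests(unittest.TestCase):

      @mock.patch.object(nova_api, 'novaclient')
      def test_flavor_list(self, mock_novaclient):
          # Stub the client factory so no real nova endpoint is needed.
          fake_flavor = mock.Mock()
          mock_novaclient.return_value.flavors.list.return_value = [fake_flavor]

          flavors = nova_api.flavor_list(mock.Mock())

          self.assertEqual([fake_flavor], list(flavors))
          self.assertTrue(mock_novaclient.return_value.flavors.list.called)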

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558880] [NEW] instance can not resize ephemeral in mitaka

2016-03-19 Thread SongRuixia
Public bug reported:

Version
  mitaka

 Reproduce steps:
example:
* create a flavor with ephemeral disk
* boot an instance with the flavor
* resize the instance to a flavor with larger disk size and ephemeral disk size

Expected result:
* VM was running; both the root disk and the ephemeral disk are larger.

Actual result:
* VM was running; the root disk is larger but the ephemeral disk is not.

This is because the VM's ephemeral disk is named disk.eph0, but nova
checks for disk.local, which means the ephemeral disk can not be
extended. I think it is unreasonable that the ephemeral disk can not
be extended.

** Affects: nova
 Importance: Undecided
 Assignee: SongRuixia (song-ruixia)
 Status: In Progress


** Tags: resize

** Tags added: resize

** Tags removed: resize
** Tags added: ephemeral

** Tags removed: ephemeral
** Tags added: resize

** Changed in: nova
 Assignee: (unassigned) => SongRuixia (song-ruixia)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558880

Title:
  instance can not resize ephemeral in mitaka

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Version
mitaka

   Reproduce steps:
  example:
  * create a flavor with ephemeral disk
  * boot an instance with the flavor
  * resize the instance to a flavor with larger disk size and ephemeral
    disk size

  Expected result:
  * VM was running; both the root disk and the ephemeral disk are larger.

  Actual result:
  * VM was running; the root disk is larger but the ephemeral disk is not.

  This is because the VM's ephemeral disk is named disk.eph0, but nova
  checks for disk.local, which means the ephemeral disk can not be
  extended. I think it is unreasonable that the ephemeral disk can not
  be extended.
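
  A paraphrased sketch (not the exact nova source) of the size lookup
  described above: the resize path keys disk sizes off the literal file
  name, so 'disk.eph0' never matches and is never grown:

  def _disk_size_from_instance(instance, fname):
      if fname == 'disk':             # root disk
          return instance.root_gb
      elif fname == 'disk.local':     # legacy ephemeral name only
          return instance.ephemeral_gb
      return 0                        # 0 means "leave the disk as-is"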

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559279] [NEW] Glance JS API getImage() is inefficient

2016-03-19 Thread Matt Borland
Public bug reported:

The Glance API JS has a getImage() function that calls
'/api/glance/images/' + id, with no trailing slash.  When this hits the
API, it is redirected via a 301 to '/api/glance/images/' + id + '/'
(with a trailing slash).  This adds a second network communication and
thus increases latency.

If the API is just called with a trailing slash, the API responds
directly with the result, which is a bit faster.

You can verify the bug and fix by watching the server in dev mode when
going to the Angular Image Detail page.  You will see a 301 response
immediately followed by another call with the trailing slash.  You can
see this in the browser's developer console as well.

After applying the fix, you should no longer see the 301 result as there
will be no call for the resource without the trailing slash.

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Borland (palecrow)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1559279

Title:
  Glance JS API getImage() is inefficient

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The Glance API JS has a getImage() function that calls
  '/api/glance/images/' + id, with no trailing slash.  When this hits
  the API, it is redirected via a 301 to '/api/glance/images/' + id +
  '/' (with a trailing slash).  This adds a second network communication
  and thus increases latency.

  If the API is just called with a trailing slash, the API responds
  directly with the result, which is a bit faster.

  You can verify the bug and fix by watching the server in dev mode when
  going to the Angular Image Detail page.  You will see a 301 response
  immediately followed by another call with the trailing slash.  You can
  see this in the browser's developer console as well.

  After applying the fix, you should no longer see the 301 result as
  there will be no call for the resource without the trailing slash.
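
  A small illustration of the extra round trip (Python requests here, for
  consistency with the rest of this digest; the endpoint and ID are
  hypothetical, and a real deployment would also need an authenticated
  session):

  import requests

  base = 'http://localhost/api/glance/images/'
  image_id = 'abc123'

  resp = requests.get(base + image_id)           # may 301-redirect
  print([r.status_code for r in resp.history])   # e.g. [301]

  resp = requests.get(base + image_id + '/')     # direct hit
  print(resp.history)                            # [] -- no redirect hop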

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1559279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558355] Re: gate-neutron-vpnaas-dsvm-functional-sswan gate failure

2016-03-19 Thread Armando Migliaccio
This must be related to the same issue.

** This bug is no longer a duplicate of bug 1558289
   Installing neutron_lbaas plugin via devstack fails because of incorrect
   image/package. Change devstack-trusty to ubuntu-trusty to support infra
   migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558355

Title:
  gate-neutron-vpnaas-dsvm-functional-sswan gate failure

Status in neutron:
  New

Bug description:
  This job seems to be failing persistently with:

  http://logs.openstack.org/47/293747/1/check/gate-neutron-vpnaas-dsvm-
  functional-sswan/411223c/console.html#_2016-03-16_22_51_18_437

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558090] Re: Delete multiple networks from CLI

2016-03-19 Thread Hirofumi Ichihara
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558090

Title:
  Delete multiple networks from CLI

Status in python-neutronclient:
  Confirmed

Bug description:
  If I try to delete multiple networks through the CLI, only one network
  gets deleted at a time.
  We should have the option to delete multiple networks simultaneously,
  to be in sync with the dashboard.

  For example: if I create 2 networks n1 and n2 from the dashboard, I am
  able to delete both of them through the dashboard if I select both
  networks for deletion.
  But if I try the same functionality in the CLI, I get the following
  output:

  [root@liberty2 ~(keystone_admin)]# neutron net-list
  +--------------------------------------+----------+------------------------------------------------------+
  | id                                   | name     | subnets                                              |
  +--------------------------------------+----------+------------------------------------------------------+
  | bda0b455-a838-4004-9f7c-dceeefd47473 | n2       | 69275ced-1df0-4aa8-8fcd-54296dca1cee 30.0.0.0/24     |
  | cbe71faf-be6f-4e0d-b3a2-74d83b5fb1c9 | internal | 2664b3a4-d587-43cb-8985-55f2bc9433aa 120.0.0.0/24    |
  | 3da43b68-e8f8-4aa3-bf23-420ae7cc39fc | n1       | bb622270-98df-4529-b92f-816f59e9e2c9 20.0.0.0/24     |
  | effb89f5-34bb-4dbe-bc50-e434614419fe | public   | f2845986-7b21-46b1-a9db-8b178e739e4a 172.24.4.224/28 |
  +--------------------------------------+----------+------------------------------------------------------+
  [root@liberty2 ~(keystone_admin)]# neutron net-delete n1 n2
  Deleted network: n1
  [root@liberty2 ~(keystone_admin)]# neutron net-list
  +--------------------------------------+----------+------------------------------------------------------+
  | id                                   | name     | subnets                                              |
  +--------------------------------------+----------+------------------------------------------------------+
  | bda0b455-a838-4004-9f7c-dceeefd47473 | n2       | 69275ced-1df0-4aa8-8fcd-54296dca1cee 30.0.0.0/24     |
  | cbe71faf-be6f-4e0d-b3a2-74d83b5fb1c9 | internal | 2664b3a4-d587-43cb-8985-55f2bc9433aa 120.0.0.0/24    |
  | effb89f5-34bb-4dbe-bc50-e434614419fe | public   | f2845986-7b21-46b1-a9db-8b178e739e4a 172.24.4.224/28 |
  +--------------------------------------+----------+------------------------------------------------------+

  Observation: network n2 does not get deleted and there is no
  notification about it; net-delete only reads the first argument that
  it gets. We should either have the option to delete multiple networks
  simultaneously, or, if my understanding is wrong, there should be a
  check on the number of arguments passed to the net-delete command.
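
  A minimal sketch (plain argparse, not the actual neutronclient code) of
  accepting several networks in a single net-delete invocation:

  import argparse

  parser = argparse.ArgumentParser(prog='neutron net-delete')
  parser.add_argument('networks', nargs='+',
                      help='Name(s) or ID(s) of the network(s) to delete')
  args = parser.parse_args(['n1', 'n2'])

  for net in args.networks:
      # Placeholder for the actual per-network delete call.
      print('Deleted network: %s' % net)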

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1558090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558343] Re: configdrive is lost after resize.(libvirt driver)

2016-03-19 Thread Matt Riedemann
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Confirmed

** Changed in: nova/kilo
   Status: New => Confirmed

** Changed in: nova/liberty
   Importance: Undecided => High

** Changed in: nova/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558343

Title:
  configdrive is lost after resize.(libvirt driver)

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  Confirmed
Status in OpenStack Compute (nova) liberty series:
  Confirmed

Bug description:
  Used the trunk code as of 2016/03/16.
  My environment disabled the metadata agent and forced the use of a
  config drive.

  
  console log before resize: http://paste.openstack.org/show/490825/
  console log after resize: http://paste.openstack.org/show/490824/

  
  qemu 18683 1  4 18:40 ?00:00:32 /usr/bin/qemu-system-x86_64 
-name instance-0002 -S -machine pc-i440fx-2.0,accel=tcg,usb=off -m 128 
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 
018892c7-8144-49c0-93d2-79ee83efd6a9 -smbios type=1,manufacturer=OpenStack 
Foundation,product=OpenStack 
Nova,version=13.0.0,serial=16c127e2-6369-4e19-a646-251a416a8dcd,uuid=018892c7-8144-49c0-93d2-79ee83efd6a9,family=Virtual
 Machine -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-0002/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive file=/opt/stack/da
 
ta/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none
 -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev 
tap,fd=23,id=hostnet0 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:34:d6:f3,bus=pci.0,addr=0x3 
-chardev 
file,id=charserial0,path=/opt/stack/data/nova/instances/018892c7-8144-49c0-93d2-79ee83efd6a9/console.log
 -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 
-device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:1 -k en-us 
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

  
  $ blkid
  /dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" 
TYPE="ext3" 
  $ 

  
  another vm without resize:
  $ blkid 
  /dev/vda1: LABEL="cirros-rootfs" UUID="d42bb4a4-04bb-49b0-8821-5b813116b17b" 
TYPE="ext3" 
  /dev/sr0: LABEL="config-2" TYPE="iso9660" 
  $

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558099] [NEW] neutron_lbaas: Stats socket not found for pool

2016-03-19 Thread Igor Meneguitte Ávila
Public bug reported:

Hi,

I am hitting this issue when I create a VIP.

My environment:

OpenStack Kilo, Ubuntu 14.04

The controller node:

/etc/neutron/neutron.conf

service_plugins = router,lbaas
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

/etc/neutron/neutron_lbaas.conf

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

/var/log/neutron/neutron-lbaas-agent.log

2016-03-16 10:59:02.100 24640 WARNING 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats 
socket not found for pool a630eb2b-85eb-4f4a-8c2a-e1c57baf69e2
2016-03-16 10:59:03.278 24640 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ca98d95b-eff5-4f0a-a803-bc917b9cd186 ] Create vip 
aca431e5-e485-436b-b788-ba6a05a89991 failed on device driver haproxy_ns
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 340, in create_vip
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 336, in _refresh_device
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 329, in deploy_instance
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 90, in create
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._plug(namespace, logical_config['vip']['port'])
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 259, in _plug
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager namespace=namespace
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 235, 
in plug
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.check_bridge_exists(bridge)
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 169, 
in check_bridge_exists
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager raise 
exceptions.BridgeDoesNotExist(bridge=bridge)
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager BridgeDoesNotExist: 
Bridge br-int does not exist.
2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
2016-03-16 10:59:12.103 24640 WARNING 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats 
socket not found for pool a630eb2b-85eb-4f4a-8c2a-e1c57baf69e2

Regards,

Igor Meneguitte Ávila

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558099


[Yahoo-eng-team] [Bug 1528465] Re: dashboard project network column display duplicate default public network randomly (with admin)

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/261191
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=11d973fbe36369cba4b450536c6dff57c0cd6d21
Submitter: Jenkins
Branch:master

commit 11d973fbe36369cba4b450536c6dff57c0cd6d21
Author: Akihiro Motoki 
Date:   Thu Dec 24 15:21:33 2015 +0900

Fix network duplication check logic

Previously, api.neutron.Network objects were compared using _apidict,
but that did not work when, for example, the order of the subnet list
differed. This commit changes the comparison logic to use the network
ID explicitly. In addition, it moves the logic that fetches external
networks into api.neutron.

Change-Id: Ie3a42e29c32c17a7f3bf1596b0e09cb73eae9d2a
Closes-Bug: #1528465


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528465

Title:
  dashboard project network column display duplicate default public
  network randomly (with admin)

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The dashboard's project network column randomly displays a duplicate
  of the default public network.

  setup:
  devstack

  reproduce:
  yes

  steps:
  1. Check the default public networks we have:
  stack@45-59:~/devstack$ neutron net-list
  +--------------------------------------+---------+----------------------------------------------------------+
  | id                                   | name    | subnets                                                  |
  +--------------------------------------+---------+----------------------------------------------------------+
  | 1931775c-4459-4c18-9910-53b1ffbe4d31 | private | 9b1a99a3-e7ae-4a7d-b9d2-e035077d4e5e 10.0.0.0/24         |
  |                                      |         | 3ebaa37a-2b80-4186-9357-dd8b1202d542 fd7e:7e2b:56d0::/64 |
  | b9cedb82-6835-499b-885d-7646416ac93f | public  | aea6ea63-b70c-49fe-9bf5-3f593015a07d 172.24.4.0/24       |
  |                                      |         | 146a2d03-52e0-4c7d-ba77-a9a2df99da7f 2001:db8::/64       |
  +--------------------------------------+---------+----------------------------------------------------------+

  2. Check this on the dashboard: we find that it displays two duplicate
  public networks. This occurs randomly.
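
  The fix above compares networks by ID; a minimal illustration of that
  de-duplication approach (not the exact Horizon code):

  def unique_networks(networks):
      # Keep the first occurrence of each network ID, drop repeats.
      seen, result = set(), []
      for net in networks:
          if net.id not in seen:
              seen.add(net.id)
              result.append(net)
      return result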

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558626] [NEW] physical_device_mappings allows only one NIC per physnet

2016-03-19 Thread Vladimir Eremin
Public bug reported:

Mitaka, ML2, ml2_sriov.agent_required=True

sriov_nic.physical_device_mappings allows only one NIC to be specified
per physnet. If I try to specify two NICs, like the following:

[sriov_nic]
physical_device_mappings=physnet2:enp1s0f0,physnet2:enp1s0f1

I get the following error on start:

2016-03-17 15:26:48.818 6832 INFO neutron.common.config [-] Logging enabled!
2016-03-17 15:26:48.819 6832 INFO neutron.common.config [-] 
/usr/bin/neutron-sriov-nic-agent version 8.0.0.0b3
2016-03-17 15:26:48.819 6832 DEBUG neutron.common.config [-] command line: 
/usr/bin/neutron-sriov-nic-agent 
--config-file=/etc/neutron/plugins/ml2/sriov_agent.ini 
--log-file=/var/log/neutron/neutron-sriov-agent.log 
--config-file=/etc/neutron/neutron.conf setup_logging 
/usr/lib/python2.7/dist-packages/neutron/common/config.py:266
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Failed on 
Agent configuration parse. Agent terminated!
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent Traceback (most 
recent call last):
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py",
 line 436, in main
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent 
config_parser.parse()
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py",
 line 411, in parse
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent 
cfg.CONF.SRIOV_NIC.physical_device_mappings)
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 240, in 
parse_mappings
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent "unique") % 
{'key': key, 'mapping': mapping})
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent ValueError: Key 
physnet2 in mapping: 'physnet2:enp1s0f1' not unique
2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent
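
A paraphrased sketch of the duplicate-key check that raises above (see
neutron.common.utils.parse_mappings in the traceback; this is not the
exact source):

def parse_mappings(mapping_list):
    mappings = {}
    for mapping in mapping_list:
        key, _sep, value = mapping.partition(':')
        if key in mappings:
            # Two entries for the same physnet trip this check.
            raise ValueError(
                "Key %s in mapping: '%s' not unique" % (key, mapping))
        mappings[key] = value
    return mappings

parse_mappings(['physnet2:enp1s0f0', 'physnet2:enp1s0f1'])  # ValueError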

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558626

Title:
  physical_device_mappings allows only one NIC per physnet

Status in neutron:
  New

Bug description:
  Mitaka, ML2, ml2_sriov.agent_required=True

  sriov_nic.physical_device_mappings allows only one NIC to be specified
  per physnet. If I try to specify two NICs, like the following:

  [sriov_nic]
  physical_device_mappings=physnet2:enp1s0f0,physnet2:enp1s0f1

  I get the following error on start:

  2016-03-17 15:26:48.818 6832 INFO neutron.common.config [-] Logging enabled!
  2016-03-17 15:26:48.819 6832 INFO neutron.common.config [-] 
/usr/bin/neutron-sriov-nic-agent version 8.0.0.0b3
  2016-03-17 15:26:48.819 6832 DEBUG neutron.common.config [-] command line: 
/usr/bin/neutron-sriov-nic-agent 
--config-file=/etc/neutron/plugins/ml2/sriov_agent.ini 
--log-file=/var/log/neutron/neutron-sriov-agent.log 
--config-file=/etc/neutron/neutron.conf setup_logging 
/usr/lib/python2.7/dist-packages/neutron/common/config.py:266
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Failed on 
Agent configuration parse. Agent terminated!
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent Traceback (most 
recent call last):
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py",
 line 436, in main
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent 
config_parser.parse()
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py",
 line 411, in parse
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent 
cfg.CONF.SRIOV_NIC.physical_device_mappings)
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 240, in 
parse_mappings
  2016-03-17 15:26:48.819 6832 ERROR 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent "unique") % 
{'key': key, 'mapping': mapping})
  2016-03-17 

[Yahoo-eng-team] [Bug 1558721] Re: neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294230
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9d21b5ad7edbf9ac1fd9254e97f56966f25de8e6
Submitter: Jenkins
Branch:master

commit 9d21b5ad7edbf9ac1fd9254e97f56966f25de8e6
Author: Alex Oughton 
Date:   Fri Mar 18 11:12:10 2016 -0500

Close XenAPI sessions in neutron-rootwrap-xen-dom0

Neutron with XenServer doesn't properly close XenAPI sessions.
Because it creates these sessions so rapidly, the XenServer host
eventually exceeds its maximum allowed number of connections.
This patch adds a close step for the session.

Closes-Bug: 1558721
Change-Id: Ida90a970c649745c492c28c41c4a151e4d940aa6


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558721

Title:
  neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

Status in neutron:
  Fix Released

Bug description:
  Hello,

  When using OpenStack Liberty with XenServer, neutron is not properly
  closing its XenAPI sessions. Since it creates these so rapidly, the
  XenServer host eventually exceeds its maximum allowed number of
  connections:

  Mar 17 11:39:05 compute3 xapi:
  [debug|compute3.openstack.lab.eco.rackspace.com|25 db_gc|DB GC
  D:bb694b976766|db_gc] Number of disposable sessions in group
  'external' in database (401/401) exceeds limit (400): will delete the
  oldest

  This occurs roughly once per minute, with many sessions being
  invalidated. The effect is that any long-running hypervisor operations
  (for example a live-migration) will fail with an "unauthorized" error,
  as their session was invalidated while they were still running:

  2016-03-17 11:43:34.483 14310 ERROR nova.virt.xenapi.vmops Failure: 
['INTERNAL_ERROR', 
'Storage_interface.Internal_error("Http_client.Http_error(\\"401\\", \\"{ frame 
= false; method = POST; uri = /services/SM;
  query = [ session_id=OpaqueRef:8663a5b7-928e-6ef5-e312-9f430b553c7f ]; 
content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [  ]; task 
= ; subtask_of = ; content-type = ; host = ; user_agent = xe
  n-api-libs/1.0 }\\")")']

  The fix is to add a line to neutron-rootwrap-xen-dom0 to have it
  properly close the sessions.

  Before:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          host = session.xenapi.session.get_this_host(session.handle)
          result = session.xenapi.host.call_plugin(
              host, 'netwrap', 'run_command',
              {'cmd': json.dumps(user_args),
               'cmd_input': json.dumps(cmd_input)})
          return json.loads(result)
      except Exception as e:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)

  After:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          host = session.xenapi.session.get_this_host(session.handle)
          result = session.xenapi.host.call_plugin(
              host, 'netwrap', 'run_command',
              {'cmd': json.dumps(user_args),
               'cmd_input': json.dumps(cmd_input)})
          session.xenapi.session.logout()
          return json.loads(result)
      except Exception as e:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)

  
  After making this change, the logs still show the sessions being
  rapidly created, but they also show them being destroyed. The "exceeds
  limit" error no longer occurs, and live-migrations now succeed.
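
  One possible hardening, offered as an aside rather than as part of the
  submitted fix: perform the logout in a finally block (reusing the
  imports and context of the snippets above), so the session is also
  released when call_plugin raises:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          try:
              host = session.xenapi.session.get_this_host(session.handle)
              result = session.xenapi.host.call_plugin(
                  host, 'netwrap', 'run_command',
                  {'cmd': json.dumps(user_args),
                   'cmd_input': json.dumps(cmd_input)})
              return json.loads(result)
          finally:
              # Runs on success and on failure after a successful login.
              session.xenapi.session.logout()
      except Exception:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)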

  
  Regards,

  Alex Oughton

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558888] [NEW] Duplicate if statements in ovs_neutron_agent and ovs_dvr_neutron_agent when run in dvr mode

2016-03-19 Thread Jie Li
Public bug reported:

After the neutron-openvswitch-agent service starts, or after openvswitch
restarts, neutron-openvswitch-agent will check whether it runs in DVR
mode and then determine whether to set DVR flows. However, I find there
are some duplicate if statements.

** Affects: neutron
 Importance: Undecided
 Assignee: Jie Li (jieli2087)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jie Li (jieli2087)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558888

Title:
  Duplicate if statements in ovs_neutron_agent and ovs_dvr_neutron_agent
  when run in DVR mode

Status in neutron:
  New

Bug description:
  After the neutron-openvswitch-agent service starts, or after
  openvswitch restarts, neutron-openvswitch-agent will check whether it
  runs in DVR mode and then determine whether to set DVR flows. However,
  I find there are some duplicate if statements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558665] [NEW] Network tab of Launch instance window is broken

2016-03-19 Thread Valerii Kovalchuk
Public bug reported:

Go to Project -> Instances, click Launch Instance, and go to the Network
tab. The + signs are out of the frame.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2016-03-17 17:33:32.png"
   
https://bugs.launchpad.net/bugs/1558665/+attachment/4602361/+files/Screenshot%20from%202016-03-17%2017%3A33%3A32.png

** Project changed: murano => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558665

Title:
  Network tab of Launch instance window is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Go to Project -> Instances, click Launch Instance, and go to the
  Network tab. The + signs are out of the frame.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557938] Re: [doc]support matrix of vmware for chap is wrong

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293309
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=def71059a1fb0e01681833625b91c45a95ceedf4
Submitter: Jenkins
Branch:master

commit def71059a1fb0e01681833625b91c45a95ceedf4
Author: xhzhf 
Date:   Wed Mar 16 16:17:39 2016 +0800

Support-matrix of vmware for chap is wrong

In truth, the vmware driver can not attach a cinder volume using CHAP
authentication over iSCSI.
Closes-Bug: #1557938

Change-Id: I05b1e81a3deffc855be34efff2d3e9dac8b63e82


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557938

Title:
  [doc]support matrix of vmware for chap is wrong

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The support matrix says that the vmware driver supports CHAP
  authentication over iSCSI.
  In fact, the vmware driver doesn't pass authentication info to the
  vSphere API, so the function doesn't work.

  
  Code: 
  def _iscsi_add_send_target_host(self, storage_system_mor, hba_device,
                                  target_portal):
      """Adds the iscsi host to send target host list."""
      client_factory = self._session.vim.client.factory
      send_tgt = client_factory.create('ns0:HostInternetScsiHbaSendTarget')
      (send_tgt.address, send_tgt.port) = target_portal.split(':')
      LOG.debug("Adding iSCSI host %s to send targets", send_tgt.address)
      self._session._call_method(
          self._session.vim, "AddInternetScsiSendTargets",
          storage_system_mor, iScsiHbaDevice=hba_device, targets=[send_tgt])

  Doc:
  http://docs.openstack.org/developer/nova/support-matrix.html#storage_block_backend_iscsi_auth_chap_vmware

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558919] [NEW] Delete monitor shows error and success message at the same time

2016-03-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In Horizon, while deleting a load balancer monitor that is associated
with a pool, an error message is thrown, which is the correct behavior.
However, Horizon also shows a success message saying that the deletion
has been scheduled.

version: Kilo

attached screenshot of messages.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Delete monitor shows error and success message at the same time
https://bugs.launchpad.net/bugs/1558919
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550027] Re: db.migrations.create_foreign_keys forces new constraints to have ondelete=CASCADE option

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/287619
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5ca8d0152afdd8dd178432c112d1e4ad9bde42ff
Submitter: Jenkins
Branch:master

commit 5ca8d0152afdd8dd178432c112d1e4ad9bde42ff
Author: Zhengguang 
Date:   Thu Mar 3 14:40:01 2016 +0800

Fixes force to set ondelete=CASCADE in create_foreign_keys()

The create_foreign_keys method forces any foreign keys it creates
to have ondelete='CASCADE', no matter whether the foreign_keys
parameter passed to the method had that option set.
If the wrapper method remove_fks_from_table() is used to preserve
the state of foreign keys during a database migration, any foreign
keys on the table "preserved" this way will have cascading delete
added to them.

Change-Id: I04bdc863d67e2228f34a05f588c2e9f562918114
Closes-Bug: #1550027


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550027

Title:
  db.migrations.create_foreign_keys forces new constraints to have
  ondelete=CASCADE option

Status in neutron:
  Fix Released

Bug description:
  In neutron/db/migration/__init__.py, on line 168, the
  create_foreign_keys method forces any foreign keys it creates
  to have ondelete='CASCADE', no matter whether the foreign_keys
  parameter passed to the method had that option set. This is
  particularly bad, because if the wrapper method
  remove_fks_from_table() is used to preserve the state of foreign keys
  during a database migration, any foreign keys on the table "preserved"
  this way will have cascading delete added to them.

  The code in create_foreign_keys should look something more like this:

  def create_foreign_keys(table, foreign_keys):
      for fk in foreign_keys:
          ondelete = None
          if 'ondelete' in fk['options'].keys():
              ondelete = fk['options']['ondelete']
          op.create_foreign_key(
              constraint_name=fk['name'],
              source_table=table,
              referent_table=fk['referred_table'],
              local_cols=fk['constrained_columns'],
              remote_cols=fk['referred_columns'],
              ondelete=ondelete
          )

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558827] [NEW] port filter hook for network tenant id matching breaks counting

2016-03-19 Thread Kevin Benton
Public bug reported:

The filter hook added in https://review.openstack.org/#/c/255285 causes
SQLAlchemy to add the networks table to the FROM statement without a
restricted join condition. This results in many duplicate rows coming
back from the DB query. This is okay for normal record retrieval because
sqlalchemy would deduplicate the records. However, when calling .count()
on the query, it returns a number far too large.

This breaks the quota engine for plugins that don't use the newer method
of tracking resources.

** Affects: neutron
 Importance: Critical
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
Milestone: None => mitaka-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558827

Title:
  port filter hook for network tenant id matching breaks counting

Status in neutron:
  New

Bug description:
  The filter hook added in https://review.openstack.org/#/c/255285
  causes SQLAlchemy to add the networks table to the FROM statement
  without a restricted join condition. This results in many duplicate
  rows coming back from the DB query. This is okay for normal record
  retrieval because sqlalchemy would deduplicate the records. However,
  when calling .count() on the query, it returns a number far too large.

  This breaks the quota engine for plugins that don't use the newer
  method of tracking resources.
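
  A self-contained sketch of the failure mode, using hypothetical models
  rather than neutron's schema: referencing a second table in a filter
  without a join condition produces a cross join, so .count() multiplies
  the row counts (behavior of the SQLAlchemy releases of this era):

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import sessionmaker

  Base = declarative_base()

  class Port(Base):
      __tablename__ = 'ports'
      id = sa.Column(sa.Integer, primary_key=True)

  class Network(Base):
      __tablename__ = 'networks'
      id = sa.Column(sa.Integer, primary_key=True)

  engine = sa.create_engine('sqlite://')
  Base.metadata.create_all(engine)
  session = sessionmaker(bind=engine)()
  session.add_all([Port(id=1), Port(id=2),
                   Network(id=1), Network(id=2), Network(id=3)])
  session.commit()

  # The filter pulls 'networks' into FROM with no join to 'ports'.
  query = session.query(Port).filter(Network.id > 0)
  print(query.count())  # prints 6 (2 ports x 3 networks), not 2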

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557238] Re: mapping yield no valid identity result in HTTP 500 error

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293184
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=e5dcb3b4b6bdecd0947cba32cb3732ca52ed07c3
Submitter: Jenkins
Branch:master

commit e5dcb3b4b6bdecd0947cba32cb3732ca52ed07c3
Author: guang-yee 
Date:   Tue Mar 15 17:29:42 2016 -0700

Mapping which yields no identities should result in ValidationError

Currently the mapping produces a bogus "blind" default identity when no
rules match the incoming attributes. This is unnecessary and downright
dangerous; there's absolutely no use case for the "blind" identity.
Furthermore, consumers of mapped properties assumed that the "blind"
identity is legit. This leads to failures such as KeyError when they
try to reference required identity attributes such as user['name'].

We should raise ValidationError if the rules yield no valid identity.
This patch also removes the tests where the bogus "blind" identity is
expected.

Change-Id: I117621673ffc0b4f8e2c48721329daa3b6090327
Closes-Bug: 1557238


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1557238

Title:
  mapping yield no valid identity result in HTTP 500 error

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  A mapping which yields no valid identity (i.e. no local user or group)
  will result in HTTP 500 instead of 401. There are two issues.

  1. We automatically return a default ephemeral user in mapped_properties
  when the mapping yields no valid local identity or groups.
  2. In the mapped auth plugin, we assume the mapped_properties contains
  a valid local identity or group.

  To reproduce the problem:

  1. Set up WebSSO or K2K.
  2. Create a mapping rule for the given IdP and protocol which yield neither 
local identity or group. For example,

  [
      {
          "local": [
              {
                  "user": {
                      "type": "local",
                      "name": "{0}",
                      "domain": {
                          "name": "{1}"
                      }
                  }
              }
          ],
          "remote": [
              {
                  "type": "openstack_user"
              },
              {
                  "type": "openstack_user_domain"
              },
              {
                  "type": "openstack_roles",
                  "any_one_of": [
                      "bogus"
                  ]
              }
          ]
      }
  ]

  3. Do the federation dance and you'll get an HTTP 500 and a traceback
  as pretty as this one.

  2016-03-14 17:16:05.536 12497 DEBUG keystone.federation.utils 
[req-159bde9f-8a2d-4885-af31-304be9af8db7 - - - - -] updating a direct mapping: 
[u'Unset'] 2016-03-14 17:16:05.536 _verify_all_requirements 
/opt/stack/keystone/keystone/federation/utils.py:796
  2016-03-14 17:16:05.536 12497 DEBUG keystone.federation.utils 
[req-159bde9f-8a2d-4885-af31-304be9af8db7 - - - - -] identity_values: [] 
2016-03-14 17:16:05.536 process 
/opt/stack/keystone/keystone/federation/utils.py:534
  2016-03-14 17:16:05.536 12497 DEBUG keystone.federation.utils 
[req-159bde9f-8a2d-4885-af31-304be9af8db7 - - - - -] mapped_properties: 
{'group_ids': [], 'user': {'domain': {'id': 'Federated'}, 'type': 'ephemeral'}, 
'group_names': []} 2016-03-14 17:16:05.536 process 
/opt/stack/keystone/keystone/federation/utils.py:536
  2016-03-14 17:16:05.620 12497 ERROR keystone.common.wsgi 
[req-159bde9f-8a2d-4885-af31-304be9af8db7 - - - - -] 'name'
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/common/wsgi.py", line 249, in __call__
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi result = 
method(context, **params)
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/federation/controllers.py", line 302, in 
federated_authentication
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi   File 
"/opt/stack/keystone/keystone/auth/controllers.py", line 396, in 
authenticate_for_token
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
  2016-03-14 17:16:05.620 12497 TRACE keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1558939] [NEW] Truncated hint text due to the limited length set in the field

2016-03-19 Thread Yuko Katabami
Public bug reported:

Project > Instances > Launch Instance

The Japanese version of the input hint text "Click here for filters" is
truncated, and it seems that there is a limit on the length.

The English text fits in perfectly, but there are a number of languages
for which the translation is longer than the English text.

It would be better if the field were not restricted to such a short
length, so that the entire message can be shown to the users.

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: magnum-ui
 Importance: Undecided
 Status: New

** Attachment added: "TruncatedHint-ja.png"
   
https://bugs.launchpad.net/bugs/1558939/+attachment/4602853/+files/TruncatedHint-ja.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558939

Title:
  Truncated hint text due to the limited length set in the field

Status in OpenStack Dashboard (Horizon):
  New
Status in Magnum UI:
  New

Bug description:
  Project > Instances > Launch Instance

  The Japanese version of the input hint text "Click here for filters" is
  truncated, and it seems that there is a limit on the length.

  The English text fits in perfectly, but there are a number of languages
  for which the translation is longer than the English text.

  It would be better if the field were not restricted to such a short
  length, so that the entire message can be shown to the users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559246] [NEW] Performance issues when have 1k+ Ironic BM instances

2016-03-19 Thread sergiiF
Public bug reported:

We have an Ironic deployment with about 1500 BMs; 1k+ of them are
already provisioned.

The current Ironic architecture doesn't allow us to have more than one
'ironic compute node'. As a result, the nova-compute service is 100% busy
with periodic tasks like updating instance status (this task takes
about 1.5 minutes!).

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1559246

Title:
  Performance issues when have 1k+ Ironic BM instances

Status in OpenStack Compute (nova):
  New

Bug description:
  We have an Ironic deployment with about 1500 BMs; 1k+ of them are
  already provisioned.

  The current Ironic architecture doesn't allow us to have more than one
  'ironic compute node'. As a result, the nova-compute service is 100%
  busy with periodic tasks like updating instance status (this task takes
  about 1.5 minutes!).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1559246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..

2016-03-19 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 1:2014.1.5-0ubuntu4

---
neutron (1:2014.1.5-0ubuntu4) trusty; urgency=medium

  * neutron-openvswitch-agent stuck on 'q-agent-notifier-port-update_fanout-x'
    is NOT_FOUND exception; this bug caused RabbitMQ HA (active + active)
    failover in Icehouse to not work with large numbers of nova compute
    nodes. (LP: #1393391):
    - d/p/fix-neutron-agent-fanout-queue-not-found-loop.patch

 -- Hui Xiang   Mon, 25 Jan 2016 15:25:13 +0800

** Changed in: neutron (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393391

Title:
  neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-
  update_fanout..

Status in neutron:
  Confirmed
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Trusty:
  Fix Released

Bug description:
  Under an HA deployment, neutron-openvswitch-agent can get stuck
  when receiving a close command on a fanout queue the agent is not
  subscribed to.

  It stops responding to any other messages, so it effectively stops
  working at all.

  2014-11-11 10:27:33.092 3027 INFO neutron.common.config [-] Logging enabled!
  2014-11-11 10:27:34.285 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:27:34.370 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:27:35.348 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent initialized successfully, now running...
  2014-11-11 10:27:35.351 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent out of sync with plugin!
  2014-11-11 10:27:35.401 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent tunnel out of sync with plugin!
  2014-11-11 10:27:35.414 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:32:33.143 3027 INFO neutron.agent.securitygroups_rpc [req-22c7fa11-882d-4278-9f83-6dd56ab95ba4 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:58:11.916 3027 INFO neutron.agent.securitygroups_rpc [req-484fd71f-8f61-496c-aa8a-2d3abf8de365 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:59:43.954 3027 INFO neutron.agent.securitygroups_rpc [req-2c0bc777-04ed-470a-aec5-927a59100b89 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 11:00:22.500 3027 INFO neutron.agent.securitygroups_rpc [req-df447d01-d132-40f2-8528-1c1c4d57c0f5 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-12 01:27:35.662 3027 ERROR neutron.openstack.common.rpc.common [-] Failed to consume message from queue: Socket closed
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common Traceback (most recent call last):
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 579, in ensure
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return method(*args, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 659, in _consume
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return self.connection.drain_events(timeout=timeout)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return self.transport.drain_events(self.connection, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return connection.drain_events(**kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 266, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     chanmap, None, timeout=timeout,
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 328, in

[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-03-19 Thread Armando Migliaccio
** Tags added: functional-tests mitaka-rc-potential

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** No longer affects: devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in neutron:
  In Progress

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-
  functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-
  functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558665] [NEW] Network tab of Launch instance window is broken

2016-03-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Go to Project -> Instances, click Launch instance and go to Network tab.
+ signs are out of frame.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Network tab of Launch instance window is broken
https://bugs.launchpad.net/bugs/1558665
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557748] Re: ERROR nova.compute.manager Instance failed to spawn

2016-03-19 Thread Hirofumi Ichihara
This issue is probably in networking-onos.

** Project changed: neutron => networking-onos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557748

Title:
  ERROR nova.compute.manager Instance failed to spawn

Status in networking-onos:
  New

Bug description:
  Hi,

  I installed OpenStack Kilo, following this guide
  (http://docs.openstack.org/kilo/install-
  guide/install/apt/content/index.html)

  I connected OpenStack to the ONOS SDN controller, version 1.3.

  I installed the ML2 plugin (networking-onos) for the ONOS integration.

  The ext-net works with flat networking, while the demo-net uses VXLAN.

  I shared the demo-net network with the admin user and the ext-net
  network with the demo user.

  When I instantiate a VM on demo-net as any user, the VM is created
  normally and works perfectly.

  When I instantiate a VM on ext-net as any user, an error message
  appears.

  As a test, I assigned a floating IP to a VM on demo-net; it received
  the external IP and managed to communicate with hosts on the
  Internet.

  I am pasting below the neutron-server.log and
  /etc/neutron/plugins/ml2/ml2_conf.ini.

  neutron net-list
  
  +--------------------------------------+----------+-------------------------------------------------------+
  | id                                   | name     | subnets                                               |
  +--------------------------------------+----------+-------------------------------------------------------+
  | 040f1e75-afa2-4d35-9a80-5e2b57d2a731 | ext-net  | 87ee275c-fd8a-45a4-897c-3215f3e44671 200.XXX.XXX.0/24 |
  | 8dc2922f-c233-4061-bcd2-f2ac5e962bf8 | demo-net | b672fe74-5604-46cd-8046-d6164a0e5d68 192.168.100.0/24 |
  +--------------------------------------+----------+-------------------------------------------------------+

  The command to create the VM with admin user on OpenStack:

  nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-
  id=040f1e75-afa2-4d35-9a80-5e2b57d2a731 --security-group default
  admin-instance1

  Following nova-compute.log's message:

  2016-03-15 17:37:39.262 8901 INFO nova.compute.manager [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Starting instance...
  2016-03-15 17:37:40.427 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Attempting claim: memory 512 MB, disk 1 GB
  2016-03-15 17:37:40.429 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Total memory: 7980 MB, used: 1536.00 MB
  2016-03-15 17:37:40.430 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] memory limit: 11970.00 MB, free: 10434.00 MB
  2016-03-15 17:37:40.431 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Total disk: 450 GB, used: 20.00 GB
  2016-03-15 17:37:40.432 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] disk limit not specified, defaulting to unlimited
  2016-03-15 17:37:40.630 8901 INFO nova.compute.claims [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Claim successful
  2016-03-15 17:37:42.365 8901 INFO nova.scheduler.client.report [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] Compute_service record updated for ('compute1', 'compute1')
  2016-03-15 17:37:56.739 8901 INFO nova.scheduler.client.report [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] Compute_service record updated for ('compute1', 'compute1')
  2016-03-15 17:38:05.972 8901 INFO nova.virt.libvirt.driver [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] [instance: 5a5e7552-82bc-47f8-b6dc-060d9c94dc77] Creating image
  2016-03-15 17:38:06.530 8901 INFO nova.virt.disk.vfs.api [req-211105cf-aa28-4c8b-a0bc-3c98e6d9f336 09a8fda4caf741df831ab3b6b8164590 5a0fda70f39e4214813b10d38d527208 - - -] Unable to import guestfs, falling back to

[Yahoo-eng-team] [Bug 1559579] [NEW] NovaException: Unable to get host UUID: /etc/machine-id is empty

2016-03-19 Thread Matt Riedemann
Public bug reported:

This is in a functional test locally:

Traceback (most recent call last):
  File "nova/compute/manager.py", line 2218, in _build_resources
    yield resources
  File "nova/compute/manager.py", line 2064, in _build_and_run_instance
    block_device_info=block_device_info)
  File "nova/virt/libvirt/driver.py", line 2767, in spawn
    write_to_disk=True)
  File "nova/virt/libvirt/driver.py", line 4714, in _get_guest_xml
    context)
  File "nova/virt/libvirt/driver.py", line 4563, in _get_guest_config
    root_device_name)
  File "nova/virt/libvirt/driver.py", line 4370, in _configure_guest_by_virt_type
    guest.sysinfo = self._get_guest_config_sysinfo(instance)
  File "nova/virt/libvirt/driver.py", line 3716, in _get_guest_config_sysinfo
    sysinfo.system_serial = self._sysinfo_serial_func()
  File "nova/virt/libvirt/driver.py", line 3705, in _get_host_sysinfo_serial_auto
    return self._get_host_sysinfo_serial_os()
  File "nova/virt/libvirt/driver.py", line 3699, in _get_host_sysinfo_serial_os
    raise exception.NovaException(msg)
NovaException: Unable to get host UUID: /etc/machine-id is empty


From: 
nova.tests.functional.libvirt.test_rt_servers.RealTimeServersTest.test_success

The test should be mocking something out since it shouldn't expect
actual files to exist on the host system during test runs.
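
A minimal sketch of one way to do that, assuming the test can patch the
libvirt driver directly (the method name comes from the traceback above; the
UUID is an arbitrary placeholder, not a value from the test suite):

    # Hypothetical test-side stub: keep the driver from reading the real
    # /etc/machine-id by patching the host serial lookup seen in the trace.
    from unittest import mock

    from nova.virt.libvirt import driver as libvirt_driver

    host_serial_patch = mock.patch.object(
        libvirt_driver.LibvirtDriver, '_get_host_sysinfo_serial_os',
        return_value='cef19ce0-0ca2-11df-855d-b19fbce37686')
    host_serial_patch.start()  # e.g. in setUp(), paired with addCleanup()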

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: functional libvirt low-hanging-fruit testing

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1559579

Title:
  NovaException: Unable to get host UUID: /etc/machine-id is empty

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This is in a functional test locally:

  Traceback (most recent call last):
    File "nova/compute/manager.py", line 2218, in _build_resources
      yield resources
    File "nova/compute/manager.py", line 2064, in _build_and_run_instance
      block_device_info=block_device_info)
    File "nova/virt/libvirt/driver.py", line 2767, in spawn
      write_to_disk=True)
    File "nova/virt/libvirt/driver.py", line 4714, in _get_guest_xml
      context)
    File "nova/virt/libvirt/driver.py", line 4563, in _get_guest_config
      root_device_name)
    File "nova/virt/libvirt/driver.py", line 4370, in _configure_guest_by_virt_type
      guest.sysinfo = self._get_guest_config_sysinfo(instance)
    File "nova/virt/libvirt/driver.py", line 3716, in _get_guest_config_sysinfo
      sysinfo.system_serial = self._sysinfo_serial_func()
    File "nova/virt/libvirt/driver.py", line 3705, in _get_host_sysinfo_serial_auto
      return self._get_host_sysinfo_serial_os()
    File "nova/virt/libvirt/driver.py", line 3699, in _get_host_sysinfo_serial_os
      raise exception.NovaException(msg)
  NovaException: Unable to get host UUID: /etc/machine-id is empty

  
  From: 
nova.tests.functional.libvirt.test_rt_servers.RealTimeServersTest.test_success

  The test should be mocking something out since it shouldn't expect
  actual files to exist on the host system during test runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1559579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293843
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=44ef44c0ff97d5b166d48d2ef93feafa9a0f7ea6
Submitter: Jenkins
Branch:master

commit 44ef44c0ff97d5b166d48d2ef93feafa9a0f7ea6
Author: Michael Johnson 
Date:   Thu Mar 17 06:31:42 2016 +

Update devstack plugin for dependent packages

Recent changes to the gate base images [1] removed a package
neutron requires (netcat-openbsd). This patch installs the
required package.

[1] https://review.openstack.org/#/c/292573

Closes-bug: #1558397

Change-Id: I4041478ca09bd124827782774b8520908ef07be0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in neutron:
  Fix Released

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-
  functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-
  functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558461] Re: VMware NSX CI fails with quota issues

2016-03-19 Thread Armando Migliaccio
*** This bug is a duplicate of bug 1558827 ***
https://bugs.launchpad.net/bugs/1558827

** This bug has been marked a duplicate of bug 1558827
   port filter hook for network tenant id matching breaks counting

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558461

Title:
  VMware NSX CI fails with quota issues

Status in neutron:
  New

Bug description:
  The patch https://review.openstack.org/#/c/255285/ breaks the CI. The
  floating IP tests fail with:

  "Details: {u'message': u"Quota exceeded for resources: ['port'].",
u'type': u'OverQuota', u'detail': u''}"

  This does not happen when using the patch before this as the HEAD.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558876] [NEW] Inconsistent URLs reported from discovery endpoints

2016-03-19 Thread Julian Edwards
Public bug reported:

TLDR; the URLs returned from /, /v2.0 and /v3 are inconsistent.

In detail:
GET / returns
v3.4: http://172.16.14.31:35357/v3/
v2.0: http://172.16.14.31:35357/v2.0/

GET /v2.0 returns
http://172.16.14.31:35357/v2.0/

GET /v3 returns:
http://192.168.14.31:5000/v3/

Notice that the /v3 URL is different: it reflects the public endpoint, while
the others are the admin endpoints.
The end effect is that 'openstack catalog list' (or any openstack CLI command)
hangs when I access the v3 API, because it gets redirected to an IP that I
can't connect to (my client side can only see the admin URLs).

More detail:

My catalog has the following in it:

+-----------+---------------------------------------+
| Field     | Value                                 |
+-----------+---------------------------------------+
| endpoints | regionOne                             |
|           |   admin: http://172.16.14.31:35357    |
|           | regionOne                             |
|           |   internal: http://172.16.14.31:5000  |
|           | regionOne                             |
|           |   public: http://192.168.14.31:5000   |
|           |                                       |
| id        | 4709cfbbba2747f680750d0bcb31ef51      |
| name      | Identity Service                      |
| type      | identity                              |
+-----------+---------------------------------------+

Querying at /

$ curl http://172.16.14.20:35357/ | python -m json.tool
{
    "versions": {
        "values": [
            {
                "id": "v3.4",
                "links": [
                    {
                        "href": "http://172.16.14.31:35357/v3/",
                        "rel": "self"
                    }
                ],
                "media-types": [
                    {
                        "base": "application/json",
                        "type": "application/vnd.openstack.identity-v3+json"
                    }
                ],
                "status": "stable",
                "updated": "2015-03-30T00:00:00Z"
            },
            {
                "id": "v2.0",
                "links": [
                    {
                        "href": "http://172.16.14.31:35357/v2.0/",
                        "rel": "self"
                    },
                    {
                        "href": "http://docs.openstack.org/",
                        "rel": "describedby",
                        "type": "text/html"
                    }
                ],
                "media-types": [
                    {
                        "base": "application/json",
                        "type": "application/vnd.openstack.identity-v2.0+json"
                    }
                ],
                "status": "stable",
                "updated": "2014-04-17T00:00:00Z"
            }
        ]
    }
}

Querying at /v2.0:

$ curl http://172.16.14.20:35357/v2.0 | python -m json.tool
{
    "version": {
        "id": "v2.0",
        "links": [
            {
                "href": "http://172.16.14.31:35357/v2.0/",
                "rel": "self"
            },
            {
                "href": "http://docs.openstack.org/",
                "rel": "describedby",
                "type": "text/html"
            }
        ],
        "media-types": [
            {
                "base": "application/json",
                "type": "application/vnd.openstack.identity-v2.0+json"
            }
        ],
        "status": "stable",
        "updated": "2014-04-17T00:00:00Z"
    }
}

And at /v3:

$ curl http://172.16.14.20:35357/v3 | python -m json.tool
{
    "version": {
        "id": "v3.4",
        "links": [
            {
                "href": "http://192.168.14.31:5000/v3/",
                "rel": "self"
            }
        ],
        "media-types": [
            {
                "base": "application/json",
                "type": "application/vnd.openstack.identity-v3+json"
            }
        ],
        "status": "stable",
        "updated": "2015-03-30T00:00:00Z"
    }
}

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1558876

Title:
  Inconsistent URLs reported from discovery endpoints

Status in OpenStack Identity (keystone):
  New

Bug description:
  TLDR; the URLs returned from /, /v2.0 and /v3 are inconsistent.

  In detail:
  GET / returns
  v3.4: http://172.16.14.31:35357/v3/
  v2.0: http://172.16.14.31:35357/v2.0/

  GET /v2.0 returns
  http://172.16.14.31:35357/v2.0/

  GET /v3 returns:
  http://192.168.14.31:5000/v3/

  Notice that the /v3 URL is different: it reflects the public endpoint, while
  the others are the admin endpoints.
  The end effect is that 'openstack catalog list' (or any openstack CLI
  command) hangs when I access the v3 API, because it gets redirected to an IP
  that I can't connect to 

[Yahoo-eng-team] [Bug 1555699] Re: Hyper-V: failed cold migrations cannot be reverted

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291733
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9740e18a31c0682d2852ac745242c69475d89113
Submitter: Jenkins
Branch:master

commit 9740e18a31c0682d2852ac745242c69475d89113
Author: Claudiu Belu 
Date:   Fri Mar 11 16:09:52 2016 +0200

hyper-v: Copies back files on failed migration

On cold migration, the contents of the instance folder are
copied to the new host. The original folder cannot be removed
because the VM configuration files cannot be deleted until the VM
is destroyed.

Because of this, when the migration fails to copy the files, it will
try to revert this through folder renaming. Since the original folder
still exists, an exception is raised.

Change-Id: Ia42ed873924999d57336a105bcaa2b856f3a3a9d
Closes-Bug: #1555699
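
For illustration, a simplified sketch of the revert logic this change
implies; the helper name and layout are hypothetical, not the driver's
actual API:

    import os
    import shutil

    def revert_migration_files(revert_dir, instance_dir):
        # Hypothetical helper: the original instance folder may still exist
        # because Hyper-V keeps VM configuration files in it, so a bare
        # rename would raise; copy the saved contents back instead.
        if os.path.isdir(instance_dir):
            for name in os.listdir(revert_dir):
                shutil.move(os.path.join(revert_dir, name), instance_dir)
            os.rmdir(revert_dir)
        else:
            os.rename(revert_dir, instance_dir)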


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1555699

Title:
  Hyper-V: failed cold migrations cannot be reverted

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When performing a cold migration, the Hyper-V driver moves the
  instance files to a temporary folder, and from there, it copies them
  to the destination node.

  The instance folder is not moved entirely, as it will hold some
  Hyper-V specific files that cannot be deleted/moved while the instance
  exists, basically we just take the files that we care about, while
  leaving the folder there.

  If the migration fails, the driver tries to move the temporary
  directory to the actual instance path, but fails as there already is a
  folder.

  Trace: http://paste.openstack.org/show/490025/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1555699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555863] Re: Bootstrap Theme Preview links to sections don't work

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294317
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=83a2d36adcd328f56f47b3e1da0c57f430b1c5da
Submitter: Jenkins
Branch:master

commit 83a2d36adcd328f56f47b3e1da0c57f430b1c5da
Author: Eddie Ramirez 
Date:   Thu Mar 17 22:28:25 2016 +

Bootstrap Theme Preview links to sections don't work

There's a bug indicating that the browser won't scroll to #element-id when
clicking anchor links (href="#element-id"). We can fix this issue by bypassing
the AngularJS router using target="_self". This fix works, but Angular still
rewrites the hash to its own style; another way to fix this would be to
enable html5Mode, which would allow us to use the browser history.

This commit also inserts an element (using :before pseudo-selector) with
negative margin matching the navbar height, thus preventing h1 elements from
being overlapped by the navbar (which is position:fixed).

Change-Id: Ia51b385a8148440d86dd588918658f5454416d77
Closes-bug: #1555863


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1555863

Title:
  Bootstrap Theme Preview links to sections don't work

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  At the top of the Bootstrap Theme Preview there is a list of all the
  elements to drill into.

  Navbar
  Buttons
  Typograph
  Tables
  Forms
  ...

  Clicking on each one doesn't jump to the section. Anchor tags don't
  work. :\

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1555863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558687] Re: launch this command line(under ubuntu) : nova baremetal-node-list

2016-03-19 Thread Matt Riedemann
What is openstack 2.0.1?

Looks like you probably have some config issue with your auth setup.
Are you using some specific Ubuntu product for OpenStack? We need more
details here.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558687

Title:
  launch this command line(under ubuntu) : nova baremetal-node-list

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. Version: openstack 2.0.1
  2. Relevant log files:
   (HTTP 500) (Request-ID: req-bff575c9-b83a-45c7-ac32-ab7defedd81a)
  3.
  I'm using devstack and I have 2 nodes. On my compute node, in the
  "/devstack" folder, I ran:
   - source openrc admin admin
   - nova baremetal-node-list

   Result: I got this error (500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558877] [NEW] libvirt.xml not change back after doing rescue and unrescue

2016-03-19 Thread leehom
Public bug reported:

When doing a rescue, the instance's XML is flushed into
"unrescue.xml", and the rescue instance's XML is then written into
"libvirt.xml":

def rescue(self, context, instance, network_info, image_meta,
   rescue_password):
unrescue_xml = self._get_existing_domain_xml(instance, network_info)
unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
libvirt_utils.write_to_file(unrescue_xml_path, unrescue_xml)
...
xml = self._get_guest_xml(context, instance, network_info, disk_info,
  image_meta, rescue=rescue_images,
  write_to_disk=True)

When doing an unrescue, nova uses the XML in "unrescue.xml" to restore
the domain, and then deletes "unrescue.xml".
Though the instance's domain info in memory is correct, the XML in
"libvirt.xml" is still the rescue instance's.
This should be fixed.
 
def unrescue(self, instance, network_info):
"""Reboot the VM which is being rescued back into primary images.
"""
instance_dir = libvirt_utils.get_instance_path(instance)
unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
xml = libvirt_utils.load_file(unrescue_xml_path)
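
One possible shape of the fix (a sketch only, not the merged patch): refresh
libvirt.xml from the restored XML during unrescue, reusing the helpers
already shown above; the libvirt.xml file name is taken from this report.

    def unrescue(self, instance, network_info):
        """Reboot the VM which is being rescued back into primary images.
        """
        instance_dir = libvirt_utils.get_instance_path(instance)
        unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
        xml = libvirt_utils.load_file(unrescue_xml_path)
        # Write the restored XML back over libvirt.xml so the on-disk copy
        # matches the running domain again.
        libvirt_utils.write_to_file(
            os.path.join(instance_dir, 'libvirt.xml'), xml)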

** Affects: nova
 Importance: Undecided
 Assignee: leehom (feli5)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => leehom (feli5)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558877

Title:
  libvirt.xml not change back after doing rescue and unrescue

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When doing a rescue, the instance's XML is flushed into
  "unrescue.xml", and the rescue instance's XML is then written into
  "libvirt.xml":

  def rescue(self, context, instance, network_info, image_meta,
 rescue_password):
  unrescue_xml = self._get_existing_domain_xml(instance, network_info)
  unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
  libvirt_utils.write_to_file(unrescue_xml_path, unrescue_xml)
  ...
  xml = self._get_guest_xml(context, instance, network_info, disk_info,
image_meta, rescue=rescue_images,
write_to_disk=True)

  When doing an unrescue, nova uses the XML in "unrescue.xml" to restore
  the domain, and then deletes "unrescue.xml".
  Though the instance's domain info in memory is correct, the XML in
  "libvirt.xml" is still the rescue instance's.
  This should be fixed.
   
  def unrescue(self, instance, network_info):
  """Reboot the VM which is being rescued back into primary images.
  """
  instance_dir = libvirt_utils.get_instance_path(instance)
  unrescue_xml_path = os.path.join(instance_dir, 'unrescue.xml')
  xml = libvirt_utils.load_file(unrescue_xml_path)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558877/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557334] Re: [1.9.1] Vfat unsupported

2016-03-19 Thread Scott Moser
** No longer affects: cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1557334

Title:
  [1.9.1] Vfat unsupported

Status in curtin:
  Incomplete
Status in MAAS:
  Incomplete

Bug description:
  Can not deploy UEFI server with message: " An error ocurred handling
  'sda-part1_format': ValueError - unsupported fs type 'vfat'"

  xavier:/var/log/maas$  dpkg -l '*maas*'|cat
  Desired=Unknown/Install/Remove/Purge/Hold
  | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
  |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
  ||/ Name                            Version                         Architecture Description
  +++-===============================-===============================-============-=======================================
  ii  maas                            1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server all-in-one metapackage
  ii  maas-cli                        1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS command line API tool
  ii  maas-cluster-controller         1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server cluster controller
  ii  maas-common                     1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server common files
  ii  maas-dhcp                       1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS DHCP server
  ii  maas-dns                        1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS DNS server
  ii  maas-proxy                      1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS Caching Proxy
  ii  maas-region-controller          1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server complete region controller
  ii  maas-region-controller-min      1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS Server minimum region controller
  ii  python-django-maas              1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server Django web framework
  ii  python-maas-client              1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS python API client
  ii  python-maas-provisioningserver  1.9.1+bzr4543-0ubuntu1~trusty1  all          MAAS server provisioning libraries

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1557334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558679] [NEW] Live block migration fails with TypeError exception in driver.py

2016-03-19 Thread Angela
Public bug reported:

Hi,

When I attempt to do a live block migration of my VM instance, I see the
following exception in my nova-compute.log and migration did not happen:

2016-03-15 12:27:48.121 15330 ERROR oslo_messaging.rpc.dispatcher [req-e41b3f49-8bf8-4a4e-8511-1f8eea811dff b567c533c6a842908a3888a4ce80117e 0a6eee33460e4c86ba591fd427cce163 - - -] Exception during message handling: string indices must be integers
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6845, in pre_live_migration
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     disk, migrate_data=migrate_data)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 461, in decorated_function
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     payload)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in decorated_function
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 357, in decorated_function
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5272, in pre_live_migration
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     migrate_data)
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6003, in pre_live_migration
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher     image_file = os.path.basename(info['path'])
2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher TypeError: string indices must be integers
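
The failing line suggests that `info` arrived as a string rather than a
dict; indexing a str with a key raises exactly this TypeError. A standalone
illustration (plain Python, not nova code; the sample value is made up):

    import os

    info = '{"path": "/var/lib/nova/instances/xyz/disk"}'  # a str, not a dict
    try:
        image_file = os.path.basename(info['path'])
    except TypeError as e:
        print(e)  # string indices must be integers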

The Nova/OpenStack version that I'm running is as follows (as shown in 'rpm -qa 
| grep nova'):
python-nova-2015.1.2-18.1.el7ost.noarch
python-novaclient-2.23.0-2.el7ost.noarch
openstack-nova-common-2015.1.2-18.1.el7ost.noarch
openstack-nova-compute-2015.1.2-18.1.el7ost.noarch

The '/nova/virt/libvirt/driver.py' file running on my system is dated
Mar 4, 2016.  Please note that I have other systems running an older
version of the '/nova/virt/libvirt/driver.py' file (dated Jan 22) and
live block migration is working fine on those systems.

Upon further investigation, it was noted that the following section of
the code in the '/nova/virt/libvirt/driver.py' file that throws
exception was added very recently:

# Recreate the disk.info file and in doing so stop the
# imagebackend from recreating it incorrectly by inspecting the
# 

[Yahoo-eng-team] [Bug 1557888] Re: Nova CLI - No message on nova-agent deletion

2016-03-19 Thread Markus Zoeller (markus_z)
OK, got it now, so it's about the feedback from python-novaclient.
It looks like we have 3 different strategies to provide feedback:

1. Print a message that the task is accepted
2. Print a table with details of the deleted object
3. Print nothing 

Feedback 1 is useful for long running asynchronous tasks like the
creation of instances:

stack@stack:~$ nova boot --image cirros-0.3.4-x86_64-uec \
--flavor m1.tiny my-own-instance
# [... snip instance details ...]
stack@stack:~$ nova delete my-own-instance
Request to delete server my-own-instance has been accepted.
stack@stack:~$

Feedback 2 is used for deleting a flavor:

stack@stack:~$ nova flavor-create my-own-flavor 12345 512 0 3
# [... snip flavor details ...]
stack@stack:~$ nova flavor-delete my-own-flavor
+---+---+---+--+---+--+[...]
| ID| Name  | Memory_MB | Disk | Ephemeral | Swap |[...]
+---+---+---+--+---+--+[...]
| 12345 | my-own-flavor | 512   | 0| 0 |  |[...]
+---+---+---+--+---+--+[...]
stack@stack:~$ 

Feedback 3 is used for deleting an agent (as you found out) and also for
deleting a keypair:

stack@stack:~$ nova agent-create linux x86 1.0 http://dummy.com \
0e49760580a20076fbba7b1e3ccd20e2 libvirt
# [... snip agent details ...]
stack@stack:~$ nova agent-delete 1
stack@stack:~$ 

stack@stack:~$ nova keypair-add my-own-keypair
# [... snip keypair details ...]
stack@stack:~$ nova keypair-delete my-own-keypair
stack@stack:~$ 

Incomplete:
I'd say that "nova agent-delete" doesn't fall into the "feedback 1"
category as it isn't a long running task. Because other "nova *-delete"
commands also don't provide feedback, I'm not sure if this is a valid
bug report. I'm going to ask around.

Test Env:
I tested with Nova master (Mitaka cycle), commit 859ff48

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557888

Title:
  Nova CLI - No message on nova-agent deletion

Status in python-novaclient:
  Incomplete

Bug description:
  In Nova,  when deleting a nova-agent , no message or alert is generated.
  But for other commands eg. nova delete , after deleting an 
  instance a proper message is generated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1557888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484329] Re: we should use oslo.cache rather than memorycache.py

2016-03-19 Thread Matt Riedemann
*** This bug is a duplicate of bug 1483322 ***
https://bugs.launchpad.net/bugs/1483322

** Changed in: nova
   Status: In Progress => Invalid

** This bug has been marked a duplicate of bug 1483322
   python-memcached get_multi has much faster than get when get multiple value

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484329

Title:
  we should use oslo.cache rather than memorycache.py

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  according to http://git.openstack.org/cgit/openstack/nova/tree
  /openstack-common.conf, nova/openstack/common/memorycache.py uses oslo-
  incubator code.

  But an oslo-incubator core member says "incubator code needs to die and
  not be enhanced."

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559543] [NEW] cloud-init does not configure or start networking on gentoo

2016-03-19 Thread Matthew Thode
Public bug reported:

the version of cloud-init I used was 0.7.6, as there are no newer
versions to test with

you can build an image to test with using diskimage-builder if you wish
to test

I'm also at castle so let me know if you want to meet up.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1559543

Title:
  cloud-init does not configure or start networking on gentoo

Status in cloud-init:
  New

Bug description:
  the version of cloud-init I used was 0.7.6, as there are no newer
  versions to test with

  you can build an image to test with using diskimage-builder if you wish
  to test

  I'm also at castle so let me know if you want to meet up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1559543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-03-19 Thread Armando Migliaccio
Still triaging

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in neutron:
  Confirmed

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-
  functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-
  functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558461] [NEW] VMware NSX CI fails with quota issues

2016-03-19 Thread Gary Kotton
Public bug reported:

The patch https://review.openstack.org/#/c/255285/ breaks the CI. The
floating IP tests fail with:

"Details: {u'message': u"Quota exceeded for resources: ['port'].",
u'type': u'OverQuota', u'detail': u''}"

This does not happen when using the patch before this as the HEAD.

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
Milestone: None => mitaka-rc1

** Summary changed:

- VMware NSX CI files with quota issues
+ VMware NSX CI fails with quota issues

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558461

Title:
  VMware NSX CI fails with quota issues

Status in neutron:
  New

Bug description:
  The patch https://review.openstack.org/#/c/255285/ breaks the CI. The
  floating IP tests fail with:

  "Details: {u'message': u"Quota exceeded for resources: ['port'].",
  u'type': u'OverQuota', u'detail': u''}"

  This does not happen when using the patch before this as the HEAD.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558048] Re: Raise exception when update quota with parameter

2016-03-19 Thread Hirofumi Ichihara
This isn't related to the neutron server; it's a client issue, but I cannot
see the issue in my env. You should state your version of python-neutronclient.

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558048

Title:
  Raise exception when update quota with parameter

Status in python-neutronclient:
  New

Bug description:
  Currently there is no error when we run `neutron quota-update
  $tenant_id`. It actually shows the quota of the tenant from the
  environment variables, NOT of $tenant_id, which is really misleading. The
  right way to do this is "neutron quota-update --tenant-id $tenant_id".

  It would be much better to raise an error when useless parameters are
  passed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1558048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557959] Re: subnet rbac_entries not configured as a list

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293314
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=691f8f5ea54c04bfdfb76e25bda14665b05ed859
Submitter: Jenkins
Branch:master

commit 691f8f5ea54c04bfdfb76e25bda14665b05ed859
Author: Kevin Benton 
Date:   Wed Mar 16 01:35:26 2016 -0700

Add uselist=True to subnet rbac_entries relationship

Because the join conditions for Subnet rbac entries
are manually specified, SQLAlchemy is not
automatically detecting that this relationship is a list.
This adds the uselist=True kwarg to the relationship to
ensure that it's always handled as a list.

Change-Id: Ia4ae57ddd932260691584ae74c0305a79b2e60a9
Closes-Bug: #1557959
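
A generic illustration of the issue (toy models, not neutron's actual
schema): with a hand-written primaryjoin, SQLAlchemy cannot infer the
one-to-many direction, so the relationship must declare uselist=True
explicitly.

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import foreign, relationship

    Base = declarative_base()

    class RBACEntry(Base):
        __tablename__ = 'rbac_entries'
        id = Column(Integer, primary_key=True)
        object_id = Column(String(36))

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(String(36), primary_key=True)
        network_id = Column(String(36))
        # Manually specified join condition, so state the list-ness
        # explicitly; otherwise SQLAlchemy may warn and keep one row only.
        rbac_entries = relationship(
            RBACEntry,
            primaryjoin=network_id == foreign(RBACEntry.object_id),
            uselist=True,
            viewonly=True)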


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557959

Title:
  subnet rbac_entries not configured as a list

Status in neutron:
  Fix Released

Bug description:
  These logs show up in the server:

  /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py:1582: SAWarning: Multiple rows returned with uselist=False for eagerly-loaded attribute 'Subnet.rbac_entries'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559246] Re: Performance issues when have 1k+ Ironic BM instances

2016-03-19 Thread Matt Riedemann
This is lacking quite a bit of information. First, what version of
nova/ironic are you on?

Have you done any profiling to see what bottlenecks there might be?

Which periodic tasks specifically are taking a long time?

Also, what is the size of the deployment (how big is the controller)?
Talking CPUs/RAM here.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1559246

Title:
  Performance issues when have 1k+ Ironic BM instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  We have an Ironic deployment with about 1500 BMs; 1k+ of them are
  already provisioned.

  The current Ironic architecture doesn't allow us to have more than one
  'ironic compute node'. As a result the nova-compute service is 100% busy
  with periodic tasks like updating instance status (this task takes
  about 1.5 minutes!!).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1559246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558825] Re: Some code changes are missing release notes

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294314
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=9be5b5ff70d1c453bc48f0369993b88d235824fa
Submitter: Jenkins
Branch:master

commit 9be5b5ff70d1c453bc48f0369993b88d235824fa
Author: Rob Cresswell 
Date:   Thu Mar 17 22:43:27 2016 +

Add missing release notes

Change-Id: I80d85df02ddb5a9e065080a95d05c5bbc38d4932
Closes-Bug: 1558825


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558825

Title:
  Some code changes are missing release notes

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Several large features in Horizon Mitaka are missing release notes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439791] Re: Wrong order (and login priority) of region list in horizon

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/292197
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d9ebebb736fef29d559511690e0c393607ce4311
Submitter: Jenkins
Branch:master

commit d9ebebb736fef29d559511690e0c393607ce4311
Author: Bo Wang 
Date:   Mon Mar 14 11:47:05 2016 +0800

Make region list case-insensitive sorted

The sorted() function orders results by ASCII value by default, so
sorted(['Region1', 'myregion']) will not change the sequence.
Fix it for the tenant list too.

Change-Id: I2e4e546ac70af1f758b618cf253f518a475b8392
Closes-Bug: #1439791
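
A quick sketch of the behavior being fixed (plain Python, not the horizon
code itself):

    regions = ['Region1', 'myregion', 'Alpha']

    # Default sort compares raw ASCII, so all uppercase names sort first:
    print(sorted(regions))
    # ['Alpha', 'Region1', 'myregion']

    # Case-insensitive sort, as applied to the region and tenant lists:
    print(sorted(regions, key=lambda r: r.lower()))
    # ['Alpha', 'myregion', 'Region1']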


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1439791

Title:
  Wrong order (and login priority) of region list in horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When regions are configured (NOT via local_settings.py but by adding multiple
  endpoints to keystone), regardless of which region is selected at login, you
  will be logged into the first region in the drop-down list. The drop-down
  list itself is unsorted (I guess it is just a Python dictionary).
  Affected version: icehouse on ubuntu 12.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1439791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501722] Re: make security group optional in new launch instance

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501722

Title:
  make security group optional in new launch instance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The available security group list is a mess in the new launch instance
  form.

  According to the API doc and the legacy launch instance form, the security
  group is optional, so it should be optional in the new launch instance form
  as well, but it is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286463] Re: Security-group-name is case sensitive when booting instance with neutron

2016-03-19 Thread Abhilash Goyal
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => Abhilash Goyal (abhilash-goyal)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286463

Title:
  Security-group-name is case sensitive when booting instance with
  neutron

Status in OpenStack Compute (nova):
  Confirmed
Status in python-novaclient:
  New

Bug description:
  When using nova-networking an instance boots correctly despite the
  case of the security-group name that is used (assuming it exists,
  case-insensitive). http://paste.openstack.org/show/70477/

  However, when using neutron the instance will queue with the scheduler
  but fail to boot.
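
To make the difference concrete, a toy sketch of case-insensitive name
matching (plain Python, not nova or neutron code):

    # Illustrative only: nova-network effectively matches the requested
    # security-group name case-insensitively; neutron matches it exactly.
    groups = ['FooBar', 'default']
    requested = 'FOOBAR'
    match = next((g for g in groups if g.lower() == requested.lower()), None)
    print(match)  # FooBar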

  stack@devstack:~$ neutron security-group-list
  +--+-+-+
  | id   | name| description |
  +--+-+-+
  | 57597299-782e-4820-b814-b27c2f125ee2 | FooBar  | |
  | 9ae55da3-5246-4a28-b4d6-d45affe7b5d8 | default | default |
  +--+-+-+

  
  stack@devstack:~$ nova boot --image e051efff-ddd7-4b57-88af-d47b65aaa333 --flavor 1 --security-group NotARealGroup myinst2
  ERROR: Unable to find security_group with name 'NotARealGroup' (HTTP 400) (Request-ID: req-bb34592c-fc38-4a39-be8f-787e2a754b98)

  
  stack@devstack:~/devstack$ nova boot --image e051efff-ddd7-4b57-88af-d47b65aaa333 --flavor 1 --security-group FOOBAR myinst2
  +--------------------------------------+----------------------------------------------------------------+
  | Property                             | Value                                                          |
  +--------------------------------------+----------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                         |
  | OS-EXT-AZ:availability_zone          | nova                                                           |
  | OS-EXT-STS:power_state               | 0                                                              |
  | OS-EXT-STS:task_state                | scheduling                                                     |
  | OS-EXT-STS:vm_state                  | building                                                       |
  | OS-SRV-USG:launched_at               | -                                                              |
  | OS-SRV-USG:terminated_at             | -                                                              |
  | accessIPv4                           |                                                                |
  | accessIPv6                           |                                                                |
  | adminPass                            | ZzsCcS5AHHGR                                                   |
  | config_drive                         |                                                                |
  | created                              | 2014-03-01T07:30:24Z                                           |
  | flavor                               | m1.tiny (1)                                                    |
  | hostId                               |                                                                |
  | id                                   | 050af9f8-dbe0-4e69-afa4-d29d1e153913                           |
  | image                                | cirros-0.3.1-x86_64-uec (e051efff-ddd7-4b57-88af-d47b65aaa333) |
  | key_name                             | -                                                              |
  | metadata                             | {}                                                             |
  | name                                 | myinst2                                                        |
  | os-extended-volumes:volumes_attached | []                                                             |
  | progress                             | 0                                                              |
  | security_groups                      | FOOBAR                                                         |
  | status                               | BUILD                                                          |
  | tenant_id                            | be91fea7b53e4ad189dd66ef2d65cfa8                               |
  | updated                              | 2014-03-01T07:30:24Z                                           |
  | user_id                              | 4f0af1fd11a140e5807f2c436fd2660f                               |
  +--------------------------------------+----------------------------------------------------------------+


  stack@devstack:~/devstack$ nova show 

[Yahoo-eng-team] [Bug 1558715] [NEW] By default access keypair is associated with instance launch

2016-03-19 Thread ank
Public bug reported:

When there is a single access keypair, it is populated by default as the
initial keypair value (Access & Security tab) while launching an instance.
Because of this, every instance that gets created refers to the same keypair
even though the user did not select it.
Also, the 'Access and Security' fields are not mandatory, so they should not
be selected automatically.

An access keypair should only be associated with an instance when the user
selects it explicitly, not by default.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Launch_Instance_Access_Security.png"
   
https://bugs.launchpad.net/bugs/1558715/+attachment/4602436/+files/Launch_Instance_Access_Security.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558715

Title:
  By default access keypair is associated with instance launch

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When there is a single access keypair, it is populated by default as the
  initial keypair value (Access & Security tab) while launching an instance.
  Because of this, every instance that gets created refers to the same keypair
  even though the user did not select it.
  Also, the 'Access and Security' fields are not mandatory, so they should not
  be selected automatically.

  An access keypair should only be associated with an instance when the user
  selects it explicitly, not by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558614] [NEW] The QoS notification_driver is just a service_provider, and we should look into moving to that

2016-03-19 Thread Miguel Angel Ajo
Public bug reported:

The notification_driver parameter for QoS is just a service provider that is
called from the QoS plugin when a policy is created, changed or deleted.

We should look into moving to the standard "service_providers" naming
and deprecating the other.


https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/qos_base.py#L17

https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/manager.py

** Affects: neutron
 Importance: Low
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558614

Title:
  The QoS notification_driver is just a service_provider, and we should
  look into moving to that

Status in neutron:
  New

Bug description:
  The notification_driver parameter for QoS is just a service provider that is
  called from the QoS plugin when a policy is created, changed or deleted.

  We should look into moving to the standard "service_providers" naming
  and deprecating the other.

  
  
https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/qos_base.py#L17

  
https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/manager.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543891] Re: Launch instance from Volumes Snapshot Page opens LEGACY launch instance, even if LEGACY is set to False in local_settings

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289379
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=93381d001e6d253a769217e8b5ee32f0ba2860f0
Submitter: Jenkins
Branch:master

commit 93381d001e6d253a769217e8b5ee32f0ba2860f0
Author: Matt Borland 
Date:   Mon Mar 7 07:53:46 2016 -0700

Allow Launch Instance (Angular) from Volume Snapshots

Just as with https://review.openstack.org/#/c/219925/ , right now on
the Volume Snapshots table, if you click on Launch as Instance you
get the legacy launch instance wizard even if local_settings is
configured for LAUNCH_INSTANCE_LEGACY_ENABLED = False.

This needs to recompile the Angular context due to the way Django
creates tab content.

Change-Id: Ibf027d523751cd4808591b8b24d8bb26c6351f5a
Fixes-Bug: 1543891


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543891

Title:
  Launch instance from Volumes Snapshot Page opens LEGACY launch
  instance, even if LEGACY is set to False in local_settings

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Same as the bug below; we need to address it in the volume snapshot
  table as well.

  Launch instance from Volumes Page opens LEGACY launch instance, even if 
LEGACY is set to False in local_settings
  https://bugs.launchpad.net/horizon/+bug/1491645

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471261] Re: When I create a new user and enter the password but not the confirmation password, and then click "Create User". I only get one error message, stating that "This fie

2016-03-19 Thread Rob Cresswell
Given the discussion on the patch, I think it may be worth abandoning
this effort, as it seems a very low priority issue and brings possible
security implications. Marking bug as invalid for now; please update or
ping me on IRC (robcresswell) if you disagree.

** Changed in: horizon
Milestone: mitaka-2 => None

** Changed in: horizon
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1471261

Title:
  When I create a new user and enter the password but not the
  confirmation password, and then click "Create User". I only get one
  error message, stating that "This field is required" for Confirm
  Password but not for Password (with cleared field).

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  When I create a new user and enter the password but not the
  confirmation password, and then click "Create User". I only get one
  error message, stating that "This is field is required" for Confirm
  Password but not for Password(with cleared field).

  I would prefer that either:

  -  the password I entered is still in the “Password” field

  -  the red box/text is around both fields(Password and Confirm
  Password feild) stating that both fields are required.
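
  A minimal Django-style sketch (illustrative names, not Horizon's code)
  of the second option, attaching the error to both fields:

    from django import forms

    class CreateUserForm(forms.Form):
        password = forms.CharField(widget=forms.PasswordInput,
                                   required=False)
        confirm_password = forms.CharField(widget=forms.PasswordInput,
                                           required=False)

        def clean(self):
            data = super(CreateUserForm, self).clean()
            if data.get('password') and not data.get('confirm_password'):
                # Flag both fields so both get the red box/text.
                self.add_error('password',
                               'Both password fields are required.')
                self.add_error('confirm_password',
                               'This field is required.')
            return data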

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1471261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528777] Re: Remove not required packages in requirements.txt

2016-03-19 Thread Rob Cresswell
This patch addressed the bug: https://review.openstack.org/#/c/260425/

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1528777

Title:
  Remove not required packages in requirements.txt

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We need to remove the unused packages from requirements.txt; otherwise
  requirements.txt will be updated unnecessarily whenever the global
  requirements are updated.

  Remove kombu, since horizon never connects to a message queue server.
  Remove eventlet, since horizon does not use eventlet to handle threading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1528777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479722] Re: exclude network without subnet in create instance

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1479722

Title:
  exclude network without subnet in create instance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While creating a new instance, the provided network should contain a
  subnet; otherwise the creation fails.

  In Horizon we can filter out such networks to avoid the unnecessary
  server call.
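
  A one-line sketch of such a client-side filter (the 'subnets' key
  mirrors what the Neutron API returns; names are illustrative):

    def networks_with_subnets(networks):
        # Keep only networks that can actually host an instance port.
        return [net for net in networks if net.get('subnets')]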

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1479722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476439] Re: update_metadata for flavors and images shows blank. static basePath not set correctly.

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1476439

Title:
  update_metadata for flavors and images  shows blank.  static basePath
  not set correctly.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Currently using OpenStack Kilo on CentOS 7. Issue is with:

  openstack-dashboard-2015.1.0-7.el7.noarch
  /usr/share/openstack-dashboard/static/angular/widget.module.js

  When using the update_metadata feature in horizon in the flavors and
  images section, the meta data table is not displayed. Have also seen
  this cause problems when using heat.

  The basePath in the javascript is not being set correctly and
  resulting in a redirect loop:

  [Tue Jul 21 00:14:22.097739 2015] [core:error] [pid 14453] (36)File
  name too long: [client ] AH00036: access to
  
/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth/login/
  failed (filesystem path
  
'/var/www/html/dashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboarddashboardauth')

  I was able to fix by modifying the widget.module.js file

  $ diff -u /usr/share/openstack-dashboard/static/angular/widget.module.js.orig /usr/share/openstack-dashboard/static/angular/widget.module.js
  --- /usr/share/openstack-dashboard/static/angular/widget.module.js.orig   2015-07-21 00:55:07.641502063 +0000
  +++ /usr/share/openstack-dashboard/static/angular/widget.module.js        2015-07-21 00:41:37.476953146 +0000
  @@ -17,6 +17,6 @@
     'hz.widget.metadata-display',
     'hz.framework.validators'
   ])
  -.constant('basePath', '/static/angular/');
  +.constant('basePath', '/dashboard/static/angular/');
   
   })();

  Ideally this file should not need to be modified and should be generated
  using WEBROOT in local_settings; alternatively, the documentation should
  be updated if this file must be modified by hand.
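
  A minimal sketch (assumed setting names) of deriving the path from
  WEBROOT in local_settings instead of hardcoding it:

    WEBROOT = '/dashboard/'
    STATIC_URL = WEBROOT + 'static/'
    # Generate the Angular constant from the settings above rather
    # than hardcoding '/static/angular/'.
    ANGULAR_BASE_PATH = STATIC_URL + 'angular/'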

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1476439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522655] Re: In the case of using keystone v3, user's description is not displayed in User Detail.

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1522655

Title:
  In the case of using keystone v3, user's description is not displayed
  in User Detail.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The user's description is displayed in the user list, but it is not
  displayed in the user detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529541] Re: Remove unused logging import in horizon

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1529541

Title:
  Remove unused logging import in horizon

Status in Cue:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Sahara:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  Remove unused logging imports in the horizon code

To manage notifications about this bug go to:
https://bugs.launchpad.net/cue/+bug/1529541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532048] Re: grenade fails setting up horizon with "ImportError: No module named utils" for compressor package

2016-03-19 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1532048

Title:
  grenade fails setting up horizon with "ImportError: No module named
  utils" for compressor package

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Seen on a stable/liberty change here, so the old side would be setting
  up kilo:

  http://logs.openstack.org/80/256180/1/gate/gate-grenade-dsvm/d1909a9/logs/grenade.sh.txt.gz#_2016-01-07_19_40_08_188

  2016-01-07 19:40:08.188 | Traceback (most recent call last):
  2016-01-07 19:40:08.188 |   File "/opt/stack/old/horizon/manage.py", line 23, in <module>
  2016-01-07 19:40:08.188 |     execute_from_command_line(sys.argv)
  2016-01-07 19:40:08.188 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
  2016-01-07 19:40:08.188 |     utility.execute()
  2016-01-07 19:40:08.189 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 354, in execute
  2016-01-07 19:40:08.189 |     django.setup()
  2016-01-07 19:40:08.189 |   File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 21, in setup
  2016-01-07 19:40:08.189 |     apps.populate(settings.INSTALLED_APPS)
  2016-01-07 19:40:08.189 |   File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 108, in populate
  2016-01-07 19:40:08.189 |     app_config.import_models(all_models)
  2016-01-07 19:40:08.189 |   File "/usr/local/lib/python2.7/dist-packages/django/apps/config.py", line 202, in import_models
  2016-01-07 19:40:08.189 |     self.models_module = import_module(models_module_name)
  2016-01-07 19:40:08.189 |   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
  2016-01-07 19:40:08.196 |     __import__(name)
  2016-01-07 19:40:08.196 |   File "/usr/local/lib/python2.7/dist-packages/compressor/models.py", line 1, in <module>
  2016-01-07 19:40:08.196 |     from compressor.conf import CompressorConf  # noqa
  2016-01-07 19:40:08.196 |   File "/usr/local/lib/python2.7/dist-packages/compressor/conf.py", line 5, in <module>
  2016-01-07 19:40:08.196 |     from django.template.utils import InvalidTemplateEngineError
  2016-01-07 19:40:08.196 | ImportError: No module named utils

  Looks like this is probably due to django_compressor 2.0 released
  today breaking all branches:

  https://pypi.python.org/pypi/django_compressor/2.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1532048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558097] Re: DVR SNAT HA - Documentation for Networking guide

2016-03-19 Thread Hirofumi Ichihara
** Project changed: neutron => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558097

Title:
  DVR SNAT HA - Documentation for Networking guide

Status in openstack-manuals:
  Confirmed

Bug description:
  DVR SNAT HA - Documentation for Networking guide for Mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1558097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559276] Re: Doc page not displaying command-line with appropriate styling.

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294785
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=48193a517c7dab39088714ecf5b9d40c8b5339e4
Submitter: Jenkins
Branch:master

commit 48193a517c7dab39088714ecf5b9d40c8b5339e4
Author: Eddie Ramirez 
Date:   Fri Mar 18 19:53:11 2016 +

Doc page not displaying command-line with appropriate styling

The page located at http://docs.openstack.org/developer/horizon/testing.html
is not displaying commands with appropriate styling; for instance, there are
some command options using an en dash instead of double hyphens.

This patch fixes the way these lines appear in the browser: using a
mono-spaced font, enclosed in a gray box, and with double hyphens instead of
en dashes. This way we prevent users from seeing errors when copy-pasting the
instructions.

Change-Id: Id76675cd6510d0491cdc08d9cc845c0fc66ab2c6
Closes-bug: #1559276


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1559276

Title:
  Doc page not displaying command-line with appropriate styling.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  This page http://docs.openstack.org/developer/horizon/testing.html
  contains instructions that tell the user how to start running tests
  for the Horizon project.

  I've detected two things wrong:
  1. There are some commands that are not properly formatted for visual
  purposes, e.g. mono-spaced font, enclosed in a box.
  2. Double hyphens are replaced by en dashes:
     $ ./run_tests.sh –with-selenium –selenium-headless should be
     $ ./run_tests.sh --with-selenium --selenium-headless

  
  *See image attached to this bug report*

  If the user tries to execute the same commands, he/she will see the
  following error:

   $./run_tests.sh –with-selenium –selenium-headless
  Checking environment.
  Environment is up to date.
  Traceback (most recent call last):
    File "/opt/stack/horizon/manage.py", line 23, in <module>
      execute_from_command_line(sys.argv)
    File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
      utility.execute()
    File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute
      settings.INSTALLED_APPS
    File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
      self._setup(name)
    File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
      self._wrapped = Settings(settings_module)
    File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
      mod = importlib.import_module(self.SETTINGS_MODULE)
    File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
      __import__(name)
  ImportError: No module named –with-selenium

  That is because there's no option named –with-selenium but --with-
  selenium.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1559276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552594] Re: Magic search always re-appears after selecting facet

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/292692
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=44f80bcc378c8fddb01a6cd4bee426fcb7e71873
Submitter: Jenkins
Branch:master

commit 44f80bcc378c8fddb01a6cd4bee426fcb7e71873
Author: Diana Whitten 
Date:   Mon Mar 14 19:14:25 2016 -0700

Magic search shouldn't re-appear after selecting facet

When you select a facet, it instantly pops open again, obscuring the
results of the most recent facet.  This is fixed.

To test: Go to Launch Instance / Source, then click in the Available
search bar.  Add facets, and verify that after selecting each facet
option the facet drop-down is not shown.

Closes-bug: #1552594

Change-Id: Iffed933131163b6e64297ad8d7cd8912311c3ff0


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1552594

Title:
  Magic search always re-appears after selecting facet

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When you select a facet, it instantly pops open again, obscuring the
  results of the most recent facet.

  http://pasteboard.co/1Z5u9EcR.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1552594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553216] Re: keystone-manage bootstrap does not work for non-SQL identity drivers

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293488
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=4df45708a9f26107713dbc651caad64d0211fe2d
Submitter: Jenkins
Branch:master

commit 4df45708a9f26107713dbc651caad64d0211fe2d
Author: Kristi Nikolla 
Date:   Wed Mar 16 11:07:18 2016 -0400

Check for already present user without inserting in Bootstrap

keystone-manage bootstrap checks for already-present info in the DB
by trying an insert and catching a Conflict exception.
This will not work with databases where inserting a user is not
possible. Changed the code to try a get first, and insert only when
the user is not found.

Change-Id: If15c284aae5d10c594688c588dde9b21675ff487
Closes-Bug: 1553216


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1553216

Title:
  keystone-manage bootstrap does not work for non-SQL identity drivers

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  keystone-manage bootstrap attempts to create the specified user and
  then handles a Conflict error as notice that the user already exists.
  This works for the default SQL identity driver, but does not work for
  drivers that do not support creating users. In order to work for all
  drivers, which is necessary to support role assignment bootstrapping
  whenever the driver configuration is changed, it should attempt to GET
  the user or otherwise check in a way that will work for drivers that
  do not support user creation.
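
  A minimal, self-contained sketch of the "get first, insert on miss"
  approach described above; the backend API and UserNotFound exception
  below are stand-ins, not keystone's actual code:

    class UserNotFound(Exception):
        pass

    class Backend(object):
        def __init__(self):
            self._users = {}

        def get_user_by_name(self, name):
            try:
                return self._users[name]
            except KeyError:
                raise UserNotFound(name)

        def create_user(self, name, **attrs):
            user = dict(attrs, name=name)
            self._users[name] = user
            return user

    def bootstrap_user(backend, name, **attrs):
        # Probe with a read first so drivers that cannot insert users
        # still succeed when the user already exists.
        try:
            return backend.get_user_by_name(name)
        except UserNotFound:
            return backend.create_user(name, **attrs)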

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1553216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514627] Re: Angular actions not evaluated properly when dependent on row update.

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289849
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2de6baae34f13ae1e0e3e70e2f08eceae346cfcc
Submitter: Jenkins
Branch:master

commit 2de6baae34f13ae1e0e3e70e2f08eceae346cfcc
Author: Timur Sufiev 
Date:   Tue Mar 8 13:32:22 2016 +0300

Fix non-working Angular actions in jquery modified tables

Fix the issue by re-$compile-ing the content dynamically inserted by
jQuery. Ideally we should solve it by replacing the jQuery insert with
an Angular one. This remains a TODO for the Newton release.

Closes-Bug: #1514627
Co-Authored-By: Matt Borland 
Change-Id: Ifbe063e9dd6c20930a1ed4fa14dddb2d0f762902


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1514627

Title:
  Angular actions not evaluated properly when dependent on row update.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  See: https://review.openstack.org/#/c/219925/6

  Copying my comments from there:

  Ok, I've verified this as well and looked at a little bit of the code,
  but this problem is NOT a problem specific to this patch.  This patch
  works the same as the images table launch instance.

  It appears to actually be a problem with how angular is being handled
  when rows are added to the table via the create action and the row
  goes through a row update via XHR. The link is not getting evaluated
  by angular after a create and therefore is never getting into scope.
  This means the ng-click does not actually have any effect.

  You can tell by seeing that ng-scope is not added as a class by
  angular after create (right click on action and do inspect element).
  However, if you do a full page refresh after create (with browser
  refresh, coming back to the page, or clicking the volumes menu item),
  the action works and that is because it has been evaluated by angular
  and has a scope (it has an ng-scope class).

  You can actually see the same behavior with the images table depending
  on how you choose to create the image.

  Note that the images table is directly in the index and not nested
  underneath tabs like the volumes table, but that seems like a red
  herring.

  On the images table, if you put in a link to an image URL and uncheck
  the box to copy data, you'll see that the launch instance link works
  (it has angular scope).  In the background, only a DOCUMENT GET for
  the full page is issued.  If, however, you check the box to copy
  data, you see that the link does NOT work (no angular scope) and you
  can also observe that after the Document request for the full page is
  sent, an XHR request is sent for row_update.

  (e.g.
  https://upload.wikimedia.org/wikipedia/commons/thumb/8/80/The_OpenStack_logo.svg/2000px-The_OpenStack_logo.svg.png)

  I think we need a separate bug and patch for handling the row update
  case, because that could be tracked and backported independently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1514627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557868] Re: the size of close icon are different between delete confirm dialog and other modal dialog .

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293252
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f88598f8137b683613d7803a1992d213873c8915
Submitter: Jenkins
Branch:master

commit f88598f8137b683613d7803a1992d213873c8915
Author: Kenji Ishii 
Date:   Wed Mar 16 14:27:24 2016 +0900

Fix the size of the close icon on delete confirm dialog

Change-Id: I35682659fd88d08eff1e65d645279b6b737438a7
Closes-bug: #1557868


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1557868

Title:
  the size of close icon are different between delete confirm dialog and
  other modal dialog .

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  At the moment, the code is as below, and the size of this icon differs
  between the delete confirm dialog and other modal dialogs.


  We should use the code below.



To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1557868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558774] Re: backward-incompat change in security group API: icmpv6 is not supported for protocol in Mitaka

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294460
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=85d638af455ae881ca45d2d390606ef1df5904b1
Submitter: Jenkins
Branch:master

commit 85d638af455ae881ca45d2d390606ef1df5904b1
Author: Akihiro Motoki 
Date:   Fri Mar 18 17:41:23 2016 +0900

Accept icmpv6 as protocol of SG rule for backward compatibility

The patch https://review.openstack.org/#/c/252155/ renamed the
'icmpv6' protocol to 'ipv6-icmp'.
This breaks the backward compatibility of the security group API.
This commit allows specifying 'icmpv6' as well.

TODO(amotoki): The constant for 'icmpv6' will be moved to
neutron-lib soon after Mitaka is shipped.

Change-Id: I0d7e1cd9fc075902449c5eb5ef27069083ab95d4
Closes-Bug: #1558774


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558774

Title:
  backward-incompat change in security group API: icmpv6 is not
  supported for protocol in Mitaka

Status in neutron:
  Fix Released

Bug description:
  The patch https://review.openstack.org/#/c/252155/ adds various protocol
  names, but the change itself is backward incompatible.

  Previously we supported 'icmpv6' for protocol to allow ICMPv6-specific
  type/code. In the new code, we no longer use 'icmpv6' and we need to use
  a newly added protocol name.

  IMO it is better to keep the backward compatibility.
  If we keep the new behavior, at least we MUST mention this
  backward-incompatible change in the release note.
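
  An illustrative sketch (not the merged patch) of the kind of alias
  handling the commit describes, keeping the legacy spelling working:

    # 'icmpv6' is treated as a legacy alias for 'ipv6-icmp'; the mapping
    # and function name here are assumptions for illustration.
    ICMPV6_ALIASES = {'icmpv6': 'ipv6-icmp'}

    def normalize_protocol(protocol):
        if protocol is None:
            return None
        protocol = protocol.lower()
        return ICMPV6_ALIASES.get(protocol, protocol)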

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553828] Re: Branding: Launch Instance: Metadata should inherit from theme

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289093
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0ba273871204f1fbd9191f16baf5aff721c29643
Submitter: Jenkins
Branch:master

commit 0ba273871204f1fbd9191f16baf5aff721c29643
Author: Diana Whitten 
Date:   Sun Mar 6 16:44:51 2016 -0700

Launch Instance: Metadata should inherit from theme

The new launch instance's select flavor metadata should inherit from
theme.  Also removing the unnecessary variables from
dashboard/_variables.scss

Closes-bug: #1553828
partial-bug: #1551492

Change-Id: I5a6f6950068e266ccb6b5c265a794393d1220414


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553828

Title:
  Branding: Launch Instance: Metadata should inherit from theme

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The new launch instance's select flavor metadata should inherit from
  theme.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557902] Re: Wait for all success after nova boot with poll

2016-03-19 Thread Markus Zoeller (markus_z)
Confirmed: 
The novaclient uses only one instance to show the progress [1] and
doesn't consider the real number of created instances. AFAIK the REST
API to create instances will always return the first instance and not
a full list of all created instances. Blueprints to change that [2]
didn't get implemented. There was a bug about that too (within the
last 12 months) but I can't find it anymore. IIRC we accept this
behavior, but don't pin me down on this.

However, there is the REST API request parameter "return_reservation_id"
which could maybe be used in python-novaclient to list all instances
matching this reservation_id [3]. And then the progress can be polled
per instance. If this would make the testing more reliable, it's worth
a shot IMO.

References:
[1] novaclient; Mitaka; create; poll for one instance: 
https://github.com/openstack/python-novaclient/blob/b80d8cb6e6cd1e86c7dc3c99c3e7d92641c00097/novaclient/v2/shell.py#L591-L592
[2] 
https://blueprints.launchpad.net/nova/+spec/return-all-servers-during-multiple-create
[3] nova api-ref; V2.1; create multiple instances: 
http://developer.openstack.org/api-ref-compute-v2.1.html#os-multiple-create-v2.1
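
A rough sketch of that idea (the novaclient call signatures below are
assumptions, not the current API):

  import time

  def boot_and_wait_all(nova, name, image, flavor, count, interval=2):
      # Ask the API for a reservation id instead of a single server;
      # passing return_reservation_id through is assumed here.
      reservation_id = nova.servers.create(
          name, image, flavor, min_count=count, max_count=count,
          return_reservation_id=True)
      while True:
          # Poll every instance created under this reservation.
          servers = nova.servers.list(
              search_opts={'reservation_id': reservation_id})
          if len(servers) == count and all(
                  s.status in ('ACTIVE', 'ERROR') for s in servers):
              return servers
          time.sleep(interval)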

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => Low

** Changed in: python-novaclient
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557902

Title:
  Wait for all success after nova boot with poll

Status in python-novaclient:
  In Progress

Bug description:
  Now we can use nova boot with poll parameter for one instance. But if
  we want to boot multiple instances, it return whenever the first
  instance succeeds or fails.

  It would be much better for testing to return the result of all
  instance when boot with max and min parameters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1557902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555830] Re: 'service provider show' returns a service provider when queried with wrong sp_id

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291584
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=cecf6048f2018e7cea864e3a6ff18b9088ac4254
Submitter: Jenkins
Branch:master

commit cecf6048f2018e7cea864e3a6ff18b9088ac4254
Author: Steve Martinelli 
Date:   Fri Mar 11 02:45:53 2016 -0500

Support `id` and `enabled` attributes when listing service providers

Listing SPs currently doesn't support filtering records by any
attribute, but this is used in some places, such as OpenStack
Client using `name` to filter the records.

SP doesn't have a `name` attribute but has `id` and `enabled`
attributes instead.

This patch enables the filtering of Service Providers based
on the `id` and `enabled` attributes so that OpenStack Client or a
cURL query can benefit from it.

based off of: Ib672ba759d26bdd0eecd48451994b3451fb8648a

Closes-Bug: 1555830

Change-Id: Icdecaa44415786397ee8bb22de16d25cb8fe603a


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1555830

Title:
  'service provider show' returns a service provider when queried with
  wrong sp_id

Status in OpenStack Identity (keystone):
  Fix Released
Status in python-openstackclient:
  In Progress

Bug description:
  ubuntu@k2k-idp3:~$ openstack service provider list
  +-------------+---------+-------------+----------------------------------------------------------------------------------------------------+
  | ID          | Enabled | Description | Auth URL                                                                                           |
  +-------------+---------+-------------+----------------------------------------------------------------------------------------------------+
  | keystone-sp | True    | None        | http://xxx.xxx.xxx.xxx:35357/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth |
  +-------------+---------+-------------+----------------------------------------------------------------------------------------------------+
  ubuntu@k2k-idp3:~$ openstack service provider show nonexistent
  +--------------------+----------------------------------------------------------------------------------------------------+
  | Field              | Value                                                                                              |
  +--------------------+----------------------------------------------------------------------------------------------------+
  | auth_url           | http://xxx.xxx.xxx.xxx:35357/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth |
  | description        | None                                                                                               |
  | enabled            | True                                                                                               |
  | id                 | keystone-sp                                                                                        |
  | relay_state_prefix | ss:mem:                                                                                            |
  | sp_url             | http://xxx.xxx.xxx.xxx:5000/Shibboleth.sso/SAML2/ECP                                               |
  +--------------------+----------------------------------------------------------------------------------------------------+
  ubuntu@k2k-idp3:~$ pip show python-openstackclient
  ---
  Metadata-Version: 2.0
  Name: python-openstackclient
  Version: 2.2.0
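
  A minimal sketch of the list-time filtering the patch adds (data
  shapes are illustrative):

    def filter_service_providers(sps, sp_id=None, enabled=None):
        # Only `id` and `enabled` exist on service providers; there is
        # no `name` attribute to filter on.
        if sp_id is not None:
            sps = [sp for sp in sps if sp['id'] == sp_id]
        if enabled is not None:
            sps = [sp for sp in sps if sp['enabled'] == enabled]
        return sps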

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1555830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559276] [NEW] Doc page not displaying command-line with appropriate styling.

2016-03-19 Thread Eddie Ramirez
Public bug reported:

This page http://docs.openstack.org/developer/horizon/testing.html
contains instructions that tell the user how to start running tests for
the Horizon project.

I've detected two things wrong:
1. There are some commands that are not properly formatted for visual
purposes, e.g. mono-spaced font, enclosed in a box.
2. Double hyphens are replaced by en dashes:
   $ ./run_tests.sh –with-selenium –selenium-headless should be
   $ ./run_tests.sh --with-selenium --selenium-headless


*See image attached to this bug report*

If the user tries to execute the same commands, he/she will see the
following error:

 $./run_tests.sh –with-selenium –selenium-headless
Checking environment.
Environment is up to date.
Traceback (most recent call last):
  File "/opt/stack/horizon/manage.py", line 23, in <module>
    execute_from_command_line(sys.argv)
  File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
    utility.execute()
  File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute
    settings.INSTALLED_APPS
  File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
    self._setup(name)
  File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
    self._wrapped = Settings(settings_module)
  File "/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named –with-selenium

That is because there's no option named –with-selenium but --with-
selenium.

** Affects: horizon
 Importance: Undecided
 Assignee: Eddie Ramirez (ediardo)
 Status: New

** Attachment added: "Screenshot depicting the bug"
   
https://bugs.launchpad.net/bugs/1559276/+attachment/4603633/+files/Horizon%E2%80%99s%20tests%20and%20you%20%E2%80%94%20horizon%209.0.0.0b4.dev143%20documentation.png

** Changed in: horizon
 Assignee: (unassigned) => Eddie Ramirez (ediardo)


[Yahoo-eng-team] [Bug 1557792] Re: unable to launch instances on ibm power8 nova-compute nodes

2016-03-19 Thread Matt Riedemann
*** This bug is a duplicate of bug 1511539 ***
https://bugs.launchpad.net/bugs/1511539

I believe Markus is saying this is a duplicate of bug 1511539 and since
that was only fixed on master for mitaka:

https://review.openstack.org/#/c/240612/

And wasn't backported to stable/liberty, the fix is not in liberty
(which you're using).

Could you test out https://review.openstack.org/#/c/240612/ and see if
that fixes your problem? If so, we could mark this bug as a duplicate
and then backport that change to stable/liberty.

Regarding comment 8 and using juju, you'd have to ask juju people.

** This bug has been marked a duplicate of bug 1511539
   libvirt evacute on ppcle failed with IDE controllers are unsupported for 
this QEMU binary or machine type

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557792

Title:
  unable to launch instances on ibm power8 nova-compute nodes

Status in OpenStack Compute (nova):
  New

Bug description:
  When I attempt to launch instances on IBM power8 compute nodes
  (ppc64el), the nodes do not come up. Connecting to the kvm console
  shows the following message: "Guest has not initialized the display
  (yet)". Also, if I ssh to the compute node and do virsh list, it shows
  that the vm is paused. The libvirt qemu log for the instance shows
  this:

  error: kvm run failed Device or resource busy.

  I'm using the Trusty Ubuntu cloud image for ppc64el with the
  architecture specified as ppc64.

  
  Openstack release: Liberty
  ii  nova-common           2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - common files
  ii  nova-compute          2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - compute node base
  ii  nova-compute-kvm      2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt  2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute - compute node libvirt support
  ii  python-nova           2:12.0.1-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
  ii  python-novaclient     2:2.30.1-1~cloud0         all  client library for OpenStack Compute API

  
  /var/log of the nova-compute node is attached, as well as the juju bundle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1557792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558697] [NEW] [kilo] libvirt block migrations fail due to disk_info being an encoded JSON string

2016-03-19 Thread Lee Yarwood
Public bug reported:

The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that
disk_info is always a plain, decoded list. However, prior to Liberty, when
performing a live block migration the compute manager populates
disk_info with an encoded JSON string when calling
self.driver.get_instance_disk_info. In the live migration case without
block migration, disk_info remains a plain decoded list.

More details with an example trace downstream in the following bug :

live migration without shared storage fails in pre_live_migration after upgrade 
to 2015.1.2-18.2
https://bugzilla.redhat.com/show_bug.cgi?id=1318722
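
A minimal sketch of the defensive normalization this implies (names are
illustrative, not the nova code):

  import json

  def normalized_disk_info(disk_info):
      # Pre-Liberty block-migration callers hand over a JSON-encoded
      # string; later callers a decoded list. Always return the list.
      if isinstance(disk_info, str):
          return json.loads(disk_info)
      return disk_info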

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558697

Title:
  [kilo] libvirt block migrations fail due to disk_info being an encoded
  JSON string

Status in OpenStack Compute (nova):
  New

Bug description:
  The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that
  disk_info is always a plain, decoded list. However, prior to Liberty,
  when performing a live block migration the compute manager populates
  disk_info with an encoded JSON string when calling
  self.driver.get_instance_disk_info. In the live migration case without
  block migration, disk_info remains a plain decoded list.

  More details with an example trace downstream in the following bug :

  live migration without shared storage fails in pre_live_migration after 
upgrade to 2015.1.2-18.2
  https://bugzilla.redhat.com/show_bug.cgi?id=1318722

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-03-19 Thread Armando Migliaccio
** Changed in: neutron
   Status: New => Confirmed

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in neutron:
  In Progress

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424698] Re: Backend filter testing could be more comprehensive

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/293159
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=1c0c7fcf91fd9a292f0cf09300d0673840e7fc54
Submitter: Jenkins
Branch:master

commit 1c0c7fcf91fd9a292f0cf09300d0673840e7fc54
Author: Colleen Murphy 
Date:   Tue Mar 15 15:18:52 2016 -0700

Make backend filter testing more comprehensive

This patch adds additional checks to the inexact filter tests. It
mostly cargo-cults the `test_list_users_inexact_filtered` test. In
order to be consistent with the `test_list_users_inexact_filtered`
test, it modifies the helper functions `_groups_for_user_data()` and
`_list_users_in_group_data()` to not initialize the hints object, since
the hints object now needs to be re-initialized between every filter
type.

Change-Id: I88b26406fcd25e30ea2beb7953c604576da38de3
Closes-bug: #1424698


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1424698

Title:
  Backend filter testing could be more comprehensive

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The current filter testing for backends covers some of the filtering
  combinations (such as startswith), but not all of them. These should
  be expanded to provide better coverage (especially as filtering is now
  supported by the SQL and LDAP backends).
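
  An illustrative sketch of the pattern the fix uses (the Hints class is
  a stand-in for keystone's driver hints, not the real one):

    class Hints(object):
        def __init__(self):
            self.filters = []

        def add_filter(self, name, value, comparator='equals'):
            self.filters.append({'name': name, 'value': value,
                                 'comparator': comparator})

    def check_inexact_filters(list_users):
        for comparator, value in [('startswith', 'arth'),
                                  ('endswith', 'hur'),
                                  ('contains', 'rth')]:
            hints = Hints()  # re-initialized between every filter type
            hints.add_filter('name', value, comparator=comparator)
            for user in list_users(hints):
                assert value in user['name']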

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1424698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556178] Re: ipallocation instances live between retries

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291795
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7d9169967fca3d81076cf60eb772f4506735a218
Submitter: Jenkins
Branch:master

commit 7d9169967fca3d81076cf60eb772f4506735a218
Author: Kevin Benton 
Date:   Sun Mar 13 20:52:09 2016 -0700

Add IPAllocation object to session info to stop GC

This adds the IPAllocation object created in the _store_ip_allocation
method to the session info dictionary to prevent it from being
immediately garbage collected. This is necessary because otherwise a
new persistent object will be created when the fixed_ips relationship
is referenced during the rest of the port create/update operations.
This persistent object will then interfere with a retry operation
that uses the same session if it tries to create a conflicting record.

By preventing the object from being garbage collected, the reference
to fixed IPs will re-use the newly created sqlalchemy object instead
which will properly be cleaned up on a rollback.

This also removes the 'passive_delete' option from the fixed_ips
relationship on ports because IPAllocation objects would now be
left in the session after port deletes. At first glance, this might
look like a performance penalty because fixed_ips would be looked
up before port deletes; however, we already do that in the IPAM
code as well as the ML2 code so this relationship is already being
loaded on the delete_port operation.

Closes-Bug: #1556178
Change-Id: Ieee1343bb90cf111c55e00b9cabc27943b46c350


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556178

Title:
  ipallocation instances live between retries

Status in neutron:
  Fix Released

Bug description:
  The retry decorator doesn't clear the session between each retry. This
  is normally not an issue; however, if some called code holds onto a
  reference to a DB object that will conflict with a newly created one,
  we will get errors like the following:

  FlushError: New instance  with
  identity key (,
  (u'10.0.0.2', u'70dfccfd-f18a-423b-9323-095a38b301a9', u'4e0d6054-6f90
  -450d-87d6-fb86fa194a91')) conflicts with persistent instance
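
  A tiny sketch of the mechanism (SQLAlchemy's Session.info is a plain
  per-session dict; the key name is illustrative):

    def remember(session, obj):
        # Parking a strong reference here stops Python's GC from
        # collecting the object before the retry commits or rolls back.
        session.info.setdefault('allocations', []).append(obj)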
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1556178/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558687] [NEW] launch this command line(under ubuntu) : nova baremetal-node-list

2016-03-19 Thread spiritlight
Public bug reported:

1. Version: openstack 2.0.1
2. Relevant log files:
 (HTTP 500) (Request-ID: req-bff575c9-b83a-45c7-ac32-ab7defedd81a)
3.
I'm using devstack and I have 2 nodes. On my compute node, I did this in my
"/devstack" folder:
 - source openrc admin admin
 - nova baremetal-node-list

 result: I got this error (500)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558687

Title:
  launch this command line(under ubuntu) : nova baremetal-node-list

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Version: openstack 2.0.1
  2. Relevant log files:
   (HTTP 500) (Request-ID: req-bff575c9-b83a-45c7-ac32-ab7defedd81a)
  3.
  I'm using devstack and I have 2 nodes. On my compute node, I did this
  in my "/devstack" folder:
   - source openrc admin admin
   - nova baremetal-node-list

   result: I got this error (500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550559] Re: Qos policy RBAC DB setup and migration

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/291276
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=2f8eb8b46e4a6a1d434ba3d46885ab5aacf358be
Submitter: Jenkins
Branch:master

commit 2f8eb8b46e4a6a1d434ba3d46885ab5aacf358be
Author: Haim Daniel 
Date:   Thu Mar 10 18:14:49 2016 +0200

Add rbac manual for qos-policy

Change-Id: I7ce9a2e91cfc8681bd874cee6dbf8e9391e708b6
Closes-Bug: #1550559


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550559

Title:
  Qos policy RBAC DB setup and migration

Status in neutron:
  Invalid
Status in openstack-api-site:
  Confirmed
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/250081
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit aeaf77a5295080a3010a2e9c32e24f47c8cc73cc
  Author: Haim Daniel 
  Date:   Wed Nov 25 18:49:45 2015 -0500

  Qos policy RBAC DB setup and migration
  
  This patch implements a new database model required for the
  qos-policy RBAC support. In addition it migrates the current qos-policy
  'shared' attribute to leverage the new 'qospolicyrbacs' table.
  
  'shared' is no longer a property of the QosPolicy DB model. Its status
  is based on the tenant ID of the API caller. From an API perspective the
  logic remains the same (tenants will see qos-policies as 'shared=True'
  in case the qos-policy is shared with them). However, internal callers
  (e.g. plugins, drivers, services) must not check for the 'shared'
  attribute on qos-policy db objects any more.
  
  DocImpact
  APIImpact
  
  Blueprint: rbac-qos
  Related-bug: #1512587
  
  Change-Id: I1c59073daa181005a3e878bc2fe033a0709fbf31
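
  An illustrative sketch of the visibility rule described above ('shared'
  derived from RBAC entries; data shapes are stand-ins):

    def is_shared_with(policy_rbac_entries, tenant_id):
        return any(
            entry['action'] == 'access_as_shared' and
            entry['target_tenant'] in ('*', tenant_id)
            for entry in policy_rbac_entries)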

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558827] Re: port filter hook for network tenant id matching breaks counting

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/294321
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ff4067af5ba52cc205f38d12cdf68bd454445ced
Submitter: Jenkins
Branch:master

commit ff4067af5ba52cc205f38d12cdf68bd454445ced
Author: Kevin Benton 
Date:   Wed Mar 16 12:28:49 2016 -0700

Outerjoin to networks for port ownership filter

Change I55328cb43207654b9bb4cfb732923982d020ab0a
added a port filter to compare tenant ID to the
network owner as well. This caused the networks
table to be added to the FROM statement since
ports wasn't joined to networks for any other
reason. This resulted in an explosion of records
returned (networks * ports). SQLAlchemy would
de-dup this for us when iterating over results;
however, it would completely break the 'count()'
operation required by get_ports_count (which
the quota engine uses).

Change-Id: I5b780121ba408fba691fff9304d4a22e5892b85f
Closes-Bug: #1558827


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558827

Title:
  port filter hook for network tenant id matching breaks counting

Status in neutron:
  Fix Released

Bug description:
  The filter hook added in https://review.openstack.org/#/c/255285
  causes SQLAlchemy to add the networks table to the FROM statement
  without a restricted join condition. This results in many duplicate
  rows coming back from the DB query. This is okay for normal record
  retrieval because sqlalchemy would deduplicate the records. However,
  when calling .count() on the query, it returns a number far too large.

  This breaks the quota engine for plugins that don't use the newer
  method of tracking resources.
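
  A self-contained sketch of the failure mode (toy schema on modern
  SQLAlchemy, not neutron's actual models): referencing a networks column in
  a filter without joining pulls networks into the FROM clause as a cross
  join, so .count() is inflated even where entity iteration looked correct.

      from sqlalchemy import Column, Integer, String, create_engine, or_
      from sqlalchemy.orm import Session, declarative_base

      Base = declarative_base()

      class Network(Base):
          __tablename__ = 'networks'
          id = Column(Integer, primary_key=True)
          tenant_id = Column(String)

      class Port(Base):
          __tablename__ = 'ports'
          id = Column(Integer, primary_key=True)
          network_id = Column(Integer)
          tenant_id = Column(String)

      engine = create_engine('sqlite://')
      Base.metadata.create_all(engine)
      with Session(engine) as s:
          s.add_all([Network(id=n, tenant_id='t') for n in range(3)])
          s.add_all([Port(id=p, network_id=0, tenant_id='t') for p in range(4)])
          s.commit()

          # No join condition: networks enters FROM as a cross join
          bad = s.query(Port).filter(
              or_(Port.tenant_id == 't', Network.tenant_id == 't'))
          print(bad.count())   # 12 = 4 ports x 3 networks

          # An explicit outerjoin keeps one row per port
          good = s.query(Port).outerjoin(
              Network, Port.network_id == Network.id).filter(
              or_(Port.tenant_id == 't', Network.tenant_id == 't'))
          print(good.count())  # 4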

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558048] [NEW] Raise exception when update quota with parameter

2016-03-19 Thread tobe
Public bug reported:

Now there's no error when we try to run `neutron quota-update
$tenant_id`. Actually it shows the tenant's quota from environment
variables but NOT $tenant_id, which is really misleading. And the right
way to do that is "neutron quota-update --tenant-id $tenant_id".

It would be much better to raise an error when we pass useless
parameters.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558048

Title:
  Raise exception when update quota with parameter

Status in neutron:
  New

Bug description:
  Now there's no error when we try to run `neutron quota-update
  $tenant_id`. Actually it shows the tenant's quota from environment
  variables but NOT $tenant_id, which is really misleading. And the
  right way to do that is "neutron quota-update --tenant-id $tenant_id".

  It would be much better to raise an error when we pass useless
  parameters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553781] Re: Branding: Create Network should inherit from theme

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289067
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=54b9506ed5f51ea1f13a1df1a105d5d7e6452289
Submitter: Jenkins
Branch:master

commit 54b9506ed5f51ea1f13a1df1a105d5d7e6452289
Author: Diana Whitten 
Date:   Sun Mar 6 12:21:00 2016 -0700

Branding: Create Network should inherit from theme

Create network is different than all other workflows, and it has a
lot of unnecessary style associated with it. This inhibits
themability. It should just use standard nav-pills.

partial-bug: #1551492
Closes-bug: #1553781

Change-Id: I6896df03b86ae0c4388ac15246739aeea5365a95


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553781

Title:
  Branding: Create Network should inherit from theme

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Branding: Create Network should inherit from theme

  Create network is different than all other workflows, and it has a lot
  of unnecessary style associated with it.  This inhibits themability.
  It should just use standard nav-pills.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558205] Re: neutron-lbaas plugin installation breaks on gates

2016-03-19 Thread Armando Migliaccio
*** This bug is a duplicate of bug 1558289 ***
https://bugs.launchpad.net/bugs/1558289

I assume this is a duplicate

** This bug has been marked a duplicate of bug 1558289
   Installing neutron_lbaas plugin via devstack fails because of incorrect 
image/package. Change devstack-trusty to ubuntu-trusty to support infra 
migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558205

Title:
  neutron-lbaas plugin installation breaks on gates

Status in neutron:
  New

Bug description:
  Because of the latest changes to the neutron-lbaas devstack configuration,
  all gates are broken.

  The devstack VM doesn't have the `add-apt-repository` command, whose use was
  added in this commit:
  https://github.com/openstack/neutron-lbaas/commit/eb2a61b4e9082f97d3bc5c8ec9d02d29914e68a7

  
  2016-03-16 17:40:42.787 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L10:
   [[ False == False ]]
  2016-03-16 17:40:42.787 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L11:
   BACKPORT='deb http://archive.ubuntu.com/ubuntu trusty-backports main 
restricted universe multiverse'
  2016-03-16 17:40:42.787 | +++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L12:
   grep '^' /etc/apt/sources.list '/etc/apt/sources.list.d/*'
  2016-03-16 17:40:42.787 | +++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L12:
   grep 'deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted 
universe multiverse'
  2016-03-16 17:40:42.788 | grep: /etc/apt/sources.list.d/*: No such file or 
directory
  2016-03-16 17:40:42.789 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L12:
   BACKPORT_EXISTS=
  2016-03-16 17:40:42.789 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L12:
   true
  2016-03-16 17:40:42.789 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L13:
   [[ -z '' ]]
  2016-03-16 17:40:42.789 | ++ 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L14:
   sudo add-apt-repository 'deb http://archive.ubuntu.com/ubuntu 
trusty-backports main restricted universe multiverse' -y
  2016-03-16 17:40:42.792 | sudo: add-apt-repository: command not found
  2016-03-16 17:40:42.793 | + 
/opt/stack/new/neutron-lbaas/devstack/plugin.sh:neutron_agent_lbaas_install_agent_packages:L1:
   exit_trap
  2016-03-16 17:40:42.793 | + ./stack.sh:exit_trap:L474:   local r=1
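
  A hedged sketch (not the actual devstack fix; the file name is assumed) of
  guarding the backports setup so hosts without add-apt-repository still get
  the source entry:

      BACKPORT="deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse"
      if command -v add-apt-repository >/dev/null 2>&1; then
          sudo add-apt-repository -y "$BACKPORT"
      else
          # Fall back to writing the entry directly when the helper is missing
          echo "$BACKPORT" | sudo tee /etc/apt/sources.list.d/trusty-backports.list
      fi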

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558958] [NEW] Status is not getting updated automatically if we do any action on stack from dashboard , we need to refresh the dashboard to get the updated status

2016-03-19 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The status is not updated automatically if we perform any action on a stack
from the dashboard; we need to refresh the dashboard to get the updated
status.

** Affects: horizon
 Importance: Undecided
 Assignee: monika (monika-parkar)
 Status: New

-- 
Status is not getting updated automatically if we do any action on stack from 
dashboard , we need to refresh the dashboard to get the updated status
https://bugs.launchpad.net/bugs/1558958
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558697] Re: [kilo] libvirt block migrations fail due to disk_info being an encoded JSON string

2016-03-19 Thread Matt Riedemann
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
   Importance: Undecided => High

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558697

Title:
  [kilo] libvirt block migrations fail due to disk_info being an encoded
  JSON string

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that
  disk_info is always a plain, decoded list. However prior to Liberty
  when performing a live block migration the compute manager populates
  disk_info with an encoded JSON string when calling
  self.driver.get_instance_disk_info. In the live migration case without
  block migration disk_info remains a plain decoded list.

  More details with an example trace downstream in the following bug :

  live migration without shared storage fails in pre_live_migration after 
upgrade to 2015.1.2-18.2
  https://bugzilla.redhat.com/show_bug.cgi?id=1318722
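
  For illustration, a minimal sketch (hypothetical helper, Python 3 spelling;
  not the actual fix) of normalizing the two shapes disk_info can take on
  kilo:

      import json

      def ensure_disk_info_list(disk_info):
          # Block migration hands over a JSON-encoded string on kilo;
          # plain live migration hands over an already-decoded list.
          if isinstance(disk_info, str):
              return json.loads(disk_info)
          return disk_info

      print(ensure_disk_info_list('[{"path": "/dev/vda"}]'))  # decoded list
      print(ensure_disk_info_list([{'path': '/dev/vda'}]))    # unchanged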

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558392] Re: List command in Neutron throws list index out of range error, when there are no entries for a object

2016-03-19 Thread Armando Migliaccio
See bug 1548839

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558392

Title:
  List command in Neutron throws list index out of range error,when
  there are no entries for a object

Status in neutron:
  Invalid

Bug description:
  On removing the last entry of an object (say a network or floatingip),
  issuing the corresponding neutron list command throws a "list index out of
  range" error:
  stack@hlm:~$ neutron net-list
  +--------------------------------------+------+--------------------------------------------------+
  | id                                   | name | subnets                                          |
  +--------------------------------------+------+--------------------------------------------------+
  | 206d8033-a577-488f-b6bc-5558e37e0e54 | n1   | ac5353b7-edb5-4138-9be4-b84bc22b9718 1.1.1.0/24  |
  +--------------------------------------+------+--------------------------------------------------+
  stack@hlm:~$ neutron net-delete n1
  Deleted network: n1
  stack@hlm:~$ neutron net-list
  list index out of range
  stack@hlm:~$
  stack@hlm:~$ neutron floatingip-list
  list index out of range
  stack@hlm:~$
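
  A toy reproduction of this failure class (illustrative only, not the
  actual client code): computing column widths from an empty result set
  indexes into an empty list, while falling back to header widths avoids
  the crash.

      def first_row_widths(rows):
          return [len(str(c)) for c in rows[0]]   # IndexError when rows == []

      def safe_widths(rows, headers):
          if not rows:
              return [len(h) for h in headers]    # fall back to header widths
          return [max(len(str(r[i])) for r in rows)
                  for i in range(len(headers))]

      print(safe_widths([], ['id', 'name', 'subnets']))  # [2, 4, 7]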

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558434] Re: neutron net-list throws error when there is no network created in the data base

2016-03-19 Thread Thalabathy
Landed in the wrong place... :)

** Project changed: openstack-api-site => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558434

Title:
  neutron net-list throws error when there is no network created in the
  data base

Status in neutron:
  Invalid

Bug description:
  When there is no network created in the database, neutron net-list
  should not throw any error.

  stack@cs-ccp-c0-m1-management:~/cmc/HPN$ neutron net-list --debug
  DEBUG: keystoneclient.session REQ: curl -g -i --cacert 
"/etc/ssl/certs/ca-certificates.crt" -X GET http://101.10.20.8:5000/v3 -H 
"Accept: application/json" -H "User-Agent: neutron"
  DEBUG: keystoneclient.session RESP: [200] Content-Length: 250 Vary: 
X-Auth-Token Server: Apache/2.4.10 (Debian) Date: Thu, 17 Mar 2016 08:23:52 GMT 
Content-Type: application/json x-openstack-request-id: 
req-b10b9143-b346-4db2-ba0f-010e7b3c688d
  RESP BODY: {"version": {"status": "stable", "updated": 
"2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": 
[{"href": "http://101.10.20.8:5000/v3/;, "rel": "self"}]}}

  DEBUG: stevedore.extension found extension EntryPoint.parse('table = 
cliff.formatters.table:TableFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('json = 
cliff.formatters.json_format:JSONFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('csv = 
cliff.formatters.commaseparated:CSVLister')
  DEBUG: stevedore.extension found extension EntryPoint.parse('value = 
cliff.formatters.value:ValueFormatter')
  DEBUG: stevedore.extension found extension EntryPoint.parse('yaml = 
cliff.formatters.yaml_format:YAMLFormatter')
  DEBUG: neutronclient.neutron.v2_0.network.ListNetwork 
get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, 
noindent=False, page_size=None, quote_mode='nonnumeric', request_format='json', 
show_details=False, sort_dir=[], sort_key=[]))
  DEBUG: keystoneclient.auth.identity.v3.base Making authentication request to 
http://101.10.20.8:5000/v3/auth/tokens
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('l2_gateway_connection = 
networking_l2gw.l2gatewayclient.l2gw_client_ext._l2_gateway_connection')
  DEBUG: stevedore.extension found extension EntryPoint.parse('l2_gateway = 
networking_l2gw.l2gatewayclient.l2gw_client_ext._l2_gateway')
  DEBUG: stevedore.extension found extension 
EntryPoint.parse('ovsvapp_mitigated_cluster = 
networking_vsphere.neutronclient._ovsvapp_mitigated_cluster')
  DEBUG: stevedore.extension found extension EntryPoint.parse('ovsvapp_cluster 
= networking_vsphere.neutronclient._ovsvapp_cluster')
  DEBUG: keystoneclient.session REQ: curl -g -i --cacert 
"/etc/ssl/certs/ca-certificates.crt" -X GET 
http://101.10.20.8:9696/v2.0/networks.json -H "User-Agent: 
python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}e564e605ec2b962001cf99f1b4e1fd4c5a2765c7"
  DEBUG: keystoneclient.session RESP: [200] Date: Thu, 17 Mar 2016 08:23:53 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 16 X-Openstack-Request-Id: 
req-2efd487d-ba6f-4241-9ddc-645245dfd7e0
  RESP BODY: {"networks": []}

  DEBUG: keystoneclient.session REQ: curl -g -i --cacert 
"/etc/ssl/certs/ca-certificates.crt" -X GET 
http://101.10.20.8:9696/v2.0/subnets.json?fields=id&fields=cidr -H "User-Agent: 
python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}e564e605ec2b962001cf99f1b4e1fd4c5a2765c7"
  DEBUG: keystoneclient.session RESP: [200] Date: Thu, 17 Mar 2016 08:23:53 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 15 X-Openstack-Request-Id: 
req-628001ca-b252-4543-8c0a-32bce929130c
  RESP BODY: {"subnets": []}

  ERROR: neutronclient.shell list index out of range
  Traceback (most recent call last):
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/neutronclient/shell.py",
 line 814, in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/neutronclient/shell.py",
 line 110, in run_command
  return cmd.run(known_args)
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/neutronclient/common/command.py",
 line 29, in run
  return super(OpenStackCommand, self).run(parsed_args)
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/cliff/display.py",
 line 88, in run
  self.produce_output(parsed_args, column_names, data)
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/cliff/lister.py",
 line 51, in produce_output
  parsed_args,
File 
"/opt/stack/venv/neutronclient-20160301T130557Z/lib/python2.7/site-packages/cliff/formatters/table.py",
 

[Yahoo-eng-team] [Bug 1558939] Re: Truncated hint text due to the limited length set in the field

2016-03-19 Thread Yuko Katabami
** Attachment added: "This is the corresponding English version"
   
https://bugs.launchpad.net/horizon/+bug/1558939/+attachment/4602854/+files/InputHint-en.png

** Also affects: magnum-ui
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558939

Title:
  Truncated hint text due to the limited length set in the field

Status in OpenStack Dashboard (Horizon):
  New
Status in Magnum UI:
  New

Bug description:
  Project > Instances > Launch Instance

  The Japanese version of the input hint text "Click here for filters" is
  truncated; it seems there is a limit on the field length.

  English text fits in perfectly but there are a number of languages for
  which translation is longer than English text.

  It is better if it is not restricted to such a short length, so that
  the entire message can be shown to the users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558939/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558958] Re: Status is not getting updated automatically if we do any action on stack from dashboard , we need to refresh the dashboard to get the updated status

2016-03-19 Thread monika
** Project changed: heat => horizon

** Changed in: horizon
   Status: Invalid => New

** Description changed:

- Status is not getting updated automatically if we do any action on stack
- from dashboard , we need to refresh the dashboard to get the updated
- status
+ To reproduce this issue we need to perform resume, suspend, and check stack
+ operations more than 5 times.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558958

Title:
  Status is not getting updated automatically if we do any action on
  stack from dashboard , we need to refresh the dashboard to get the
  updated status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To reproduce this issue we need to perform resume, suspend, and check stack
  operations more than 5 times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521797] Re: Support for Name field in Members and HMs

2016-03-19 Thread Armando Migliaccio
** Changed in: neutron
   Status: Fix Released => Invalid

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521797

Title:
  Support for Name field in Members and HMs

Status in neutron:
  Invalid
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/245664
  commit cb3ae497c0a6349dfea0a41788b962a4cd3ef3eb
  Author: Reedip Banerjee 
  Date:   Fri Nov 13 12:32:27 2015 +0530

  Support for Name field in Members and HMs
  
  This patch adds support to enable naming LBaasV2 Members and Health
  Monitors(HMs).
  
  DocImpact
  
  Closes-Bug: #1515506
  Change-Id: Ieb66386fac3a5a4dace0112838fe9afde212f055

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558397] Re: functional job fails due to missing netcat

2016-03-19 Thread Armando Migliaccio
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558397

Title:
  functional job fails due to missing netcat

Status in devstack:
  In Progress
Status in neutron:
  Invalid

Bug description:
  A good build:

  http://logs.openstack.org/39/293239/3/check/gate-neutron-dsvm-
  functional/f1284e9/logs/dpkg-l.txt.gz

  A bad build:

  http://logs.openstack.org/87/293587/1/check/gate-neutron-dsvm-
  functional/53d6bee/logs/dpkg-l.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1558397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558721] [NEW] neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

2016-03-19 Thread Alex Oughton
Public bug reported:

Hello,

When using OpenStack Liberty with XenServer, neutron is not properly
closing its XenAPI sessions. Since it creates these so rapidly, the
XenServer host eventually exceeds its maximum allowed number of
connections:

Mar 17 11:39:05 compute3 xapi:
[debug|compute3.openstack.lab.eco.rackspace.com|25 db_gc|DB GC
D:bb694b976766|db_gc] Number of disposable sessions in group 'external'
in database (401/401) exceeds limit (400): will delete the oldest

This occurs roughly once per minute, with many sessions being
invalidated. The effect is that any long-running hypervisor operations
(for example a live-migration) will fail with an "unauthorized" error,
as their session was invalidated while they were still running:

2016-03-17 11:43:34.483 14310 ERROR nova.virt.xenapi.vmops Failure: 
['INTERNAL_ERROR', 
'Storage_interface.Internal_error("Http_client.Http_error(\\"401\\", \\"{ frame 
= false; method = POST; uri = /services/SM;
query = [ session_id=OpaqueRef:8663a5b7-928e-6ef5-e312-9f430b553c7f ]; 
content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [  ]; task 
= ; subtask_of = ; content-type = ; host = ; user_agent = xe
n-api-libs/1.0 }\\")")']

The fix is to add a line to neutron-rootwrap-xen-dom0 to have it
properly close the sessions.

Before:

def run_command(url, username, password, user_args, cmd_input):
    try:
        session = XenAPI.Session(url)
        session.login_with_password(username, password)
        host = session.xenapi.session.get_this_host(session.handle)
        result = session.xenapi.host.call_plugin(
            host, 'netwrap', 'run_command',
            {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
        return json.loads(result)
    except Exception as e:
        traceback.print_exc()
        sys.exit(RC_XENAPI_ERROR)

After:

def run_command(url, username, password, user_args, cmd_input):
    try:
        session = XenAPI.Session(url)
        session.login_with_password(username, password)
        host = session.xenapi.session.get_this_host(session.handle)
        result = session.xenapi.host.call_plugin(
            host, 'netwrap', 'run_command',
            {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
        session.xenapi.session.logout()
        return json.loads(result)
    except Exception as e:
        traceback.print_exc()
        sys.exit(RC_XENAPI_ERROR)


After making this change, the logs still show the sessions being rapidly
created, but they also show them being destroyed. The "exceeds limit" error no
longer occurs, and live-migrations now succeed.
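
A variant sketch (illustrative, not the submitted patch) that guarantees the
logout even when the plugin call raises, by moving it into a finally block.
It assumes the module-level imports of the original script (json, sys,
traceback, XenAPI) and its RC_XENAPI_ERROR constant:

    def run_command(url, username, password, user_args, cmd_input):
        session = XenAPI.Session(url)
        try:
            session.login_with_password(username, password)
            host = session.xenapi.session.get_this_host(session.handle)
            result = session.xenapi.host.call_plugin(
                host, 'netwrap', 'run_command',
                {'cmd': json.dumps(user_args),
                 'cmd_input': json.dumps(cmd_input)})
            return json.loads(result)
        except Exception:
            traceback.print_exc()
            sys.exit(RC_XENAPI_ERROR)
        finally:
            # Always release the xapi session so dom0 does not accumulate
            # disposable sessions in the 'external' group
            session.xenapi.session.logout()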


Regards,

Alex Oughton

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558721

Title:
  neutron-rootwrap-xen-dom0 not properly closing XenAPI sessions

Status in neutron:
  New

Bug description:
  Hello,

  When using OpenStack Liberty with XenServer, neutron is not properly
  closing its XenAPI sessions. Since it creates these so rapidly, the
  XenServer host eventually exceeds its maximum allowed number of
  connections:

  Mar 17 11:39:05 compute3 xapi:
  [debug|compute3.openstack.lab.eco.rackspace.com|25 db_gc|DB GC
  D:bb694b976766|db_gc] Number of disposable sessions in group
  'external' in database (401/401) exceeds limit (400): will delete the
  oldest

  This occurs roughly once per minute, with many sessions being
  invalidated. The effect is that any long-running hypervisor operations
  (for example a live-migration) will fail with an "unauthorized" error,
  as their session was invalidated while they were still running:

  2016-03-17 11:43:34.483 14310 ERROR nova.virt.xenapi.vmops Failure: 
['INTERNAL_ERROR', 
'Storage_interface.Internal_error("Http_client.Http_error(\\"401\\", \\"{ frame 
= false; method = POST; uri = /services/SM;
  query = [ session_id=OpaqueRef:8663a5b7-928e-6ef5-e312-9f430b553c7f ]; 
content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [  ]; task 
= ; subtask_of = ; content-type = ; host = ; user_agent = xe
  n-api-libs/1.0 }\\")")']

  The fix is to add a line to neutron-rootwrap-xen-dom0 to have it
  properly close the sessions.

  Before:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = XenAPI.Session(url)
          session.login_with_password(username, password)
          host = session.xenapi.session.get_this_host(session.handle)
          result = session.xenapi.host.call_plugin(
              host, 'netwrap', 'run_command',
              {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
          return json.loads(result)
      except Exception as e:
          traceback.print_exc()
          sys.exit(RC_XENAPI_ERROR)

  After:

  def run_command(url, username, password, user_args, cmd_input):
      try:
          session = 

[Yahoo-eng-team] [Bug 1558866] [NEW] Architecture ValueError Uncaught API Exception

2016-03-19 Thread Russell Holloway
Public bug reported:

If an image is imported with an invalid Architecture, instances of it are
unable to launch and cause a ValueError exception. This exception is
only visible in the logs; the UI only tells the user that an exception
occurred. Running Mirantis OpenStack 8.0 (nova-api 2:12.0.0-1~u14.04+mos43).

2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 457, in 
from_dict
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
obj._set_attr_from_legacy_names(image_props)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 388, in 
_set_attr_from_legacy_names
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
setattr(self, new_key, image_props[legacy_key])
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in 
setter
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
field_value = field.coerce(self, name, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, 
in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions return 
self._type.coerce(obj, attr, value)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/fields.py", line 87, in coerce
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions raise 
ValueError(msg)
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions ValueError: 
Architecture name 'x64' is not valid
2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions
2016-03-18 01:13:35.848 28025 INFO nova.api.openstack.wsgi 
[req-f56ff830-6e2d-46ab-b1a3-50f021725374 813401d7df1d4ad68388dee16def6a6b 
9e90e9d0bb8c43b3a6fa3d2b1fb08efa - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.

Reproduce:
Import an image with the architecture set to 'x64' (or presumably anything,
since it is a freeform input), then try to launch an instance of that image.

Expected Result:
The instance launches; or, if it cannot and an error is needed, the error
should tell the user that the architecture is invalid. If the architecture can
only be chosen from limited options, it should probably be a combobox rather
than a freeform input when creating a new image.

Actual Result:
A generic API exception; the instance fails to launch.
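
For illustration, a minimal sketch of the enum-style coercion that raises
here (hypothetical names and a subset of values, not the exact nova source):

    VALID_ARCHITECTURES = {'i686', 'x86_64', 'armv7l', 'ppc64', 's390x'}

    def coerce_architecture(value):
        name = value.lower()
        if name not in VALID_ARCHITECTURES:
            # This is the ValueError in the trace above; it surfaces to the
            # user only as a generic "Unexpected API Error"
            raise ValueError("Architecture name '%s' is not valid" % value)
        return name

    coerce_architecture('x86_64')  # passes
    coerce_architecture('x64')     # raises ValueError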

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558866

Title:
  Architecture ValueError Uncaught API Exception

Status in OpenStack Compute (nova):
  New

Bug description:
  If an image is imported with an invalid Architecture, instances of it are
  unable to launch and cause a ValueError exception. This exception is
  only visible in the logs; the UI only tells the user that an exception
  occurred. Running Mirantis OpenStack 8.0 (nova-api 2:12.0.0-1~u14.04+mos43).

  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 457, in 
from_dict
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
obj._set_attr_from_legacy_names(image_props)
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/image_meta.py", line 388, in 
_set_attr_from_legacy_names
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
setattr(self, new_key, image_props[legacy_key])
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 72, in 
setter
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions 
field_value = field.coerce(self, name, value)
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 189, 
in coerce
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions return 
self._type.coerce(obj, attr, value)
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/objects/fields.py", line 87, in coerce
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions raise 
ValueError(msg)
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions ValueError: 
Architecture name 'x64' is not valid
  2016-03-18 01:13:35.846 28025 ERROR nova.api.openstack.extensions
  2016-03-18 01:13:35.848 28025 INFO nova.api.openstack.wsgi 
[req-f56ff830-6e2d-46ab-b1a3-50f021725374 

[Yahoo-eng-team] [Bug 1433687] Re: devstack logs do not contain pid information for log messages with context

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/172510
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=198887e8903696ea9fcbec0f8a91c2f6ca5a34c7
Submitter: Jenkins
Branch:master

commit 198887e8903696ea9fcbec0f8a91c2f6ca5a34c7
Author: Ihar Hrachyshka 
Date:   Fri Apr 10 18:45:35 2015 +0200

logging: don't set logging format strings for keystone

Don't override those format strings since the overridden
values are identical to those used by oslo.log by default [1].

logging_exception_prefix is still set since it changes the logging
format to use TRACE label for exceptions instead of default ERROR.

[1]: 
https://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/_options.py?id=c47a91dbbb586c27d8521b1016bf7901c47b1c90#n110

Closes-Bug: #1433687
Change-Id: Ibd11cd6b0defb6dc709dbd3e718a49fd71cce6b6


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433687

Title:
  devstack logs do not contain pid information for log messages with
  context

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Compare:

  2015-03-18 15:00:15.990 INFO neutron.wsgi 
[req-412094f3-6b4e-41e8-9f2b-833ff6b3ee7a SecurityGroupsTestJSON-724004567 
SecurityGroupsTestJSON-664869352] 127.0.0.1 - - [18/Mar/2015 15:00:15] "DELETE 
/v2.0/security-groups/9cc93b9a-2d06-46e6-9160-1521683f13f9.json HTTP/1.1" 204 
149 0.060949
  2015-03-18 15:00:16.001 15709 INFO neutron.wsgi [-] (15709) accepted 
('127.0.0.1', 60381)

  This is because in devstack, we override the default log format string
  with one that misses the info. Note that to make it work, it is not
  enough to fall back to the default string, since that uses the
  user_identity context field, which is missing from the neutron context
  object. That is because neutron.context.Context does not rely on
  oslo_context.Context when transforming itself with to_dict().

  The proper fix would be:

  - make neutron context reuse oslo_context.Context.to_dict()
  - make devstack not overwrite the default log format string

  Also note that the log colorizer from devstack rewrites the default
  format string value too. In that case, we just need to update the string
  to include the pid information.

  Also note that the issue may be more far-reaching, since devstack
  rewrites the string for other services too (nova, ironic, among
  others).
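
  For reference, an illustrative pair of oslo.log options (values
  approximate the library defaults of that era; treat them as a sketch)
  showing that the stock context format string already carries the pid via
  %(process)d:

      [DEFAULT]
      logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
      logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s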

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1433687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558099] Re: neutron_lbaas: Stats socket not found for pool

2016-03-19 Thread Armando Migliaccio
Kilo is EOL, only security fixes.

http://releases.openstack.org/

** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558099

Title:
  neutron_lbaas: Stats socket not found for pool

Status in neutron:
  Incomplete

Bug description:
  Hi,

  I am hitting this issue when I create a VIP.

  My environment:

  OpenStack Kilo, Ubuntu 14.04

  The controller node:

  /etc/neutron/neutron.conf

  service_plugins = router,lbaas
  [service_providers]
  
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  /etc/neutron/neutron_lbaas.conf

  [service_providers]
  
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

  /var/log/neutron/neutron-lbaas-agent.log

  2016-03-16 10:59:02.100 24640 WARNING 
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats 
socket not found for pool a630eb2b-85eb-4f4a-8c2a-e1c57baf69e2
  2016-03-16 10:59:03.278 24640 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ca98d95b-eff5-4f0a-a803-bc917b9cd186 ] Create vip 
aca431e5-e485-436b-b788-ba6a05a89991 failed on device driver haproxy_ns
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 340, in create_vip
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 336, in _refresh_device
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 329, in deploy_instance
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 90, in create
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._plug(namespace, logical_config['vip']['port'])
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 259, in _plug
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager namespace=namespace
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 235, 
in plug
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.check_bridge_exists(bridge)
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py", line 169, 
in check_bridge_exists
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager raise 
exceptions.BridgeDoesNotExist(bridge=bridge)
  2016-03-16 10:59:03.278 24640 TRACE 
neutron_lbaas.services.loadbalancer.agent.agent_manager BridgeDoesNotExist: 
Bridge br-int does not exist.
  2016-03-16 10:59:03.278 24640 TRACE 

[Yahoo-eng-team] [Bug 1558774] [NEW] backward-incompat change in security group API: icmpv6 is not supported for protocol in Mitaka

2016-03-19 Thread Akihiro Motoki
Public bug reported:

The patch https://review.openstack.org/#/c/252155/ adds various protocol names,
but the change itself is backward incompatible.

Previously we supported 'ipv6' for protocol to allow ICMPv6 specific type/code.
In the new code, we no longer use 'ipv6' and we need to use a newly added 
protocol name.

IMO it is better to keep the backward compatibility.
If we keep the new behavior, at least we MUST mention this 
backward-incompatible change in the release note.

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558774

Title:
  backward-incompat change in security group API: icmpv6 is not
  supported for protocol in Mitaka

Status in neutron:
  New

Bug description:
  The patch https://review.openstack.org/#/c/252155/ adds various protocol 
names,
  but the change itself is backward incompatible.

  Previously we supported 'ipv6' for protocol to allow ICMPv6 specific 
type/code.
  In the new code, we no longer use 'ipv6' and we need to use a newly added 
protocol name.

  IMO it is better to keep the backward compatibility.
  If we keep the new behavior, at least we MUST mention this 
backward-incompatible change in the release note.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2016-03-19 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254276
Committed: 
https://git.openstack.org/cgit/openstack/swift/commit/?id=be54d0c928528cdc1b12e1bcb1614ea8859fae2e
Submitter: Jenkins
Branch:master

commit be54d0c928528cdc1b12e1bcb1614ea8859fae2e
Author: janonymous 
Date:   Mon Dec 7 21:45:43 2015 +0530

clear pycache and remove all pyc/pyo before starting unit test

Delete python bytecode before every test run.
Because python creates pyc files during tox runs, certain
changes in the tree, like deletes of files, or switching
branches, can create spurious errors.

Closes-Bug: #1368661
Change-Id: Iedcb400fa3b0417f5bb8e943b17758fcfb4070c6


** Changed in: swift
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.cache:
  Invalid
Status in oslo.concurrency:
  Invalid
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  Fix Released
Status in Python client library for Sahara:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in Trove:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.
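
  For example, a minimal tox.ini stanza applying that suppression (a sketch;
  real projects scope it per testenv as needed):

      [testenv]
      setenv =
          PYTHONDONTWRITEBYTECODE=1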

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439869] Re: encrypted iSCSI volume attach fails when iscsi_use_multipath is enabled

2016-03-19 Thread Matt Riedemann
** Also affects: nova/mitaka
   Importance: Medium
 Assignee: Tomoki Sekiyama (tsekiyama)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439869

Title:
  encrypted iSCSI volume attach fails when iscsi_use_multipath is
  enabled

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  When attempting to attach an encrypted iSCSI volume to an instance
  with iscsi_use_multipath set to True in nova.conf an error occurs in
  n-cpu.

  The devstack system being used had the following nova version:

  commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
  Merge "instance termination with update_dns_entries set fails"

  The following error occurs in n-cpu:

  Stack Trace:

  2015-04-02 13:46:22.641 ERROR nova.virt.block_device 
[req-61f49ff8-b814-42c0-8cf8-ffe7b6a3561c admin admin] [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Driver failed to attach volume 
4778e71c-a1b5-4d
  b5-b677-1d8191468e87 at /dev/vdb
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Traceback (most recent call last):
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 251, in attach
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] device_type=self['device_type'], 
encryption=encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] 
self._disconnect_volume(connection_info, disk_dev)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] six.reraise(self.type_, self.value, 
self.tb)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] encryptor.attach_volume(context, 
**encryption)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 93, in 
attach_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] self._open_volume(passphrase, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 78, in _open_volume
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] check_exit_code=True, 
run_as_root=True)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/utils.py", 
line 206, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] return processutils.execute(*cmd, 
**kwargs)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
233, in execute
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] cmd=sanitized_cmd)
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] ProcessExecutionError: Unexpected error 
while running command.
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf cryptsetup create --key-file=- 36000eb37601bcf020000036c /dev/mapper/36000eb37601bcf02036c
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Exit code: 1
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stdout: u''
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stderr: u''
  2015-04-02 13:46:22.641 TRACE nova.virt.block_device [instance: 
41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]

  

[Yahoo-eng-team] [Bug 1558756] [NEW] metadata proxy returns 500 error

2016-03-19 Thread Brian O'Donnell
Public bug reported:

I am running RDO Liberty, with openstack-neutron v7.0.1

When an instance attempts to fetch metadata, the proxy logs on the
neutron server show this error:

2016-03-17 15:00:47.268 20766 ERROR
neutron.agent.metadata.namespace_proxy BadStatusLine: ''

However, I cannot find a corresponding error in the nova controller
node's logs, which is making this difficult to troubleshoot. I have
`Debug = True` set in nova.conf but there is no activity in the logs
when these requests to the proxy are failing.

I have attached a snippet showing the entire traceback, as well as a
sanitized version of my metadata_agent.ini from the neutron server.

I can curl https://nova:8775 from the neutron server, and I get a 200
response showing the following, so I have reason to believe nova is
configured correctly:

1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: liberty metadata neutron nova proxy

** Attachment added: "details.txt"
   
https://bugs.launchpad.net/bugs/1558756/+attachment/4602534/+files/details.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558756

Title:
  metadata proxy returns 500 error

Status in neutron:
  New

Bug description:
  I am running RDO Liberty, with openstack-neutron v7.0.1

  When an instance attempts to fetch metadata, the proxy logs on the
  neutron server show this error:

  2016-03-17 15:00:47.268 20766 ERROR
  neutron.agent.metadata.namespace_proxy BadStatusLine: ''

  However, I cannot find a corresponding error in the nova controller
  node's logs, which is making this difficult to troubleshoot. I have
  `Debug = True` set in nova.conf but there is no activity in the logs
  when these requests to the proxy are failing.

  I have attached a snippet showing the entire traceback, as well as a
  sanitized version of my metadata_agent.ini from the neutron server.

  I can curl https://nova:8775 from the neutron server, and I get a 200
  response showing the following, so I have reason to believe nova is
  configured correctly:

  1.0
  2007-01-19
  2007-03-01
  2007-08-29
  2007-10-10
  2007-12-15
  2008-02-01
  2008-09-01
  2009-04-04

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558773] [NEW] new launch instance isn't tab key friendly

2016-03-19 Thread David Lyle
Public bug reported:

The angular launch instance does not support tabbing through the wizard
to select from the transfer tables or complete successfully.

To recreate, open launch instance in Mitaka and try to complete the
wizard navigating with the tab key and not the mouse.

** Affects: horizon
 Importance: Low
 Status: New


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558773

Title:
  new launch instance isn't tab key friendly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The angular launch instance does not support tabbing through the
  wizard to select from the transfer tables or complete successfully.

  To recreate, open launch instance in Mitaka and try to complete the
  wizard navigating with the tab key and not the mouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558670] [NEW] Internal server error when updating an identity provider

2016-03-19 Thread Rodrigo Duarte
Public bug reported:

Remote IDs for identity providers can not be reused, so during the
creation of an identity provider, keystone returns a 409 Conflict when
we try to do so. However, the same problem occurs when updating an
identity provider and using a remote ID from another registered identity
provider, but the duplicate entry error is not handled and an HTTP 500 is
returned.

Error trace: http://paste.openstack.org/show/490946/
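
A minimal sketch (toy in-memory registry, not keystone's code) of the missing
translation: detect the duplicate on update and raise the same 409-style
Conflict the create path already raises, instead of letting the database
integrity error bubble up as an HTTP 500:

    class Conflict(Exception):
        """Stands in for keystone's HTTP 409 exception."""

    remote_ids = {'idp-1': {'rid-a'}, 'idp-2': {'rid-b'}}

    def update_idp_remote_ids(idp_id, new_ids):
        taken = set()
        for other, ids in remote_ids.items():
            if other != idp_id:
                taken |= ids
        dupes = set(new_ids) & taken
        if dupes:
            # Same translation the create path performs
            raise Conflict('Duplicate remote ID(s): %s'
                           % ', '.join(sorted(dupes)))
        remote_ids[idp_id] = set(new_ids)

    update_idp_remote_ids('idp-1', {'rid-c'})    # ok
    # update_idp_remote_ids('idp-1', {'rid-b'})  # raises Conflict (HTTP 409)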

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  Remote IDs for identity providers can not be reused, so during the
  creation of an identity provider, keystone returns a 409 Conflict when
  we try to do so. However, the same problem occurs when updating an
  identity provider and using a remote ID from another registered identity
  provider, but the duplicate entry error is not handled and an HTTP 500 is
  returned.
  
- See the trace error: http://paste.openstack.org/show/490946/
+ Error trace: http://paste.openstack.org/show/490946/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1558670

Title:
  Internal server error when updating an identity provider

Status in OpenStack Identity (keystone):
  New

Bug description:
  Remote IDs for identity providers can not be reused, so during the
  creation of an identity provider, keystone returns a 409 Conflict when
  we try to do so. However, the same problem occurs when updating an
  identity provider and using a remote ID from another registered
  identity provider, but the duplicate entry error is not treated and a
  HTTP 500 is returned.

  Error trace: http://paste.openstack.org/show/490946/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1558670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558690] [NEW] project set works for invalid properties

2016-03-19 Thread Matthew Edmonds
Public bug reported:

openstack project set accepts invalid properties, and even somehow sets
their values

# openstack project set ABC --property xyz=pqr
# openstack project show ABC
+-+--+
| Field   | Value|
+-+--+
| description |  |
| domain_id   | ef8acb82bebd4c4abdc6b2056440b596 |
| enabled | True |
| id  | 315700c2a1384b1ca21543504e3513bb |
| is_domain   | False|
| name| ABC  |
| xyz | pqr  |
+-+--+

As seen above, the new "xyz" field was created with the specified value.
This is not a valid property and should not have been created.

Also, specifying an invalid property without a value did not return an
error:

# openstack project set ABC --property QQQ
# openstack project show ABC
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | ef8acb82bebd4c4abdc6b2056440b596 |
| enabled     | True                             |
| id          | 315700c2a1384b1ca21543504e3513bb |
| is_domain   | False                            |
| name        | ABC                              |
| xyz         | pqr                              |
+-------------+----------------------------------+

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: python-openstackclient
 Importance: Undecided
 Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1558690

Title:
  project set works for invalid properties

Status in OpenStack Identity (keystone):
  New
Status in python-openstackclient:
  New

Bug description:
  openstack project set accepts invalid properties and even sets their
  values.

  # openstack project set ABC --property xyz=pqr
  # openstack project show ABC
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | domain_id   | ef8acb82bebd4c4abdc6b2056440b596 |
  | enabled     | True                             |
  | id          | 315700c2a1384b1ca21543504e3513bb |
  | is_domain   | False                            |
  | name        | ABC                              |
  | xyz         | pqr                              |
  +-------------+----------------------------------+

  As seen above, the new "xyz" field was created with the specified
  value. This is not a valid property and should not have been created.

  Also, specifying an invalid property without a value did not return an
  error:

  # openstack project set ABC --property QQQ
  # openstack project show ABC
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | domain_id   | ef8acb82bebd4c4abdc6b2056440b596 |
  | enabled     | True                             |
  | id          | 315700c2a1384b1ca21543504e3513bb |
  | is_domain   | False                            |
  | name        | ABC                              |
  | xyz         | pqr                              |
  +-------------+----------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1558690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558398] [NEW] instance snapshot with large root disk fails when glance api using ssl

2016-03-19 Thread Jeremy Pugh
Public bug reported:

In Kilo, using glance with SSL, with or without a load balancer in
between, causes the snapshot of an instance with a large root disk
(>~60 GB) to fail; smaller root disks work fine. The flavor tested was
specifically 4 vCPUs, 16 GB RAM, a 160 GB root disk, 0 ephemeral and
0 swap. This is with glance using local file storage: roughly 64 GB of
data reaches the glance server's disk before the file disappears from
the server and the connection fails.

Glance only reports that the client disconnected before all the data
could be sent:

WARNING glance.api.v1.upload_utils [req-a5279f16-1be8-4cbf-8625-491e25bcf5c7 766dcc03c6e0454296b449105c71db8c 1a63bbf53e8949e7928ac9e2381a0c04 - - -] Client disconnected before sending all data to backend

This occurs whether the snapshot is attempted through horizon, the
openstack client, or even a direct curl to the glance API. If glance is
used purely over http, the issue does not occur.
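
A minimal reproduction sketch, assuming a v1 glance endpoint served
over SSL; the endpoint URL, token, and image path below are
placeholders, not values from this report. With an https endpoint and a
source file larger than roughly 60 GB the upload dies partway through,
while the same call against a plain-http endpoint completes:

from glanceclient import Client

# Placeholder endpoint and token; substitute real values to reproduce.
glance = Client('1', 'https://glance.example.com:9292',
                token='ADMIN_TOKEN')

# Stream a large image to glance, as the nova snapshot upload does.
with open('/var/lib/images/large-root-disk.qcow2', 'rb') as image_data:
    glance.images.create(name='snapshot-repro',
                         disk_format='qcow2',
                         container_format='bare',
                         data=image_data)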

When a load balancer terminates SSL in between (nginx in particular, in
my case), it shows the error below when the connection dies. Note that
in this setup the glance endpoints used SSL and nova spoke https to the
load balancer, while the backend glance was plain http:
2016/03/15 16:28:23 [info] 23#23: *4346 SSL_read() failed (SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac) while sending request to upstream, client: 10.240.118.7, server: vip-cream-1.lss.emc.com, request: "PUT /v1/images/5d4d9e36-1d66-45da-8a4e-0c1aede75cd7 HTTP/1.1", upstream: "http://10.240.118.5:9292/v1/images/5d4d9e36-1d66-45da-8a4e-0c1aede75cd7", host: "10.240.118.24:9292"

and nova will show this error:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
    executor_callback))
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
    executor_callback)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
    result = func(ctxt, **new_args)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6933, in snapshot_instance
    return self.manager.snapshot_instance(ctxt, image_id, instance)
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
    payload)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
    return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 333, in decorated_function
    LOG.warning(msg, e, instance_uuid=instance_uuid)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 304, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 361, in decorated_function
    kwargs['instance'], e, sys.exc_info())
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 349, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 409, in decorated_function
    instance=instance)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in decorated_function
    *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3291, in snapshot_instance
    task_states.IMAGE_SNAPSHOT)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3321, in _snapshot_instance
    update_task_state)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1447, in snapshot
    image_file)
  File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 130, in update
    purge_props=purge_props)
  File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 397, in update
    _reraise_translated_image_exception(image_id)
  File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 395, in update
    image_id, **image_meta)
  File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 218, in call
    return getattr(client.images, method)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 360, in update
    resp, body = self.client.put(url, headers=hdrs, data=image_data)
  File

[Yahoo-eng-team] [Bug 1558697] Re: [kilo] libvirt block migrations fail due to disk_info being an encoded JSON string

2016-03-19 Thread Tristan Cacqueray
Since f302bf04 was referenced in the advisory, we may have to send
another erratum to include the additional patch. I've added an OSSA
task to keep track of that effort.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558697

Title:
  [kilo] libvirt block migrations fail due to disk_info being an encoded
  JSON string

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that
  disk_info is always a plain, decoded list. However, prior to Liberty,
  when performing a live block migration, the compute manager populates
  disk_info with a JSON-encoded string returned by
  self.driver.get_instance_disk_info. In the live migration case
  without block migration, disk_info remains a plain decoded list.

  More details, with an example trace, are in the following downstream
  bug:

  live migration without shared storage fails in pre_live_migration
  after upgrade to 2015.1.2-18.2
  https://bugzilla.redhat.com/show_bug.cgi?id=1318722
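
  A minimal sketch of the guard the follow-up patch needs, assuming the
  behaviour described above; the helper name _normalize_disk_info is
  illustrative, not nova's actual code:

  import six
  from oslo_serialization import jsonutils


  def _normalize_disk_info(disk_info):
      # Prior to Liberty, get_instance_disk_info returns a JSON-encoded
      # string for block migrations, while other callers pass a decoded
      # list, so decode before validating.
      if isinstance(disk_info, six.string_types):
          disk_info = jsonutils.loads(disk_info)
      if not isinstance(disk_info, list):
          raise ValueError('disk_info must be a list of disk mappings')
      return disk_info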

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


  1   2   >