[Yahoo-eng-team] [Bug 1378640] [NEW] Incorrect parameters passed to delete_from_backend method

2014-10-08 Thread Pranali Deore
Public bug reported:

In the call delete_from_backend(self.admin_context, uri), uri should be
the first argument instead of self.admin_context.

https://github.com/openstack/glance/blob/master/glance/scrubber.py#L552
https://github.com/openstack/glance_store/blob/master/glance_store/backend.py#L273

NOTE:
As of now, the file queue is not used on current master, so the method
delete_from_backend() will never be called.
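A minimal, hypothetical sketch of the mismatch; the stub below only mimics the glance_store signature (URI first, context second) and is not the real implementation:

```python
def delete_from_backend(uri, context=None):
    """Stub mimicking glance_store.backend.delete_from_backend:
    the store URI must be the first positional argument."""
    if not isinstance(uri, str) or '://' not in uri:
        raise TypeError('first argument must be a store URI, got %r' % (uri,))
    return uri

class AdminContext(object):
    """Stand-in for the scrubber's self.admin_context."""

admin_context = AdminContext()
uri = 'file:///var/lib/glance/images/abc123'

# Buggy order, as in glance/scrubber.py: the context lands where
# the URI belongs.
try:
    delete_from_backend(admin_context, uri)
except TypeError as exc:
    print('argument-order bug: %s' % exc)

# Corrected order:
print(delete_from_backend(uri, admin_context))
```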

** Affects: glance
 Importance: Undecided
 Assignee: Pranali Deore (pranali-deore)
 Status: New


** Tags: ntt

** Changed in: glance
 Assignee: (unassigned) => Pranali Deore (pranali-deore)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378640

Title:
  Incorrect parameters passed to delete_from_backend method

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In the call delete_from_backend(self.admin_context, uri), uri should be
  the first argument instead of self.admin_context.

  https://github.com/openstack/glance/blob/master/glance/scrubber.py#L552
  https://github.com/openstack/glance_store/blob/master/glance_store/backend.py#L273

  NOTE:
  As of now, the file queue is not used on current master, so the method
  delete_from_backend() will never be called.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378661] [NEW] Angular js module load error during download keypair

2014-10-08 Thread Pradeep Kumar
Public bug reported:

Description:

When creating a keypair, the download screen is unresponsive at the
navigation pane. On further investigation, I found the following error in
the browser JavaScript debug console:

Uncaught Error: [$injector:modulerr] Failed to instantiate module hz due to:
Error: [$injector:modulerr] Failed to instantiate module hz.conf due to:
Error: [$injector:nomod] Module 'hz.conf' is not available! You either 
misspelled the module name or forgot to load it. If registering a module ensure 
that you specify the dependencies as the second argument.
http://errors.angularjs.org/1.2.1/$injector/nomod?p0=hz.conf
at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
at http://localhost/static/dashboard/js/33e6196225f6.js:783:59
at ensure (http://localhost/static/dashboard/js/33e6196225f6.js:781:165)
at module (http://localhost/static/dashboard/js/33e6196225f6.js:783:8)
at http://localhost/static/dashboard/js/33e6196225f6.js:858:220
at Array.forEach (native)
at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
at loadModules (http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
at Array.forEach (native)
http://errors.angularjs.org/1.2.1/$injector/modulerr?p0=hz.conf&p1=Error%3A…s%2F33e6196225f6.js%3A858%3A269%0A%20%20%20%20at%20Array.forEach%20(native)
at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
at http://localhost/static/dashboard/js/33e6196225f6.js:860:7
at Array.forEach (native)
at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
at loadModules (http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
at Array.forEach (native)
at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
at loadModules (http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
at createInjector 
(http://localhost/static/dashboard/js/33e6196225f6.js:849:738)
http://errors.angularjs.org/1.2.1/$injector/modulerr?p0=hz&p1=Error%3A%20%5…%3A%2F%2Flocalhost%2Fstatic%2Fdashboard%2Fjs%2F33e6196225f6.js%3A849%3A738)

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Browser JavaScript Debug console
   https://bugs.launchpad.net/bugs/1378661/+attachment/4228108/+files/bug.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378661

Title:
  Angular js module load error during download keypair

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description:

  When creating a keypair, the download screen is unresponsive at the
  navigation pane. On further investigation, I found the following error in
  the browser JavaScript debug console:

  Uncaught Error: [$injector:modulerr] Failed to instantiate module hz due to:
  Error: [$injector:modulerr] Failed to instantiate module hz.conf due to:
  Error: [$injector:nomod] Module 'hz.conf' is not available! You either 
misspelled the module name or forgot to load it. If registering a module ensure 
that you specify the dependencies as the second argument.
  http://errors.angularjs.org/1.2.1/$injector/nomod?p0=hz.conf
  at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
  at http://localhost/static/dashboard/js/33e6196225f6.js:783:59
  at ensure (http://localhost/static/dashboard/js/33e6196225f6.js:781:165)
  at module (http://localhost/static/dashboard/js/33e6196225f6.js:783:8)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:220
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
  at Array.forEach (native)
  
http://errors.angularjs.org/1.2.1/$injector/modulerr?p0=hz.conf&p1=Error%3A…s%2F33e6196225f6.js%3A858%3A269%0A%20%20%20%20at%20Array.forEach%20(native)
  at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
  at http://localhost/static/dashboard/js/33e6196225f6.js:860:7
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at createInjector 
(http://localhost/static/dashboard/js/33e6196225f6.js:849:738)
  

[Yahoo-eng-team] [Bug 1370767] Re: Update driver metadata definitions to Juno

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126703
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=eec47084783e931248690a22c143af6f9030b293
Submitter: Jenkins
Branch: proposed/juno

commit eec47084783e931248690a22c143af6f9030b293
Author: Travis Tripp travis.tr...@hp.com
Date:   Wed Sep 17 17:30:03 2014 -0600

Update driver metadata definitions to Juno

vmware and libvirt support different hw_vif_model settings.
This patch updates them so that each namespace can specify
the models they support.

vmware api is updated with the vmware_disktype

Change-Id: Iec5901097c9621a052a930b99d5cbe7872d4f3ff
Closes-bug: 1370767
(cherry picked from commit ebafdbeef6420d0fcc4922f245956096ca9e50b3)


** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1370767

Title:
  Update driver metadata definitions to Juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  vmware and libvirt support different hw_vif_model settings.  This
  patch updates them so that each namespace can specify the models they
  support.

  vmware api is updated with the vmware_disktype

  
  See below for references to source code.

  vmware:

  hw_vif_model:
  From: https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vm_util.py

  ALL_SUPPORTED_NETWORK_DEVICES = ['VirtualE1000', 'VirtualE1000e',
   'VirtualPCNet32', 'VirtualSriovEthernetCard',
   'VirtualVmxnet']

  And:

  def convert_vif_model(name):
      """Converts standard VIF_MODEL types to the internal VMware ones."""
      if name == network_model.VIF_MODEL_E1000:
          return 'VirtualE1000'
      if name == network_model.VIF_MODEL_E1000E:
          return 'VirtualE1000e'
      if name not in ALL_SUPPORTED_NETWORK_DEVICES:
          msg = _('%s is not supported.') % name
          raise exception.Invalid(msg)
      return name

  vmware disktype:

  https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/constants.py

  DISK_TYPE_SPARSE = 'sparse'
  SUPPORTED_FLAT_VARIANTS = ['thin', 'preallocated', 'thick',
                             'eagerZeroedThick']

  
  libvirt:
  From:  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/vif.py

  def is_vif_model_valid_for_virt(virt_type, vif_model):
  valid_models = {
  'qemu': [network_model.VIF_MODEL_VIRTIO,
   network_model.VIF_MODEL_NE2K_PCI,
   network_model.VIF_MODEL_PCNET,
   network_model.VIF_MODEL_RTL8139,
   network_model.VIF_MODEL_E1000,
   network_model.VIF_MODEL_SPAPR_VLAN],
  'kvm': [network_model.VIF_MODEL_VIRTIO,
  network_model.VIF_MODEL_NE2K_PCI,
  network_model.VIF_MODEL_PCNET,
  network_model.VIF_MODEL_RTL8139,
  network_model.VIF_MODEL_E1000,
  network_model.VIF_MODEL_SPAPR_VLAN],
  'xen': [network_model.VIF_MODEL_NETFRONT,
  network_model.VIF_MODEL_NE2K_PCI,
  network_model.VIF_MODEL_PCNET,
  network_model.VIF_MODEL_RTL8139,
  network_model.VIF_MODEL_E1000],
  'lxc': [],
  'uml': [],
  }

  
  From:  https://github.com/openstack/nova/blob/master/nova/network/model.py

  VIF_MODEL_VIRTIO = 'virtio'
  VIF_MODEL_NE2K_PCI = 'ne2k_pci'
  VIF_MODEL_PCNET = 'pcnet'
  VIF_MODEL_RTL8139 = 'rtl8139'
  VIF_MODEL_E1000 = 'e1000'
  VIF_MODEL_E1000E = 'e1000e'
  VIF_MODEL_NETFRONT = 'netfront'
  VIF_MODEL_SPAPR_VLAN = 'spapr-vlan'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1370767/+subscriptions



[Yahoo-eng-team] [Bug 1374814] Re: Neutron server says Failed on bind port, while launching VM

2014-10-08 Thread Ravi Gupta
Hi,
I have checked the OVS configuration; in the logs, the bridge mappings were
empty during port binding. I have corrected the configuration and it is
working fine.


** Changed in: neutron
   Status: Incomplete => Opinion

** Changed in: neutron
   Status: Opinion => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374814

Title:
  Neutron server says Failed on bind port, while launching VM

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Hi,

  I am new to the OpenStack environment and am trying to install OpenStack
  using the installation guide on 2 nodes running Ubuntu 14.04 LTS:
  Node 1 (controller + network node configuration), Node 2 (compute node
  configuration).

  I am facing an issue when I try to launch a VM from Horizon. The dashboard
  shows an error for the instance-id: maximum try 3 exceeded and no valid
  host found. I checked nova-scheduler.log and a similar error is there too.
  When I checked neutron's server.log, I found that it says Failed on bind
  port.

  I searched the web and aligned my configuration as suggested, but no luck
  so far. Kindly suggest; any help will be appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374814/+subscriptions



[Yahoo-eng-team] [Bug 1378683] [NEW] nova-cell, cannot delete VM once deleting VM with failure in nova-compute

2014-10-08 Thread Rajesh Tailor
Public bug reported:

Not able to delete the VM once a previous deletion attempt failed in nova-compute.

Steps to reproduce:

1. create VM
2. wait until VM becomes available
3. Stop nova-cell child process
4. delete VM
   nova delete vm_id or vm_name
5. stop neutron service
6. start child nova-cell process
7. start neutron service
8. delete VM again

VM is not deleted and will be listed in the nova list output.

$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| 9d7c9fb2-010f-4de6-975a-1a2de825155b | vm09 | ERROR  | -          | Running     | private=10.0.0.2 |
+--------------------------------------+------+--------+------------+-------------+------------------+

The following log message is logged on the n-child-cell screen:

2014-10-07 04:36:57.159 INFO nova.compute.api 
[req-11c20157-23ac-4892-9fdf-3e60201a9bb4 admin admin]
[instance: 77aabf6c-7b33-4c49-8061-eb9805214085] Instance is already in 
deleting state, ignoring this request

Note: VM never gets deleted.

** Affects: nova
 Importance: Undecided
 Assignee: Rajesh Tailor (rajesh-tailor)
 Status: New


** Tags: ntt

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (rajesh-tailor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378683

Title:
  nova-cell, cannot delete VM once deleting VM with failure in nova-
  compute

Status in OpenStack Compute (Nova):
  New

Bug description:
  Not able to delete the VM once a previous deletion attempt failed in nova-compute.

  Steps to reproduce:

  1. create VM
  2. wait until VM becomes available
  3. Stop nova-cell child process
  4. delete VM
 nova delete vm_id or vm_name
  5. stop neutron service
  6. start child nova-cell process
  7. start neutron service
  8. delete VM again

  VM is not deleted and will be listed in the nova list output.

  $ nova list
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks         |
  +--------------------------------------+------+--------+------------+-------------+------------------+
  | 9d7c9fb2-010f-4de6-975a-1a2de825155b | vm09 | ERROR  | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+------+--------+------------+-------------+------------------+

  The following log message is logged on the n-child-cell screen:

  2014-10-07 04:36:57.159 INFO nova.compute.api 
[req-11c20157-23ac-4892-9fdf-3e60201a9bb4 admin admin]
  [instance: 77aabf6c-7b33-4c49-8061-eb9805214085] Instance is already in 
deleting state, ignoring this request

  Note: VM never gets deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378683/+subscriptions



[Yahoo-eng-team] [Bug 1378689] [NEW] error when rebuilding an instance booted from volume

2014-10-08 Thread Liusheng
Public bug reported:

With the libvirt driver as the compute virt driver, when rebuilding an
instance booted from a bootable volume, the instance will be set to the
ERROR state, with the following error in the log:
libvirtError: Failed to terminate process 8804 with SIGKILL: Device or
resource busy

** Affects: nova
 Importance: Undecided
 Assignee: Liusheng (liusheng)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Liusheng (liusheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378689

Title:
  error when rebuilding an instance booted from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  With the libvirt driver as the compute virt driver, when rebuilding an
  instance booted from a bootable volume, the instance will be set to the
  ERROR state, with the following error in the log:
  libvirtError: Failed to terminate process 8804 with SIGKILL: Device or
  resource busy

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378689/+subscriptions



[Yahoo-eng-team] [Bug 1378661] Re: Angular js module load error during download keypair

2014-10-08 Thread Pradeep Kumar
*** This bug is a duplicate of bug 1359649 ***
https://bugs.launchpad.net/bugs/1359649

** This bug has been marked a duplicate of bug 1359649
   Level 1 and level 2 links doesn’t work when keypair is created

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378661

Title:
  Angular js module load error during download keypair

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description:

  When creating a keypair, the download screen is unresponsive at the
  navigation pane. On further investigation, I found the following error in
  the browser JavaScript debug console:

  Uncaught Error: [$injector:modulerr] Failed to instantiate module hz due to:
  Error: [$injector:modulerr] Failed to instantiate module hz.conf due to:
  Error: [$injector:nomod] Module 'hz.conf' is not available! You either 
misspelled the module name or forgot to load it. If registering a module ensure 
that you specify the dependencies as the second argument.
  http://errors.angularjs.org/1.2.1/$injector/nomod?p0=hz.conf
  at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
  at http://localhost/static/dashboard/js/33e6196225f6.js:783:59
  at ensure (http://localhost/static/dashboard/js/33e6196225f6.js:781:165)
  at module (http://localhost/static/dashboard/js/33e6196225f6.js:783:8)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:220
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
  at Array.forEach (native)
  
http://errors.angularjs.org/1.2.1/$injector/modulerr?p0=hz.conf&p1=Error%3A…s%2F33e6196225f6.js%3A858%3A269%0A%20%20%20%20at%20Array.forEach%20(native)
  at http://localhost/static/dashboard/js/33e6196225f6.js:676:8
  at http://localhost/static/dashboard/js/33e6196225f6.js:860:7
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at http://localhost/static/dashboard/js/33e6196225f6.js:858:269
  at Array.forEach (native)
  at forEach (http://localhost/static/dashboard/js/33e6196225f6.js:683:253)
  at loadModules 
(http://localhost/static/dashboard/js/33e6196225f6.js:858:80)
  at createInjector 
(http://localhost/static/dashboard/js/33e6196225f6.js:849:738)
  
http://errors.angularjs.org/1.2.1/$injector/modulerr?p0=hz&p1=Error%3A%20%5…%3A%2F%2Flocalhost%2Fstatic%2Fdashboard%2Fjs%2F33e6196225f6.js%3A849%3A738)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378661/+subscriptions



[Yahoo-eng-team] [Bug 1378705] Re: Neutron does not pick up existing physical device bridge

2014-10-08 Thread Tom Fifield
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378705

Title:
  Neutron does not pick up existing physical device bridge

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Manuals:
  New

Bug description:
  Hello

  Maybe this is a normal user story, but it is very inconvenient when I
  have a server with an already configured bridge and I have to turn off
  the network interface so that Neutron can create a new one.

  For example, I have br0 configured with an eth0 port for libvirtd, which
  hosts ten virtual machines. If I decide to switch to Neutron, I will have
  to bring all of them down, delete the bridge, start Neutron, and specify
  the new bridge in the libvirt network configuration XML. Such a process
  implies a long downtime, which is inconvenient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378705/+subscriptions



[Yahoo-eng-team] [Bug 1378732] [NEW] migrate_to_ml2 script doesn't work for Juno release

2014-10-08 Thread Xu Han Peng
Public bug reported:

The error looks like:

 Traceback (most recent call last):
   File "migrate_to_ml2.py", line 485, in <module>
     main()
   File "migrate_to_ml2.py", line 481, in main
     args.vxlan_udp_port)
   File "migrate_to_ml2.py", line 135, in __call__
     self.define_ml2_tables(metadata)
 AttributeError: 'MigrateOpenvswitchToMl2_Juno' object has no
 attribute 'define_ml2_tables'


This is caused by define_ml2_tables being a method of
BaseMigrateToMl2_IcehouseMixin but not of MigrateOpenvswitchToMl2_Juno. We
should make MigrateOpenvswitchToMl2_Juno inherit from
BaseMigrateToMl2_IcehouseMixin to solve this problem.
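A minimal sketch of the suggested fix, using stub classes (the real mixin defines the actual ML2 tables; "ml2 tables defined" is just a marker value):

```python
class BaseMigrateToMl2_IcehouseMixin(object):
    """Stub of the mixin that actually provides define_ml2_tables()."""
    def define_ml2_tables(self, metadata):
        return "ml2 tables defined"

# Before the fix, MigrateOpenvswitchToMl2_Juno did not list the mixin as
# a base class, so self.define_ml2_tables() raised AttributeError.
# Inheriting from the mixin makes the method available:
class MigrateOpenvswitchToMl2_Juno(BaseMigrateToMl2_IcehouseMixin):
    def __call__(self, metadata):
        return self.define_ml2_tables(metadata)

print(MigrateOpenvswitchToMl2_Juno()(None))  # ml2 tables defined
```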

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378732

Title:
  migrate_to_ml2 script doesn't work for Juno release

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The error looks like:

   Traceback (most recent call last):
     File "migrate_to_ml2.py", line 485, in <module>
       main()
     File "migrate_to_ml2.py", line 481, in main
       args.vxlan_udp_port)
     File "migrate_to_ml2.py", line 135, in __call__
       self.define_ml2_tables(metadata)
   AttributeError: 'MigrateOpenvswitchToMl2_Juno' object has no
   attribute 'define_ml2_tables'

  
  This is caused by define_ml2_tables being a method of
  BaseMigrateToMl2_IcehouseMixin but not of MigrateOpenvswitchToMl2_Juno. We
  should make MigrateOpenvswitchToMl2_Juno inherit from
  BaseMigrateToMl2_IcehouseMixin to solve this problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378732/+subscriptions



[Yahoo-eng-team] [Bug 1306559] Re: Fix python26 compatibility for RFCSysLogHandler

2014-10-08 Thread Thierry Carrez
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: cinder
Milestone: juno-rc2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1306559

Title:
  Fix python26 compatibility for RFCSysLogHandler

Status in Cinder:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The currently used pattern in
  https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines
  471-479) will fail on Python 2.6.x. To fix the broken Python 2.6.x
  compatibility, old-style explicit superclass method calls should be used
  instead.

  Here is an example of how to check this for Python v2.7 and v2.6:
  import logging.handlers
  print type(logging.handlers.SysLogHandler)
  print type(logging.Handler)

  Results would be:
  Python 2.7: <type 'type'>, so super() may be used for
  RFCSysLogHandler(logging.handlers.SysLogHandler)
  Python 2.6: <type 'classobj'>, so super() may *NOT* be used for
  RFCSysLogHandler(logging.handlers.SysLogHandler)
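A runnable sketch of the compatible pattern; the 'APP-NAME ' prefix is illustrative only, not the real handler's behavior:

```python
import logging
import logging.handlers

class RFCSysLogHandler(logging.handlers.SysLogHandler):
    def format(self, record):
        # Explicit superclass call instead of super(): on Python 2.6,
        # SysLogHandler is an old-style class, so super() would raise
        # TypeError; the explicit form works on 2.6, 2.7, and 3.x alike.
        msg = logging.handlers.SysLogHandler.format(self, record)
        return 'APP-NAME ' + msg  # illustrative prefix only

handler = RFCSysLogHandler()  # defaults to UDP toward localhost:514
record = logging.LogRecord('demo', logging.INFO, __file__, 1,
                           'hello syslog', None, None)
print(handler.format(record))  # APP-NAME hello syslog
```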

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1306559/+subscriptions



[Yahoo-eng-team] [Bug 1368036] Re: Missing metadata definition for graceful shutdown

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126794
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=cecc9497c158fbf43c26aa8943a41b4e6fcc5fed
Submitter: Jenkins
Branch: proposed/juno

commit cecc9497c158fbf43c26aa8943a41b4e6fcc5fed
Author: Travis Tripp travis.tr...@hp.com
Date:   Fri Sep 12 11:55:52 2014 -0600

Add missing metadefs for shutdown behavior

The following Nova patch adds support for graceful shutdown
of a guest VM and allows setting timeout properties on images.
The properties should be updated in the Metadata Definitions catalog.

https://review.openstack.org/#/c/68942/

Change-Id: I58145d9d0114b3932b63263ea123c4662146d14b
Closes-bug: 1368036
(cherry picked from commit 5fcb3aa2e35e9af17cb8be9e24c6613626036f2b)


** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1368036

Title:
  Missing metadata definition for graceful shutdown

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The following Nova patch adds support for graceful shutdown of a guest
  VM and allows setting timeout properties on images.  The properties
  should be updated in the Metadata Definitions catalog.

  Please note that the related spec does not seem to match the code in
  terms of the properties to set, so this needs to be investigated a bit
  to determine the correct properties to set.

  https://review.openstack.org/#/c/68942/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1368036/+subscriptions



[Yahoo-eng-team] [Bug 1378756] [NEW] set_context in L3NatTestCaseMixin.floatingip_with_assoc not work

2014-10-08 Thread Wei Wang
Public bug reported:

The following code in
neutron.test.unit.L3NatTestCaseMixin.floatingip_with_assoc receives
set_context from the caller but does not use it.

@contextlib.contextmanager
def floatingip_with_assoc(self, port_id=None, fmt=None, fixed_ip=None,
                          set_context=False):  # <== we get set_context here
    with self.subnet(cidr='11.0.0.0/24') as public_sub:
        self._set_net_external(public_sub['subnet']['network_id'])
        private_port = None
        if port_id:
            private_port = self._show('ports', port_id)
        with test_db_plugin.optional_ctx(private_port,
                                         self.port) as private_port:
            with self.router() as r:
                sid = private_port['port']['fixed_ips'][0]['subnet_id']
                private_sub = {'subnet': {'id': sid}}
                floatingip = None

                self._add_external_gateway_to_router(
                    r['router']['id'],
                    public_sub['subnet']['network_id'])
                self._router_interface_action(
                    'add', r['router']['id'],
                    private_sub['subnet']['id'], None)

                floatingip = self._make_floatingip(
                    fmt or self.fmt,
                    public_sub['subnet']['network_id'],
                    port_id=private_port['port']['id'],
                    fixed_ip=fixed_ip,
                    set_context=False)  # <== but we don't really use it
                yield floatingip
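The fix under discussion can be sketched with fakes (none of these names are the real test mixin's internals): forward the received set_context flag instead of hardcoding False.

```python
calls = []

def fake_make_floatingip(fmt, network_id, port_id=None, fixed_ip=None,
                         set_context=False):
    """Stand-in for self._make_floatingip, recording the flag it receives."""
    calls.append(set_context)
    return {'floatingip': {'id': 'fake-id'}}

def floatingip_with_assoc_fixed(set_context=False):
    # ... router/subnet setup elided ...
    # Fixed: forward the caller's set_context instead of a hardcoded False.
    return fake_make_floatingip('json', 'net-id', port_id='port-id',
                                fixed_ip=None, set_context=set_context)

floatingip_with_assoc_fixed(set_context=True)
print(calls)  # [True]
```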

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  We have following code in
  neutron.test.unit.L3NatTestCaseMixin.floatingip_with_assoc get
  set_context from external but not use it.
  
- @contextlib.contextmanager
- def floatingip_with_assoc(self, port_id=None, fmt=None, fixed_ip=None,
-   set_context=False): 
 We get set_context here
- with self.subnet(cidr='11.0.0.0/24') as public_sub:
- self._set_net_external(public_sub['subnet']['network_id'])
- private_port = None
- if port_id:
- private_port = self._show('ports', port_id)
- with test_db_plugin.optional_ctx(private_port,
-  self.port) as private_port:
- with self.router() as r:
- sid = private_port['port']['fixed_ips'][0]['subnet_id']
- private_sub = {'subnet': {'id': sid}}
- floatingip = None
  
- self._add_external_gateway_to_router(
- r['router']['id'],
- public_sub['subnet']['network_id'])
- self._router_interface_action(
- 'add', r['router']['id'],
- private_sub['subnet']['id'], None)
+ @contextlib.contextmanager
+ def floatingip_with_assoc(self, port_id=None, fmt=None, fixed_ip=None,
+   set_context=False): 
 We get set_context here
+ with self.subnet(cidr='11.0.0.0/24') as public_sub:
+ self._set_net_external(public_sub['subnet']['network_id'])
+ private_port = None
+ if port_id:
+ private_port = self._show('ports', port_id)
+ with test_db_plugin.optional_ctx(private_port,
+  self.port) as private_port:
+ with self.router() as r:
+ sid = private_port['port']['fixed_ips'][0]['subnet_id']
+ private_sub = {'subnet': {'id': sid}}
+ floatingip = None
  
- floatingip = self._make_floatingip(
- fmt or self.fmt,
- public_sub['subnet']['network_id'],
- port_id=private_port['port']['id'],
- fixed_ip=fixed_ip,
- set_context=False)
  But we don't really use it
- yield floatingip
+ self._add_external_gateway_to_router(
+ r['router']['id'],
+ public_sub['subnet']['network_id'])
+ self._router_interface_action(
+ 'add', r['router']['id'],
+ private_sub['subnet']['id'], None)
+ 
+ floatingip 

[Yahoo-eng-team] [Bug 1378766] [NEW] horizon.d3piechart.js is failing jshint

2014-10-08 Thread John Davidge
Public bug reported:

The current master of horizon.d3piechart.js is causing jshint to fail
with the following error:

horizon/static/horizon/js/horizon.d3piechart.js: line 240, col 13, It's
not necessary to initialize 'item' to 'undefined'.

** Affects: horizon
 Importance: Undecided
 Assignee: John Davidge (john-davidge)
 Status: New


** Tags: low-hanging-fruit

** Changed in: horizon
 Assignee: (unassigned) => John Davidge (john-davidge)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378766

Title:
  horizon.d3piechart.js is failing jshint

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The current master of horizon.d3piechart.js is causing jshint to fail
  with the following error:

  horizon/static/horizon/js/horizon.d3piechart.js: line 240, col 13,
  It's not necessary to initialize 'item' to 'undefined'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378766/+subscriptions



[Yahoo-eng-team] [Bug 1378783] [NEW] IPv6 namespaces are not updated upon router interface deletion

2014-10-08 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
If the namespace contains both IPv4 and IPv6 interfaces, they will not be
deleted when the interfaces are detached from the router.

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=

Steps to Reproduce:
===
1. Create a neutron Router
2. Attach an IPv6 interface
3. Attach an IPv4 interface
4. Delete both interfaces
5. Check if interfaces were deleted from the router namespace:
   # ip netns exec qrouter-id ifconfig | grep inet

Actual results:
===
Interfaces were not deleted.

Expected results:
=
Interfaces should be deleted.

Additional info:

Tested with RHEL7

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378783

Title:
  IPv6 namespaces are not updated upon router interface deletion

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Description of problem:
  ===
  In case the namespace contains both IPv4 and IPv6 interfaces, they will not 
be deleted when the interfaces are detached from the router.

  Version-Release number of selected component (if applicable):
  =
  openstack-neutron-2014.2-0.7.b3

  How reproducible:
  =

  Steps to Reproduce:
  ===
  1. Create a neutron Router
  2. Attach an IPv6 interface
  3. Attach an IPv4 interface
  4. Delete both interfaces
  5. Check if interfaces were deleted from the router namespace:
 # ip netns exec qrouter-id ifconfig | grep inet

  Actual results:
  ===
  Interfaces were not deleted.

  Expected results:
  =
  Interfaces should be deleted.

  Additional info:
  
  Tested with RHEL7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378783/+subscriptions



[Yahoo-eng-team] [Bug 1378786] [NEW] Update rpc version aliases for juno

2014-10-08 Thread Russell Bryant
Public bug reported:

Update all of the rpc client API classes to include a version alias
for the latest version implemented in Juno.  This alias is needed when
doing rolling upgrades from Juno to Kilo.  With this in place, you can
ensure all services only send messages that both Juno and Kilo will
understand.
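For illustration, the version-alias mechanism can be sketched as follows. This is a hedged sketch of the pattern, not nova's actual code, and the version numbers below are illustrative rather than the real Icehouse/Juno values:

```python
# Sketch of the RPC version-alias pattern used by nova's rpc client classes.
# The alias values below are illustrative, not the actual release numbers.

class ComputeAPI(object):
    # Map a release name to the newest RPC API version that release speaks.
    VERSION_ALIASES = {
        'icehouse': '3.23',
        'juno': '3.35',
    }

    def __init__(self, upgrade_level=None):
        # With e.g. [upgrade_levels] compute=juno configured, a Kilo
        # service caps itself to messages a Juno peer can still decode;
        # an explicit version string is passed through unchanged.
        self.version_cap = self.VERSION_ALIASES.get(upgrade_level,
                                                    upgrade_level)

api = ComputeAPI(upgrade_level='juno')
print(api.version_cap)  # 3.35
```

Operators then only need to name the release they are upgrading from, rather than track raw RPC version numbers.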

** Affects: nova
 Importance: Medium
 Assignee: Russell Bryant (russellb)
 Status: Fix Committed

** Affects: nova/juno
 Importance: Medium
 Assignee: Russell Bryant (russellb)
 Status: In Progress

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Russell Bryant (russellb)

** Changed in: nova
   Status: New => Fix Committed

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
   Status: New => In Progress

** Changed in: nova/juno
 Assignee: (unassigned) => Russell Bryant (russellb)

** Changed in: nova/juno
 Milestone: None => juno-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378786

Title:
  Update rpc version aliases for juno

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  Update all of the rpc client API classes to include a version alias
  for the latest version implemented in Juno.  This alias is needed when
  doing rolling upgrades from Juno to Kilo.  With this in place, you can
  ensure all services only send messages that both Juno and Kilo will
  understand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378786/+subscriptions



[Yahoo-eng-team] [Bug 1378786] Re: Update rpc version aliases for juno

2014-10-08 Thread Thierry Carrez
** Changed in: nova
 Milestone: None => juno-rc2

** No longer affects: nova/juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378786

Title:
  Update rpc version aliases for juno

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Update all of the rpc client API classes to include a version alias
  for the latest version implemented in Juno.  This alias is needed when
  doing rolling upgrades from Juno to Kilo.  With this in place, you can
  ensure all services only send messages that both Juno and Kilo will
  understand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378786/+subscriptions



[Yahoo-eng-team] [Bug 1376368] Re: nova.crypto.revoke_cert always raises ProjectNotFound

2014-10-08 Thread Russell Bryant
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => Confirmed

** Changed in: nova/juno
   Importance: Undecided => Critical

** Changed in: nova/juno
 Assignee: (unassigned) => Russell Bryant (russellb)

** Changed in: nova/juno
 Milestone: None => juno-rc2

** Changed in: nova
 Assignee: Davanum Srinivas (DIMS) (dims-v) => Russell Bryant (russellb)

** Changed in: nova
 Milestone: juno-rc2 => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376368

Title:
  nova.crypto.revoke_cert always raises ProjectNotFound

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) juno series:
  Confirmed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  (Marked this as a security issue for now, since cert revocation not
  working is pretty serious)

  https://github.com/openstack/nova/blob/master/nova/crypto.py#L277-L278

  os.chdir *always* returns None, which means that path is always taken
  and the cert is never revoked
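The failure mode is easy to demonstrate: os.chdir() returns None on success, so guarding on its return value inverts the logic. A minimal sketch with hypothetical helper names, not the actual nova.crypto code:

```python
import os
import tempfile

def broken_revoke(project_folder):
    # Buggy pattern from the report: os.chdir() returns None even on
    # success, so this branch is *always* taken and revocation never runs.
    if not os.chdir(project_folder):
        return 'ProjectNotFound'
    return 'revoked'

def fixed_revoke(project_folder):
    # Correct pattern: chdir signals failure with an exception,
    # not a return value.
    try:
        os.chdir(project_folder)
    except OSError:
        return 'ProjectNotFound'
    return 'revoked'

start = os.getcwd()
with tempfile.TemporaryDirectory() as d:
    print(broken_revoke(d))  # ProjectNotFound, although chdir succeeded
    os.chdir(start)
    print(fixed_revoke(d))   # revoked
    os.chdir(start)
```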

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376368/+subscriptions



[Yahoo-eng-team] [Bug 1376368] Re: nova.crypto.revoke_cert always raises ProjectNotFound

2014-10-08 Thread Thierry Carrez
** Changed in: nova
 Milestone: kilo-1 => juno-rc2

** No longer affects: nova/juno

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376368

Title:
  nova.crypto.revoke_cert always raises ProjectNotFound

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  (Marked this as a security issue for now, since cert revocation not
  working is pretty serious)

  https://github.com/openstack/nova/blob/master/nova/crypto.py#L277-L278

  os.chdir *always* returns None, which means that path is always taken
  and the cert is never revoked

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376368/+subscriptions



[Yahoo-eng-team] [Bug 1377981] Re: Missing fix for ssh_execute (Exceptions thrown may contain passwords) (CVE-2014-7230, CVE-2014-7231)

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126594
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ee3594072a7ef1c3f5661021fb31118069cbd646
Submitter: Jenkins
Branch:proposed/juno

commit ee3594072a7ef1c3f5661021fb31118069cbd646
Author: Tristan Cacqueray tristan.cacque...@enovance.com
Date:   Fri Oct 3 19:53:42 2014 +

Mask passwords in exceptions and error messages

When a ProcessExecutionError is thrown by processutils.ssh_execute(),
the exception may contain information such as password. Upstream
applications that just log the message (as several appear to do)
could inadvertently expose these passwords to a user with read access to
the log files. It is therefore considered prudent to invoke
strutils.mask_password() on the command, stdout and stderr in the
exception. A test case has been added (to oslo-incubator) in order to
ensure that all three are properly masked.

An earlier commit (853d8f9897f8563851441108a9be26b10908c076) failed
to address ssh_execute(). This change set addresses ssh_execute.

OSSA is aware of this change request.

Change-Id: Ie0caf32469126dd9feb44867adf27acb6e383958
Closes-Bug: #1377981
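For illustration, the masking idea can be sketched with a much-simplified stand-in for strutils.mask_password() — the regex here covers only one key spelling, while the real oslo helper handles many key names and quoting styles:

```python
import re

# Simplified stand-in for oslo's strutils.mask_password(); the real
# helper covers many more key spellings and formats than this one regex.
_PASS_RE = re.compile(r"(--password(?:\s+|=))(\S+)")

def mask_password(message, secret='***'):
    return _PASS_RE.sub(r'\1' + secret, message)

class ProcessExecutionError(Exception):
    def __init__(self, cmd, stdout, stderr):
        # Mask command, stdout and stderr before the message can reach
        # a log file via upstream code that just logs the exception.
        msg = 'Command: %s | Stdout: %s | Stderr: %s' % (cmd, stdout, stderr)
        super().__init__(mask_password(msg))

err = ProcessExecutionError('mysql --password=s3cret -h db', '', 'exit 1')
print(err)  # Command: mysql --password=*** -h db | Stdout:  | Stderr: exit 1
```

Masking inside the exception constructor means even callers that naively log the whole exception cannot leak the secret.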


** Changed in: nova
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377981

Title:
  Missing fix for ssh_execute (Exceptions thrown may contain passwords)
  (CVE-2014-7230, CVE-2014-7231)

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The Oslo library incubator:
  Fix Released
Status in oslo-incubator icehouse series:
  New
Status in OpenStack Security Advisories:
  In Progress

Bug description:
  Former bugs:
https://bugs.launchpad.net/ossa/+bug/1343604
https://bugs.launchpad.net/ossa/+bug/1345233

  The ssh_execute method is still affected in Cinder and Nova Icehouse release.
  It is prone to password leak if:
  - passwords are used on the command line
  - execution fail
  - calling code catch and log the exception

  The missing fix from oslo-incubator to be merged is:
  6a60f84258c2be3391541dbe02e30b8e836f6c22

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1377981/+subscriptions



[Yahoo-eng-team] [Bug 1357055] Re: Race to delete shared subnet in Tempest neutron full jobs

2014-10-08 Thread Salvatore Orlando
I came to the same conclusions as Alex: the servers are not deleted
hence the error.

However, the logging which Alex is asking for is already there.
Indeed here are the delete operations on teardown for a failing test:

salvatore@trustillo:~$ cat tempest.txt.gz | grep -i 
ServerRescueNegativeTestJSON.*tearDownClass.*DELETE
2014-10-07 17:49:04.444 25908 INFO tempest.common.rest_client 
[req-75c758b3-d8cb-48d6-9cb6-3670147aca41 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/829473406bb545c895a5cd0320624812/os-volumes/ffc6-ba25-413d-8ff1-839d3643299d
 0.135s
2014-10-07 17:49:04.444 25908 DEBUG tempest.common.rest_client 
[req-75c758b3-d8cb-48d6-9cb6-3670147aca41 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/829473406bb545c895a5cd0320624812/os-volumes/ffc6-ba25-413d-8ff1-839d3643299d
 0.135s
2014-10-07 17:52:21.452 25908 INFO tempest.common.rest_client 
[req-d0fa5615-9e64-4faa-bd8d-2ad1ac6afb53 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/0ecd1539-70af-4500-aa4d-9e131fa1fffc 0.238s
2014-10-07 17:52:21.452 25908 DEBUG tempest.common.rest_client 
[req-d0fa5615-9e64-4faa-bd8d-2ad1ac6afb53 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/0ecd1539-70af-4500-aa4d-9e131fa1fffc 0.238s
2014-10-07 17:52:21.513 25908 INFO tempest.common.rest_client 
[req-89562c45-7448-41bb-8e3e-0beec8460aab None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 409 DELETE 
http://127.0.0.1:9696/v2.0/subnets/9614d778-66b3-4b81-83fc-f7a47602ceb2 0.060s
2014-10-07 17:52:21.514 25908 DEBUG tempest.common.rest_client 
[req-89562c45-7448-41bb-8e3e-0beec8460aab None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 409 DELETE 
http://127.0.0.1:9696/v2.0/subnets/9614d778-66b3-4b81-83fc-f7a47602ceb2 0.060s


No DELETE server command is specified.
Instead for a successful test the two servers are deleted.

2014-09-26 11:48:05.532 7755 INFO tempest.common.rest_client 
[req-6d9072aa-dbcb-4398-b4c4-46aeb2140e4b None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 202 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/os-volumes/a24f4dc2-bfb5-4a60-be05-9051f08cc447
 0.086s
2014-09-26 11:50:06.733 7755 INFO tempest.common.rest_client 
[req-f3752c9f-8de5-4bde-98c9-879c5a37ff44 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/servers/1d754b6e-128f-4b42-88ab-9dbefedd887f
 0.155s
2014-09-26 11:50:06.882 7755 INFO tempest.common.rest_client 
[req-dcb05efc-229a-4f40-ac67-2e95a80373c0 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:8774/v2/0cdef32ce1b746fa957a013a3638b3ec/servers/d26d1144-58d2-4900-93af-bf7fecdd7a60
 0.148s
2014-09-26 11:50:09.531 7755 INFO tempest.common.rest_client 
[req-73246abe-5e0e-4d16-88dc-75ca05593b2c None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/755eeb0d-6eb8-4acd-b36d-91bd4787cf4e 0.180s
2014-09-26 11:50:09.583 7755 INFO tempest.common.rest_client 
[req-549a5b98-dbb1-43ae-b4ce-f8182ebc10e2 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/subnets/999c8033-7684-4fc0-a09e-1ccb70196278 0.051s
2014-09-26 11:50:09.662 7755 INFO tempest.common.rest_client 
[req-c74637c0-f985-42c4-b5b0-f980bc431858 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/networks/ef773f01-7b53-4207-bc75-120563f36d7f 0.078s
2014-09-26 11:50:09.809 7755 INFO tempest.common.rest_client [-] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:35357/v2.0/users/fe26c8cb709644e1862d1b69c63b802b 0.146s
2014-09-26 11:50:09.877 7755 INFO tempest.common.rest_client 
[req-ea281fc3-70e2-487d-977b-7aa65db86722 None] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:9696/v2.0/security-groups/10b16895-5b18-48c5-88e4-9c384eb9d1b8 
0.043s
2014-09-26 11:50:10.016 7755 INFO tempest.common.rest_client [-] Request 
(ServerRescueNegativeTestJSON:tearDownClass): 204 DELETE 
http://127.0.0.1:35357/v2.0/tenants/0cdef32ce1b746fa957a013a3638b3ec 0.137s

This happens consistently.
Also note that in the case of the failing tests the same events are logged both 
at DEBUG and INFO level. This might indicate that some concurrency problem 
among test runners is installing an additional log handler, but I have 
no idea whether this is even possible.
What is probably happening is that the servers class variable gets reset, 
and therefore the servers are not removed on resource_cleanup.

However, this still has to be proved. Further logging might be added to
this end, which could help validate this hypothesis (I could
not find any clue through static code and log 
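Since this is still a hypothesis, here is a purely hypothetical sketch (not actual tempest code) of how rebinding the "servers" class attribute mid-run could produce exactly these logs: cleanup sees an empty list even though a server was registered, so no DELETE server request is ever issued:

```python
# Hypothetical sketch of how rebinding the "servers" class attribute
# can leave resource cleanup with nothing to delete.

class BaseTestCase:
    servers = []

    @classmethod
    def resource_cleanup(cls):
        # Issues a DELETE for every tracked server.
        return ['DELETE %s' % s for s in cls.servers]

class RescueTest(BaseTestCase):
    @classmethod
    def resource_setup(cls):
        # Mutates the list shared with BaseTestCase.
        cls.servers.append('server-1')

RescueTest.resource_setup()
RescueTest.servers = []  # something rebinds the attribute mid-run...
print(RescueTest.resource_cleanup())  # [] -- no DELETE server call is issued
```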

[Yahoo-eng-team] [Bug 1378786] Re: Update rpc version aliases for juno

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126712
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6ed57972093835f449ad645b3783bbb8b3c4245e
Submitter: Jenkins
Branch:proposed/juno

commit 6ed57972093835f449ad645b3783bbb8b3c4245e
Author: Russell Bryant rbry...@redhat.com
Date:   Fri Oct 3 16:41:03 2014 -0400

Update rpc version aliases for juno

Update all of the rpc client API classes to include a version alias
for the latest version implemented in Juno.  This alias is needed when
doing rolling upgrades from Juno to Kilo.  With this in place, you can
ensure all services only send messages that both Juno and Kilo will
understand.

Closes-bug: #1378786
Change-Id: Ia81538130bf8530b70b5f55c7a3d565903ff54b4
(cherry picked from commit f98d725103c53e767a1cddb0b7e2c3822309db17)


** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378786

Title:
  Update rpc version aliases for juno

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Update all of the rpc client API classes to include a version alias
  for the latest version implemented in Juno.  This alias is needed when
  doing rolling upgrades from Juno to Kilo.  With this in place, you can
  ensure all services only send messages that both Juno and Kilo will
  understand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378786/+subscriptions



[Yahoo-eng-team] [Bug 1364814] Re: Neutron multiple api workers can't send cast message to agent when use zeromq

2014-10-08 Thread Elena Ezhova
As I found out, the problem is in the zmq context, which is a singleton and thus is 
created only once. [1] This leads to problems when there is more than one 
process working with it. [2]
The solution is to make the zmq context thread-local by using the threading.local 
class. [3]

I have a working fix that I will upload shortly.

[1] 
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_drivers/impl_zmq.py#L813
[2] http://lists.zeromq.org/pipermail/zeromq-dev/2011-December/014900.html
[3] https://docs.python.org/2/library/threading.html#threading.local
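The thread-local approach from [3] can be sketched like this. FakeContext is a stand-in for the real zmq.Context, and this is a simplified illustration of the fix, not the actual impl_zmq change:

```python
import threading

class FakeContext:
    """Stand-in for zmq.Context, which must not be shared across
    workers once more than one of them uses it."""

_local = threading.local()

def get_context():
    # One context per thread/worker instead of a module-level singleton.
    if not hasattr(_local, 'ctx'):
        _local.ctx = FakeContext()
    return _local.ctx

a = get_context()
b = get_context()
print(a is b)  # True: the same thread reuses its context

result = []
t = threading.Thread(target=lambda: result.append(get_context()))
t.start()
t.join()
print(result[0] is a)  # False: a new thread gets a fresh context
```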

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Confirmed => Opinion

** Changed in: oslo.messaging
   Status: New => Confirmed

** Changed in: oslo.messaging
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364814

Title:
  Neutron multiple api workers can't send cast message to agent when use
  zeromq

Status in OpenStack Neutron (virtual network service):
  Opinion
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  When I set api_workers > 0 in the Neutron configuration and delete or add a 
router interface, the Neutron L3 agent can't receive messages from Neutron Server.
  In this situation, the L3 agent can still cast its state reports to Neutron 
Server, and it can receive messages sent by Neutron Server via the call method.

  Obviously, Neutron Server can use the cast method to send messages to
  the L3 agent, so why does casting routers_updated fail? This also
  occurs with other Neutron agents.

  Then I made a test: I wrote some code in the Neutron Server startup
  path and in l3_router_plugins that sends a periodic cast message to
  the L3 agent directly. The L3 agent's rpc-zmq-receiver log file shows
  that it receives the messages from Neutron Server.

  By the way, everything works well when api_workers = 0.

  Test environment:
  neutron(master) + oslo.messaging(master) + zeromq

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364814/+subscriptions



[Yahoo-eng-team] [Bug 1378855] [NEW] juno capstone migration is missing

2014-10-08 Thread Mark McClain
Public bug reported:

The Juno capstone migration is missing.

** Affects: neutron
 Importance: Critical
 Assignee: Mark McClain (markmcclain)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378855

Title:
  juno capstone migration is missing

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The Juno capstone migration is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378855/+subscriptions



[Yahoo-eng-team] [Bug 1367892] Re: delete port fails with RouterNotHostedByL3Agent exception

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126565
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=75f34fbbd930a143ed2c4b868f33c117e467e98e
Submitter: Jenkins
Branch:proposed/juno

commit 75f34fbbd930a143ed2c4b868f33c117e467e98e
Author: Ed Bak ed.b...@hp.com
Date:   Mon Sep 29 14:15:52 2014 -0600

Don't fail when trying to unbind a router

If a router is already unbound from an l3 agent, don't fail.  Log
the condition and go on.  This is harmless since it can happen
due to a delete race condition between multiple neutron-server
processes.  One delete request can determine that it needs to
unbind the router.  A second process may also determine that it
needs to unbind the router.  The exception thrown will result
in a port delete failure and cause nova to mark a deleted instance
as ERROR.

Change-Id: Ia667ea77a0a483deff8acfdcf90ca84cd3adf44f
Closes-Bug: 1367892


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367892

Title:
  delete port fails with RouterNotHostedByL3Agent exception

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  When deleting a vm, port_delete sometimes fails with a
  RouterNotHostedByL3Agent exception.  This error is created by a script
  which boots a vm, associates a floating ip, tests that the vm is
  pingable, disassociates the fip and then deletes the vm.  The
  following stack trace has been seen multiple times.

  2014-09-09 11:55:59 7648 DEBUG neutronclient.v2_0.client 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Error message: {NeutronError: {message: 
The router 0ffc5634-d7ff-4bc7-8dca-cbdb10414924 is not hosted by L3 agent 
35f71627-3c41-4226-96dd-15faa6ec44c3., type: RouterNotHostedByL3Agent, 
detail: }} _handle_fault_response 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py:1202
  2014-09-09 11:55:59 7648 ERROR nova.network.neutronv2.api 
[req-16883a09-7ec6-4159-9580-9cfa1880f786 73ae929bd62c4eddbe2f38a709265f2b 
3d4668d03b5e4ac7b316aac9ff88e2db] Failed to delete neutron port 
41b8e31b-f459-4159-9311-d8701885f43a
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api Traceback (most 
recent call last):
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py,
 line 448, in deallocate_for_instance
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
neutron.delete_port(port)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 101, in with_params
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api ret = 
self.function(instance, *args, **kwargs)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 328, in delete_port
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api return 
self.delete(self.port_path % (port))
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1311, in delete
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
headers=headers, params=params)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1300, in retry_request
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
headers=headers, params=params)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1243, in do_request
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
self._handle_fault_response(status_code, replybody)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 1211, in _handle_fault_response
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
exception_handler_v20(status_code, des_error_body)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api File 
/opt/stack/venvs/nova/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py,
 line 68, in exception_handler_v20
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api 
status_code=status_code)
  2014-09-09 11:55:59.153 7648 TRACE nova.network.neutronv2.api Conflict: The 
router 0ffc5634-d7ff-4bc7-8dca-cbdb10414924 is not hosted by L3 agent 
35f71627-3c41-4226-96dd-15faa6ec44c3.

To manage 

[Yahoo-eng-team] [Bug 1378874] [NEW] ca-cert support in CentOS

2014-10-08 Thread jaxxstorm
Public bug reported:

According to the source, adding a ca-cert is only supported on Ubuntu &
Debian:

distros = ['ubuntu', 'debian']

This function should be added to CentOS too, so that adding CA certs can
be done on CentOS and other RPM-based distros.
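In cloud-init each config module advertises its supported distros via a module-level `distros` list. A hedged sketch of what the requested change could look like — the RHEL-family paths below are illustrative of the differences a port would need to handle (RHEL-family systems also use update-ca-trust rather than update-ca-certificates):

```python
# Sketch of extending a cloud-init config module's supported distros.
# The RPM-family paths are illustrative assumptions, not cloud-init code.
distros = ['ubuntu', 'debian', 'rhel', 'centos', 'fedora']

CA_CERT_DIRS = {
    'debian': '/usr/local/share/ca-certificates/',
    'ubuntu': '/usr/local/share/ca-certificates/',
    'rhel': '/etc/pki/ca-trust/source/anchors/',
    'centos': '/etc/pki/ca-trust/source/anchors/',
    'fedora': '/etc/pki/ca-trust/source/anchors/',
}

def cert_dir_for(distro):
    # Refuse distros the module does not declare support for.
    if distro not in distros:
        raise ValueError('ca-certs not supported on %s' % distro)
    return CA_CERT_DIRS[distro]

print(cert_dir_for('centos'))  # /etc/pki/ca-trust/source/anchors/
```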

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1378874

Title:
  ca-cert support in CentOS

Status in Init scripts for use on cloud images:
  New

Bug description:
  According to the source, adding a ca-cert is only supported on Ubuntu &
  Debian:

  distros = ['ubuntu', 'debian']

  This function should be added to CentOS too, so that adding CA certs
  can be done on CentOS and other RPM-based distros.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1378874/+subscriptions



[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-10-08 Thread Alexander Gubanov
** Changed in: mos
   Status: Fix Committed => Fix Released

** Changed in: mos/5.1.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Released
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/conductor/manager.py, line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/conductor/manager.py, line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/api.py, line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/api.py, line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/rpcapi.py, line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py, line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/transport.py, line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions



[Yahoo-eng-team] [Bug 1378895] [NEW] Router details page structure is inconsistent with other detail pages

2014-10-08 Thread Aaron Sahlin
Public bug reported:

The Router details page has an overview section at the top, with the
Interfaces table displayed below it in a tab group. This is inconsistent
with other detail pages. (A screenshot showing the differences is
attached.)

Question 1.   Why is Interfaces displayed as part of a tab group? Should it 
just be a table (like Network Details)?
Question 2.   If Interfaces needs to be part of a tab group, then either:
a.  Overview should be on its own tab, with Interfaces as 
the second tab, and the tab group should appear at the top (like the 
Instance details page), or
b.  Rename the single tab to Overview and include the 
overview detail on the Interfaces tab.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: RouterDetailsPageInconsistent.png
   
https://bugs.launchpad.net/bugs/1378895/+attachment/4228586/+files/RouterDetailsPageInconsistent.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378895

Title:
  Router details page structure is inconsistent with other detail pages

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Router details page has an overview section at the top, with the
  Interfaces table displayed below it in a tab group. This is
  inconsistent with other detail pages. (A screenshot showing the
  differences is attached.)

  Question 1.   Why is Interfaces displayed as part of a tab group? Should it 
just be a table (like Network Details)?
  Question 2.   If Interfaces needs to be part of a tab group, then either:
  a.  Overview should be on its own tab, with Interfaces as 
the second tab, and the tab group should appear at the top (like the 
Instance details page), or
  b.  Rename the single tab to Overview and include the 
overview detail on the Interfaces tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378904] [NEW] renaming availability zone doesn't modify host's availability zone

2014-10-08 Thread Guillaume Winter
Public bug reported:

Hi,

After renaming our availability zones via Horizon Dashboard, we couldn't
migrate any old instance anymore, the scheduler returning "No valid
host found".

After searching, we found in the nova DB `instances` table, the
availability_zone field contains the name of the availability zone,
instead of the ID ( or maybe it is intentional ;) ).

So renaming AZ leaves the hosts created prior to this rename orphan and
the scheduler cannot find any valid host for them...

Our openstack install is on debian wheezy, with the icehouse official
repository from archive.gplhost.com/debian/, up to date.

If you need any more infos, I'd be glad to help.

Cheers
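A toy sketch of the failure mode described above (all names here are
made up for illustration): because instances record the zone *name*
rather than a stable ID, renaming the zone leaves existing instances
pointing at a zone no host belongs to anymore.

```python
# Hypothetical illustration: instances record the AZ *name*, not a stable ID.
instances = [{"id": "vm-1", "availability_zone": "zone-old"}]
aggregates = {"agg-1": {"name": "zone-old", "hosts": ["compute1"]}}

# An operator renames the zone via the dashboard; instances keep the old name.
aggregates["agg-1"]["name"] = "zone-new"

# The scheduler now finds no host serving the instance's recorded zone.
valid_hosts = [host
               for agg in aggregates.values()
               if agg["name"] == instances[0]["availability_zone"]
               for host in agg["hosts"]]
print(valid_hosts)  # -> [] ("No valid host found")
```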

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378904

Title:
  renaming availability zone doesn't modify host's availability zone

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi,

  After renaming our availability zones via Horizon Dashboard, we
  couldn't migrate any old instance anymore, the scheduler returning
  "No valid host found".

  After searching, we found in the nova DB `instances` table, the
  availability_zone field contains the name of the availability zone,
  instead of the ID ( or maybe it is intentional ;) ).

  So renaming AZ leaves the hosts created prior to this rename orphan
  and the scheduler cannot find any valid host for them...

  Our openstack install is on debian wheezy, with the icehouse
  official repository from archive.gplhost.com/debian/, up to date.

  If you need any more infos, I'd be glad to help.

  Cheers

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378903] [NEW] Xen snapshot uploads can fail without retry under retryable circumstances

2014-10-08 Thread Christopher Lefelhocz
Public bug reported:

If a glance server is completely down, the xen server taking a snapshot
will fail and report back as a non-retryable exception.  This is not
correct and the compute node should really go to the next server in the
list and retry.

** Affects: nova
 Importance: Undecided
 Assignee: Christopher Lefelhocz (christopher-lefelhoc)
 Status: New


** Tags: xenserver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378903

Title:
  Xen snapshot uploads can fail without retry under retryable
  circumstances

Status in OpenStack Compute (Nova):
  New

Bug description:
  If a glance server is completely down, the xen server taking a
  snapshot will fail and report back as a non-retryable exception.
  This is not correct and the compute node should really go to the next
  server in the list and retry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378907] [NEW] Lots of text output in the unit test results

2014-10-08 Thread Julie Pichon
Public bug reported:

Running the unit tests on master with a fresh venv results in a ton of
noise. The tests don't fail but we should have a clean output.

Highlights include:

ConnectionFailed: Connection to neutron failed: ('Connection aborted.',
gaierror(-2, 'Name or service not known'))

  File /home/jpichon/devel/horizon/horizon/exceptions.py, line 326, in handle
raise Http302(redirect)
Http302

ConnectionError: ('Connection aborted.', gaierror(-2, 'Name or service
not known'))

Error while checking action permissions.
Traceback (most recent call last):
  File /home/jpichon/devel/horizon/horizon/tables/base.py, line 1236, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File /home/jpichon/devel/horizon/horizon/tables/actions.py, line 137, in 
_allowed
return self.allowed(request, datum)
  File 
/home/jpichon/devel/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py,
 line 46, in allowed
if (snapshot._volume and
  File /home/jpichon/devel/horizon/openstack_dashboard/api/base.py, line 81, 
in __getattribute__
return object.__getattribute__(self, attr)
AttributeError: 'VolumeSnapshot' object has no attribute '_volume'

DEBUG:cinderclient.client:Connection error: ('Connection aborted.',
gaierror(-2, 'Name or service not known'))

But there really is a lot...

This doesn't seem to be happening on the juno-rc1 tag.

** Affects: horizon
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1378907

Title:
  Lots of text output in the unit test results

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Running the unit tests on master with a fresh venv results in a ton of
  noise. The tests don't fail but we should have a clean output.

  Highlights include:

  ConnectionFailed: Connection to neutron failed: ('Connection
  aborted.', gaierror(-2, 'Name or service not known'))

File /home/jpichon/devel/horizon/horizon/exceptions.py, line 326, in 
handle
  raise Http302(redirect)
  Http302

  ConnectionError: ('Connection aborted.', gaierror(-2, 'Name or service
  not known'))

  Error while checking action permissions.
  Traceback (most recent call last):
File /home/jpichon/devel/horizon/horizon/tables/base.py, line 1236, in 
_filter_action
  return action._allowed(request, datum) and row_matched
File /home/jpichon/devel/horizon/horizon/tables/actions.py, line 137, in 
_allowed
  return self.allowed(request, datum)
File 
/home/jpichon/devel/horizon/openstack_dashboard/dashboards/project/volumes/snapshots/tables.py,
 line 46, in allowed
  if (snapshot._volume and
File /home/jpichon/devel/horizon/openstack_dashboard/api/base.py, line 
81, in __getattribute__
  return object.__getattribute__(self, attr)
  AttributeError: 'VolumeSnapshot' object has no attribute '_volume'

  DEBUG:cinderclient.client:Connection error: ('Connection aborted.',
  gaierror(-2, 'Name or service not known'))

  But there really is a lot...

  This doesn't seem to be happening on the juno-rc1 tag.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1378907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357491] Re: Detach service from compute_node

2014-10-08 Thread Sylvain Bauza
This change requires a spec as it involves DB migrations

** Changed in: nova
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357491

Title:
  Detach service from compute_node

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  AFAICT, there's no good reason to have a foreign key relation between
  compute_nodes and services. In fact, I see no reason why compute_nodes
  needs to have a service_id column at all.

  The service is the representation of the message bus between the nova-
  conductor and the nova-compute worker processes. The compute node is
  merely the collection of resources for a provider of compute
  resources. There's really no reason to relate the two with each other.

  The fact that they are related to each other means that the resource
  tracker ends up needing to find its compute node record by first
  looking up the service record for the 'compute' topic and the host for
  the resource tracker, and then grabs the first compute_node record
  that is related to the service record that matches that query. There
  is no reason to do this in the resource tracker ... other than the
  fact that right now the compute_node table has a service_id field and
  a relation to the services table. But this relationship is contrived
  and is not needed AFAICT.

  The solution to this is to remove the service_id column from the
  compute_nodes table and model, remove the foreign key relation to the
  services table from the compute_nodes table, and then simply look up a
  compute_node record directly from the host and nodename fields instead
  of looking up a service record first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366905] Re: Migration from havana to icehouse takes forever if large subset of data is present

2014-10-08 Thread Dolph Mathews
** Changed in: keystone
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1366905

Title:
  Migration from havana to icehouse takes forever if large subset of
  data is present

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi guys,

  I think the upgrade documentation from Havana to Icehouse should
  be updated so that "keystone-manage token_flush" is run before anything
  else.   If keystone token_flush is present in Havana, it should
  definitely be run before doing anything else too.

  Still running after 30 minutes:

  mysql> show processlist;
  +--------+------+-----------+----------+---------+------+-------------------+------------------------------------------------------------+
  | Id     | User | Host      | db       | Command | Time | State             | Info                                                       |
  +--------+------+-----------+----------+---------+------+-------------------+------------------------------------------------------------+
  | 547285 | root | localhost | keystone | Query   |    0 | NULL              | show processlist                                           |
  | 547349 | root | localhost | keystone | Query   | 1899 | copy to tmp table | ALTER TABLE keystone.token CONVERT TO CHARACTER SET 'utf8' |
  +--------+------+-----------+----------+---------+------+-------------------+------------------------------------------------------------+
  2 rows in set (0.00 sec)

  
  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1366905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357084] Re: IPv6 slaac is broken when subnet is less than /64

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126905
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a56a35572d7b7d4b534825fe7b4f681028121a74
Submitter: Jenkins
Branch:proposed/juno

commit a56a35572d7b7d4b534825fe7b4f681028121a74
Author: Eugene Nikanorov enikano...@mirantis.com
Date:   Mon Aug 25 00:59:02 2014 +0400

Raise exception if ipv6 prefix is inappropriate for address mode

Address prefix to use with slaac and stateless ipv6 address modes
should be equal to 64 in order to work properly.
The patch adds corresponding validation and fixes unit tests
accordingly.

Change-Id: I6c344b21a69f85f2885a72377171f70309b26775
Closes-Bug: #1357084
(cherry picked from commit 0d895e1b722da2f1e92f444e53b3ee32)


** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357084

Title:
  IPv6 slaac is broken when subnet is less than /64

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  SLAAC and DHCPv6 stateless work only with subnets whose mask is /64 or
  shorter (/63, /62, ...) because the EUI-64 calculated IP takes 8 octets.
  If the subnet mask is /65, /66, ..., /128, SLAAC/DHCPv6 stateless should be
  disabled.
  An API call creating a subnet with SLAAC/DHCPv6 stateless and a mask longer 
than /64 should fail.

  Example:
  let's create net and subnet with mask /96:

  $ neutron net-create 14
  $ neutron subnet-create 14 --ipv6-ra-mode=slaac --ipv6-address-mode=slaac 
--ip-version=6 2003::/96 
  Created a new subnet:
  ...
  | allocation_pools  | {start: 2003::2, end: 2003:::fffe} |
  | cidr  | 2003::/96  |
     |
  | gateway_ip| 2003::1|
  ...
  | ipv6_address_mode | slaac  |
  | ipv6_ra_mode  | slaac  |
  ...

  Let's create port in this network:

  $  neutron port-create 14 --mac-address=11:22:33:44:55:66
  Created a new port:
  ...
  | fixed_ips | {subnet_id: 
1bfe4522-3b71-4e74-bb80-44c853ff868d, ip_address: 
2003::1322:33ff:fe44:5566} |
  ...
  | mac_address   | 11:22:33:44:55:66   
 |
  ...

  As you see port gets IP 2003::1322:33ff:fe44:5566 which is not from
  original network 2003::/96.
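  The address in the example is exactly the modified EUI-64 derivation,
  which always occupies the low 64 bits of the address. A minimal
  stdlib-only sketch reproduces it and shows why the result falls
  outside any prefix longer than /64:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Compute the SLAAC (modified EUI-64) address for a MAC in a prefix."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")        # 64-bit interface identifier
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

addr = slaac_address("2003::/96", "11:22:33:44:55:66")
print(addr)                                          # 2003::1322:33ff:fe44:5566
# The 64-bit interface identifier overwrote bits above the /96 host
# portion, so the derived address is not inside the /96 subnet at all:
print(addr in ipaddress.IPv6Network("2003::/96"))    # False
```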

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369685] Re: Trace in ovs agent when NoopFirewallDriver is configured

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126902
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0a64b61f8cbf41e1bf74961c235f03ac6cc6ead6
Submitter: Jenkins
Branch:proposed/juno

commit 0a64b61f8cbf41e1bf74961c235f03ac6cc6ead6
Author: Eugene Nikanorov enikano...@mirantis.com
Date:   Mon Sep 15 22:10:45 2014 +0400

Add missing methods to NoopFirewallDriver

The fix adds missing methods into generic Firewall class
and in NoopFirewall driver class.

Change-Id: I6402448075ed414434dc007f5c403fc85b6b1456
Closes-Bug: #1369685
Related-Bug: #1365806
(cherry picked from commit 9a6c073656a7e0b1a26b2bca0ba381489d04e322)


** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369685

Title:
  Trace in ovs agent when NoopFirewallDriver is configured

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  When NoopFirewallDriver is configured for ovs agent, the following
  trace could be seen:

  http://paste.openstack.org/show/111808/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255142] Re: unable to get router's external IP when non admin (blocker for VPNaaS)

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126911
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b1282b8410ca546bfa15e1174ab9bafe1c29ee43
Submitter: Jenkins
Branch:proposed/juno

commit b1282b8410ca546bfa15e1174ab9bafe1c29ee43
Author: Kevin Benton blak...@gmail.com
Date:   Wed Jun 18 12:03:01 2014 -0700

Allow reading a tenant router's external IP

Adds an external IPs field to the external gateway information
for a router so the external IP address of the router can be
read by the tenant.

DocImpact

Closes-Bug: #1255142
Change-Id: If4e77c445e9b855ff77deea6c8df4a0b3cf249d4
(cherry picked from commit c7baaa068ed1d3c8b02717232edef60ba1b655f6)


** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255142

Title:
  unable to get router's external IP when non admin (blocker for VPNaaS)

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  In order to set up VPNaaS, a user needs to know his router's external
  IP (to configure it as endpoint).

  PROBLEM : When a user is not admin, the external IP of a router is not
  visible:

  source openrc demo demo
  neutron router-list
  
+--+-+-+
  | id   | name| external_gateway_info  
 |
  
+--+-+-+
  | 2bd1f015-6c98-4861-a078-5a69256ca7b0 | router1 | {network_id: 
8ae6890d-5bb5-4f07-9059-77499628048c, enable_snat: true} |
  
+--+-+-+
  neutron router-port-list 2bd1f015-6c98-4861-a078-5a69256ca7b0
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
|
  
+--+--+---+---+
  | 8ae7206d-19af-4a2a-a15b-0f8cdb98861e |  | fa:16:3e:0a:ee:14 | 
{subnet_id: c69b14f9-c2e4-4877-8516-57ff2bdeaa9e, ip_address: 
172.17.0.1} |
  
+--+--+---+---+

  It's visible only as admin:
  source openrc admin demo
  neutron router-port-list 2bd1f015-6c98-4861-a078-5a69256ca7b0
  
+--+--+---+---+
  | id   | name | mac_address   | fixed_ips 
|
  
+--+--+---+---+
  | 8ae7206d-19af-4a2a-a15b-0f8cdb98861e |  | fa:16:3e:0a:ee:14 | 
{subnet_id: c69b14f9-c2e4-4877-8516-57ff2bdeaa9e, ip_address: 
172.17.0.1} |
  | fd56a686-480d-4ede-b021-010253c3de42 |  | fa:16:3e:a5:d2:92 | 
{subnet_id: 29f5737c-417f-4aa9-a95e-2bef3a04729e, ip_address: 
192.168.57.226} |
  
+--+--+---+---+

  Since users need to know the external IP of their router in order to
  set up VPNaaS this is quite blocking because it requires users to be
  admin in order to use this feature. It's not an issue for a private
  cloud, but a big issue for public clouds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1255142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367771] Re: glance-manage db load_metadefs will fail if DB is not empty

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126856
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=76c66343a45cba0068c97d1ad21b46c63977e13d
Submitter: Jenkins
Branch:proposed/juno

commit 76c66343a45cba0068c97d1ad21b46c63977e13d
Author: Bartosz Fic bartosz@intel.com
Date:   Tue Oct 7 18:13:51 2014 +0200

Use ID for namespace generated by DB

In current implementation ID that is used in namespace to
insert data to DB is generated by built-in function - enumerate.
This causes problems with loading the metadata when there are already
namespaces in DB.

This patch removes 'enumerate' and asks for namespace ID
generated by database.

Closes-Bug: #1367771
Co-Authored-By: Bartosz Fic bartosz@intel.com
Co-Authored-By: Pawel Koniszewski pawel.koniszew...@intel.com
(cherry picked from commit 89c04904416270d3c306d430f443a7127c5fc206)

Conflicts:
glance/db/sqlalchemy/metadata.py

Change-Id: I235c6310077526cafb898ac007c3601b4d66c9fe


** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367771

Title:
  glance-manage db load_metadefs will fail if DB is not empty

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  To insert data into DB 'glance-manage db load_metadefs' uses IDs for
  namespaces which are generated by built-in function in Python -
  enumerate:

  for namespace_id, json_schema_file in enumerate(json_schema_files,
  start=1):

  For an empty database this works fine, but it causes problems when
  metadata namespaces already exist in the database: every invocation of
  glance-manage db load_metadefs then leads to IntegrityErrors because of
  duplicated IDs.

  There are two approaches to fix this:
  1. Ask for the namespace just after inserting it. Unfortunately, in the 
current implementation this requires one more query.
  2. When this go live - https://review.openstack.org/#/c/120414/ - then we 
won't need to do another query, because ID is available just after inserting a 
namespace to DB (namespace.save(session=session)).
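  The duplicate-ID failure mode can be reproduced with a few lines of
  plain Python (the file names and existing IDs here are made up):

```python
# IDs already assigned to namespace rows by a previous load_metadefs run.
existing_ids = {1, 2, 3}

json_schema_files = ["compute-host.json", "software-runtimes.json"]

# The old loader numbered namespaces with enumerate(start=1), ignoring
# whatever the database already contains:
assigned_ids = [ns_id for ns_id, _f in enumerate(json_schema_files, start=1)]
print(assigned_ids)               # [1, 2]

# Any overlap with existing rows surfaces as an IntegrityError on insert.
collisions = existing_ids.intersection(assigned_ids)
print(sorted(collisions))         # [1, 2]
```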

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368479] Re: Metadef Property and Object schema columns should use JSONEncodedDict

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126855
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=da93f408dde9652a3f5e2daaa534852576b8f6f2
Submitter: Jenkins
Branch:proposed/juno

commit da93f408dde9652a3f5e2daaa534852576b8f6f2
Author: Wayne Okuma wayne.ok...@hp.com
Date:   Thu Sep 11 13:59:29 2014 -0700

Metadef Property and Object schema columns should use JSONEncodedDict

The MetadefProperty and MetadefObject ORM classes currently specify the
JSON schema columns as type Text. It is preferred to use the
JSONEncodedDict Type Decorator instead. This fix also includes necessary
code changes to remove JSON encoding/decoding that was previously done
in other layers. Fixes for unit tests involving the schema columns are
also included.

Closes-Bug: 1368479

Conflicts:
glance/db/__init__.py
glance/db/sqlalchemy/models_metadef.py

Change-Id: I2c574210f8d62c77a438afab83ff80f3e5bd2fe7
(cherry picked from commit 824d9620b0b90483baf45981d2cb328855943e06)


** Changed in: glance
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1368479

Title:
  Metadef Property and Object schema columns should use JSONEncodedDict

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released

Bug description:
  The MetadefProperty and MetadefObject ORM classes currently specify
  the JSON schema columns as type Text. It is preferred to use the
  JSONEncodedDict Type Decorator instead.
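  For reference, the TypeDecorator in question follows the standard
  SQLAlchemy recipe, roughly:

```python
import json

from sqlalchemy.types import Text, TypeDecorator


class JSONEncodedDict(TypeDecorator):
    """Stores a Python dict as a JSON-encoded Text column."""

    impl = Text

    def process_bind_param(self, value, dialect):
        # Python dict -> JSON text on the way into the database.
        return json.dumps(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # JSON text -> Python dict on the way out.
        return json.loads(value) if value is not None else None
```

  With the column declared using this type, callers get dicts directly
  and the manual json.dumps/json.loads in other layers goes away.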

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1368479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378964] [NEW] GPFS should snap the glance image file only when the image format is 'raw'

2014-10-08 Thread Nilesh Bhosale
Public bug reported:

GPFS 'copy_on_write' mode works only when creating a volume from an image 
whose format is 'raw'.
When the image format is anything other than 'raw', for example 'qcow2', the 
GPFS driver copies the image file to the volume by converting it to 'raw' 
format.
However, even during this conversion the GPFS driver currently snaps the 
glance image, making it a clone parent with no child associated.
Though this does not break any functionality, it is an unnecessary operation 
and should be avoided.

** Affects: cinder
 Importance: Undecided
 Assignee: Nilesh Bhosale (nilesh-bhosale)
 Status: New


** Tags: cinder gpfs

** Project changed: glance = cinder

** Changed in: cinder
 Assignee: (unassigned) = Nilesh Bhosale (nilesh-bhosale)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378964

Title:
  GPFS should snap the glance image file only when the image format is 'raw'

Status in Cinder:
  New

Bug description:
  GPFS 'copy_on_write' mode works only when creating a volume from an
  image whose format is 'raw'.
  When the image format is anything other than 'raw', for example 'qcow2', 
the GPFS driver copies the image file to the volume by converting it to 
'raw' format.
  However, even during this conversion the GPFS driver currently snaps the 
glance image, making it a clone parent with no child associated.
  Though this does not break any functionality, it is an unnecessary 
operation and should be avoided.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1378964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378968] [NEW] Metadef schema column name is a reserved word in MySQL

2014-10-08 Thread Wayne
Public bug reported:

The metadef_properties and metadef_objects tables both have a column
named schema. Unfortunately, schema is a reserved word in some
relational database products, including MySQL and PostgreSQL. The
metadef_properties.schema and metadef_objects.schema columns should be
renamed to a non reserved word.

** Affects: glance
 Importance: Undecided
 Assignee: Wayne (wayne-okuma)
 Status: New

** Changed in: glance
 Assignee: (unassigned) = Wayne (wayne-okuma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378968

Title:
  Metadef schema column name is a reserved word in MySQL

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The metadef_properties and metadef_objects tables both have a column
  named schema. Unfortunately, schema is a reserved word in some
  relational database products, including MySQL and PostgreSQL. The
  metadef_properties.schema and metadef_objects.schema columns should be
  renamed to a non reserved word.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379016] [NEW] Neutron LBAAS agent do not use custom routes configured on their subnet

2014-10-08 Thread Diego Lima
Public bug reported:

Neutron load balancer namespaces do not inherit any settings from their
subnets, including custom routes, since they do not use DHCP to
configure their interfaces. Users expect that load balancers are able to
reach the same networks their instances do.

To reproduce the problem take the following steps:

1 - Create a subnetwork
2 - Create 2 routers connecting to different networks
3 - Edit the subnetwork and add some custom routes using the second router
4 - Launch an instance. It should receive the custom routes via DHCP and reach 
them through the second router.
5 - Create a load balancer
6 - Check the load balancer's namespace routing table. There will be no custom 
routes.


This is related to https://bugs.launchpad.net/neutron/+bug/1376446,
although it's a different bug. As a workaround, a user can create a
second subnet that uses the second router as its default gateway and
place the load balancers there.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379016

Title:
  Neutron LBAAS agent do not use custom routes configured on their
  subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron load balancer namespaces do not inherit any settings from
  their subnets, including custom routes, since they do not use DHCP to
  configure their interfaces. Users expect that load balancers are able
  to reach the same networks their instances do.

  To reproduce the problem take the following steps:

  1 - Create a subnetwork
  2 - Create 2 routers connecting to different networks
  3 - Edit the subnetwork and add some custom routes using the second router
  4 - Launch an instance. It should receive the custom routes via DHCP and 
reach them through the second router.
  5 - Create a load balancer
  6 - Check the load balancer's namespace routing table. There will be no 
custom routes.


  This is related to https://bugs.launchpad.net/neutron/+bug/1376446,
  although it's a different bug. As a workaround, a user can create a
  second subnet that uses the second router as its default gateway and
  place the load balancers there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379033] [NEW] Fix file permissions that were inadvertently changed

2014-10-08 Thread Aaron Sahlin
Public bug reported:

A patch that inadvertently changed file permissions was recently merged 
into master:
https://review.openstack.org/#/c/125206/

This defect will be used to change the permissions back. Files whose
permissions were changed:

openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/keypairs/_create.html
openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/keypairs/_import.html
openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/security_groups/_add_rule.html
openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/security_groups/_create.html
openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/security_groups/_update.html
openstack_dashboard/dashboards/project/containers/templates/containers/_copy.html
openstack_dashboard/dashboards/project/containers/templates/containers/_create.html
openstack_dashboard/dashboards/project/containers/templates/containers/_create_pseudo_folder.html
openstack_dashboard/dashboards/project/containers/templates/containers/_update.html
openstack_dashboard/dashboards/project/containers/templates/containers/_upload.html
openstack_dashboard/dashboards/project/data_processing/cluster_templates/templates/data_processing.cluster_templates/_details.html
openstack_dashboard/dashboards/project/data_processing/cluster_templates/templates/data_processing.cluster_templates/_nodegroups_details.html
openstack_dashboard/dashboards/project/data_processing/cluster_templates/templates/data_processing.cluster_templates/cluster_node_groups_template.html
openstack_dashboard/dashboards/project/data_processing/clusters/templates/data_processing.clusters/_details.html
openstack_dashboard/dashboards/project/data_processing/clusters/templates/data_processing.clusters/_nodegroups_details.html
openstack_dashboard/dashboards/project/data_processing/job_executions/templates/data_processing.job_executions/_details.html
openstack_dashboard/dashboards/project/data_processing/nodegroup_templates/templates/data_processing.nodegroup_templates/_service_confs.html
openstack_dashboard/dashboards/project/instances/templates/instances/_decryptpassword.html
openstack_dashboard/dashboards/project/instances/templates/instances/_instance_flavor.html
openstack_dashboard/dashboards/project/instances/templates/instances/_rebuild.html
openstack_dashboard/dashboards/project/loadbalancers/templates/loadbalancers/_vip_details.html
openstack_dashboard/dashboards/project/networks/templates/networks/_create.html
openstack_dashboard/dashboards/project/networks/templates/networks/_detail_overview.html
openstack_dashboard/dashboards/project/routers/templates/routers/_detail_overview.html
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/_create.html
openstack_dashboard/dashboards/project/routers/templates/routers/extensions/routerrules/grid.html
openstack_dashboard/dashboards/project/routers/templates/routers/ports/_create.html
openstack_dashboard/dashboards/project/routers/templates/routers/ports/_setgateway.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_change_template.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_create.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_detail_overview.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_resource_overview.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_select_template.html
openstack_dashboard/dashboards/project/stacks/templates/stacks/_update.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/backups/_create_backup.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/backups/_detail_overview.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/backups/_restore_backup.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_extend_limits.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_limits.html
openstack_dashboard/dashboards/project/volumes/templates/volumes/volumes/_update.html
openstack_dashboard/dashboards/router/nexus1000v/templates/nexus1000v/_create_network_profile.html
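
Mode regressions like these show up in git as mode changes, which makes them easy to spot and revert. A small self-contained sketch (throwaway repo, hypothetical file name):

```shell
set -e
# Build a throwaway repo with a template committed as mode 0644.
repo=$(mktemp -d)
git -C "$repo" init -q
echo x > "$repo/_create.html"          # stand-in for an affected template
chmod 644 "$repo/_create.html"
git -C "$repo" add _create.html
git -C "$repo" -c user.email=a@b -c user.name=a commit -q -m 'add template'

# Simulate the regression and show how git reports it.
chmod 755 "$repo/_create.html"
git -C "$repo" diff --summary    # mode change 100644 => 100755 _create.html

# Revert the permissions without touching file content.
chmod 644 "$repo/_create.html"
git -C "$repo" diff --summary    # no output: the tree is clean again
```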

** Affects: horizon
 Importance: Undecided
 Assignee: Aaron Sahlin (asahlin)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Aaron Sahlin (asahlin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379033

Title:
  Fix file permissions that were inadvertently changed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  A patch recently merged into master inadvertently changed file permissions:
  https://review.openstack.org/#/c/125206/

  This defect will be used to change the permissions back.

[Yahoo-eng-team] [Bug 1379044] [NEW] remove double header titles

2014-10-08 Thread Cindy Lu
Public bug reported:

For panels with a single table (including those that don't have tabs), I
think we should remove the second heading.

We don't need to say: 
Instances
Instances

Images
Images

Flavors
Flavors

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379044

Title:
  remove double header titles

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For panels with a single table (including those that don't have tabs),
  I think we should remove the second heading.

  We don't need to say: 
  Instances
  Instances

  Images
  Images

  Flavors
  Flavors

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1379044/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379054] [NEW] run_tests.sh --makemessages is broken

2014-10-08 Thread Gloria Gu
Public bug reported:

[stack@gloria-stack:/home/stack/horizon]↥ master+ ± ./run_tests.sh 
--makemessages
Checking environment.
Environment is up to date.
horizon: processing locale en
horizon javascript: processing locale en
openstack_dashboard: Traceback (most recent call last):
  File "/home/stack/horizon/manage.py", line 23, in <module>
    execute_from_command_line(sys.argv)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
    utility.execute()
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
    output = self.handle(*args, **options)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
    return self.handle_noargs(**options)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 262, in handle_noargs
    potfile = self.build_pot_file(localedir)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 294, in build_pot_file
    f.process(self, potfile, self.domain, self.keep_pot)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 96, in process
    content = templatize(src_data, orig_file[2:])
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 172, in templatize
    return _trans.templatize(src, origin)
  File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 559, in templatize
    raise SyntaxError("Translation blocks must not include other block tags: %s (%sline %d)" % (t.contents, filemsg, t.lineno))
SyntaxError: Translation blocks must not include other block tags: autoescape off (file dashboards/project/data_processing/clusters/templates/data_processing.clusters/_details.html, line 86)
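
The error names the offending construct: an {% autoescape %} block tag nested inside a translation block, which Django's makemessages rejects. An illustrative (hypothetical) broken/fixed pair, not the actual template content:

```django
{# Invalid: a block tag inside a translation block. #}
{% blocktrans %}Value: {% autoescape off %}{{ value }}{% endautoescape %}{% endblocktrans %}

{# Valid: apply autoescape around the translation block instead. #}
{% autoescape off %}
  {% blocktrans %}Value: {{ value }}{% endblocktrans %}
{% endautoescape %}
```

Moving the autoescape tag outside the blocktrans block lets templatize succeed; plain {{ variable }} placeholders inside blocktrans remain allowed.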

** Affects: horizon
 Importance: Undecided
 Assignee: Gloria Gu (gloria-gu)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Gloria Gu (gloria-gu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379054

Title:
  run_tests.sh --makemessages is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  [stack@gloria-stack:/home/stack/horizon]↥ master+ ± ./run_tests.sh 
--makemessages
  Checking environment.
  Environment is up to date.
  horizon: processing locale en
  horizon javascript: processing locale en
  openstack_dashboard: Traceback (most recent call last):
    File "/home/stack/horizon/manage.py", line 23, in <module>
      execute_from_command_line(sys.argv)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
      utility.execute()
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
      self.fetch_command(subcommand).run_from_argv(self.argv)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
      self.execute(*args, **options.__dict__)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
      output = self.handle(*args, **options)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
      return self.handle_noargs(**options)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 262, in handle_noargs
      potfile = self.build_pot_file(localedir)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 294, in build_pot_file
      f.process(self, potfile, self.domain, self.keep_pot)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/core/management/commands/makemessages.py", line 96, in process
      content = templatize(src_data, orig_file[2:])
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 172, in templatize
      return _trans.templatize(src, origin)
    File "/home/stack/horizon/.venv/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 559, in templatize
      raise SyntaxError("Translation blocks must not include other block tags: %s (%sline %d)" % (t.contents, filemsg, t.lineno))
  SyntaxError: Translation blocks must not include other block tags: autoescape off (file dashboards/project/data_processing/clusters/templates/data_processing.clusters/_details.html, line 86)

[Yahoo-eng-team] [Bug 1353953] Re: Race between neutron-server and l3-agent

2014-10-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/126903
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6e79981b7caad2119461034dfe7b4d1c1a64
Submitter: Jenkins
Branch: proposed/juno

commit 6e79981b7caad2119461034dfe7b4d1c1a64
Author: Derek Higgins der...@redhat.com
Date:   Fri Sep 12 16:31:44 2014 +0100

Retry getting the list of service plugins

On systems that start both neutron-server and neutron-l3-agent together,
there is a chance that the first call to neutron will timeout. Retry upto
4 more times to avoid the l3 agent exiting on startup.

This should make the l3 agent a little more robust on startup but still
not ideal, ideally it wouldn't exit and retry periodically.

Change-Id: I2171a164f3f77bccd89895d73c1c8d67f7190488
Closes-Bug: #1353953
Closes-Bug: #1368152
Closes-Bug: #1368795
(cherry picked from commit e7f0b56d74fbfbb08a3b7a0d2da4cefb6fe2aa67)
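
The retry-on-startup approach the commit describes can be sketched generically as follows (illustrative helper only, not the actual agent code):

```python
import time


def call_with_retries(func, retries=4, delay=5):
    """Call func(), retrying up to `retries` more times on failure.

    Mirrors the idea in the fix: the first RPC to neutron-server may
    time out while the server is still starting, so don't give up on
    the first attempt.
    """
    for attempt in range(retries + 1):
        try:
            return func()
        except Exception:
            if attempt == retries:
                raise  # out of retries: let the caller decide (agent exits)
            time.sleep(delay)


# Example: a call that fails twice before the server is ready.
attempts = []

def get_service_plugins():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError('server not ready')
    return ['router', 'lbaas']

plugins = call_with_retries(get_service_plugins, retries=4, delay=0)
```

As the commit message notes, this is still not ideal: a more robust agent would keep retrying periodically instead of exiting once the retries are exhausted.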


** Changed in: neutron
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353953

Title:
  Race between neutron-server and l3-agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  http://logs.openstack.org/58/87758/24/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/848e217/console.html

  2014-08-07 10:35:52.753 | + wait_for 30 10 ping -c 1 192.0.2.46
  2014-08-07 10:42:23.169 | Timing out after 300 seconds:
  2014-08-07 10:42:23.169 | COMMAND=ping -c 1 192.0.2.46
  2014-08-07 10:42:23.169 | OUTPUT=PING 192.0.2.46 (192.0.2.46) 56(84) bytes of 
data.
  2014-08-07 10:42:23.169 | From 192.0.2.46 icmp_seq=1 Destination Host 
Unreachable
  2014-08-07 10:42:23.169 | 
  2014-08-07 10:42:23.169 | --- 192.0.2.46 ping statistics 

  looks like neutron dhcp agent issues

  http://logs.openstack.org/58/87758/24/check-tripleo/check-tripleo-
  novabm-overcloud-f20-nonha/848e217/logs/overcloud-controller0_logs
  /neutron-dhcp-agent.txt.gz

  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv sudo[14027]: neutron : 
TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6 ip link set tap7e59533d-32 up
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
2014-08-07 10:31:10.476 12316 ERROR neutron.agent.linux.utils 
[req-fdebecfd-81d2-48c2-8765-7545e1e9dbb1 None]
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6', 'ip', 
'link', 'set', 'tap7e59533d-32', 'up']
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Exit code: 1
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stdout: ''
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stderr: 'Cannot open network namespace 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6: No such file or directory\n'
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv sudo[14032]: neutron : 
TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6 ip -o link show tap7e59533d-32
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
2014-08-07 10:31:10.596 12316 ERROR neutron.agent.linux.utils 
[req-fdebecfd-81d2-48c2-8765-7545e1e9dbb1 None]
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6', 'ip', 
'-o', 'link', 'show', 'tap7e59533d-32']
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Exit code: 1
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stdout: ''
  Aug 07 10:31:10 overcloud-controller0-s42ewsqhiswv neutron-dhcp-agent[12316]: 
Stderr: 'Cannot open network namespace 
qdhcp-09fcf8a1-ffd3-4f99-869a-8b227de009f6: No such file or directory\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379085] [NEW] Syntax error in _resource_overview.html

2014-10-08 Thread Martin André
Public bug reported:

Commit 'a1e770dc108bd101b79d5f9cf7de3238749556ff' introduced a syntax
error in _resource_overview.html that prevents rendering the Stack
Resource Detail page.

TemplateSyntaxError at /project/stacks/stack/dfacc35b-045a-4db0-a7aa-
abfa926bc1ba/server/

Unknown argument for u'blocktrans' tag: u'%'.

Request Method: GET
Request URL:
http://172.20.20.20/project/stacks/stack/dfacc35b-045a-4db0-a7aa-abfa926bc1ba/server/
Django Version: 1.6.7
Exception Type: TemplateSyntaxError
Exception Value:

Unknown argument for u'blocktrans' tag: u'%'.

Exception Location: 
/usr/local/lib/python2.7/dist-packages/django/templatetags/i18n.py in 
do_block_translate, line 434
Python Executable:  /usr/bin/python
Python Version: 2.7.6
Python Path:

['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
 '/opt/stack/python-keystoneclient',
 '/opt/stack/python-glanceclient',
 '/opt/stack/python-cinderclient',
 '/opt/stack/python-novaclient',
 '/opt/stack/python-swiftclient',
 '/opt/stack/python-neutronclient',
 '/opt/stack/python-heatclient',
 '/opt/stack/python-openstackclient',
 '/opt/stack/keystone',
 '/opt/stack/glance_store',
 '/opt/stack/glance',
 '/opt/stack/cinder',
 '/opt/stack/neutron',
 '/opt/stack/nova',
 '/opt/stack/horizon',
 '/opt/stack/heat',
 '/opt/stack/tempest-lib',
 '/opt/stack/tempest',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-x86_64-linux-gnu',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/opt/stack/horizon/openstack_dashboard']

** Affects: horizon
 Importance: Undecided
 Assignee: Martin André (mandre)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Martin André (mandre)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1379085

Title:
  Syntax error in _resource_overview.html

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Commit 'a1e770dc108bd101b79d5f9cf7de3238749556ff' introduced a syntax
  error in _resource_overview.html that prevents rendering the Stack
  Resource Detail page.

  TemplateSyntaxError at /project/stacks/stack/dfacc35b-045a-4db0-a7aa-
  abfa926bc1ba/server/

  Unknown argument for u'blocktrans' tag: u'%'.

  Request Method:   GET
  Request URL:  
http://172.20.20.20/project/stacks/stack/dfacc35b-045a-4db0-a7aa-abfa926bc1ba/server/
  Django Version:   1.6.7
  Exception Type:   TemplateSyntaxError
  Exception Value:  

  Unknown argument for u'blocktrans' tag: u'%'.

  Exception Location:   
/usr/local/lib/python2.7/dist-packages/django/templatetags/i18n.py in 
do_block_translate, line 434
  Python Executable:/usr/bin/python
  Python Version:   2.7.6
  Python Path:  

  ['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
   '/opt/stack/python-keystoneclient',
   '/opt/stack/python-glanceclient',
   '/opt/stack/python-cinderclient',
   '/opt/stack/python-novaclient',
   '/opt/stack/python-swiftclient',
   '/opt/stack/python-neutronclient',
   '/opt/stack/python-heatclient',
   '/opt/stack/python-openstackclient',
   '/opt/stack/keystone',
   '/opt/stack/glance_store',
   '/opt/stack/glance',
   '/opt/stack/cinder',
   '/opt/stack/neutron',
   '/opt/stack/nova',
   '/opt/stack/horizon',
   '/opt/stack/heat',
   '/opt/stack/tempest-lib',
   '/opt/stack/tempest',
   '/usr/lib/python2.7',
   '/usr/lib/python2.7/plat-x86_64-linux-gnu',
   '/usr/lib/python2.7/lib-tk',
   '/usr/lib/python2.7/lib-old',
   '/usr/lib/python2.7/lib-dynload',
   '/usr/local/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages',
   '/opt/stack/horizon/openstack_dashboard']

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1379085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379089] [NEW] get_sync_data is too convoluted

2014-10-08 Thread Isaku Yamahata
Public bug reported:

l3_db's get_sync_data is too convoluted, so l3_dvr_db fully implements its own
version.
The method needs to be sorted out so that it can easily be extended by
subclasses such as l3_dvr_db.
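
One common way to make such a method extensible without subclasses re-implementing it wholesale is the template-method pattern: the base class keeps the driver method and exposes small hooks for subclasses to override. A generic sketch (hypothetical classes, not the actual l3_db code):

```python
class RouterSyncMixin(object):
    """Base: one public driver method plus a small overridable hook."""

    def get_sync_data(self, routers):
        return [self._process_router(r) for r in routers]

    def _process_router(self, router):
        data = {'id': router['id'],
                'interfaces': router.get('interfaces', [])}
        self._process_extra(router, data)  # hook for subclasses
        return data

    def _process_extra(self, router, data):
        pass  # centralized routers need nothing extra


class DvrSyncMixin(RouterSyncMixin):
    """Overrides only the hook instead of all of get_sync_data."""

    def _process_extra(self, router, data):
        data['distributed'] = True


sync = DvrSyncMixin().get_sync_data([{'id': 'r1'}])
```

With this shape, a DVR-specific subclass adds its extra fields in one small hook rather than duplicating the whole sync routine.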

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379089

Title:
  get_sync_data is too convoluted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  l3_db's get_sync_data is too convoluted, so l3_dvr_db fully implements its
own version.
  The method needs to be sorted out so that it can easily be extended by
subclasses such as l3_dvr_db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379148] [NEW] Remove RPC calls dvr_vmarp_table_update from within DB transactions for delete_port

2014-10-08 Thread Swaminathan Vasudevan
Public bug reported:

Context switching in neutron creates performance issues, so we need to do some
clean-up by moving RPC calls out of DB transactions.
As was done for create_port and update_port in ml2/plugin.py, delete_port
should also be addressed to split the RPC calls from the DB transaction.
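
The split being asked for — do the DB work inside the transaction, but defer notifications like dvr_vmarp_table_update until after it commits — can be sketched like this (illustrative stand-ins, not the ml2 code):

```python
import contextlib


class FakeSession(object):
    """Stand-in for a DB session; records transaction boundaries."""

    def __init__(self):
        self.log = []

    @contextlib.contextmanager
    def begin(self):
        self.log.append('begin')
        yield
        self.log.append('commit')


def delete_port(session, notifier, port_id):
    notifications = []
    with session.begin():
        # DB work stays inside the transaction...
        session.log.append('delete %s' % port_id)
        # ...but RPC calls are only collected here, not sent.
        notifications.append(('dvr_vmarp_table_update', port_id))
    # RPC happens after commit, so a slow agent can't hold the
    # transaction (and its DB locks) open.
    for event, pid in notifications:
        notifier(event, pid)


session = FakeSession()
sent = []
delete_port(session, lambda e, p: sent.append((e, p)), 'port-1')
```

The recorded log shows the notification firing strictly after the commit, which is the ordering the clean-up aims for.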

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379148

Title:
  Remove RPC calls dvr_vmarp_table_update from within DB transactions
  for delete_port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Context switching in neutron creates performance issues, so we need to do
some clean-up by moving RPC calls out of DB transactions.
  As was done for create_port and update_port in ml2/plugin.py, delete_port
should also be addressed to split the RPC calls from the DB transaction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp