[Yahoo-eng-team] [Bug 1475156] [NEW] DeprecationWarning message occurs in gate-neutron-python27

2015-07-16 Thread fumihiko kakuma
Public bug reported:

oslo_utils.timeutils.strtime() is deprecated in version '1.6'.

2015-07-16 00:51:00.081 | {0} 
neutron.tests.unit.agent.test_rpc.AgentPluginReportState.test_plugin_report_state_use_call
 [0.010257s] ... ok
2015-07-16 00:51:00.081 | 
2015-07-16 00:51:00.081 | Captured stderr:
2015-07-16 00:51:00.081 | 
2015-07-16 00:51:00.081 | neutron/agent/rpc.py:83: DeprecationWarning: 
Using function/method 'oslo_utils.timeutils.strtime()' is deprecated in version 
'1.6' and will be removed in a future version: use either 
datetime.datetime.isoformat() or datetime.datetime.strftime() instead
2015-07-16 00:51:00.081 |   'time': timeutils.strtime(),
2015-07-16 00:51:00.081 | 
2015-07-16 00:51:00.084 | {7} 
neutron.tests.unit.agent.linux.test_iptables_firewall.IptablesFirewallTestCase.test_filter_ipv4_ingress
 [0.013247s] ... ok


The full log is available here:

http://logs.openstack.org/11/183411/7/check/gate-neutron-python27/9c769e9/console.html
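
A minimal sketch of the kind of fix the warning suggests (illustrative,
not the actual neutron patch; only the 'time' key comes from the log
above):

from oslo_utils import timeutils

# Before (emits the DeprecationWarning since oslo.utils 1.6):
#     'time': timeutils.strtime(),
# After: format the timestamp directly, as the warning recommends.
report_state = {
    'time': timeutils.utcnow().isoformat(),
}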

** Affects: neutron
 Importance: Undecided
 Assignee: fumihiko kakuma (kakuma)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475156

Title:
  DeprecationWarning message occurs in gate-neutron-python27

Status in neutron:
  In Progress

Bug description:
  oslo_utils.timeutils.strtime() is deprecated in version '1.6'.

  2015-07-16 00:51:00.081 | {0} 
neutron.tests.unit.agent.test_rpc.AgentPluginReportState.test_plugin_report_state_use_call
 [0.010257s] ... ok
  2015-07-16 00:51:00.081 | 
  2015-07-16 00:51:00.081 | Captured stderr:
  2015-07-16 00:51:00.081 | 
  2015-07-16 00:51:00.081 | neutron/agent/rpc.py:83: DeprecationWarning: 
Using function/method 'oslo_utils.timeutils.strtime()' is deprecated in version 
'1.6' and will be removed in a future version: use either 
datetime.datetime.isoformat() or datetime.datetime.strftime() instead
  2015-07-16 00:51:00.081 |   'time': timeutils.strtime(),
  2015-07-16 00:51:00.081 | 
  2015-07-16 00:51:00.084 | {7} 
neutron.tests.unit.agent.linux.test_iptables_firewall.IptablesFirewallTestCase.test_filter_ipv4_ingress
 [0.013247s] ... ok

  
  The full log is available here:

  http://logs.openstack.org/11/183411/7/check/gate-neutron-python27/9c769e9/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475190] [NEW] Network Profile is not supported in Kilo

2015-07-16 Thread Shiv Prasad Rao
Public bug reported:

Cisco N1kv ML2 driver does not support network profiles extension in
kilo. The network profile support in horizon needs to be removed for
stable/kilo.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475190

Title:
  Network Profile is not supported in Kilo

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cisco N1kv ML2 driver does not support network profiles extension in
  kilo. The network profile support in horizon needs to be removed for
  stable/kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475254] [NEW] NovaObjectSerializer cannot handle backporting a nested object

2015-07-16 Thread Nikola Đipanov
Public bug reported:

NovaObjectSerializer will call obj_from_primitive and guard against
IncompatibleObjectVersion; if that exception is raised, it will ask the
conductor to backport the object to the highest version it knows about.
See:

https://github.com/openstack/nova/blob/35375133398d862a61334783c1e7a90b95f34cdb/nova/objects/base.py#L634

The problem is that if the top-level object can be serialized but one of
the nested objects throws an IncompatibleObjectVersion, then, due to the
way we handle all exceptions from the recursion at the top level, the
conductor gets asked to backport the top-level object to the nested
object's latest known version - completely wrong!

https://github.com/openstack/nova/blob/35375133398d862a61334783c1e7a90b95f34cdb/nova/objects/base.py#L643
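
An illustrative sketch of the flawed pattern (names simplified and
partly invented; not Nova's verbatim code):

def deserialize_entity(self, context, entity):
    try:
        # Recurses into nested fields, e.g. Instance -> PciDeviceList.
        return obj_from_primitive(entity)
    except IncompatibleObjectVersion as e:
        # 'supported' can come from a *nested* object (PciDeviceList
        # 1.1 in the case below), yet it is used to backport the
        # *top-level* object (Instance).
        return self._process_object_backport(context, entity,
                                             e.kwargs['supported'])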

This happens in our case when trying to fix
https://bugs.launchpad.net/nova/+bug/1474074 and running upgrade tests
with unpatched Kilo code: we bumped the PciDeviceList version on master
and need to do it on Kilo as well, but the stable/kilo patch cannot land
first, so the highest PciDeviceList version a Kilo node knows about is
1.1. We therefore end up asking the conductor to backport the Instance
to 1.1, which drops a whole bunch of things we need and then causes a
lazy-loading exception (copied from the gate logs of
https://review.openstack.org/#/c/201280/ PS 6):

2015-07-15 16:55:15.377 ERROR nova.compute.manager 
[req-fb91e079-1eef-4768-b315-9233c6b9946d 
tempest-ServerAddressesTestJSON-1642250859 
tempest-ServerAddressesTestJSON-713705678] [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] Instance failed to spawn
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] Traceback (most recent call last):
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/compute/manager.py, line 2461, in _build_resources
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] yield resources
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/compute/manager.py, line 2333, in 
_build_and_run_instance
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] block_device_info=block_device_info)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/virt/libvirt/driver.py, line 2378, in spawn
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] write_to_disk=True)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/virt/libvirt/driver.py, line 4179, in _get_guest_xml
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] context)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/virt/libvirt/driver.py, line 3989, in 
_get_guest_config
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] pci_devs = 
pci_manager.get_instance_pci_devs(instance, 'all')
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/pci/manager.py, line 279, in get_instance_pci_devs
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] pci_devices = inst.pci_devices
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/objects/base.py, line 72, in getter
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] self.obj_load_attr(name)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/objects/instance.py, line 1018, in obj_load_attr
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] self._load_generic(attrname)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23]   File 
/opt/stack/old/nova/nova/objects/instance.py, line 908, in _load_generic
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] reason='loading %s requires 
recursion' % attrname)
2015-07-15 16:55:15.377 21515 TRACE nova.compute.manager [instance: 
25387a96-e47f-47f1-8e3c-3716072c9c23] ObjectActionError: Object action 
obj_load_attr failed because: loading pci_devices requires recursion
2015-07-15 16:55:15.377 21515 TRACE 

[Yahoo-eng-team] [Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-16 Thread James Page
Confirmed as fixed in Wily (oslo.log 1.2.0)

** Changed in: nova (Ubuntu Utopic)
   Status: In Progress => Won't Fix

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/juno
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/juno
   Status: New => In Progress

** Changed in: cloud-archive/juno
   Importance: Undecided => High

** Changed in: nova (Ubuntu)
   Status: In Progress => Invalid

** Also affects: python-oslo.log (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: python-oslo.log (Ubuntu Trusty)
   Status: New => Invalid

** Changed in: python-oslo.log (Ubuntu Utopic)
   Status: New => Invalid

** Changed in: python-oslo.log (Ubuntu)
   Status: New => Fix Released

** Also affects: nova (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: python-oslo.log (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Vivid)
   Status: New => Invalid

** Also affects: nova (Ubuntu Wily)
   Importance: High
 Assignee: Edward Hope-Morley (hopem)
   Status: Invalid

** Also affects: python-oslo.log (Ubuntu Wily)
   Importance: Undecided
   Status: Fix Released

** Changed in: python-oslo.log (Ubuntu Vivid)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

Status in ubuntu-cloud-archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  In Progress
Status in ubuntu-cloud-archive juno series:
  In Progress
Status in ubuntu-cloud-archive kilo series:
  Confirmed
Status in ubuntu-cloud-archive liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in oslo.log:
  Fix Committed
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  In Progress
Status in python-oslo.log source package in Trusty:
  Invalid
Status in nova source package in Utopic:
  Won't Fix
Status in python-oslo.log source package in Utopic:
  Invalid
Status in nova source package in Vivid:
  Invalid
Status in python-oslo.log source package in Vivid:
  Confirmed
Status in nova source package in Wily:
  Invalid
Status in python-oslo.log source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * If Nova services are configured to log to syslog (use_syslog=True) they
 will currently fail with ECONNREFUSED if they cannot connect to syslog.
 This patch adds support for allowing nova to retry connecting a
 configurable number of times before printing an error message and
 continuing with startup.
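
  A rough sketch of that retry behaviour (standard-library names only;
  this is not the actual oslo.log patch):

  import logging.handlers
  import socket
  import time

  def open_syslog_handler(retries=3, delay=1.0, address='/dev/log'):
      # Try syslog a configurable number of times; on failure, print an
      # error and let service startup continue without it.
      for _ in range(retries):
          try:
              return logging.handlers.SysLogHandler(address=address)
          except socket.error:  # e.g. ECONNREFUSED while rsyslog is down
              time.sleep(delay)
      print('could not connect to syslog after %d retries; continuing' % retries)
      return None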

  [Test Case]

   * Configure nova with use_syslog=True in nova.conf, stop the rsyslog service
 and restart nova services. Check the upstart nova logs to see retries
 occurring, then start rsyslog and observe the connection succeed and
 nova-compute start up.

  [Regression Potential]

   * None

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459046] Re: [SRU] nova-* services do not start if rsyslog is not yet started

2015-07-16 Thread James Page
Raised bug tasks for python-oslo.log, as we have a fix to land for
vivid/kilo.

** Changed in: python-oslo.log (Ubuntu Vivid)
   Importance: Undecided => High

** Changed in: python-oslo.log (Ubuntu Wily)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Wily)
   Importance: High => Undecided

** Changed in: nova (Ubuntu Wily)
 Assignee: Edward Hope-Morley (hopem) => (unassigned)

** Also affects: cloud-archive/kilo
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/icehouse
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/liberty
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/liberty
   Status: New => Fix Released

** Changed in: cloud-archive/kilo
   Status: New => Confirmed

** Changed in: cloud-archive/icehouse
   Status: New => In Progress

** Changed in: cloud-archive/icehouse
   Importance: Undecided => High

** Changed in: cloud-archive/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459046

Title:
  [SRU] nova-* services do not start if rsyslog is not yet started

Status in ubuntu-cloud-archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  In Progress
Status in ubuntu-cloud-archive juno series:
  In Progress
Status in ubuntu-cloud-archive kilo series:
  Confirmed
Status in ubuntu-cloud-archive liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in oslo.log:
  Fix Committed
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.log package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  In Progress
Status in python-oslo.log source package in Trusty:
  Invalid
Status in nova source package in Utopic:
  Won't Fix
Status in python-oslo.log source package in Utopic:
  Invalid
Status in nova source package in Vivid:
  Invalid
Status in python-oslo.log source package in Vivid:
  Confirmed
Status in nova source package in Wily:
  Invalid
Status in python-oslo.log source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * If Nova services are configured to log to syslog (use_syslog=True) they
 will currently fail with ECONNREFUSED if they cannot connect to syslog.
 This patch adds support for allowing nova to retry connecting a
 configurable number of times before printing an error message and
 continuing with startup.

  [Test Case]

   * Configure nova with use_syslog=True in nova.conf, stop the rsyslog service
 and restart nova services. Check the upstart nova logs to see retries
 occurring, then start rsyslog and observe the connection succeed and
 nova-compute start up.

  [Regression Potential]

   * None

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1459046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474959] Re: Cloud Image launched by Heat, creates a ec2-user user without Shell.

2015-07-16 Thread Steven Hardy
*** This bug is a duplicate of bug 1474194 ***
https://bugs.launchpad.net/bugs/1474194

If you're getting ec2-user, then you need to set instance_user to an
empty string, as I mentioned in comment #1, and this is a duplicate of
bug #1474194

** Changed in: cloud-init
   Status: New => Invalid

** This bug has been marked a duplicate of bug 1474194
   When launching a template with the OS::Nova::Server type the 
user_data_format attribute determines the user on Ubuntu images

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1474959

Title:
  Cloud Image launched by Heat, creates a ec2-user user without Shell.

Status in cloud-init:
  Invalid
Status in heat:
  New

Bug description:
  Guys,

  If I launch an Ubuntu Trusty Instance using Heat, there is no ubuntu
  user available.

  Instead, there is an ec2-user user, without a shell!

  Look:

  No ubuntu user:
  ---
  username@kilo-1:~$ ssh ubuntu@172.31.254.158
  Permission denied (publickey).
  ---

  Instead, there is an ec2-user user without a shell:
  ---
  username@kilo-1:~$ ssh ec2-user@172.31.254.158
  Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-57-generic x86_64)

  ...

  $

  $ bash -i
  ec2-user@ubuntu-1:~$ grep ec2-user /etc/passwd
  ec2-user:x:1000:1000::/home/ec2-user:
  ---

  No shell (/bin/bash) for ec2-user user!

  Heat template block:
  ---
ubuntusrv1:
  type: OS::Nova::Server
  properties:
name: 
key_name: { get_param: 'ssh_key' }
image: { get_param: 'ubuntusrv1_image' }
flavor: m1.small
networks:
  - network: { get_resource: data_sub_net1 }
  ---

  But if I launch the very same Ubuntu Trusty image using Horizon, the
  ubuntu user becomes available without any problems.

  And if you specify admin_user: cloud, for example, it also has no
  shell.

  I'm using OpenStack Kilo, on top of Trusty using Ubuntu Cloud
  Archives.

  Trusty Image: http://uec-images.ubuntu.com/releases/14.04.2/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img

  Thanks!
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1474959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475244] [NEW] Removing rule from policy causes DBError while updating firewall

2015-07-16 Thread Elena Ezhova
Public bug reported:

When a firewall rule is being removed, _rpc_update_firewall is called,
which attempts to get the list of routers associated with the
firewall. [1] Passing the built-in function id instead of firewall_id to
self.get_firewall_routers leads to the following DBError:

2015-07-16 09:39:18.189 DEBUG 
neutron_fwaas.db.firewall.firewall_router_insertion_db 
[req-4c46e87c-a8c0-4edb-94c9-f1bf63a165cd admin 
d824ce1d57644755a6e7e62681c38af3] 
neutron_fwaas.services.firewall.fwaas_plugin.FirewallPlugin 
method get_firewall_routers called with arguments (<neutron.context.Context 
object at 0x7fac41e43ed0>, <built-in function id>) {} from 
(pid=2443) wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py:45
2015-07-16 09:39:18.193 ERROR oslo_db.sqlalchemy.exc_filters 
[req-4c46e87c-a8c0-4edb-94c9-f1bf63a165cd admin 
d824ce1d57644755a6e7e62681c38af3] DB exception wrapped.
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1139, in _execute_context
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     context)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 450, in do_execute
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/cursors.py, line 132, in execute
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     query = query % self._escape_args(args, conn)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/cursors.py, line 98, in _escape_args
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     return tuple(conn.escape(arg) for arg in args)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/cursors.py, line 98, in <genexpr>
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     return tuple(conn.escape(arg) for arg in args)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/connections.py, line 729, in escape
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     return escape_item(obj, self.charset, mapping=mapping)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/converters.py, line 33, in escape_item
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     val = encoder(val, mapping)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/converters.py, line 74, in escape_unicode
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     return escape_str(value, mapping)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/converters.py, line 71, in escape_str
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     return '%s' % escape_string(value, mapping)
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters   File /usr/local/lib/python2.7/dist-packages/pymysql/converters.py, line 68, in escape_string
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters     lambda match: ESCAPE_MAP.get(match.group(0)), value),))
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters TypeError: expected string or buffer
2015-07-16 09:39:18.193 TRACE oslo_db.sqlalchemy.exc_filters 
2015-07-16 09:39:18.205 ERROR neutron.api.v2.resource 
[req-4c46e87c-a8c0-4edb-94c9-f1bf63a165cd admin 
d824ce1d57644755a6e7e62681c38af3] remove_rule 
failed

The complete trace can be found here: [2]

[1] 
https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/fwaas_plugin.py#L164-L175
[2] http://paste.openstack.org/show/380339/
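
A hedged sketch of the suspected bug, reconstructed from the trace above
and [1] (not the verbatim fwaas_plugin.py code):

def _rpc_update_firewall(self, context, firewall_id):
    status_update = {'firewall': {'status': 'PENDING_UPDATE'}}
    fw = super(FirewallPlugin, self).update_firewall(
        context, firewall_id, status_update)
    if fw:
        fw_with_rids = dict(fw)
        # Bug: 'id' here is Python's builtin function, not the firewall
        # UUID, so pymysql later fails to escape a function object
        # ("TypeError: expected string or buffer").
        fw_with_rids['add-router-ids'] = self.get_firewall_routers(
            context, id)  # should be firewall_id
        self.agent_rpc.update_firewall(context, fw_with_rids)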

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: 

[Yahoo-eng-team] [Bug 1475252] [NEW] occasional FT failure with Row removed from DB

2015-07-16 Thread YAMAMOTO Takashi
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlJ1bnRpbWVFcnJvcjogUm93IHJlbW92ZWQgZnJvbSBEQiBkdXJpbmcgbGlzdGluZy4gUmVxdWVzdCBpbmZvOiBUYWJsZT1Qb3J0LiBDb2x1bW5zPVsnbmFtZScsICdvdGhlcl9jb25maWcnLCAndGFnJ10uIFJlY29yZHM9Tm9uZS5cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzA0Mzk3MDE3NH0=


ft1.48: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_bridges_ports_vxlan(native)_StringException:
 Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-07-16 05:45:28,544 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Mapping 
physical network physnet to bridge br-int746017284
2015-07-16 05:45:28,582  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int746017284 exceeds the 15 character limitation. It was 
shortened to int-br-inc6274e to fit.
2015-07-16 05:45:28,582  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int746017284 exceeds the 15 character limitation. It was 
shortened to phy-br-inc6274e to fit.
2015-07-16 05:45:29,206ERROR [neutron.agent.ovsdb.impl_idl] Traceback (most 
recent call last):
  File neutron/agent/ovsdb/native/connection.py, line 84, in run
txn.results.put(txn.do_commit())
  File neutron/agent/ovsdb/impl_idl.py, line 92, in do_commit
ctx.reraise = False
  File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File neutron/agent/ovsdb/impl_idl.py, line 87, in do_commit
command.run_idl(txn)
  File neutron/agent/ovsdb/native/commands.py, line 389, in run_idl
Records=%(records)s.) % self.requested_info)
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.

2015-07-16 05:45:29,207ERROR [neutron.agent.ovsdb.native.commands] Error 
executing command
Traceback (most recent call last):
  File neutron/agent/ovsdb/native/commands.py, line 35, in execute
txn.add(self)
  File neutron/agent/ovsdb/api.py, line 70, in __exit__
self.result = self.commit()
  File neutron/agent/ovsdb/impl_idl.py, line 70, in commit
raise result.ex
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.
}}}

Traceback (most recent call last):
  File neutron/tests/functional/agent/test_l2_ovs_agent.py, line 284, in 
test_assert_bridges_ports_vxlan
agent = self.create_agent()
  File neutron/tests/functional/agent/test_l2_ovs_agent.py, line 112, in 
create_agent
conf=self.config)
  File neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, 
line 281, in __init__
self._restore_local_vlan_map()
  File neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, 
line 316, in _restore_local_vlan_map
Port, columns=[name, other_config, tag])
  File neutron/agent/common/ovs_lib.py, line 148, in db_list
execute(check_error=check_error, log_errors=log_errors))
  File neutron/agent/ovsdb/native/commands.py, line 42, in execute
ctx.reraise = False
  File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File neutron/agent/ovsdb/native/commands.py, line 35, in execute
txn.add(self)
  File neutron/agent/ovsdb/api.py, line 70, in __exit__
self.result = self.commit()
  File neutron/agent/ovsdb/impl_idl.py, line 70, in commit
raise result.ex
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475252

Title:
  occasional FT failure with Row removed from DB

Status in neutron:
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlJ1bnRpbWVFcnJvcjogUm93IHJlbW92ZWQgZnJvbSBEQiBkdXJpbmcgbGlzdGluZy4gUmVxdWVzdCBpbmZvOiBUYWJsZT1Qb3J0LiBDb2x1bW5zPVsnbmFtZScsICdvdGhlcl9jb25maWcnLCAndGFnJ10uIFJlY29yZHM9Tm9uZS5cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzA0Mzk3MDE3NH0=

  
  ft1.48: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_bridges_ports_vxlan(native)_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{
  2015-07-16 05:45:28,544 INFO 

[Yahoo-eng-team] [Bug 1475253] [NEW] occasional FT failure with Row removed from DB

2015-07-16 Thread YAMAMOTO Takashi
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlJ1bnRpbWVFcnJvcjogUm93IHJlbW92ZWQgZnJvbSBEQiBkdXJpbmcgbGlzdGluZy4gUmVxdWVzdCBpbmZvOiBUYWJsZT1Qb3J0LiBDb2x1bW5zPVsnbmFtZScsICdvdGhlcl9jb25maWcnLCAndGFnJ10uIFJlY29yZHM9Tm9uZS5cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzA0Mzk3MDE3NH0=


ft1.48: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_bridges_ports_vxlan(native)_StringException:
 Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-07-16 05:45:28,544 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Mapping 
physical network physnet to bridge br-int746017284
2015-07-16 05:45:28,582  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int746017284 exceeds the 15 character limitation. It was 
shortened to int-br-inc6274e to fit.
2015-07-16 05:45:28,582  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int746017284 exceeds the 15 character limitation. It was 
shortened to phy-br-inc6274e to fit.
2015-07-16 05:45:29,206ERROR [neutron.agent.ovsdb.impl_idl] Traceback (most 
recent call last):
  File neutron/agent/ovsdb/native/connection.py, line 84, in run
txn.results.put(txn.do_commit())
  File neutron/agent/ovsdb/impl_idl.py, line 92, in do_commit
ctx.reraise = False
  File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File neutron/agent/ovsdb/impl_idl.py, line 87, in do_commit
command.run_idl(txn)
  File neutron/agent/ovsdb/native/commands.py, line 389, in run_idl
Records=%(records)s.) % self.requested_info)
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.

2015-07-16 05:45:29,207ERROR [neutron.agent.ovsdb.native.commands] Error 
executing command
Traceback (most recent call last):
  File neutron/agent/ovsdb/native/commands.py, line 35, in execute
txn.add(self)
  File neutron/agent/ovsdb/api.py, line 70, in __exit__
self.result = self.commit()
  File neutron/agent/ovsdb/impl_idl.py, line 70, in commit
raise result.ex
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.
}}}

Traceback (most recent call last):
  File neutron/tests/functional/agent/test_l2_ovs_agent.py, line 284, in 
test_assert_bridges_ports_vxlan
agent = self.create_agent()
  File neutron/tests/functional/agent/test_l2_ovs_agent.py, line 112, in 
create_agent
conf=self.config)
  File neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, 
line 281, in __init__
self._restore_local_vlan_map()
  File neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py, 
line 316, in _restore_local_vlan_map
Port, columns=[name, other_config, tag])
  File neutron/agent/common/ovs_lib.py, line 148, in db_list
execute(check_error=check_error, log_errors=log_errors))
  File neutron/agent/ovsdb/native/commands.py, line 42, in execute
ctx.reraise = False
  File 
/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File neutron/agent/ovsdb/native/commands.py, line 35, in execute
txn.add(self)
  File neutron/agent/ovsdb/api.py, line 70, in __exit__
self.result = self.commit()
  File neutron/agent/ovsdb/impl_idl.py, line 70, in commit
raise result.ex
RuntimeError: Row removed from DB during listing. Request info: Table=Port. 
Columns=['name', 'other_config', 'tag']. Records=None.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475253

Title:
  occasional FT failure with Row removed from DB

Status in neutron:
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlJ1bnRpbWVFcnJvcjogUm93IHJlbW92ZWQgZnJvbSBEQiBkdXJpbmcgbGlzdGluZy4gUmVxdWVzdCBpbmZvOiBUYWJsZT1Qb3J0LiBDb2x1bW5zPVsnbmFtZScsICdvdGhlcl9jb25maWcnLCAndGFnJ10uIFJlY29yZHM9Tm9uZS5cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzA0Mzk3MDE3NH0=

  
  ft1.48: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_assert_bridges_ports_vxlan(native)_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{
  2015-07-16 05:45:28,544 INFO 

[Yahoo-eng-team] [Bug 1475215] [NEW] cloudinit.cs_utils.Cepko doesn't work under Python 3

2015-07-16 Thread Dan Watkins
Public bug reported:

In Python 3, Serial objects expect to be communicated with in bytes, but
we still try to pass strings in.
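
A minimal illustration of the failure mode with pyserial (the call site
is illustrative, not cloud-init's actual code):

import serial

port = serial.Serial('/dev/ttyS1')
port.write('<\n\n>')   # Python 3: TypeError, str where bytes is expected
port.write(b'<\n\n>')  # works on both Python 2 and Python 3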

** Affects: cloud-init
 Importance: Undecided
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: In Progress

** Affects: ubuntu
 Importance: Undecided
 Status: New

** Also affects: ubuntu
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1475215

Title:
  cloudinit.cs_utils.Cepko doesn't work under Python 3

Status in cloud-init:
  In Progress
Status in Ubuntu:
  New

Bug description:
  In Python 3, Serial objects expect to be communicated with in bytes,
  but we still try to pass strings in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1475215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475218] [NEW] oslo db retry decorator doesn't apply to add_router_interface

2015-07-16 Thread Kevin Benton
Public bug reported:

http://logs.openstack.org/64/197564/17/check/gate-tempest-dsvm-neutron-full/1493b21/logs/screen-q-svc.txt.gz#_2015-07-16_07_38_42_299

2015-07-16 07:38:42.299 ERROR neutron.api.v2.resource 
[req-38378c88-9223-434e-92b5-2124b1a6442a 
tempest-TestNetworkAdvancedServerOps-2072830200 
tempest-TestNetworkAdvancedServerOps-1241923202] add_router_interface failed
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 210, in _handle_action
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 291, in 
add_router_interface
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource context, router, 
interface_info['subnet_id'], device_owner)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_db.py, line 603, in 
_add_interface_by_subnet
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource 'name': ''}}), 
[subnet], True
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 1015, in 
create_port
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource result, 
mech_context = self._create_port_db(context, port)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 990, in 
_create_port_db
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_port(context, port)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 837, in 
create_port
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource 
self.ipam.allocate_ips_for_port_and_store(context, port, port_id)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py, line 205, in 
allocate_ips_for_port_and_store
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource ips = 
self._allocate_ips_for_port(context, port)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py, line 396, in 
_allocate_ips_for_port
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource p['mac_address'])
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py, line 329, in 
_allocate_fixed_ips
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource context, 
fixed['subnet_id'], fixed['ip_address'])
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py, line 147, in 
_allocate_specific_ip
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource for ip_range in 
results:
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2516, in 
__iter__
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource return 
self._execute_and_instances(context)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2531, in 
_execute_and_instances
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 914, 
in execute
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource return 
meth(self, multiparams, params)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py, line 323, 
in _execute_on_connection
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource return 
connection._execute_clauseelement(self, multiparams, params)
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1010, 
in _execute_clauseelement
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource compiled_sql, 
distilled_params
2015-07-16 07:38:42.299 9946 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1146, 
in _execute_context
2015-07-16 07:38:42.299 9946 ERROR 
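
The trace is cut off above, but the title points at oslo.db's retry
decorator. A hedged sketch of what applying it could look like (the
decorator is real oslo.db API; the class context and method body are
elided):

from oslo_db import api as oslo_db_api

# Retry transient DB failures (e.g. deadlocks) instead of letting them
# escape from add_router_interface as API errors.
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def add_router_interface(self, context, router_id, interface_info):
    ...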

[Yahoo-eng-team] [Bug 1196924] Re: Stop and Delete operations should give the Guest a chance to shutdown

2015-07-16 Thread James Page
** Changed in: nova (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1196924

Title:
  Stop and Delete operations should give the Guest a chance to shutdown

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  In Progress

Bug description:
  This feature will cause an ACPI event to be sent to the system while
  shutting down, and the acpid running inside the system can catch the
  event, thus giving the system a chance to shut down cleanly.

  [Impact]

   * VMs are shut down without any signal/notification from the
  hypervisor level, so services running inside VMs have no chance to
  perform a clean shutoff

  [Test Case]

   * 1. stop a VM
     2. the VM is shut down without any notification

  This can be easily seen by ssh-ing into the system before shutting
  down. With the patch in place, the ssh session will be closed during
  shutdown, because sshd has the chance to close the connection before
  being brought down. Without the patch, the ssh session will just hang
  there for a while until it times out, because the connection is not
  promptly closed.

  
  To leverage the clean shutdown feature, one can create a file named 
/etc/acpi/events/power that contains the following:

event=button/power
action=/etc/acpi/power.sh %e

  Then create a file named /etc/acpi/power.sh that contains whatever is
  required to gracefully shut down a particular server (VM).
  With acpid running, shutdown of the VM will cause the rule in
  /etc/acpi/events/power to trigger the script in /etc/acpi/power.sh, thus
  cleanly shutting down the system.

  
  [Regression Potential]

   * none

  
  Currently in libvirt, stop and delete operations simply destroy the
  underlying VM. Some guest OSes do not react well to this type of power
  failure, and it would be better if these operations followed the same
  approach as a soft_reboot and gave the guest a chance to shut down
  gracefully. Even where a VM is being deleted, it may be booted from a
  volume which will be reused on another server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475172] [NEW] VPNService status doesn't go to PENDING_UPDATE when being updated

2015-07-16 Thread huaxiang
Public bug reported:

In the function update_vpnservice in module
neutron-vpnaas.services.vpn.plugin, there should be a statement assigning
PENDING_UPDATE to vpnservice['vpnservice']['status'] before invoking the
corresponding DB operation. Am I missing something?
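
Roughly the statement the report asks about, as a sketch (the constants
module and the DB helper name are assumptions; see also the maintainer's
reply later in this digest, which explains that updates complete
synchronously):

from neutron.plugins.common import constants

def update_vpnservice(self, context, vpnservice_id, vpnservice):
    # The statement the report expects: flag the resource as being
    # updated before the DB call.
    vpnservice['vpnservice']['status'] = constants.PENDING_UPDATE
    return self._update_vpnservice_db(context, vpnservice_id, vpnservice)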

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475172

Title:
  VPNService status doesn't go to PENDING_UPDATE when being updated

Status in neutron:
  New

Bug description:
  In the function update_vpnservice in module
  neutron-vpnaas.services.vpn.plugin, there should be a statement
  assigning PENDING_UPDATE to vpnservice['vpnservice']['status'] before
  invoking the corresponding DB operation. Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475173] [NEW] IPsecSiteConnection status doesn't go to PENDING_UPDATE when being updated

2015-07-16 Thread huaxiang
Public bug reported:

In the function update_ipsec_site_connection in module
neutron-vpnaas.services.vpn.plugin, there should be a statement assigning
PENDING_UPDATE to
ipsec_site_connection['ipsec_site_connection']['status'] before invoking
the corresponding DB operation. Am I missing something?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475173

Title:
  IPsecSiteConnection status doesn't go to PENDING_UPDATE when being
  updated

Status in neutron:
  New

Bug description:
  In the function update_ipsec_site_connection in module
  neutron-vpnaas.services.vpn.plugin, there should be a statement
  assigning PENDING_UPDATE to
  ipsec_site_connection['ipsec_site_connection']['status'] before
  invoking the corresponding DB operation. Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475173/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475202] [NEW] Snapshot deleting of attached volume fails with remotefs volume drivers

2015-07-16 Thread Dmitry Guryanov
Public bug reported:

cinder create --image-id 3dc83685-ed82-444c-8863-1e962eb33de8 1  # ID of
cirros image

nova boot qwe  --flavor m1.tiny --block-device id=d62c5786-1d13-46bb-
be13-3b110c144de7,source=volume,dest=volume,type=disk,bootindex=0

cinder snapshot-create --force=True 46b22595-31b0-41ca-8214-8ad6b81a06b6

cinder snapshot-delete 43fb72a4-963f-45f7-8b42-89e7c2cbd720


Then check nova-compute log:

2015-07-16 08:44:26.841 ERROR nova.virt.libvirt.driver 
[req-f92f3dd2-1bef-4c2c-8208-54d765592985 nova service] Error occurred during 
volume_snapshot_delete, sending error status to Cinder.
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver Traceback (most 
recent call last):
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2004, in 
volume_snapshot_delete
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver 
self._volume_snapshot_delete(context, instance, volume_id,
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 1939, in 
_volume_snapshot_delete
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver dev = 
guest.get_block_device(rebase_disk)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/guest.py, line 302, in rebase
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver self._disk, 
base, self.REBASE_DEFAULT_BANDWIDTH, flags=flags)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 183, in doit
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver result = 
proxy_call(self._autowrap, f, *args, **kwargs)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 141, in proxy_call
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver rv = 
execute(f, *args, **kwargs)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 122, in execute
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver six.reraise(c, 
e, tb)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 80, in tworker
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver rv = 
meth(*args, **kwargs)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/usr/lib/python2.7/site-packages/libvirt.py, line 865, in blockRebase
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver if ret == -1: 
raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver libvirtError: 
invalid argument: flag VIR_DOMAIN_BLOCK_REBASE_RELATIVE is valid only with 
non-null base
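
A hedged reconstruction of the failing call, based on the trace (domain
and path names are made up; not Nova's verbatim code):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')
flags = libvirt.VIR_DOMAIN_BLOCK_REBASE_RELATIVE

# What the trace shows: rebase is invoked with a null base while the
# RELATIVE flag is set, a combination libvirt rejects.
dom.blockRebase('vda', None, 0, flags=flags)            # libvirtError
# The flag is only valid with an explicit (non-null) base image:
dom.blockRebase('vda', 'volume-base.qcow2', 0, flags=flags)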

** Affects: nova
 Importance: Critical
 Assignee: Dmitry Guryanov (dguryanov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475202

Title:
  Snapshot deleting of attached volume fails with remotefs volume
  drivers

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  cinder create --image-id 3dc83685-ed82-444c-8863-1e962eb33de8 1  # ID
  of cirros image

  nova boot qwe  --flavor m1.tiny --block-device id=d62c5786-1d13-46bb-
  be13-3b110c144de7,source=volume,dest=volume,type=disk,bootindex=0

  cinder snapshot-create --force=True
  46b22595-31b0-41ca-8214-8ad6b81a06b6

  cinder snapshot-delete 43fb72a4-963f-45f7-8b42-89e7c2cbd720

  
  Then check nova-compute log:

  2015-07-16 08:44:26.841 ERROR nova.virt.libvirt.driver 
[req-f92f3dd2-1bef-4c2c-8208-54d765592985 nova service] Error occurred during 
volume_snapshot_delete, sending error status to Cinder.
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver Traceback (most 
recent call last):
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 2004, in 
volume_snapshot_delete
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver 
self._volume_snapshot_delete(context, instance, volume_id,
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 1939, in 
_volume_snapshot_delete
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver dev = 
guest.get_block_device(rebase_disk)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 
/opt/stack/new/nova/nova/virt/libvirt/guest.py, line 302, in rebase
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver self._disk, 
base, self.REBASE_DEFAULT_BANDWIDTH, flags=flags)
  2015-07-16 08:44:26.841 29626 ERROR nova.virt.libvirt.driver   File 

[Yahoo-eng-team] [Bug 1475176] [NEW] SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is not cleaning up the subnet pool at the end

2015-07-16 Thread Numan Siddique
Public bug reported:

API test -
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
is not cleaning up the subnet pool at the end

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475176

Title:
  SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is
  not cleaning up the subnet pool at the end

Status in neutron:
  In Progress

Bug description:
  API test -
  
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
  is not cleaning up the subnet pool at the end

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475175] [NEW] SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is not cleaning up the subnet pool at the end

2015-07-16 Thread Numan Siddique
*** This bug is a duplicate of bug 1475176 ***
https://bugs.launchpad.net/bugs/1475176

Public bug reported:

API test -
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
is not cleaning up the subnet pool at the end

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475175

Title:
  SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is
  not cleaning up the subnet pool at the end

Status in neutron:
  New

Bug description:
  API test -
  
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
  is not cleaning up the subnet pool at the end

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475172] Re: VPNService status doesn't go to PENDING_UPDATE when being updated

2015-07-16 Thread Paul Michali
The pending states are used for cases when an action takes a long time -
namely, longer than the request response. For example, when creating an
IPSec connection, there is a period of time where the tunnel must be set
up and negotiated. As a result, the create request will return a
response, but the operation is still in progress. The actual creation
may take 30+ seconds to complete.

All update requests are performed immediately, and will have completed
by the time the request returns a response. Hence, no intermediate
PENDING state is needed.

This operates as designed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475172

Title:
  VPNService status doesn't go to PENDING_UPDATE when being updated

Status in neutron:
  Invalid

Bug description:
  In the function update_vpnservice in module
  neutron-vpnaas.services.vpn.plugin, there should be a statement
  assigning PENDING_UPDATE to vpnservice['vpnservice']['status'] before
  invoking the corresponding DB operation. Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467570] Re: Nova can't provision instance from snapshot with a ceph backend

2015-07-16 Thread lyanchih
** No longer affects: horizon

** Changed in: nova
 Assignee: lyanchih (lyanchih) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467570

Title:
  Nova can't provision instance from snapshot with a ceph backend

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is a weird issue that does not happen in our Juno setup, but
  happens in our Kilo setup. The configuration between the two setups is
  pretty much the same, with only kilo-specific changes done (namely,
  moving lines around to new sections).

  Here's how to reproduce:
  1.Provision an instance.
  2.Make a snapshot of this instance.
  3.Try to provision an instance with that snapshot.

  Nova-compute will complain that it can't find the disk and the
  instance will fall in error.

  Here's what the default behavior is supposed to be from my observations:
  -When the image is uploaded into ceph, a snapshot is created automatically
  inside ceph (this is NOT an instance snapshot per se, but a ceph-internal
  snapshot).
  -When an instance is booted from image in nova, this snapshot gets a clone in 
the nova ceph pool. Nova then uses that clone as the instance's disk. This is 
called copy-on-write cloning.
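
  A sketch of that copy-on-write clone step using the rbd Python
  bindings (pool and image names are made up):

  import rados
  import rbd

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  images_ioctx = cluster.open_ioctx('images')  # glance pool
  vms_ioctx = cluster.open_ioctx('vms')        # nova pool

  # Boot-from-image: clone the image's ceph-internal 'snap' snapshot
  # (which must be protected) into the nova pool; the clone becomes the
  # instance disk. Per this report, the equivalent step never happens
  # when booting from an instance snapshot.
  rbd.RBD().clone(images_ioctx, 'image-uuid', 'snap',
                  vms_ioctx, 'instance-uuid_disk')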

  Here's when things get funky: -When an instance is booted from a
  snapshot, the copy-on-write cloning does not happen. Nova looks for
  the disk and, of course, fails to find it in its pool, thus failing to
  provision the instance. There's no trace anywhere of the copy-on-
  write clone failing (in part because ceph doesn't log client commands,
  from what I see).

  The compute logs I got are in this pastebin :
  http://pastebin.com/ADHTEnhn

  There's a few things I notice here that I'd like to point out :

  -Nova creates an ephemeral drive file, then proceeds to delete it
  before using rbd_utils instead. While strange, this may be the
  intended but somewhat dirty behavior, as nova considers it an ephemeral
  instance before realizing that it's actually a ceph instance and
  doesn't need its ephemeral disk. Or maybe these conjectures are
  completely wrong and this is part of the issue.

  - Nova creates the image (I'm guessing it's the copy-on-write cloning
  happening here). What exactly happens here isn't very clear, but then
  it complains that it can't find the clone in its pool to use as block
  device.

  This issue does not happen on ephemeral storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475297] [NEW] Unbind segment not working correctly

2015-07-16 Thread Sam Betts
Public bug reported:

A recent commit https://review.openstack.org/#/c/196908/21 changed the
order of some of the calls in update_port and it's causing a failure of
segment unbind in the Cisco nexus driver.

** Affects: neutron
 Importance: Undecided
 Assignee: Sam Betts (sambetts)
 Status: In Progress

** Description changed:

- A recent commit changed the order of some of the calls and it's causing a
- failure of segment unbind in the Cisco nexus driver.
+ A recent commit https://review.openstack.org/#/c/196908/21 changed the
+ order of some of the calls and it's causing a failure of segment unbind
+ in the Cisco nexus driver.

** Description changed:

  A recent commit https://review.openstack.org/#/c/196908/21 changed the
- order of some of the calls and it's causing a failure of segment unbind
- in the Cisco nexus driver.
+ order of some of the calls in update_port and it's causing a failure of
+ segment unbind in the Cisco nexus driver.

** Changed in: neutron
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475297

Title:
  Unbind segment not working correctly

Status in neutron:
  In Progress

Bug description:
  A recent commit https://review.openstack.org/#/c/196908/21 changed the
  order of some of the calls in update_port and it's causing a failure of
  segment unbind in the Cisco nexus driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475173] Re: IPsecSiteConnection status doesn't go to PENDING_UPDATE when being updated

2015-07-16 Thread Paul Michali
The pending states are used for cases when an action takes a long time -
namely, longer than the request response. For example, when creating an
IPSec connection, there is a period of time where the tunnel must be set
up and negotiated. As a result, the create request will return a
response, but the operation is still in progress. The actual creation
may take 30+ seconds to complete.

All update requests are performed immediately and will have completed
by the time the request returns a response. Hence, no intermediate
PENDING state is needed.

This operates as designed.
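
To make the convention concrete, here is a minimal sketch (not the
actual neutron-vpnaas code) of the asymmetry described above:

    PENDING_CREATE = "PENDING_CREATE"

    def create_ipsec_site_connection(conn, start_negotiation):
        # Tunnel setup can take 30+ seconds, so the API response goes
        # out while the resource is still PENDING_CREATE; the driver
        # flips it to ACTIVE (or ERROR) later, asynchronously.
        conn["status"] = PENDING_CREATE
        start_negotiation(conn)
        return conn

    def update_vpnservice(service, changes):
        # The update is applied before the response is returned, so
        # there is no window in which a PENDING_UPDATE status could
        # ever be observed.
        service.update(changes)
        return service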

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475173

Title:
  IPsecSiteConnection status doesn't go to PENDING_UPDATE when being
  updated

Status in neutron:
  Invalid

Bug description:
  In the function update_ipsec_site_connection in module neutron-
  vpnaas.services.vpn.plugin, there should be a statement assigning
  PENDING_UPDATE to
  ipsec_site_connection['ipsec_site_connection']['status'] before
  invoking the corresponding db operation. Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475173/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475256] [NEW] sriov: VFs attributes (vlan, mac address) are not cleaned up after port delete

2015-07-16 Thread Roman Bogorodskiy
Public bug reported:

Imagine we create a port like this:

$ neutron port-create  --binding:vnic_type=direct --name rjuly013 sriovtest0
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       |                                                                                      |
| binding:profile       | {}                                                                                   |
| binding:vif_details   | {}                                                                                   |
| binding:vif_type      | unbound                                                                              |
| binding:vnic_type     | direct                                                                               |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| fixed_ips             | {"subnet_id": "ffa84ccf-ba49-4a23-a8ab-9295bc7d93f2", "ip_address": "166.168.0.15"} |
| id                    | 2ec3b30e-e3cf-4a8f-a7cb-68a910a59e9a                                                 |
| mac_address           | fa:16:3e:ca:11:87                                                                    |
| name                  | rjuly013                                                                             |
| network_id            | 26a0f22b-42b0-41d2-9b76-41270ce9b655                                                 |
| security_groups       | b0ef012a-96b2-458f-bd28-c46306f063fa                                                 |
| status                | DOWN                                                                                 |
| tenant_id             | 2ebabf166ecd43dd8093b70a37f26be4                                                     |
+-----------------------+--------------------------------------------------------------------------------------+
$

And then create a VM with this port:

$ nova boot --image 3c3a5387-7471-4e88-a19e-09e0c9a08707 --flavor 3
--nic port-id=2ec3b30e-e3cf-4a8f-a7cb-68a910a59e9a rjuly013

Now we can see a VF configured:

$ ip link|grep fa:16:3e:ca:11:87
vf 7 MAC fa:16:3e:ca:11:87, spoof checking on, link-state auto
$

After deletion of VM, we can see that the VF is still configured:

$ ip link|grep fa:16:3e:ca:11:87
vf 7 MAC fa:16:3e:ca:11:87, spoof checking on, link-state auto
$

This situation could cause trouble: for example, if a user wanted to
create a new port with the mac address of the removed port, and a
port were allocated on the same PF, there would be 2 VFs with the
same MAC address as a result. This could cause unexpected behavior,
with 'ixgbe' at least.
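
A minimal sketch of the cleanup one would expect on port delete (this
is not existing neutron code, and whether a given driver accepts the
zero MAC as a reset value is driver-dependent); the PF name and VF
index are placeholders:

    import subprocess

    def reset_vf(pf_dev, vf_index):
        # Zero MAC is the conventional "unset" value for a VF.
        subprocess.check_call(
            ["ip", "link", "set", "dev", pf_dev,
             "vf", str(vf_index), "mac", "00:00:00:00:00:00"])
        # VLAN 0 removes any VLAN tag assigned to the VF.
        subprocess.check_call(
            ["ip", "link", "set", "dev", pf_dev,
             "vf", str(vf_index), "vlan", "0"])

    reset_vf("eth2", 7)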

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475256

Title:
  sriov: VFs attributes (vlan, mac address) are not cleaned up after
  port delete

Status in neutron:
  New

Bug description:
  Imagine we create a port like this:

  $ neutron port-create  --binding:vnic_type=direct --name rjuly013 sriovtest0
  Created a new port:
  
  +-----------------------+--------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                |
  +-----------------------+--------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                 |
  | allowed_address_pairs |                                                                                      |
  | binding:host_id       |                                                                                      |
  | binding:profile       | {}                                                                                   |
  | binding:vif_details   | {}                                                                                   |
  | binding:vif_type      | unbound                                                                              |
  | binding:vnic_type     | direct                                                                               |
  | device_id             |

[Yahoo-eng-team] [Bug 1475279] [NEW] Horizon failing with os profiler issue

2015-07-16 Thread Sudheer Kalla
Public bug reported:

Whenever I try to open Horizon I get an error and Horizon won't show up.
The following is the error I got:

[Thu Jul 16 08:11:48.966357 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] Traceback (most recent call last):, referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966400 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/openstack_dashboard/wsgi/django.wsgi, line 
14, in module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966534 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] application = get_wsgi_application(), referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966572 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/django/core/wsgi.py, line 14, in 
get_wsgi_application, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966677 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] django.setup(), referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966708 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/django/__init__.py, line 20, in setup, 
referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966807 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] configure_logging(settings.LOGGING_CONFIG, 
settings.LOGGING), referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.966844 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/django/conf/__init__.py, line 46, in 
__getattr__, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967004 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] self._setup(name), referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967034 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/django/conf/__init__.py, line 42, in _setup, 
referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967086 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] self._wrapped = Settings(settings_module), 
referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967112 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/django/conf/__init__.py, line 94, in 
__init__, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967150 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] mod = 
importlib.import_module(self.SETTINGS_MODULE), referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967178 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File /usr/lib/python2.7/importlib/__init__.py, 
line 37, in import_module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967284 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] __import__(name), referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967314 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py,
 line 27, in module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967512 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] from openstack_dashboard import exceptions, 
referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967582 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py,
 line 22, in module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967706 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] from keystoneclient import exceptions as 
keystoneclient, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967737 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py,
 line 34, in module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.967946 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270] from keystoneclient import client, referer: 
https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.968005 2015] [:error] [pid 3442:tid 139893606598400] 
[client 192.168.122.1:37270]   File 
/usr/lib/python2.7/dist-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py,
 line 13, in module, referer: https://10.35.58.13:5290/project/
[Thu Jul 16 08:11:48.968132 2015] 

[Yahoo-eng-team] [Bug 1475326] [NEW] Wrong networking_odl url in README of ODL mechanism driver

2015-07-16 Thread Sylvain Afchain
Public bug reported:

The repository is now located here: https://git.openstack.org/openstack/networking-odl

** Affects: neutron
 Importance: Undecided
 Assignee: Sylvain Afchain (sylvain-afchain)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sylvain Afchain (sylvain-afchain)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475326

Title:
  Wrong networking_odl url in README of ODL mechanism driver

Status in neutron:
  New

Bug description:
  The repository is now located here: https://git.openstack.org/openstack/networking-odl

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475326] Re: Wrong networking_odl url in README of ODL mechanism driver

2015-07-16 Thread Sylvain Afchain
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475326

Title:
  Wrong networking_odl url in README of ODL mechanism driver

Status in neutron:
  Invalid

Bug description:
  The repository is now located here: https://git.openstack.org/openstack/networking-odl

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-07-16 Thread Louis Bouchard
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New
Status in nova source package in Trusty:
  New

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File "/opt/stack/old/nova/nova/compute/manager.py", line 4153, in _detach_volume
       connection_info = jsonutils.loads(bdm.connection_info)
     File "/opt/stack/old/nova/nova/openstack/common/jsonutils.py", line 164, in loads
       return json.loads(s)
     File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
       return _default_decoder.decode(s)
     File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
       obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - simply
  attaches a volume to an instance, waits for it to show up in the
  instance and then tries to detach it

  logstash query for this:

    message:"Exception during message handling" AND message:"expected
  string or buffer" AND message:"connection_info =
  jsonutils.loads(bdm.connection_info)" AND tags:"screen-n-cpu.txt"

  but it seems to be very rare

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475353] [NEW] _get_host_sysinfo_serial_os fails if the machine-id file is empty

2015-07-16 Thread Andrea Rosa
Public bug reported:

The _get_host_sysinfo_serial_os method tries to read the machine-id file
to get a UUID for the host operating system.
If the file is there but it is empty, the code will raise an exception
while it tries to parse the content of the file.

To reproduce the issue just add this test to the
nova/tests/unit/virt/libvirt/test_driver.py

def test_get_guest_config_sysinfo_serial_os_empty_machine_id(self):
    self.flags(sysinfo_serial="os", group="libvirt")

    real_open = __builtin__.open
    with contextlib.nested(
        mock.patch.object(__builtin__, "open"),
    ) as (mock_open, ):
        theuuid = ""

        def fake_open(filename, *args, **kwargs):
            if filename == "/etc/machine-id":
                h = mock.MagicMock()
                h.read.return_value = theuuid
                h.__enter__.return_value = h
                return h
            return real_open(filename, *args, **kwargs)

        mock_open.side_effect = fake_open

        self._test_get_guest_config_sysinfo_serial(None)
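
A minimal sketch of the kind of guard the fix needs (this is not the
actual patch; the exception type is illustrative): treat an empty
/etc/machine-id the same as a missing one instead of letting
uuid.UUID('') blow up with a ValueError.

    import uuid

    MACHINE_ID_PATH = "/etc/machine-id"

    def get_host_sysinfo_serial_os():
        with open(MACHINE_ID_PATH) as f:
            machine_id = f.read().strip()
        if not machine_id:
            # Raise a meaningful error instead of a bare ValueError.
            raise RuntimeError(
                "Unable to get host UUID: %s is empty" % MACHINE_ID_PATH)
        return str(uuid.UUID(machine_id))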

** Affects: nova
 Importance: Undecided
 Assignee: Andrea Rosa (andrea-rosa-m)
 Status: In Progress


** Tags: low-hanging-fruit

** Changed in: nova
 Assignee: (unassigned) => Andrea Rosa (andrea-rosa-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475353

Title:
  _get_host_sysinfo_serial_os fails if the machine-id file is empty

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The _get_host_sysinfo_serial_os method tries to read the machine-id
  file to get a UUID for the host operating system.
  If the file is there but it is empty, the code will raise an exception
  while it tries to parse the content of the file.

  To reproduce the issue just add this test to the
  nova/tests/unit/virt/libvirt/test_driver.py

  def test_get_guest_config_sysinfo_serial_os_empty_machine_id(self):
      self.flags(sysinfo_serial="os", group="libvirt")

      real_open = __builtin__.open
      with contextlib.nested(
          mock.patch.object(__builtin__, "open"),
      ) as (mock_open, ):
          theuuid = ""

          def fake_open(filename, *args, **kwargs):
              if filename == "/etc/machine-id":
                  h = mock.MagicMock()
                  h.read.return_value = theuuid
                  h.__enter__.return_value = h
                  return h
              return real_open(filename, *args, **kwargs)

          mock_open.side_effect = fake_open

          self._test_get_guest_config_sysinfo_serial(None)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457551] Re: Another Horizon login page vulnerability to a DoS attack

2015-07-16 Thread Lin Hua Cheng
** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1457551

Title:
  Another Horizon login page vulnerability to a DoS attack

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  This bug is very similar to: https://bugs.launchpad.net/bugs/1394370

  Steps to reproduce:
  1) Setup Horizon to use db as session engine (using this doc: 
http://docs.openstack.org/admin-guide-cloud/content/dashboard-session-database.html).
 I've used MySQL.
  2) Run 'for i in {1..100}; do curl -b "sessionid=a;" http://HORIZON__IP/auth/login/ > /dev/null; done' from your terminal.
  I've got 100 rows in django_session after this.

  I've used devstack installation just with updated master branch.
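
  A hedged Python equivalent of the curl loop above, for clarity: every
  request presenting an unknown session id makes Django allocate a
  fresh row in django_session. HORIZON__IP is a placeholder as in the
  original.

    import requests

    for _ in range(100):
        # The bogus cookie forces a new anonymous session to be created.
        requests.get("http://HORIZON__IP/auth/login/",
                     cookies={"sessionid": "a"})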

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1457551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275675] Re: Version change in ObjectField does not work with back-levelling

2015-07-16 Thread Nikola Đipanov
** Also affects: oslo.versionedobjects
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275675

Title:
  Version change in ObjectField does not work with back-levelling

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.versionedobjects:
  New

Bug description:
  When a NovaObject primitive is deserialized the object version is
  checked and an IncompatibleObjectVersion exception is raised if the
  serialized primitive is labelled with a version that is not known
  locally. The exception indicates what version is known locally, and
  the deserialization attempts to backport the primitive to the local
  version.

  If a NovaObject A has an ObjectField b containing NovaObject B and it
  is B that has the incompatible version, the version number in the
  exception will be the locally supported version for B. The
  deserialization will then attempt to backport the primitive of object A
  to the locally supported version number for object B.
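
  A hedged, self-contained sketch of that failure mode (this is not the
  actual serializer code; the helpers are stand-ins):

    class IncompatibleObjectVersion(Exception):
        def __init__(self, supported):
            super(IncompatibleObjectVersion, self).__init__(supported)
            self.supported = supported  # locally known version

    def hydrate(primitive, local_versions):
        name, version = primitive["name"], primitive["version"]
        if version not in local_versions[name]:
            raise IncompatibleObjectVersion(max(local_versions[name]))
        # Recurse into object-valued fields; this is where B's
        # exception escapes carrying B's version number.
        for value in primitive.get("fields", {}).values():
            if isinstance(value, dict) and "name" in value:
                hydrate(value, local_versions)
        return primitive

    def deserialize(primitive, local_versions, backport):
        try:
            return hydrate(primitive, local_versions)
        except IncompatibleObjectVersion as e:
            # `primitive` here is still A's primitive, so A gets
            # "backported" to a version number that belongs to B.
            return backport(primitive, e.supported)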

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448245] Re: VPNaaS UTs broken in neutron/tests/unit/extensions/test_l3.py

2015-07-16 Thread Thierry Carrez
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448245

Title:
  VPNaaS UTs broken in neutron/tests/unit/extensions/test_l3.py

Status in neutron:
  Fix Released
Status in neutron kilo series:
  In Progress

Bug description:
  Recently, VPNaaS repo UTs are failing in tests that inherit from
  Neutron tests. The tests worked on 4/22/2015 and were broken on
  4/24/2015. Will try to bisect to find the change in Neutron that
  affects the tests.

  Example failure:

  2015-04-24 06:40:39.838 | Captured pythonlogging:
  2015-04-24 06:40:39.838 | ~~~
  2015-04-24 06:40:39.838 | 2015-04-24 06:40:38,704ERROR 
[neutron.api.extensions] Extension path 'neutron/tests/unit/extensions' doesn't 
exist!
  2015-04-24 06:40:39.838 | 
  2015-04-24 06:40:39.838 | 
  2015-04-24 06:40:39.838 | Captured traceback:
  2015-04-24 06:40:39.838 | ~~~
  2015-04-24 06:40:39.838 | Traceback (most recent call last):
  2015-04-24 06:40:39.838 |   File 
neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py, line 886, in 
test_delete_router_interface_in_use_by_vpnservice
  2015-04-24 06:40:39.839 | expected_code=webob.exc.
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/src/neutron/neutron/tests/unit/extensions/test_l3.py,
 line 401, in _router_interface_action
  2015-04-24 06:40:39.839 | self.assertEqual(res.status_int, 
expected_code, msg)
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
  2015-04-24 06:40:39.839 | self.assertThat(observed, matcher, message)
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  2015-04-24 06:40:39.839 | raise mismatch_error
  2015-04-24 06:40:39.839 | testtools.matchers._impl.MismatchError: 200 != 
409
  2015-04-24 06:40:39.839 | 
  2015-04-24 06:40:39.839 | 
  2015-04-24 06:40:39.839 | Captured pythonlogging:
  2015-04-24 06:40:39.839 | ~~~
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Loading core plugin: 
neutron_vpnaas.tests.unit.db.vpn.test_vpn_db.TestVpnCorePlugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Service L3_ROUTER_NAT is supported by the core plugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Loading Plugin: neutron_vpnaas.services.vpn.plugin.VPNPlugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,704ERROR 
[neutron.api.extensions] Extension path 'neutron/tests/unit/extensions' doesn't 
exist!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474907] Re: HTTP 500 error during create task

2015-07-16 Thread Long Quan Sha
** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1474907

Title:
  HTTP 500 error during create task

Status in Glance:
  Invalid

Bug description:
  When I run glance task-create with --input missing, it results in an
  HTTP 500 error. It is straightforward to detect that a required
  parameter is missing, so it should show the user a correct message.

  [root@vm134 ]# glance  task-create --type import
  HTTPInternalServerError (HTTP 500)
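
  A hedged sketch of the validation being asked for (not glance's
  actual code): reject a task-create body that lacks required fields
  with a 400 rather than letting the error surface as a 500.

    import webob.exc

    REQUIRED = ("type", "input")

    def validate_task_body(body):
        task = (body or {}).get("task") or {}
        for field in REQUIRED:
            if field not in task:
                raise webob.exc.HTTPBadRequest(
                    explanation="'%s' is a required property" % field)
        return task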

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1474907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475387] [NEW] selenium tests are not running in gate and many are broken

2015-07-16 Thread David Lyle
Public bug reported:

running selenium tests locally, I see

Running Horizon application tests
[snip]
Ran 160 tests in 106.527s
FAILED (SKIP=153, errors=3, failures=2)

Running openstack_dashboard tests
[snip]
Ran 1385 tests in 168.336s
FAILED (SKIP=1374, errors=4, failures=5)


Sample output from the gate jobs:

Running Horizon application tests
2015-07-16 12:21:50.680 |
2015-07-16 12:21:57.048 |
2015-07-16 12:21:57.049 | 
--
2015-07-16 12:21:57.049 | Ran 153 tests in 6.368s
2015-07-16 12:21:57.049 |
2015-07-16 12:21:57.077 | OK (SKIP=153)
2015-07-16 12:21:57.131 | nosetests horizon --nocapture --nologcapture 
--exclude-dir=horizon/conf/ --exclude-dir=horizon/test/customization 
--cover-package=horizon --cover-inclusive --all-modules 
--exclude-dir=openstack_dashboard/test/integration_tests --verbosity=1
2015-07-16 12:21:57.131 | Creating test database for alias 'default'...
2015-07-16 12:21:57.131 | Destroying test database for alias 'default'...
2015-07-16 12:21:57.138 | Running openstack_dashboard tests
2015-07-16 12:22:00.595 | WARNING:root:No local_settings file found.
2015-07-16 12:22:00.993 |
2015-07-16 12:22:09.731 | 
2015-07-16 12:22:09.731 | 
--
2015-07-16 12:22:09.731 | Ran 1372 tests in 8.737s
2015-07-16 12:22:09.732 |
2015-07-16 12:22:09.759 | OK (SKIP=1371)

** Affects: horizon
 Importance: Critical
 Status: New


** Tags: unittest

** Description changed:

  running selenium tests locally, I see
  
  Running Horizon application tests
  [snip]
  Ran 160 tests in 106.527s
  FAILED (SKIP=153, errors=3, failures=2)
  
  Running openstack_dashboard tests
  [snip]
  Ran 1385 tests in 168.336s
  FAILED (SKIP=1374, errors=4, failures=5)
  
  
  Sample output from the gate jobs:
  
  Running Horizon application tests
- 2015-07-16 12:21:50.680 | 
- 2015-07-16 12:21:57.048 | 
+ 2015-07-16 12:21:50.680 |
+ 2015-07-16 12:21:57.048 |
  2015-07-16 12:21:57.049 | 
--
  2015-07-16 12:21:57.049 | Ran 153 tests in 6.368s
- 2015-07-16 12:21:57.049 | 
+ 2015-07-16 12:21:57.049 |
  2015-07-16 12:21:57.077 | OK (SKIP=153)
  2015-07-16 12:21:57.131 | nosetests horizon --nocapture --nologcapture 
--exclude-dir=horizon/conf/ --exclude-dir=horizon/test/customization 
--cover-package=horizon --cover-inclusive --all-modules 
--exclude-dir=openstack_dashboard/test/integration_tests --verbosity=1
  2015-07-16 12:21:57.131 | Creating test database for alias 'default'...
  2015-07-16 12:21:57.131 | Destroying test database for alias 'default'...
  2015-07-16 12:21:57.138 | Running openstack_dashboard tests
  2015-07-16 12:22:00.595 | WARNING:root:No local_settings file found.
- 2015-07-16 12:22:00.993 | 
+ 2015-07-16 12:22:00.993 |
  2015-07-16 12:22:09.731 | 
  2015-07-16 12:22:09.731 | 
--
  2015-07-16 12:22:09.731 | Ran 1372 tests in 8.737s
- 2015-07-16 12:22:09.732 | 
+ 2015-07-16 12:22:09.732 |
  2015-07-16 12:22:09.759 | OK (SKIP=1371)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475387

Title:
  selenium tests are not running in gate and many are broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  running selenium tests locally, I see

  Running Horizon application tests
  [snip]
  Ran 160 tests in 106.527s
  FAILED (SKIP=153, errors=3, failures=2)

  Running openstack_dashboard tests
  [snip]
  Ran 1385 tests in 168.336s
  FAILED (SKIP=1374, errors=4, failures=5)


  Sample output from the gate jobs:

  Running Horizon application tests
  2015-07-16 12:21:50.680 |
  2015-07-16 12:21:57.048 |
  2015-07-16 12:21:57.049 | 
--
  2015-07-16 12:21:57.049 | Ran 153 tests in 6.368s
  2015-07-16 12:21:57.049 |
  2015-07-16 12:21:57.077 | OK (SKIP=153)
  2015-07-16 12:21:57.131 | nosetests horizon --nocapture --nologcapture 
--exclude-dir=horizon/conf/ --exclude-dir=horizon/test/customization 
--cover-package=horizon --cover-inclusive --all-modules 
--exclude-dir=openstack_dashboard/test/integration_tests --verbosity=1
  2015-07-16 12:21:57.131 | Creating test database for alias 'default'...
  2015-07-16 12:21:57.131 | Destroying test database for alias 'default'...
  2015-07-16 12:21:57.138 | Running openstack_dashboard tests
  2015-07-16 12:22:00.595 | WARNING:root:No local_settings file found.
  2015-07-16 12:22:00.993 |
  2015-07-16 12:22:09.731 | 
  2015-07-16 12:22:09.731 | 
--
  2015-07-16 12:22:09.731 | Ran 1372 tests in 8.737s
  2015-07-16 12:22:09.732 |
  2015-07-16 12:22:09.759 | OK (SKIP=1371)

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1474284] Re: Adding users from different domain to a group

2015-07-16 Thread Steve Martinelli
as Henry said, this isn't a bug.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474284

Title:
  Adding users from different domain to a group

Status in Keystone:
  Invalid

Bug description:
  I have created two domains, and I have created users in both domains.
  I created a group in the first domain, and when I tried adding the
  users from the other domain to this group, they were added
  successfully.

  But according to this page https://wiki.openstack.org/wiki/Domains,
  this should not be allowed.

  Here are the steps to reproduce this:
  created new domain Domain9


  
  curl -i -k -X POST https://url/v3/domains -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @domain.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:48:15 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 214
  Content-Type: application/json

  {"domain": {"links": {"self": "https://url/v3/domains/dc1d36c037ac4e47b3b21424f1a13273"}, "enabled": true, "description": "Description.", "name": "Domain9", "id": "dc1d36c037ac4e47b3b21424f1a13273"}}



  
  created  user fd22 in domain Domain9


  curl -i -k -X POST https://url/v3/users -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @user.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:49:27 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 269
  Content-Type: application/json

  {"user": {"links": {"self": "https://url/v3/users/533979e9b80645799028c51ccec55cce"}, "description": "Sample keystone test user", "name": "fd22", "enabled": true, "id": "533979e9b80645799028c51ccec55cce", "domain_id": "dc1d36c037ac4e47b3b21424f1a13273"}}

  
  created user fd23 in default domain

  
  vi user.json
  provo-sand:~/bajarang # curl -i -k -X POST https://url/v3/users -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @user.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:50:56 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 244
  Content-Type: application/json

  {"user": {"links": {"self": "https://url/v3/users/8a43e5f3facb4fc2985a18a40de2046e"}, "description": "Sample keystone test user", "name": "fd23", "enabled": true, "id": "8a43e5f3facb4fc2985a18a40de2046e", "domain_id": "default"}}


  created group DomainGroup10 in default domain


  curl -i -k -X POST https://url/v3/groups -H "Content-Type: application/json" -H "X-Auth-Token: $token" -d @newgroup.json
  HTTP/1.1 201 Created
  Date: Fri, 10 Jul 2015 09:52:49 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 225
  Content-Type: application/json

  {"group": {"domain_id": "default", "description": "Description.", "id": "0b72f1dd6f514adb989a752b9a72e005", "links": {"self": "url/v3/groups/0b72f1dd6f514adb989a752b9a72e005"}, "name": "DomainGroup10"}}


  Added user 'fd22' from  Domain9 to DomainGroup10

  
  curl -i -k -X PUT https://url/v3/groups/0b72f1dd6f514adb989a752b9a72e005/users/533979e9b80645799028c51ccec55cce -H "Content-Type: application/json" -H "X-Auth-Token: $token"
  HTTP/1.1 204 No Content
  Date: Fri, 10 Jul 2015 09:53:17 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 0

  Added user 'fd23'  from Default  to DomainGroup10

  curl -i -k -X PUT https://url/v3/groups/0b72f1dd6f514adb989a752b9a72e005/users/8a43e5f3facb4fc2985a18a40de2046e -H "Content-Type: application/json" -H "X-Auth-Token: $token"
  HTTP/1.1 204 No Content
  Date: Fri, 10 Jul 2015 09:54:20 GMT
  Server: Apache/2.4.10 (Linux/SUSE)
  Vary: X-Auth-Token
  Content-Length: 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1052161] Re: setup.py build fails on Windows due to hardcoded paths

2015-07-16 Thread Doug Hellmann
** Changed in: python-swiftclient
   Status: Fix Committed => Fix Released

** Changed in: python-swiftclient
Milestone: None => 2.5.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1052161

Title:
  setup.py build fails on Windows due to hardcoded paths

Status in neutron:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo-incubator grizzly series:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in python-swiftclient:
  Fix Released

Bug description:
  python setup.py build fails on Windows due to the following hardcoded
  /bin/sh path in setup.py, line 120:

  def _run_shell_command(cmd):
      output = subprocess.Popen(["/bin/sh", "-c", cmd],
                                stdout=subprocess.PIPE)

  A possible solution consists in replacing "/bin/sh -c" with "cmd /C"
  when os.name == 'nt'.
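
  A minimal sketch of that suggestion (the helper name mirrors the
  snippet above; the Windows branch is the addition):

    import os
    import subprocess

    def _run_shell_command(cmd):
        # Pick the platform's shell instead of hardcoding /bin/sh.
        if os.name == 'nt':
            shell = ['cmd', '/C']
        else:
            shell = ['/bin/sh', '-c']
        output = subprocess.Popen(shell + [cmd], stdout=subprocess.PIPE)
        return output.communicate()[0].strip()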

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1052161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255876] Re: need to ignore swap files from getting into repository

2015-07-16 Thread Doug Hellmann
** Changed in: python-swiftclient
   Status: Fix Committed => Fix Released

** Changed in: python-swiftclient
Milestone: None => 2.5.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1255876

Title:
  need to ignore swap files from getting into repository

Status in Ceilometer:
  Invalid
Status in Heat Templates:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Fix Released
Status in oslo-incubator:
  Won't Fix
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in Sahara:
  Invalid

Bug description:
  Need to ignore swap files from getting into the repository.
  Currently the ignore implemented in .gitignore is *.swp;
  however, vim generates swap files beyond this, so it could be improved to *.sw?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1255876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-16 Thread Doug Hellmann
** Changed in: glance-store
   Status: Fix Committed => Fix Released

** Changed in: glance-store
Milestone: None => 0.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Released
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Committed
Status in python-muranoclient:
  Fix Committed
Status in python-muranoclient kilo series:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Committed

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475402] [NEW] Magic Search needs to be updated to 0.2.5

2015-07-16 Thread Aaron Sahlin
Public bug reported:

Update Magic Search to pick up a fix that allows for multiple Magic
Search widgets on the same page.  This is needed for the ng Launch
Instance wizard.

** Affects: horizon
 Importance: Undecided
 Assignee: Aaron Sahlin (asahlin)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Aaron Sahlin (asahlin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475402

Title:
  Magic Search needs to be updated to 0.2.5

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Update Magic Search to pick up a fix that allows for multiple Magic
  Search widgets on the same page.  This is needed for the ng Launch
  Instance wizard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475058] Re: Host and device info need to get migrated to the VM host paired port that is found on the FIP table

2015-07-16 Thread Jeremy Stanley
Per private E-mail from the original reporter, this bug was opened in
error and another (unspecified) bug has been opened to replace it.

** Information type changed from Public Security to Public

** Changed in: ossa
   Status: Incomplete => Invalid

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: Lynn (lynn-li) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475058

Title:
  Host and device info need to get migrated to the VM host paired port
  that is found on the FIP table

Status in neutron:
  Invalid
Status in OpenStack Security Advisory:
  Invalid

Bug description:
  When a VM host is created, a port is bound to this VM. Later on, 
  if an FIP agent gateway port gets paired with this VM host port,
  it is bound to this VM.  However, this FIP port's host and device
  information remains empty as of today. Moreover, while performing
  port disassociation on the FIP table, this FIP port would get deleted
  as it can't be recognized as a DVR serviceable port.

  Host and device info needs to get migrated during the assigning
  process.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475411] [NEW] During post_live_migration the nova libvirt driver assumes that the destination connection info is the same as the source, which is not always true

2015-07-16 Thread Anthony Lee
Public bug reported:

The post_live_migration step for Nova libvirt driver is currently making
a bad assumption about the source and destination connector information.
The destination connection info may be different from the source which
ends up causing LUNs to be left dangling on the source as the BDM has
overridden the connection info with that of the destination.

Code section where this problem is occurring:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

At line 6038 the potentially wrong connection info will be passed to
_disconnect_volume which then ends up not finding the proper LUNs to
remove (and potentially removes the LUNs for a different volume
instead).

By adding debug logging after line 6036 and then comparing that to the
connection info of the source host (by making a call to Cinder's
initialize_connection API) you can see that the connection info does not
match:

http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/
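
A hedged sketch of the correction this implies (not the actual fix; the
names are stand-ins for the Nova/Cinder plumbing): re-resolve the
connection info against the source host's connector before
disconnecting, instead of trusting the BDM, which by this point holds
the destination's connection info.

    def disconnect_on_source(volume_api, context, bdm, source_connector,
                             disconnect_volume):
        # Ask Cinder how this volume is attached from the source host.
        connection_info = volume_api.initialize_connection(
            context, bdm.volume_id, source_connector)
        # Tear down exactly the LUNs that the source host actually sees.
        disconnect_volume(connection_info, bdm.device_name)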

Version of nova being used:

commit 35375133398d862a61334783c1e7a90b95f34cdb
Merge: 83623dd b2c5542
Author: Jenkins jenk...@review.openstack.org
Date:   Thu Jul 16 02:01:05 2015 +

Merge Port crypto to Python 3

** Affects: nova
 Importance: Undecided
 Assignee: Anthony Lee (anthony-mic-lee)
 Status: New


** Tags: live-migration

** Changed in: nova
 Assignee: (unassigned) => Anthony Lee (anthony-mic-lee)

** Description changed:

  The post_live_migration step for Nova libvirt driver is currently making
  a bad assumption about the source and destination connector information.
  The destination connection info may be different from the source which
  ends up causing LUNs to be left dangling on the source as the BDM has
  overridden the connection info with that of the destination.
  
  Code section where this problem is occuring:
  
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036
  
  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).
  
- By adding debug logging after line 6036 and then compare that to the
+ By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does not
  match:
  
  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/
  
  Version of nova being used:
  
  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins jenk...@review.openstack.org
  Date:   Thu Jul 16 02:01:05 2015 +
  
- Merge Port crypto to Python 3
+ Merge Port crypto to Python 3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475411

Title:
  During post_live_migration the nova libvirt driver assumes that the
  destination connection info is the same as the source, which is not
  always true

Status in OpenStack Compute (nova):
  New

Bug description:
  The post_live_migration step for Nova libvirt driver is currently
  making a bad assumption about the source and destination connector
  information. The destination connection info may be different from the
  source which ends up causing LUNs to be left dangling on the source as
  the BDM has overridden the connection info with that of the
  destination.

  Code section where this problem is occurring:

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6036

  At line 6038 the potentially wrong connection info will be passed to
  _disconnect_volume which then ends up not finding the proper LUNs to
  remove (and potentially removes the LUNs for a different volume
  instead).

  By adding debug logging after line 6036 and then comparing that to the
  connection info of the source host (by making a call to Cinder's
  initialize_connection API) you can see that the connection info does
  not match:

  http://paste.openstack.org/show/TjBHyPhidRuLlrxuGktz/

  Version of nova being used:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins jenk...@review.openstack.org
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge Port crypto to Python 3

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475396] [NEW] Start Instances should not be enabled for a Running Instance

2015-07-16 Thread Mohan Seri
Public bug reported:

Start Instances gets enabled by selecting any Instance using the check
box, irrespective of the Instance's Power State, which is misleading and
throws an error if the Start Instances operation is performed.
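
A hedged sketch of the guard one would expect on the row action (this
mirrors the shape of Horizon's table actions but is not copied from its
code):

    from horizon import tables

    class StartInstance(tables.BatchAction):
        name = "start"

        def allowed(self, request, instance=None):
            # Only offer "Start" for instances that are shut down; with
            # no row selected, leave the bulk action visible.
            return instance is None or instance.status in ("SHUTDOWN",
                                                           "SHUTOFF")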

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Start_Instances issue.JPG
   
https://bugs.launchpad.net/bugs/1475396/+attachment/4430174/+files/Start_Instances%20issue.JPG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475396

Title:
  Start Instances should not be enabled for a Running Instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Start Instances gets enabled by selecting any Instance using the check
  box, irrespective of the Instance's Power State, which is misleading
  and throws an error if the Start Instances operation is performed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361186] Re: nova service-delete fails for services on non-child (top) cell

2015-07-16 Thread Matt Riedemann
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361186

Title:
  nova service-delete fails for services on non-child (top) cell

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  Nova service-delete fails for services on non-child (top) cell.

  How to reproduce:

  $ nova --os-username admin service-list

  
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | Id             | Binary           | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:56.00 | -               |
  | region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled | up    | 2014-08-18T06:06:55.00 | -               |
  | region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:59.00 | -               |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:06:50.00 | -               |
  | region@1       | nova-cells       | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:59.00 | -               |
  | region@2       | nova-cert        | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:58.00 | -               |
  | region@3       | nova-consoleauth | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:06:57.00 | -               |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+

  Stop one of the services on the top cell (e.g. nova-cert).

  $ nova --os-username admin service-list

  
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | Id             | Binary           | Host                | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+
  | region!child@1 | nova-conductor   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:26.00 | -               |
  | region!child@2 | nova-compute     | region!child@ubuntu | nova     | enabled | up    | 2014-08-18T06:09:25.00 | -               |
  | region!child@3 | nova-cells       | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:19.00 | -               |
  | region!child@4 | nova-scheduler   | region!child@ubuntu | internal | enabled | up    | 2014-08-18T06:09:20.00 | -               |
  | region@1       | nova-cells       | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:09:19.00 | -               |
  | region@2       | nova-cert        | region@ubuntu       | internal | enabled | down  | 2014-08-18T06:08:28.00 | -               |
  | region@3       | nova-consoleauth | region@ubuntu       | internal | enabled | up    | 2014-08-18T06:09:27.00 | -               |
  +----------------+------------------+---------------------+----------+---------+-------+------------------------+-----------------+

  Nova service-delete:
  $ nova --os-username admin service-delete 'region@2'

  Check the request id from nova-api.log:

  2014-08-18 15:10:23.491 INFO nova.osapi_compute.wsgi.server [req-
  e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] 192.168.101.31
  DELETE /v2/d66804d2e78549cd8f5efcedd0abecb2/os-services/region@2
  HTTP/1.1 status: 204 len: 179 time: 0.1334069

  Error log in n-cell-region service:

  2014-08-18 15:10:23.464 ERROR nova.cells.messaging [req-e134d915-ad66-41ba-a6f8-33ec51b7daee admin demo] Error locating next hop for message: 'NoneType' object has no attribute 'count'
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging Traceback (most recent call last):
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/cells/messaging.py", line 406, in process
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging     next_hop = self._get_next_hop()
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging   File "/opt/stack/nova/nova/cells/messaging.py", line 361, in _get_next_hop
  2014-08-18 15:10:23.464 TRACE nova.cells.messaging     dest_hops = target_cell.count(_PATH_CELL_SEP)

[Yahoo-eng-team] [Bug 1475447] [NEW] frequent changes in settings.py causes problems for deployers

2015-07-16 Thread Eric Peterson
Public bug reported:

There are lots of changes recently, especially around
fd.populate_horizon_config() calls.

Deployers find the need to make changes to both settings.py and
local_settings.py, because of what can be done where.

The churn in settings.py should be minimized, especially around settings
that are required for the application to function correctly.
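
As an illustration of the pattern deployers fall back on, a hedged
example of a local_settings.py override file (the values are
placeholders, not recommendations):

    # local_settings.py: site-specific values live here, so upgrades
    # that rewrite settings.py do not clobber them.
    DEBUG = False
    ALLOWED_HOSTS = ["horizon.example.com"]
    OPENSTACK_HOST = "controller.example.com"
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST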

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1475447

Title:
  frequent changes in settings.py causes problems for deployers

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are lots of changes recently, especially around
  fd.populate_horizon_config() calls.

  Deployers find the need to make changes to both settings.py and
  local_settings.py, because of what can be done where.

  The churn in settings.py should be minimized, especially around
  settings that are required for the application to function correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1475447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp