[Yahoo-eng-team] [Bug 1566159] [NEW] ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

2016-04-04 Thread michelvaillant
Public bug reported:

2016-04-05 10:54:42.673 989 INFO oslo_service.service [-] Child 1100 exited with status 0
2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1096 exited with status 0
2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1076 exited with status 0
2016-04-05 10:54:42.680 989 INFO oslo_service.service [-] Child 1090 exited with status 0
2016-04-05 10:54:42.681 989 INFO oslo_service.service [-] Child 1077 exited with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1091 exited with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1079 exited with status 0
2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1094 exited with status 0
2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1075 exited with status 0
2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1071 exited with status 0
2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1097 exited with status 0
2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1092 exited with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1098 exited with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1099 exited with status 0
2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1073 exited with status 0
2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1067 exited with status 0
2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1068 exited with status 0
2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1069 exited with status 0
2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1072 exited with status 0
2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1066 exited with status 0
2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1074 exited with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1078 exited with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1080 exited with status 0
2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1081 exited with status 0
2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1093 killed by signal 15
2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1095 killed by signal 15
2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1101 exited with status 0
2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1102 exited with status 0
2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1103 exited with status 0
2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1104 exited with status 0
2016-04-05 10:54:42.713 989 INFO oslo_service.service [-] Child 1105 exited with status 0
2016-04-05 10:54:46.299 1259 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2016-04-05 10:54:46.529 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 
'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 
'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 
'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 
'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 
'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 
'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 
'os-evacuate', 'os-extended-availability-zone', 
'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 
'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 
'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 
'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 
'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 
'os-instance-actions', 'os-instance-usage-audit-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 
'os-multinic', 'os-multiple-create', 'os-networks', 'os-networks-associate', 
'os-pause-server', 'os-personality', 'os-preserve-ephemeral-rebuild', 
'os-quota-class-sets', 'os-quota-sets', 'os-remote-consoles', 'os-rescue', 
'os-scheduler-hints', 'os-security-group-default-rules', 'os-security-groups', 
'os-server-diagnostics', 'os-server-external-events', 'os-server-groups', 
'os-server-password', 'os-server-usage', 'os-services', 'os-shelve', 
'os-simple-tenant-usage', 'os-suspend-server', 'os-tenant-networks', 
'os-used-limits', 'os-user-data', 'os-virtual-interfaces', 'os-volumes', 
'server-metadata', 'servers', 'versions']
2016-04-05 10:54:46.533 1259 WARNING oslo_config.cfg [-] Option "username" from 
group "keystone_authtoken" is deprecated. Use option "user-name" from group 
"keystone_authtoken".
2016-04-05 10:54:46.703 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors', 'ima

[Yahoo-eng-team] [Bug 1565732] Re: glance_store run_tests.sh fails due to missing dependencies

2016-04-04 Thread Hemanth Makkapati
** Changed in: glance
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1565732

Title:
  glance_store run_tests.sh fails due to missing dependencies

Status in Glance:
  Won't Fix

Bug description:
  Calling run_tests.sh from the glance_store repository fails with:

  
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit --concurrency 0  '`
  running testr
  Non-zero exit code (2) from test listing.
  error: testr failed (3)
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./glance_store/tests --list 
  --- import errors ---
  Failed to import test module: glance_store.tests.unit.test_cinder_store
  Traceback (most recent call last):
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "glance_store/tests/unit/test_cinder_store.py", line 27, in <module>
      from os_brick.initiator import connector
  ImportError: No module named os_brick.initiator

  Failed to import test module: glance_store.tests.unit.test_s3_store
  Traceback (most recent call last):
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "glance_store/tests/unit/test_s3_store.py", line 22, in <module>
      import boto.s3.connection
  ImportError: No module named boto.s3.connection

  Failed to import test module: glance_store.tests.unit.test_swift_store
  Traceback (most recent call last):
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "glance_store/tests/unit/test_swift_store.py", line 35, in <module>
      import swiftclient
  ImportError: No module named swiftclient

  Failed to import test module: glance_store.tests.unit.test_vmware_store
  Traceback (most recent call last):
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "glance_store/tests/unit/test_vmware_store.py", line 23, in <module>
      from oslo_vmware import api
  ImportError: No module named oslo_vmware

  
  Ran 0 tests in 1.632s

  OK
  

  The root cause is that the created virtualenv is missing some packages:
  they are defined in setup.cfg as extras for the optional stores, but are
  absent from test-requirements.txt.
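  The mismatch above can be sketched programmatically. This is an
  illustrative check, not glance_store's actual tooling: the sample
  setup.cfg text and the test-requirements set below are invented to
  mirror the structure the bug describes, and the real extra names and
  package lists may differ.

```python
import configparser

# Invented sample mirroring the [extras] layout of a setup.cfg;
# the real extra names and package lists may differ.
SETUP_CFG = """
[extras]
cinder =
    os-brick
s3 =
    boto
swift =
    python-swiftclient
vmware =
    oslo.vmware
"""

# Invented stand-in for the packages listed in test-requirements.txt.
TEST_REQUIREMENTS = {"mock", "testtools"}

def missing_optional_packages(cfg_text, test_reqs):
    """Return extras packages absent from the test requirements."""
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    packages = set()
    for value in cfg["extras"].values():
        packages.update(line.strip() for line in value.splitlines() if line.strip())
    return sorted(packages - test_reqs)

print(missing_optional_packages(SETUP_CFG, TEST_REQUIREMENTS))
# ['boto', 'os-brick', 'oslo.vmware', 'python-swiftclient']
```

  Any package reported by such a check would need to be added to
  test-requirements.txt (or installed via the corresponding extra)
  before the unit tests can import their optional drivers.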

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1565732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442098] Re: instance_group_member entries not deleted when the instance deleted

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/289392
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e2f4370b04598833939f2b869e7bac11c02a4921
Submitter: Jenkins
Branch: master

commit e2f4370b04598833939f2b869e7bac11c02a4921
Author: zte-hanrong 
Date:   Mon Mar 7 23:21:32 2016 +0800

Soft delete instance group member when delete instance

Currently, after an instance is deleted, it is still listed as a member
of its instance group. This patch makes sure the instance is removed
from the instance group when the instance_destroy db call executes.

Closes-Bug: #1442098

Change-Id: I8cae3e5c317f0797944ecf3bea21c571ff24d9cf


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442098

Title:
  instance_group_member entries not deleted when the instance deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Just the not deleted members needs to be selected, an instance group
  can gather many-many deleted instances during on his lifetime.

  The selecting query contains a condition for omitting the deleted
  records:

  SELECT instance_groups.created_at AS instance_groups_created_at,
  instance_groups.updated_at AS instance_groups_updated_at,
  instance_groups.deleted_at AS instance_groups_deleted_at,
  instance_groups.deleted AS instance_groups_deleted, instance_groups.id
  AS instance_groups_id, instance_groups.user_id AS
  instance_groups_user_id, instance_groups.project_id AS
  instance_groups_project_id, instance_groups.uuid AS
  instance_groups_uuid, instance_groups.name AS instance_groups_name,
  instance_group_policy_1.created_at AS
  instance_group_policy_1_created_at, instance_group_policy_1.updated_at
  AS instance_group_policy_1_updated_at,
  instance_group_policy_1.deleted_at AS
  instance_group_policy_1_deleted_at, instance_group_policy_1.deleted AS
  instance_group_policy_1_deleted, instance_group_policy_1.id AS
  instance_group_policy_1_id, instance_group_policy_1.policy AS
  instance_group_policy_1_policy, instance_group_policy_1.group_id AS
  instance_group_policy_1_group_id, instance_group_member_1.created_at
  AS instance_group_member_1_created_at,
  instance_group_member_1.updated_at AS
  instance_group_member_1_updated_at, instance_group_member_1.deleted_at
  AS instance_group_member_1_deleted_at, instance_group_member_1.deleted
  AS instance_group_member_1_deleted, instance_group_member_1.id AS
  instance_group_member_1_id, instance_group_member_1.instance_id AS
  instance_group_member_1_instance_id, instance_group_member_1.group_id
  AS instance_group_member_1_group_id  FROM instance_groups LEFT OUTER
  JOIN instance_group_policy AS instance_group_policy_1 ON
  instance_groups.id = instance_group_policy_1.group_id AND
  instance_group_policy_1.deleted = 0 AND instance_groups.deleted = 0
  LEFT OUTER JOIN instance_group_member AS instance_group_member_1 ON
  instance_groups.id = instance_group_member_1.group_id AND
  instance_group_member_1.deleted = 0 AND instance_groups.deleted = 0
  WHERE instance_groups.deleted = 0 AND instance_groups.project_id =
  '6da55626d6a04f4c99980dc17d34235f';

  (Captured at $nova server-group-list)

  But nova actually fetches the deleted records as well, because their
  `deleted` field is still 0 even after the instance has been deleted.

  To figure out that an instance is actually deleted, the nova API has to
  issue additional, otherwise unnecessary queries.

  The instance_group_member records are only set to deleted when the
  instance_group itself is deleted.

  show create table instance_group_member;

  CREATE TABLE `instance_group_member` (
    `created_at` datetime DEFAULT NULL,
    `updated_at` datetime DEFAULT NULL,
    `deleted_at` datetime DEFAULT NULL,
    `deleted` int(11) DEFAULT NULL,
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `instance_id` varchar(255) DEFAULT NULL,
    `group_id` int(11) NOT NULL,
    PRIMARY KEY (`id`),
    KEY `group_id` (`group_id`),
    KEY `instance_group_member_instance_idx` (`instance_id`),
    CONSTRAINT `instance_group_member_ibfk_1` FOREIGN KEY (`group_id`) REFERENCES `instance_groups` (`id`)
  ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8

  1. Please delete the instance_group_member records when the instance gets
  deleted.
  2. Please add a combined (`deleted`, `group_id`) BTREE index; this way it
  will be usable in other situations as well, for example when only a single
  group's members are needed.
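  The two requests above can be sketched together against a toy schema.
  This is an illustrative sqlite model of the proposed behaviour, not
  nova's actual migration or DB API code; table and index names are
  simplified for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instance_group_member (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    instance_id VARCHAR(255),
    group_id INTEGER NOT NULL,
    deleted INTEGER DEFAULT 0
);
-- Request 2: combined index so 'deleted = 0 AND group_id = ?' scans are cheap.
CREATE INDEX instance_group_member_deleted_group_idx
    ON instance_group_member (deleted, group_id);
""")

def instance_destroy(conn, instance_uuid):
    """Request 1: soft-delete member rows as part of instance deletion.

    Follows the SoftDeleteMixin convention of writing the row id into
    the `deleted` column instead of a boolean flag.
    """
    conn.execute(
        "UPDATE instance_group_member SET deleted = id "
        "WHERE instance_id = ? AND deleted = 0", (instance_uuid,))

conn.execute(
    "INSERT INTO instance_group_member (instance_id, group_id) "
    "VALUES ('uuid-1', 1)")
instance_destroy(conn, "uuid-1")
live = conn.execute(
    "SELECT COUNT(*) FROM instance_group_member WHERE deleted = 0").fetchone()[0]
print(live)  # 0: the destroyed instance no longer appears as a live member
```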

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1442098/+subscriptions



[Yahoo-eng-team] [Bug 1566113] [NEW] Create volume transfer name required error

2016-04-04 Thread qiaomin032
Public bug reported:

In the 'Create Transfer' form, if the name input is blank, an error is
raised: "Error: Unable to create volume transfer". So the name field
should be marked as required.
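A minimal sketch of the requested validation, in plain Python standing
in for the actual Horizon form field (which would simply be declared
with required=True); the function name and message are invented:

```python
def validate_transfer_name(name):
    """Reject blank names before the create-transfer API call is made."""
    if name is None or not name.strip():
        raise ValueError("This field is required.")
    return name.strip()

print(validate_transfer_name("  nightly-backup "))  # 'nightly-backup'
```

Validating up front surfaces a field-level message instead of the
generic "Unable to create volume transfer" API error.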

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "create_transfer.jpg"
   https://bugs.launchpad.net/bugs/1566113/+attachment/4624038/+files/create_transfer.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566113

Title:
  Create volume transfer name required error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the 'Create Transfer' form, if the name input is blank, an error is
  raised: "Error: Unable to create volume transfer". So the name field
  should be marked as required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1566113/+subscriptions



[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288696
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=1787244db42422db4f853a6acd845c800e7ce995
Submitter: Jenkins
Branch: master

commit 1787244db42422db4f853a6acd845c800e7ce995
Author: Tom Barron 
Date:   Fri Mar 4 14:20:52 2016 -0500

Run py34 tests with plain 'tox' command

Now that all cinder unit tests pass on python 3.4 [1], we can run py34
tests by default alongside py27 and pep8.

This commit also addresses the annoyance of py34 tox tests failing with
'db type could not be determined' if py27 tests were initially run in
the workspace.

[1] https://haypo.github.io/openstack_mitaka_python3.html

Change-Id: If60e5d0d3185e78f38fa2bfc7b6bb4840f09d840
Closes-bug: #1489059


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Bareon:
  Fix Released
Status in Cinder:
  Fix Released
Status in cloudkitty:
  Fix Committed
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Released
Status in hacking:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in kolla:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-ofagent:
  Fix Released
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  Fix Released
Status in tempest:
  Fix Released
Status in zaqar:
  Fix Released
Status in python-ironicclient package in Ubuntu:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to occur when a py27 run precedes the py34 run; it
  can be solved by erasing the .testrepository directory and then running
  "tox -e py34" first of all.
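  One way to encode the workaround permanently is to order the tox
  environments so that a fresh checkout builds the test repository under
  Python 3.4 first. This is an illustrative tox.ini fragment with
  typical env names for the era, not any particular project's file:

```ini
# tox.ini fragment (illustrative; env names assumed)
[tox]
envlist = py34,py27,pep8
```

  Combined with removing any stale .testrepository directory before the
  first run, this avoids the "db type could not be determined" error.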

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions



[Yahoo-eng-team] [Bug 1393391] Re: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..

2016-04-04 Thread Mathew Hodson
** Changed in: neutron
   Status: Confirmed => Invalid

** No longer affects: neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1393391

Title:
  neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout..

Status in neutron:
  Invalid
Status in neutron source package in Trusty:
  Fix Released

Bug description:
  Under an HA deployment, neutron-openvswitch-agent can get stuck when it
  receives a close command on a fanout queue it is not subscribed to.

  It then stops responding to any other messages, so it effectively stops
  working at all.

  2014-11-11 10:27:33.092 3027 INFO neutron.common.config [-] Logging enabled!
  2014-11-11 10:27:34.285 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:27:34.370 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:27:35.348 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent initialized successfully, now running...
  2014-11-11 10:27:35.351 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent out of sync with plugin!
  2014-11-11 10:27:35.401 3027 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Agent tunnel out of sync with plugin!
  2014-11-11 10:27:35.414 3027 INFO neutron.openstack.common.rpc.common [req-66ba318b-0fcc-42c2-959e-9a5233c292ef None] Connected to AMQP server on vip-rabbitmq:5672
  2014-11-11 10:32:33.143 3027 INFO neutron.agent.securitygroups_rpc [req-22c7fa11-882d-4278-9f83-6dd56ab95ba4 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:58:11.916 3027 INFO neutron.agent.securitygroups_rpc [req-484fd71f-8f61-496c-aa8a-2d3abf8de365 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 10:59:43.954 3027 INFO neutron.agent.securitygroups_rpc [req-2c0bc777-04ed-470a-aec5-927a59100b89 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-11 11:00:22.500 3027 INFO neutron.agent.securitygroups_rpc [req-df447d01-d132-40f2-8528-1c1c4d57c0f5 None] Security group member updated [u'4c7b3ad2-4526-48a7-959e-a8b8e4da6413']
  2014-11-12 01:27:35.662 3027 ERROR neutron.openstack.common.rpc.common [-] Failed to consume message from queue: Socket closed
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common Traceback (most recent call last):
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 579, in ensure
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return method(*args, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 659, in _consume
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return self.connection.drain_events(timeout=timeout)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return self.transport.drain_events(self.connection, **kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return connection.drain_events(**kwargs)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 266, in drain_events
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     chanmap, None, timeout=timeout,
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 328, in _wait_multiple
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     channel, method_sig, args, content = read_timeout(timeout)
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 292, in read_timeout
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common     return self.method_reader.read_method()
  2014-11-12 01:27:35.662 3027 TRACE neutron.openstack.common.rpc.common   File "/usr/lib/python2.7/site-packages/amqp/metho

[Yahoo-eng-team] [Bug 1566046] [NEW] Fix TypeError when trying to update an arp entry for ports with allowed_address_pairs on DVR router

2016-04-04 Thread Swaminathan Vasudevan
Public bug reported:

A TypeError is raised when trying to update an ARP entry for ports with
allowed_address_pairs on a DVR router.
This was seen on the master branch while testing allowed_address_pairs
with floating IPs on a DVR router.


plugin.update_arp_entry_for_dvr_service_port(context, port)
2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 775, in update_arp_entry_for_dvr_service_port
2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager     self.l3_rpc_notifier.add_arp_entry)
2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 729, in _generate_arp_table_and_notify_agent
2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager     ip_address = fixed_ip['ip_address']
2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager TypeError: string indices must be integers


How to reproduce it:

1. Create a vrrp-network
2. Create a vrrp-subnet
3. Create a DVR router
4. Attach the vrrp-subnet to the router
5. Create a security group for the vrrp-net and add rules to it
6. Create a VM on the vrrp-subnet
7. Create a vrrp-port (allowed_address_pair) on the vrrp-subnet
8. Associate a floating IP with the vrrp-port
9. Update the VM port with the allowed_address_pair IP

You should see this in the neutron-server logs.
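The failure mode can be reproduced in isolation. This is a hypothetical
sketch based on the traceback, not neutron's actual code: if the helper
receives something other than the port's fixed_ips list (for example the
port dict itself, whose iteration yields string keys), indexing with
'ip_address' raises exactly this TypeError.

```python
# Hypothetical port payload shaped like a neutron port dict.
port = {
    "id": "port-1",
    "fixed_ips": [{"ip_address": "10.0.0.5", "subnet_id": "subnet-1"}],
}

def arp_ip_addresses(fixed_ips):
    # Breaks when handed the wrong shape: iterating a dict yields its
    # string keys, and "some_key"['ip_address'] is a TypeError.
    return [fixed_ip["ip_address"] for fixed_ip in fixed_ips]

def arp_ip_addresses_defensive(fixed_ips):
    # Defensive variant: only index entries that are actually dicts.
    return [f["ip_address"] for f in fixed_ips if isinstance(f, dict)]

print(arp_ip_addresses(port["fixed_ips"]))  # ['10.0.0.5']
try:
    arp_ip_addresses(port)  # wrong shape: the port dict, not its fixed_ips
except TypeError as exc:
    print("TypeError:", exc)
```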

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Swaminathan Vasudevan (swaminathan-vasudevan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566046

Title:
  Fix TypeError when trying to update an arp entry for ports with
  allowed_address_pairs on DVR router

Status in neutron:
  New

Bug description:
  A TypeError is raised when trying to update an ARP entry for ports with
  allowed_address_pairs on a DVR router.
  This was seen on the master branch while testing allowed_address_pairs
  with floating IPs on a DVR router.

  
  plugin.update_arp_entry_for_dvr_service_port(context, port)
  2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 775, in update_arp_entry_for_dvr_service_port
  2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager     self.l3_rpc_notifier.add_arp_entry)
  2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager   File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 729, in _generate_arp_table_and_notify_agent
  2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager     ip_address = fixed_ip['ip_address']
  2016-03-30 12:06:00.910 TRACE neutron.callbacks.manager TypeError: string indices must be integers

  
  How to reproduce it:

  1. Create a vrrp-network
  2. Create a vrrp-subnet
  3. Create a DVR router
  4. Attach the vrrp-subnet to the router
  5. Create a security group for the vrrp-net and add rules to it
  6. Create a VM on the vrrp-subnet
  7. Create a vrrp-port (allowed_address_pair) on the vrrp-subnet
  8. Associate a floating IP with the vrrp-port
  9. Update the VM port with the allowed_address_pair IP

  You should see this in the neutron-server logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566046/+subscriptions



[Yahoo-eng-team] [Bug 1562834] Re: There are some trivial errors in doc/tutorials/plugin.rst

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/299468
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=6a880bd0c16ecdff7ae9917fb7abb4535b09cd78
Submitter: Jenkins
Branch: master

commit 6a880bd0c16ecdff7ae9917fb7abb4535b09cd78
Author: Bo Wang 
Date:   Wed Mar 30 23:13:18 2016 +0800

Fix some trivial errors in plugin.rst

1. ADD_JS_FILES should be mentioned
2. scss files cannot be discovered automatically
3. remove incorrect scss example code

Change-Id: Id543673a925eedb824b8222982e9ba35110fc44a
Closes-Bug: #1562834


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562834

Title:
  There are some trivial errors in doc/tutorials/plugin.rst

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  1. ADD_JS_FILES is not mentioned, only ADD_SCSS_FILES:
  http://docs.openstack.org/developer/horizon/tutorials/plugin.html#file-structure

  http://docs.openstack.org/developer/horizon/tutorials/plugin.html#mypanel-scss
  2. static files under static/ could be discovered automatically
  3. "div" should not be in the scss file.
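  For context, a sketch of a plugin "enabled" file using both settings
  the docs should cover; the file name and asset paths are invented for
  illustration.

```python
# _31000_myplugin.py -- hypothetical Horizon "enabled" file.
# Both JS and SCSS files have to be listed explicitly here; they are
# not discovered automatically from static/.
ADD_JS_FILES = ["horizon/js/horizon.myplugin.js"]
ADD_SCSS_FILES = ["dashboard/mypanel/mypanel.scss"]

print(ADD_JS_FILES + ADD_SCSS_FILES)
```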

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562834/+subscriptions



[Yahoo-eng-team] [Bug 1566025] [NEW] Unable to delete security groups; security_group table 'deleted' field needs migration

2016-04-04 Thread Andrew Bogott
Public bug reported:

My long-standing Nova installation has the following columns in the
security_groups table:

 +-------------+--------------+------+-----+---------+----------------+
 | Field       | Type         | Null | Key | Default | Extra          |
 +-------------+--------------+------+-----+---------+----------------+
 | created_at  | datetime     | YES  |     | NULL    |                |
 | updated_at  | datetime     | YES  |     | NULL    |                |
 | deleted_at  | datetime     | YES  |     | NULL    |                |
 | deleted     | tinyint(1)   | YES  | MUL | NULL    |                |
 | id          | int(11)      | NO   | PRI | NULL    | auto_increment |
 | name        | varchar(255) | YES  |     | NULL    |                |
 | description | varchar(255) | YES  |     | NULL    |                |
 | user_id     | varchar(255) | YES  |     | NULL    |                |
 | project_id  | varchar(255) | YES  |     | NULL    |                |
 +-------------+--------------+------+-----+---------+----------------+

A more recent install looks like this:

 +-------------+--------------+------+-----+---------+----------------+
 | Field       | Type         | Null | Key | Default | Extra          |
 +-------------+--------------+------+-----+---------+----------------+
 | created_at  | datetime     | YES  |     | NULL    |                |
 | updated_at  | datetime     | YES  |     | NULL    |                |
 | deleted_at  | datetime     | YES  |     | NULL    |                |
 | id          | int(11)      | NO   | PRI | NULL    | auto_increment |
 | name        | varchar(255) | YES  |     | NULL    |                |
 | description | varchar(255) | YES  |     | NULL    |                |
 | user_id     | varchar(255) | YES  |     | NULL    |                |
 | project_id  | varchar(255) | YES  | MUL | NULL    |                |
 | deleted     | int(11)      | YES  |     | NULL    |                |
 +-------------+--------------+------+-----+---------+----------------+

Note that the 'deleted' field has changed types.  It now stores a group
ID upon deletion.  But, the old table can't store that group ID because
of the tinyint data type.  This means that security groups cannot be
deleted.

I haven't yet located the source of this regression, but presumably it
happened when the table definition was changed to use
models.SoftDeleteMixin, and the accompanying migration change was
overlooked.
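The arithmetic behind the breakage can be sketched directly. The mixin
semantics used here are as reported above (the `deleted` column stores
the row's own id on deletion); the helper function is purely
illustrative.

```python
# Signed MySQL TINYINT range; the (1) display width does not change it.
TINYINT_MIN, TINYINT_MAX = -128, 127

def fits_tinyint(value):
    """Return True if `value` can be stored in a signed TINYINT column."""
    return TINYINT_MIN <= value <= TINYINT_MAX

# Boolean-style soft delete (the old semantics) always fits:
assert fits_tinyint(0) and fits_tinyint(1)

# SoftDeleteMixin semantics write the row id into `deleted`, so any
# security group with id > 127 cannot be marked deleted in the old schema:
assert not fits_tinyint(4096)
```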

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566025

Title:
  Unable to delete security groups; security_group table 'deleted' field
  needs migration

Status in OpenStack Compute (nova):
  New

Bug description:
  My long-standing Nova installation has the following columns in the
  security_groups table:

   +-------------+--------------+------+-----+---------+----------------+
   | Field       | Type         | Null | Key | Default | Extra          |
   +-------------+--------------+------+-----+---------+----------------+
   | created_at  | datetime     | YES  |     | NULL    |                |
   | updated_at  | datetime     | YES  |     | NULL    |                |
   | deleted_at  | datetime     | YES  |     | NULL    |                |
   | deleted     | tinyint(1)   | YES  | MUL | NULL    |                |
   | id          | int(11)      | NO   | PRI | NULL    | auto_increment |
   | name        | varchar(255) | YES  |     | NULL    |                |
   | description | varchar(255) | YES  |     | NULL    |                |
   | user_id     | varchar(255) | YES  |     | NULL    |                |
   | project_id  | varchar(255) | YES  |     | NULL    |                |
   +-------------+--------------+------+-----+---------+----------------+

  A more recent install looks like this:

   +-------------+--------------+------+-----+---------+----------------+
   | Field       | Type         | Null | Key | Default | Extra          |
   +-------------+--------------+------+-----+---------+----------------+
   | created_at  | datetime     | YES  |     | NULL    |                |
   | updated_at  | datetime     | YES  |     | NULL    |                |
   | deleted_at  | datetime     | YES  |     | NULL    |                |
   | id          | int(11)      | NO   | PRI | NULL    | auto_increment |
   | name        | varchar(255) | YES  |     | NULL    |                |
   | description | varchar(255) | YES  |     | NULL    |                |
   | user_id     | varchar(255) | YES  |     | NULL    |                |
   | project_id  | varchar(255) | YES  | MUL | NULL    |                |
   | deleted     | int(11)      | YES  |     | NULL    |                |
   +-------------+--------------+------+-----+---------+----------------+

  Note that the 'deleted' field has changed types.  It now stores a
  group ID upon deletion.  But the old table can't store that group ID
  because of the tinyint data type.  This means that security groups
  cannot be deleted.

[Yahoo-eng-team] [Bug 1403660] Re: Resource Usage Map Keys overrun

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252036
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=dfcb6f35ce0e9877d207514e7de52d6b00aa3cc1
Submitter: Jenkins
Branch: master

commit dfcb6f35ce0e9877d207514e7de52d6b00aa3cc1
Author: Alexis Rivera 
Date:   Mon Nov 30 23:49:15 2015 -0600

fix-legend-overflow

* removed the float on swatch fixes alignment and overflow issue.

Change-Id: I5a9ba35ce833af58cdafdf3f47add93f03024236
Closes-Bug: #1403660


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1403660

Title:
  Resource Usage Map Keys overrun

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Resource Map Stat's Map Key overruns its bounds.   Image attached
  to clarify.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1403660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566007] [NEW] l3 iptables floating IP rules don't match iptables rules

2016-04-04 Thread Kevin Benton
Public bug reported:

The floating IP translation rules generated by the l3 agent do not match
the format in which they are returned by iptables. This causes the
iptables diffing code to think they are different and replace every one
of them on an iptables apply call, which is very expensive.

See https://gist.github.com/busterswt/479e4e5484df7e91017da48b38fa5814
for an example diff.
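One plausible fix direction is to canonicalize rules before comparing them,
so that cosmetic differences introduced when iptables-save echoes rules back
do not register as changes. The helper below is a hypothetical sketch for
illustration, not neutron's actual diffing code:

```python
# Hypothetical normalization step: collapse whitespace and strip quoting
# differences so that equivalent rules compare equal and are not
# needlessly deleted and re-added on every iptables apply call.

def canonicalize(rule):
    # Drop quote characters and collapse runs of whitespace.
    return ' '.join(rule.replace('"', '').split())

generated = '-A neutron-l3-agent-float-snat -s 10.0.0.5/32  -j SNAT --to-source  172.24.4.3'
returned  = '-A neutron-l3-agent-float-snat -s 10.0.0.5/32 -j SNAT --to-source 172.24.4.3'

# Textually different (extra spaces), semantically identical; after
# normalization the diffing code would see them as the same rule.
print(canonicalize(generated) == canonicalize(returned))  # True
```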

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566007

Title:
  l3 iptables floating IP rules don't match iptables rules

Status in neutron:
  New

Bug description:
  The floating IP translation rules generated by the l3 agent do not
  match the format in which they are returned by iptables. This causes
  the iptables diffing code to think they are different and replace
  every one of them on an iptables apply call, which is very expensive.

  See https://gist.github.com/busterswt/479e4e5484df7e91017da48b38fa5814
  for an example diff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565859] Re: Can't detach volume from an instance

2016-04-04 Thread Matt Riedemann
What version of nova/cinder are you using?

** Summary changed:

- Can't detach volume from an instance
+ Can't detach SVC volume from an instance

** Also affects: cinder
   Importance: Undecided
   Status: New

** Tags added: libvirt storwize volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565859

Title:
  Can't detach SVC volume from an instance; guest detach device times
  out

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Steps to Reproduce:
  1. Setup instance with a libvirt/KVM host and SVC storage
  2. Spawn a VM
  3. Attach a volume to the VM
  4. Wait for volume attachment to complete successfully
  5. Detach the volume

  Expected Result:
  1. The volume is detached from the VM
  2. The volume's status becomes "Available"
  3. The volume can be deleted.

  Actual result:
  1. Volume remains attached to the VM (waited over 10 minutes)
  2. The volume's state stays "In-Use"

  Logs:
  016-03-24 16:34:13.852 143842 INFO nova.compute.resource_tracker [-] Final 
resource view: name=C387f19U21-KVM phys_ram=260533MB used_ram=4608MB 
phys_disk=545GB used_disk=40GB total_vcpus=160 used_vcpus=2 pci_stats=[]
  2016-03-24 16:34:14.081 143842 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for C387f19U21_KVM:C387f19U21-KVM
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall [-] Dynamic 
interval looping call 'oslo_service.loopingcall._func' failed
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 136, in 
_run_loop
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 377, in 
_func
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall return 
self._sleep_time
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
self.force_reraise()
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 356, in 
_func
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
f(*args, **kwargs)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 342, in 
_do_wait_and_retry_detach
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
reason=reason)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest 
transient domain.)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager 
[req-fab1608e-cffe-40f9-82d0-a4c7a9cebf10 
3ebcf1a38bc7b4977b7f8da32faad97bdef843372a670bb2817f8a066f042b9b 
e10bc17f58d8499a8fab1b05687123e5 - - -] [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Failed to detach volume 
4221ccad-0f98-4f78-ad06-92ea4941afc1 from /dev/vdb
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Traceback (most recent call last):
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4767, in 
_driver_detach_volume
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] encryption=encryption)
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1469, in 
detach_volume
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] wait_for_detach()
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 385, in 
func
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] return evt.wait()
  2016-03-24 16:34:44.921 1

[Yahoo-eng-team] [Bug 1565859] Re: Can't detach SVC volume from an instance

2016-04-04 Thread Matt Riedemann
Never mind about detaching the volume from cinder; the compute manager
tries to detach in the virt driver first, and then when that is done the
compute manager terminates the connection via the cinder API and then
detaches the volume via the cinder API.

Following:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4785

Do you see anything in the libvirtd logs for the volume/instance uuid
here?

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565859

Title:
  Can't detach SVC volume from an instance; guest detach device times
  out

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Steps to Reproduce:
  1. Setup instance with a libvirt/KVM host and SVC storage
  2. Spawn a VM
  3. Attach a volume to the VM
  4. Wait for volume attachment to complete successfully
  5. Detach the volume

  Expected Result:
  1. The volume is detached from the VM
  2. The volume's status becomes "Available"
  3. The volume can be deleted.

  Actual result:
  1. Volume remains attached to the VM (waited over 10 minutes)
  2. The volume's state stays "In-Use"

  Logs:
  016-03-24 16:34:13.852 143842 INFO nova.compute.resource_tracker [-] Final 
resource view: name=C387f19U21-KVM phys_ram=260533MB used_ram=4608MB 
phys_disk=545GB used_disk=40GB total_vcpus=160 used_vcpus=2 pci_stats=[]
  2016-03-24 16:34:14.081 143842 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for C387f19U21_KVM:C387f19U21-KVM
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall [-] Dynamic 
interval looping call 'oslo_service.loopingcall._func' failed
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 136, in 
_run_loop
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 377, in 
_func
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall return 
self._sleep_time
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
self.force_reraise()
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 356, in 
_func
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
f(*args, **kwargs)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 342, in 
_do_wait_and_retry_detach
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
reason=reason)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest 
transient domain.)
  2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager 
[req-fab1608e-cffe-40f9-82d0-a4c7a9cebf10 
3ebcf1a38bc7b4977b7f8da32faad97bdef843372a670bb2817f8a066f042b9b 
e10bc17f58d8499a8fab1b05687123e5 - - -] [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Failed to detach volume 
4221ccad-0f98-4f78-ad06-92ea4941afc1 from /dev/vdb
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Traceback (most recent call last):
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4767, in 
_driver_detach_volume
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] encryption=encryption)
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1469, in 
detach_volume
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] wait_for_detach()
  2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/o

[Yahoo-eng-team] [Bug 1565244] Re: Unexpected API Error in attach interface

2016-04-04 Thread Matt Riedemann
Also need the neutron server logs; it looks like update_port is failing
in neutron and nova is getting back a 500, which it's not handling (since
it doesn't expect a 500).

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: api neutron

** Tags added: network

** Changed in: nova
   Status: New => Incomplete

** Changed in: neutron
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565244

Title:
  Unexpected API Error in attach interface

Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  2016-04-02 02:58:32.646 6090 ERROR  [req-771d66c4-a48e-
  411d-9562-95314ef0733e - -] Failed to attach interface
  edfaa520-1775-4803-812c-22a92eeabdf8 to instance a16267cd-0502-466c-
  ae57-d97e58f01972  Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.

  Nova API log attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553345] Re: Chef gem installer fails on ubuntu 14.04

2016-04-04 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1553345

Title:
  Chef gem installer fails on ubuntu 14.04

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Running the "gem" version of the chef install fails on the latest
  Ubuntu server 14.04 LTS AMI (ami-fce3c696).

  Here is part of my user-data for cloud init:

  bootcmd:
- apt-get update && apt-get upgrade cloud-init
- apt-get install build-essential
- apt-get install -q -y <%= find_in_map("SoftwarePropertiesPackage", 
ref("LCOSVersion"), "PackageName") %>
- apt-add-repository -y ppa:brightbox/ruby-ng
- echo "Updating package lists..."
- apt-get update -qq
- echo "Installing ruby..."
- apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>
- apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>-dev
- update-alternatives --set ruby /usr/bin/ruby<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
- update-alternatives --set gem /usr/bin/gem<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
- echo "Updating rubygems to latest version..."
- gem update --system --no-rdoc --no-ri
  chef:
install_type: gems
version: <%= ref("ChefVersion") %>
  ...

  Here is the output from cloud-init

  Mar  4 18:00:04 ip-xxx[CLOUDINIT] util.py[DEBUG]: Running chef
  () failed#012Traceback (most
  recent call last):#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/stages.py", line 658, in _run_modules#012
  cc.run(run_name, mod.handle, func_args, freq=freq)#012  File
  "/usr/lib/python2.7/dist-packages/cloudinit/cloud.py", line 63, in
  run#012return self._runners.run(name, functor, args, freq,
  clear_on_fail)#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/helpers.py", line 197, in run#012results =
  functor(*args)#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/config/cc_chef.py", line 99, in handle#012
  install_chef_from_gems(cloud.distro, ruby_version, chef_version)#012
  File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_chef.py",
  line 128, in install_chef_from_gems#012
  distro.install_packages(get_ruby_packages(ruby_version))#012AttributeError:
  'str' object has no attribute 'install_packages'
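The AttributeError at the bottom of the traceback suggests the arguments
reached install_chef_from_gems in the wrong order, so a version string
landed where the distro object was expected. The following is a
hypothetical reconstruction of that failure mode, not the actual
cc_chef.py code:

```python
# Minimal stand-in for a cloud-init distro object.
class Distro:
    def install_packages(self, pkgs):
        return list(pkgs)

def install_chef_from_gems(ruby_version, chef_version, distro):
    # If a caller passes (distro, ruby_version, chef_version) instead,
    # 'distro' here is actually a version string and the next line raises
    # AttributeError: 'str' object has no attribute 'install_packages'
    return distro.install_packages(['ruby-dev'])

# Correct argument order works:
print(install_chef_from_gems('1.9', '11.4', Distro()))  # ['ruby-dev']
```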

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1553345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565711] Re: vlan configuration/unconfigured interfaces creates slow boot time

2016-04-04 Thread Blake Rouse
This might also be related to curtin and cloud-init. Targeting them as
well.

** Also affects: curtin
   Importance: Undecided
   Status: New

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: maas
   Status: New => Incomplete

** Changed in: maas
   Importance: Undecided => High

** Changed in: maas
Milestone: None => 1.9.2

** Tags added: networking

** Changed in: cloud-init
   Status: New => Incomplete

** Changed in: curtin
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565711

Title:
  vlan configuration/unconfigured interfaces creates slow boot time

Status in cloud-init:
  Incomplete
Status in curtin:
  Incomplete
Status in MAAS:
  Incomplete

Bug description:
  maas: 1.9.1+bzr4543-0ubuntu1~trusty1 (from proposed PPA)

  Deploying juju bootstrap node on Ubuntu 14.04 with the following
  network configuration:

  eth0
  static assigned IP address, default VLAN (no trunking)

  eth1
 static assigned IP address, secondary VLAN

 eth1.2667
 static assigned IP address, VLAN 2667

 eth1.2668
 static assigned IP address, VLAN 2668

 eth1.2669
 static assigned IP address, VLAN 2669

 eth1.2670
 static assigned IP address, VLAN 2670

  eth2
unconfigured

  eth3
unconfigured

  
  MAAS generates a /e/n/i with auto stanzas for the VLAN devices and the
unconfigured network interfaces; the upstart process which checks that network
configuration is complete waits for /var/run/ifup. to exist for all auto
interfaces; these will never appear for either the VLAN interfaces or the
unconfigured network interfaces.

  As a result, boot time is very long, as cloud-init and networking both
  take 2 minutes to time out waiting for network interfaces that will
  never appear to be configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1565711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538264] Re: neutron-lbaas-dashboard npm dependencies are out of date

2016-04-04 Thread Neela Shah
** Project changed: horizon => neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1538264

Title:
  neutron-lbaas-dashboard npm dependencies are out of date

Status in Neutron LBaaS Dashboard:
  New

Bug description:
  tox won't run all of the tests for neutron-lbaas-dashboard.
  Dependencies are out of date and should be synced with Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1538264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548772] Re: mkfs fails on interactive input when no partition is used

2016-04-04 Thread Scott Moser
fixed in revno 1196

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: New => Fix Committed

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1548772

Title:
  mkfs fails on interactive input when no partition is used

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  When an attempt is made to format a block device directly on a cloud
  platform like Azure as below, this attempt fails as follows:

  fs_setup:
- label: data
  device: /dev/sdc
  filesystem: ext4

  2016-02-23 11:01:50,344 - util.py[WARNING]: Failed during filesystem operation
  Failed to exec of '['/sbin/mkfs.ext4', '/dev/sdc', '-L', 'data']':
  Unexpected error while running command.
  Command: ['/sbin/mkfs.ext4', '/dev/sdc', '-L', 'data']
  Exit code: 1
  Reason: -
  Stdout: '/dev/sdc is entire device, not just one partition!\nProceed anyway? 
(y,n) '
  Stderr: 'mke2fs 1.42.9 (4-Feb-2014)\n'

  It looks like the -F option needs to be added to mkfs.ext4 to force
  mkfs to be non-interactive.
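The suggestion above can be sketched as a small command builder; the helper
below is hypothetical (not cloud-init's actual cc_disk_setup code) and only
illustrates where the -F flag would go:

```python
# Build a non-interactive mkfs command. Without -F, mkfs.ext4 prompts
# "Proceed anyway? (y,n)" when handed a whole device rather than a
# partition, and the unattended run fails as shown in the log above.

def build_mkfs_cmd(device, fs, label, force=True):
    cmd = ['/sbin/mkfs.%s' % fs, device, '-L', label]
    if force and fs.startswith('ext'):
        cmd.append('-F')  # force: skip the interactive confirmation
    return cmd

print(build_mkfs_cmd('/dev/sdc', 'ext4', 'data'))
# ['/sbin/mkfs.ext4', '/dev/sdc', '-L', 'data', '-F']
```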

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1548772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563547] Re: Unable to delete instance when cinder is down

2016-04-04 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/mitaka
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563547

Title:
  Unable to delete instance when cinder is down

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  When an instance is attached to a volume and cinder is down, you are
  unable to delete the instance, and the instance status is ERROR.

  This bug is reproducible on master (currently newton) using devstack.

  1. Create an instance
  2. Create a volume
  3. Attach volume to instance
  4. Bring the cinder api down via screen
  5. Attempt to delete the instance
  6. Note that the instance is not deleted
  7. Note that the instance state is ERROR

  For example:

  http://paste.openstack.org/show/492359/

  This bug was initially reported downstream here:

  https://bugzilla.redhat.com/show_bug.cgi?id=1318883

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555238] Re: cloud-init FAIL status message doesn't differentiate between a critical failure vs

2016-04-04 Thread Scott Moser
Generally speaking, I think this bug is an off-the-cuff response to an issue
that is now fixed (bug 1554152).  Spending engineering resources developing a
complex solution is not justified by a single bug.

In that regard:
a.) that bug is fixed, and won't happen again.
b.) cloud-init is not in a position to determine "NONFATAL";
 some security-conscious users may consider a failure to get good entropy
 during system boot to be fatal.  Some of those users use MAAS to deploy their
 systems.
c.) What would MAAS be expected to do in a case where it received a
FAILCONTINUE?  Would it then look at that log and make a decision?
Clearly something went wrong, and that something may have adverse effects.

cloud-init generally knows less about what is supposed to happen than its
user does, so it is less in a position to make such decisions.
I find it much better for our system as a whole to *not* have transient
failures, even unimportant ones.

If there are more cases of cloud-init correctly reporting failure that are
problematic to maas we can consider engineering a way that MAAS can tell
cloud-init what types of failures it would consider not important.

Consider the following cases:
1.) /dev/random did not get seeded with data from entropy.canonical.com
  Some people might consider this fatal, some people might find it
  desirable.
2.) cloud-init failed to add a configured user
3.) cloud-init failed to add a configured user to a specific group
4.) user-provided code (runcmd / user-data script) exited non-zero
   (Note: this is how 'curtin install' is provided to cloud-init)

Each of these is fatal for some users and non-fatal for others.
Generally looking at the things above, maas might consider 1, 2 and 3
to be FAILCONTINUE , as it does not need the users at all.

However, a user that launched a system expecting their admins to be
able to get into it is very much bothered by '2'.

Case '4' is pretty straightforward, but lots of times my scripts fail
and things deal with that.

Implementing a solution to this really means the user of cloud-init (in
this case MAAS) needs to know what *they* consider fatal or non-fatal and
either tell cloud-init to report those things as fatal, or interpret the
reports as fatal or non-fatal itself.


** Changed in: cloud-init
   Status: New => Won't Fix

** Changed in: cloud-init
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1555238

Title:
  cloud-init FAIL status message doesn't differentiate between a
  critical failure vs

Status in cloud-init:
  Won't Fix

Bug description:
  Cloud-init status messages that are sent to MAAS provide SUCCESS/FAIL
  results for the different modules that cloud-init runs.  As such, if a
  module failed, MAAS captures that FAIL message and acts upon it; for
  example, it marks a machine Failed Deployment.

  That being said, when using a MAAS data source/endpoint, there are
  some cloud-init modules for which a failure is not critical, meaning
  that cloud-init won't stop working or cause a deployment failure if
  the module has failed.  However, this isn't reflected in the messaging.
  Even if a module is not critical, cloud-init will still send a FAIL
  message to MAAS, which causes MAAS to mark a machine Failed
  Deployment.

  As such, cloud-init shouldn't tell MAAS that a module run has FAILED
  if it is not critical to a MAAS deployment (i.e., one that would cause
  a machine to FAIL).  In turn, cloud-init should be sending:

  A different 'result' i.e. SUCCESS/FAIL/WARNING (or FAILCONTINUE)

  As an example, the info sent to MAAS is:

  "event_type": "finish",
  "origin": "curtin",
  "description": "Finished XYZ",
  "name": "cmd-install",
  "result": "FAIL",

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1555238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565638] Re: Cloud-init on Xenial won't deploy compressed content in "write_files" - says DecompressionError: Not a gzipped file

2016-04-04 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565638

Title:
  Cloud-init on Xenial won't deploy compressed content in "write_files"
  - says DecompressionError: Not a gzipped file

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  I'm trying to deploy an Amazon EC2 instance using cloud-init.

  When I'm compressing the files in the write_files section, the cloud-
  init process doesn't write the files, and I get the following error in
  the cloud-init.log :

  Apr  3 16:27:16 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: 
init-network/config-write-files: FAIL: running config-write-files with 
frequency once-per-instance
  Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[WARNING]: Running module 
write-files ()
   failed
  Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[DEBUG]: Running module write-files 
() failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 393, in 
decomp_gzip
  return decode_binary(gh.read())
File "/usr/lib/python3.5/gzip.py", line 274, in read
  return self._buffer.read(size)
File "/usr/lib/python3.5/gzip.py", line 461, in read
  if not self._read_gzip_header():
File "/usr/lib/python3.5/gzip.py", line 409, in _read_gzip_header
  raise OSError('Not a gzipped file (%r)' % magic)
  OSError: Not a gzipped file (b"b'")

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 735, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 39, in handle
  write_files(name, files, log)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 74, in write_files
  contents = extract_contents(f_info.get('content', ''), extractions)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 98, in extract_contents
  result = util.decomp_gzip(result, quiet=False)
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 400, in 
decomp_gzip
  raise DecompressionError(six.text_type(e))
  cloudinit.util.DecompressionError: Not a gzipped file (b"b'")

  I've verified that the cloud-init user data I'm submitting does encode the
  files correctly by running this Ruby code on the cloud-init YAML file:

  > File.write("test.gz",
  YAML.load(File.read("test.init"))["write_files"].first["content"])

  Then:

  $ file test.gz
  test.gz: gzip compressed data, last modified: Mon Apr  4 06:55:53 2016, max 
compression, from Unix
  $ python
  Python 2.7.11+ (default, Mar 30 2016, 21:00:42) 
  [GCC 5.3.1 20160330] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import gzip
  >>> with gzip.open('test.gz', 'rb') as f:
  ... file_content = f.read()
  ... print file_content
  ... 
  

  I've never implemented this with trusty, so I'm not sure how cloud-
  init on trusty handles that.
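  For reference, here is a minimal Python 3 sketch of producing a write_files
  entry that cloud-init can decompress (the path and file body are
  illustrative). The magic bytes b"b'" in the traceback above suggest a Python
  bytes repr string, i.e. text beginning with the characters b', was embedded
  instead of the raw gzip data:

```python
import base64
import gzip

# Compress the raw bytes, then base64-encode, as expected for a
# write_files entry with encoding: "gzip+base64". Embedding
# repr(gzipped_bytes) instead would produce text starting with b'...,
# which fails the gzip magic-number check.
body = b"hello from cloud-init\n"

entry = {
    "path": "/tmp/hello.txt",  # hypothetical target path
    "encoding": "gzip+base64",
    "content": base64.b64encode(gzip.compress(body)).decode("ascii"),
}

# Round-trip the content the way cloud-init's decode path would.
assert gzip.decompress(base64.b64decode(entry["content"])) == body
```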

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1565638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558069] Re: Login complains "Your environment specifies an invalid locale", doesn't say which locale

2016-04-04 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1558069

Title:
  Login complains "Your environment specifies an invalid locale",
  doesn't say which locale

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Triaged

Bug description:
  On login to a brand new trusty machine with all updates applied, the
  following message appears:

  _
  WARNING! Your environment specifies an invalid locale.
   This can affect your user experience significantly, including the
   ability to manage packages. You may install the locales by running:

 sudo apt-get install language-pack-UTF-8
   or
 sudo locale-gen UTF-8

  To see all available language packs, run:
 apt-cache search "^language-pack-[a-z][a-z]$"
  To disable this message for all users, run:
 sudo touch /var/lib/cloud/instance/locale-check.skip
  _

  - The message complains about an invalid locale, but then doesn't tell
  you what the locale is or what is invalid about it.

  - The suggested advice "sudo apt-get install language-pack-UTF-8"
  breaks as follows:

  ubuntu@z4-dev-black-wap01:~$ sudo apt-get install language-pack-UTF-8
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  E: Unable to locate package language-pack-UTF-8

  The above warning needs to be fixed to contain the locale that is
  invalid, and to provide accurate package names in the advice given to
  fix it.
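  As a sketch of what a clearer warning could look like, the check could name
  the offending environment variable and derive an existing language pack from
  the locale value (hypothetical helper, not cloud-init's actual code):

```python
# Hypothetical helper: report which variable holds the invalid locale and
# suggest a package that actually exists (language-pack-<lang>, rather
# than the non-existent language-pack-UTF-8 from the current warning).
def describe_invalid_locale(environ):
    for var in ("LC_ALL", "LC_CTYPE", "LANG"):
        value = environ.get(var)
        if value:
            # en_US.UTF-8 -> "en", used to form language-pack-en.
            lang = value.split("_")[0].split(".")[0]
            return ("Locale %r from $%s is not generated; try: "
                    "sudo apt-get install language-pack-%s"
                    % (value, var, lang))
    return "No locale set in the environment."

print(describe_invalid_locale({"LANG": "en_US.UTF-8"}))
```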

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1558069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424537] Re: Error creating nova Instance for cirros image 0.3.1 on AIO-HAVANA Ubuntu 12.0.4 package

2016-04-04 Thread Rob Cresswell
Havana is a long way out of support and won't be fixed at this stage.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424537

Title:
  Error creating nova Instance for cirros image 0.3.1 on AIO-HAVANA
  Ubuntu 12.0.4 package

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Problem - Error creating Nova Instance using Cirros 0.3.1 image on
  AIO-HAVANA Ubuntu 12.04 LTS package.

  Steps to reproduce -
  -Install VMware Workstation.
  -Create a VM with 2 vCPUs, 1 GB RAM, 40 GB disk, 2 NICs, VT-x enabled.
  -Install Ubuntu 12.04 LTS.
  -Install Git and the OpenStack packages: keystone, glance, nova, python, etc.
  -Using SSH, open the Horizon dashboard.
  -Create an instance using the Cirros 0.3.1 image.
  -Observe the error --> the problem.

  Refer to this for the detailed installation procedure that was followed:
  http://www.discoposse.com/index.php/2014/01/26/openstack-havana-all-in-one-lab-on-vmware-workstation/

  Snapshots and logs are attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453260] Re: Should use generated unique ID for Launch Instance wizard form field

2016-04-04 Thread Rob Cresswell
I've looked over the proposed patch, and I just don't think this is a
bug at all. There is no use case for wanting to UUID form elements. It's
just extra code to maintain, and hugely over-engineered.

** Changed in: horizon
   Status: In Progress => Won't Fix

** Changed in: horizon
 Assignee: Shaoquan Chen (sean-chen2) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453260

Title:
  Should use generated unique ID for Launch Instance wizard form field

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Currently, in Launch Instance wizard, form field uses hard-coded ID
  value which cannot be guaranteed to be unique in the page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565466] Re: pci detach failed with 'PciDevice' object has no attribute '__getitem__'

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300885
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=871368bc65eb9ededd053055c35673dbb61dd3ad
Submitter: Jenkins
Branch:master

commit 871368bc65eb9ededd053055c35673dbb61dd3ad
Author: Moshe Levi 
Date:   Sun Apr 3 15:53:36 2016 +0300

libvirt: pci detach devices should use dev.address

_detach_pci_devices receives pci_devs, which is a list of PciDevice
objects. The code that checks whether a PCI device is detached was using
dict-style dev['address'] rather than the object-format dev.address.

Closes-Bug: #1565466

Change-Id: I9ba58707d03d19018a025d7760f2a77f84d23aad


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565466

Title:
  pci detach failed with 'PciDevice' object has no attribute
  '__getitem__'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  When suspending an instance with a PCI device, nova tries to detach the PCI
  device from the libvirt domain. After calling guest.detach_device, nova
  checks the domain to ensure the detach has finished. If that detach failed
  (because of an old qemu in my case), the _detach_pci_devices method failed
  with the following error instead of raising PciDeviceDetachFailed:
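  The failure mode is easy to reproduce in isolation. A minimal stand-in (the
  real PciDevice lives in nova.objects and is based on oslo.versionedobjects)
  shows that its fields must be read as attributes, not dict keys:

```python
# Minimal stand-in for Nova's PciDevice: fields are attributes, not keys.
class PciDevice(object):
    def __init__(self, address):
        self.address = address

dev = PciDevice("0000:81:00.1")

# Attribute access works; dict-style subscripting raises TypeError, which
# is the failure seen in the traceback below.
assert dev.address == "0000:81:00.1"
try:
    dev["address"]
    raise AssertionError("subscripting unexpectedly succeeded")
except TypeError:
    pass  # 'PciDevice' object is not subscriptable
```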

  
  2016-03-31 08:50:46.727 10338 DEBUG nova.objects.instance 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Lazy-loading 
'pci_devices' on Instance uuid 7114fa62-10bb-45dc-b64e-b301bfce4dfa 
obj_load_attr /opt/stack/nova/nova/objects/instance.py:895
  2016-03-31 08:50:46.727 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_id: 
c96a579643054867adc0e119d93cc6a9 exchange 'nova' topic 'conductor' _send 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:454
  2016-03-31 08:50:46.745 10338 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: c96a579643054867adc0e119d93cc6a9 __call__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:302
  2016-03-31 08:50:46.751 10338 DEBUG nova.virt.libvirt.config 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Generated XML ('\n  \n\n  \n\n',)  
to_xml /opt/stack/nova/nova/virt/libvirt/config.py:82
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Setting instance vm_state to ERROR
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Traceback (most recent call last):
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6588, in 
_error_out_instance_on_exception
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] yield
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4196, in suspend_instance
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self.driver.suspend(context, instance)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2641, in suspend
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_sriov_ports(context, 
instance, guest)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3432, in _detach_sriov_ports
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_pci_devices(guest, 
sriov_devs)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3350, in _detach_pci_devices
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] dbsf = 
pci_utils.parse_address(dev['address'])
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] TypeError: 'PciDevice' object has no 
attribute '__getitem__'
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]
  2016-03-31 08:50:51.792 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_

[Yahoo-eng-team] [Bug 1565698] Re: novncproxy missed the vnc section in cli options

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300736
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2fe96e87fe857635c1007075e6c8b04b9514fe7e
Submitter: Jenkins
Branch:master

commit 2fe96e87fe857635c1007075e6c8b04b9514fe7e
Author: Allen Gao 
Date:   Sat Apr 2 21:56:16 2016 +0800

config options: fix the missed cli options of novncproxy

Commit I5da0ad8cd42ef8b969ec05c07c497238e92f1f41 moved the
'vnc' section to 'nova/conf/vnc.py', but forgot to register
cli options of 'novncproxy'.

Change-Id: I3732f237a110d33e51a0b7e31cd557ca45840af1
Implements: blueprint centralize-config-options-newton
Closes-Bug: #1565698


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565698

Title:
  novncproxy missed the vnc section in cli options

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  $ nova-novncproxy --help
  usage: nova-novncproxy [-h] [--cert CERT] [--config-dir DIR]
 [--config-file PATH] [--daemon] [--debug] [--key KEY]
 [--log-config-append PATH]
 [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR]
 [--log-file PATH] [--nodaemon] [--nodebug] [--norecord]
 [--nosource_is_ipv6] [--nossl_only] [--nouse-syslog]
 [--noverbose] [--nowatch-log-file] [--record]
 [--source_is_ipv6] [--ssl_only]
 [--syslog-log-facility SYSLOG_LOG_FACILITY]
 [--use-syslog] [--verbose] [--version]
 [--watch-log-file] [--web WEB]
 [--remote_debug-host REMOTE_DEBUG_HOST]
 [--remote_debug-port REMOTE_DEBUG_PORT]

  optional arguments:
-h, --helpshow this help message and exit
--cert CERT   SSL certificate file
--config-dir DIR  Path to a config directory to pull *.conf files from.
  This file set is sorted, so as to provide a
  predictable parse order if individual options are
  over-ridden. The set is parsed after the file(s)
  specified via previous --config-file, arguments hence
  over-ridden options in the directory take precedence.
--config-file PATHPath to a config file to use. Multiple config files
  can be specified, with values in later files taking
  precedence. Defaults to None.
--daemon  Become a daemon (background process)
--debug, -d   If set to true, the logging level will be set to DEBUG
  instead of the default INFO level.
--key KEY SSL key file (if separate from cert)
--log-config-append PATH, --log_config PATH
  The name of a logging configuration file. This file is
  appended to any existing logging configuration files.
  For details about logging configuration files, see the
  Python logging module documentation. Note that when
  logging configuration files are used then all logging
  configuration is set in the configuration file and
  other logging configuration options are ignored (for
  example, logging_context_format_string).
--log-date-format DATE_FORMAT
  Defines the format string for %(asctime)s in log
  records. Default: None . This option is ignored if
  log_config_append is set.
--log-dir LOG_DIR, --logdir LOG_DIR
  (Optional) The base directory used for relative
  log_file paths. This option is ignored if
  log_config_append is set.
--log-file PATH, --logfile PATH
  (Optional) Name of log file to send logging output to.
  If no default is set, logging will go to stderr as
  defined by use_stderr. This option is ignored if
  log_config_append is set.
--nodaemonThe inverse of --daemon
--nodebug The inverse of --debug
--norecordThe inverse of --record
--nosource_is_ipv6The inverse of --source_is_ipv6
--nossl_only  The inverse of --ssl_only
--nouse-syslogThe inverse of --use-syslog
--noverbose   The inverse of --verbose
--nowatch-log-fileThe inverse of --watch-log-file
--record  Record sessions to FILE.[session_number]
--source_is_ipv6

[Yahoo-eng-team] [Bug 1469858] Re: Update contributing.rst for eslint

2016-04-04 Thread Rob Cresswell
This has been fixed.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1469858

Title:
  Update contributing.rst for eslint

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In our contributing documentation (doc/source/contributing.rst) we
  discuss and give instructions for installing and setting up JSHint for
  development.  This is not a bad thing, but with the switch in Horizon
  to use eslint instead of jscs and jshint in our tool chain
  (https://review.openstack.org/#/c/192327/)  we should give
  instructions on how to setup and install eslint in your development
  environment instead to match what the gate is checking for.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1469858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471808] Re: Horizon wont start with rejoin-stack.sh

2016-04-04 Thread Rob Cresswell
Running stack after unstack would bring Horizon back up; clean is
unnecessary. Your instances have gone because you unstacked and took
down your endpoints.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1471808

Title:
  Horizon wont start with rejoin-stack.sh

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I have a fresh Ubuntu install and I git-cloned DevStack, all on a VM. Here is
  how I reproduce my bug:
  1- stack.sh, then open my IP address for Horizon (the dashboard); everything
  works fine and I am deploying instances.
  2- unstack.sh to bring services down.
  3- rejoin-stack.sh; after it's done I try to open the dashboard, but the IP
  doesn't work (no login appears; it's unable to reach the webserver).

  the only way to get everything started again is to :
  ./clean.sh

  and then
  ./stack.sh

  but then every instance is lost :(

  So, any help on how to debug this problem?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1471808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536540] Re: AttributeError: 'NoneType' object has no attribute 'port_name' when deleting an instance with QoS policy attached

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300679
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=213d48df014eebdc96a70a9369ffc9701f419f9f
Submitter: Jenkins
Branch:master

commit 213d48df014eebdc96a70a9369ffc9701f419f9f
Author: Sławek Kapłoński 
Date:   Fri Apr 1 22:00:41 2016 +

Improve handle port_update and port_delete events in ovs qos agent

This patch improves getting the vif port name from the port info in the
Open vSwitch QoS extension driver. It prevents tracebacks saying that a
NoneType object has no attribute. Such a situation could happen when the
extension driver handled an event on a port which was already deleted.

Change-Id: Ib76630183f1091436f1cd282a91cbce5fb151716
Closes-Bug: #1536540


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536540

Title:
  AttributeError: 'NoneType' object has no attribute 'port_name' when
  deleting an instance with QoS policy attached

Status in neutron:
  Fix Released

Bug description:
  After deleting an instance with a port the has a QoS policy attached
  the following Trace occurs in the OVS agent log:

  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
Traceback (most recent call last):
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/manager.py", 
line 77, in delete_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
extension.obj.delete_port(context, data)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 239, in delete_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._process_reset_port(port)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 254, in _process_reset_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self.qos_driver.delete(port)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 89, in delete
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._handle_rule_delete(port, rule_type)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 104, in _handle_rule_delete
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
handler(port)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py",
 line 49, in delete_bandwidth_limit
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
port_name = port['vif_port'].port_name
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
AttributeError: 'NoneType' object has no attribute 'port_name'
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
  2016-01-21 04:02:22.636 21316 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-b605ce4f-c832-4d8c-a7a7-ee8b89f47e4a - - - - -] port_unbound(): net_uuid 
None not in local_vlan_map
  2016-01-21 04:02:22.637 21316 INFO neutron.agent.securitygroups_rpc 
[req-b605ce4f-c832-4d8c-a7a7-ee8b8

  How to reproduce
  ===
  1. Enable QoS
  2. Create a QoS policy and a rule
  3. Launch an instance 
  4. Attach the QoS policy to a the port of the instance
  5. Delete the instance and check the OVS agent's log

  Version
  ==
  RHEL7.2
  Liberty
  python-neutron-7.0.1-6.el7ost.noarch
  openstack-neutron-ml2-7.0.1-6.el7ost.noarch
  openstack-neutron-openvswitch-7.0.1-6.el7ost.noarch
  openstack-neutron-common-7.0.1-6.el7ost.noarch
  openstack-neutron-7.0.1-6.el7ost.noarch
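  A hedged sketch of the defensive pattern that avoids the traceback (names
  are illustrative, not Neutron's actual code): treat a missing 'vif_port' as
  an already-deleted port and return early instead of dereferencing None:

```python
# Illustrative stand-in for the agent's VIF port object.
class VifPort(object):
    def __init__(self, port_name):
        self.port_name = port_name

def delete_bandwidth_limit(port):
    vif_port = port.get("vif_port")
    if vif_port is None:
        # The port was deleted before the QoS extension handled the
        # event; there is nothing left to clean up.
        return None
    return vif_port.port_name  # safe: vif_port is a real object here

assert delete_bandwidth_limit({}) is None
assert delete_bandwidth_limit({"vif_port": VifPort("tap0")}) == "tap0"
```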

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565724] Re: Launch Instance layout breaks on long names in transfer table

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301089
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=36ead1432830dff5bd97e753be2dd8857dc15836
Submitter: Jenkins
Branch:master

commit 36ead1432830dff5bd97e753be2dd8857dc15836
Author: Rob Cresswell 
Date:   Mon Apr 4 13:30:10 2016 +0100

Prevent transfer tables expanding out of modal

The transfer tables can sometimes expand out of the modal when they have
columns containing very long UUID entries, which is common for Networks/
Subnets. We should force word-break to prevent this.

Change-Id: I65fece6139010b128f8b60e3a53e0b5ac7b82edb
Closes-Bug: 1565724


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1565724

Title:
  Launch Instance layout breaks on long names in transfer table

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  See the screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1565724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565895] [NEW] Hyper-V: cold migrations cannot handle shared storage

2016-04-04 Thread Lucian Petrut
Public bug reported:

At the moment, if the destination host is different from the source host, we
attempt to move the instance files without checking whether shared storage is
being used.

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: driver hyper-v

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: driver hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565895

Title:
  Hyper-V: cold migrations cannot handle shared storage

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  At the moment, if the destination host is different from the source host, we
  attempt to move the instance files without checking whether shared storage
  is being used.
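  A generic shared-storage probe can be sketched as follows (illustrative
  only, not compute-hyperv's implementation): drop a marker file on the source
  path and check whether it becomes visible at the destination path:

```python
import os
import tempfile
import uuid

def is_shared_storage(src_dir, dest_dir):
    # Write a uniquely named marker file on the source side...
    marker_name = "probe-%s" % uuid.uuid4()
    marker = os.path.join(src_dir, marker_name)
    open(marker, "w").close()
    try:
        # ...and see whether the destination path observes the same file.
        return os.path.exists(os.path.join(dest_dir, marker_name))
    finally:
        os.remove(marker)

# Trivial local demonstration: a directory trivially "shares" with itself.
src = tempfile.mkdtemp()
assert is_shared_storage(src, src) is True
```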

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1565895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552786] Re: VMware: Port Group and Port ID not explicit from port binding

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288076
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e964b4778cfe6a1864718bdad4ab037ddf976766
Submitter: Jenkins
Branch:master

commit e964b4778cfe6a1864718bdad4ab037ddf976766
Author: Thomas Bachman 
Date:   Thu Mar 3 10:32:09 2016 -0500

VMware: Use Port Group and Key in binding details

This uses the port group and port key information passed
via the binding:vif_details attribute, if available. This
allows these parameters to be passed explicitly.

Change-Id: I41949e8134c2ca860e7b7ad3a2679b9f2884a99a
Closes-Bug: #1552786


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552786

Title:
  VMware: Port Group and Port ID not explicit from port binding

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Various Neutron core plugins and/or ML2 mechanism drivers that support
  VMware vCenter as a Nova compute backend have different ways to map
  Neutron resources to vCenter constructs. The vCenter VIF driver code
  in Nova currently assumes a particular mapping. The Neutron plugin or
  driver should be able to use the port's binding:vif_details attribute
  to explicitly specify the vCenter port key and port group to be used
  for the VIF.
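  The precedence described above can be sketched generically (the key names
  and defaults below are hypothetical, not the plugin's actual vif_details
  keys): explicit binding:vif_details values win over the driver's assumed
  mapping:

```python
# Hypothetical key names; the point is only the precedence order.
def resolve_port_binding(vif_details, default_group, default_key):
    # Prefer values passed explicitly by the Neutron plugin/driver,
    # falling back to the mapping the VIF driver would otherwise assume.
    return (vif_details.get("dvs_port_group", default_group),
            vif_details.get("dvs_port_key", default_key))

assert resolve_port_binding({}, "pg-default", "100") == ("pg-default", "100")
assert resolve_port_binding(
    {"dvs_port_group": "pg-42"}, "pg-default", "100") == ("pg-42", "100")
```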

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1552786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561904] Re: [Functional tests] is_marked method of checkbox field always return False

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/297587
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0b3d81132e225db20af97f6d533e2be4380e1c70
Submitter: Jenkins
Branch:master

commit 0b3d81132e225db20af97f6d533e2be4380e1c70
Author: Georgy Dyuldin 
Date:   Fri Mar 25 12:02:20 2016 +0300

Fix CheckBoxMixin:is_marked behavior

The is_marked method of CheckBoxMixin always returned False because it
worked with the wrong element instead of the corresponding one.
Change-Id: I08e5f698695718dcb7125f12e1d80753b0eca8d3
Closes-Bug: #1561904


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561904

Title:
  [Functional tests] is_marked method of checkbox field always return
  False

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  CheckBoxMixin:is_marked always returns False, even if corresponding
  element is marked

  Location of method:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/regions/forms.py#L88

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463729] Re: refactor checking env. var INTEGRATION_TESTS

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/190077
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8e9b609fc63b15f8077e3e646f5f20b416dc7335
Submitter: Jenkins
Branch:master

commit 8e9b609fc63b15f8077e3e646f5f20b416dc7335
Author: Martin Pavlasek 
Date:   Wed Jun 10 10:53:38 2015 +0200

Refactor of BaseTestCase

Made the code more readable by checking conditions first.

Closes-Bug: 1463729
Partially implements blueprint: selenium-integration-testing
Change-Id: I34edd7261022f7a0a441e0716be8b84f90e8cde9


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1463729

Title:
  refactor checking env. var INTEGRATION_TESTS

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There is a positive branch, much bigger than the negative one (which just
  raises an exception). It's not very clear, and this nesting is not
  necessary at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1463729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565466] Re: pci detach failed with 'PciDevice' object has no attribute '__getitem__'

2016-04-04 Thread Matt Riedemann
This was a regression in liberty due to
https://github.com/openstack/nova/commit/e464267085ca45129ef9b092db41112697ddf3ca
and the fact that the unit tests were passing dicts rather than the
actual PciDeviceList that we get from the Instance object.

** Tags added: libvirt mitaka-backport-potential

** Tags added: liberty-backport-potential

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Tags removed: liberty-backport-potential mitaka-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565466

Title:
  pci detach failed with 'PciDevice' object has no attribute
  '__getitem__'

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  When suspending an instance with a PCI device, nova tries to detach the PCI
  device from the libvirt domain. After calling guest.detach_device, nova
  checks the domain to ensure the detach has finished. If that detach failed
  (because of an old qemu in my case), the _detach_pci_devices method failed
  with the following error instead of raising PciDeviceDetachFailed:

  
  2016-03-31 08:50:46.727 10338 DEBUG nova.objects.instance 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Lazy-loading 
'pci_devices' on Instance uuid 7114fa62-10bb-45dc-b64e-b301bfce4dfa 
obj_load_attr /opt/stack/nova/nova/objects/instance.py:895
  2016-03-31 08:50:46.727 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_id: 
c96a579643054867adc0e119d93cc6a9 exchange 'nova' topic 'conductor' _send 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:454
  2016-03-31 08:50:46.745 10338 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received reply msg_id: c96a579643054867adc0e119d93cc6a9 __call__ 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:302
  2016-03-31 08:50:46.751 10338 DEBUG nova.virt.libvirt.config 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] Generated XML ('\n  \n\n  \n\n',)  
to_xml /opt/stack/nova/nova/virt/libvirt/config.py:82
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Setting instance vm_state to ERROR
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] Traceback (most recent call last):
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6588, in 
_error_out_instance_on_exception
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] yield
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/compute/manager.py", line 4196, in suspend_instance
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self.driver.suspend(context, instance)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2641, in suspend
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_sriov_ports(context, 
instance, guest)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3432, in _detach_sriov_ports
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] self._detach_pci_devices(guest, 
sriov_devs)
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3350, in _detach_pci_devices
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] dbsf = 
pci_utils.parse_address(dev['address'])
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa] TypeError: 'PciDevice' object has no 
attribute '__getitem__'
  2016-03-31 08:50:51.784 10338 ERROR nova.compute.manager [instance: 
7114fa62-10bb-45dc-b64e-b301bfce4dfa]
  2016-03-31 08:50:51.792 10338 DEBUG oslo_messaging._drivers.amqpdriver 
[req-225f9ed4-1f93-427b-a045-84535b3aeb55 admin demo] CALL msg_id: 
b5353aecfd4a44aa8735c46a0427a12d exchange 'nova' topic 'conductor' _send 
/usr/lib/python2.7

[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2016-04-04 Thread Kairat Kushaev
Fixed in Glance 12.0.0.0b3

** No longer affects: glance-store

** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Bareon:
  Fix Released
Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Committed
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Released
Status in hacking:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in kolla:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-midonet:
  Fix Released
Status in networking-ofagent:
  Fix Released
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  Fix Released
Status in tempest:
  Fix Released
Status in zaqar:
  Fix Released
Status in python-ironicclient package in Ubuntu:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when a py27 run precedes py34, and
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473953] Re: rbd cannot delete residual image from ceph in some situations

2016-04-04 Thread Kairat Kushaev
** Changed in: glance-store
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1473953

Title:
  rbd cannot delete residual image from ceph in some situations

Status in Glance:
  Invalid
Status in glance_store:
  Fix Released

Bug description:
  When a user uploads an image through the glance RESTful API, the image
  generally has a large size.

  In fact, uploading a large enough image may fail due to the http connection 
breaking or other situations like that.
  RBD supports a rollback mechanism: when the add operation fails, a rollback 
must be triggered (delete the residual image if it was created).

  In a condition we have already encountered, the incomplete image had not yet 
been snapshotted, so the rollback's unprotect snap operation throws the 
exception "rbd.ImageNotFound".
  This exception is eventually swallowed, while the code that removes the 
residual image from ceph is skipped.

  Therefore, re-uploading an image with the same image id fails for the
  above reason (the residual image already exists) and the residual image
  needs to be deleted manually from ceph.
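The control flow described above can be sketched generically. The function names below are illustrative, not the real glance_store rbd driver API; the point is that a missing snapshot during rollback must not abort the residual-image delete:

```python
class ImageNotFound(Exception):
    """Stand-in for rbd.ImageNotFound."""

deleted = []

def unprotect_snapshot(image):
    # In the buggy path no snapshot was ever taken, so this raises.
    raise ImageNotFound(image)

def delete_image(image):
    deleted.append(image)

def rollback(image):
    try:
        unprotect_snapshot(image)
    except ImageNotFound:
        # Swallow the missing-snapshot error, but keep going: skipping
        # the delete below is exactly what leaves the residual image.
        pass
    delete_image(image)

rollback("img-1")
print(deleted)  # ['img-1']
```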

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565859] [NEW] Can't detach volume from an instance

2016-04-04 Thread Lauren Taylor
Public bug reported:

Steps to Reproduce:
1. Set up an instance with a libvirt/KVM host and SVC storage
2. Spawn a VM
3. Attach a volume to the VM
4. Wait for volume attachment to complete successfully
5. Detach the volume

Expected Result:
1. The volume is detached from the VM
2. The volume's status becomes "Available"
3. The volume can be deleted.

Actual result:
1. Volume remains attached to the VM (waited over 10 minutes)
2. The volume's state stays "In-Use"

Logs:
2016-03-24 16:34:13.852 143842 INFO nova.compute.resource_tracker [-] Final 
resource view: name=C387f19U21-KVM phys_ram=260533MB used_ram=4608MB 
phys_disk=545GB used_disk=40GB total_vcpus=160 used_vcpus=2 pci_stats=[]
2016-03-24 16:34:14.081 143842 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for C387f19U21_KVM:C387f19U21-KVM
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall [-] Dynamic 
interval looping call 'oslo_service.loopingcall._func' failed
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall Traceback (most 
recent call last):
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 136, in 
_run_loop
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 377, in 
_func
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall return 
self._sleep_time
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
self.force_reraise()
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
six.reraise(self.type_, self.value, self.tb)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 356, in 
_func
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall result = 
f(*args, **kwargs)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 342, in 
_do_wait_and_retry_detach
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall reason=reason)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall 
DeviceDetachFailed: Device detach failed for vdb: Unable to detach from guest 
transient domain.)
2016-03-24 16:34:44.919 143842 ERROR oslo.service.loopingcall
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager 
[req-fab1608e-cffe-40f9-82d0-a4c7a9cebf10 
3ebcf1a38bc7b4977b7f8da32faad97bdef843372a670bb2817f8a066f042b9b 
e10bc17f58d8499a8fab1b05687123e5 - - -] [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Failed to detach volume 
4221ccad-0f98-4f78-ad06-92ea4941afc1 from /dev/vdb
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] Traceback (most recent call last):
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4767, in 
_driver_detach_volume
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] encryption=encryption)
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1469, in 
detach_volume
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] wait_for_detach()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 385, in 
func
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] return evt.wait()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] return hubs.get_hub().switch()
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6]   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2016-03-24 16:34:44.921 143842 ERROR nova.compute.manager [instance: 
88724ae0-38d6-4b06-9236-7d5f4d6d6cd6] return self.greenlet.switch()
2016-03-24 16:34:44.921 143842 ERROR no

[Yahoo-eng-team] [Bug 1564829] Re: Hyper-V SMBFS volume driver cannot handle missing mount options

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300401
Committed: 
https://git.openstack.org/cgit/openstack/compute-hyperv/commit/?id=01eec24a13d2886440d2ea71eb84d9598b09380c
Submitter: Jenkins
Branch:master

commit 01eec24a13d2886440d2ea71eb84d9598b09380c
Author: Lucian Petrut 
Date:   Fri Apr 1 13:19:57 2016 +0300

SMBFS: properly handle empty connection info options

We use regex to fetch the credentials from the connection info options
when mounting SMB shares.

The issue is that this field may be empty, in which case we may get
a TypeError. This patch fixes this issue.

Change-Id: I44f8a74ad38e4419f58bf8f59682af79fff8070c
Closes-Bug: #1564829


** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564829

Title:
  Hyper-V SMBFS volume driver cannot handle missing mount options

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  We use regex to fetch the credentials from the connection info options
  when mounting SMB shares.

  The issue is that this field may be empty, in which case we may get
  a TypeError.
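A quick sketch of the failure and one possible guard; the option-string format and regex here are illustrative, not the actual driver code:

```python
import re

# Regex functions reject None outright, which is the reported TypeError.
try:
    re.findall(r'username=([^,]+)', None)
except TypeError:
    print("TypeError on None options")

def parse_username(options):
    # Coercing None to '' keeps the regex call safe.
    options = options or ''
    found = re.findall(r'user(?:name)?=([^,]+)', options)
    return found[0] if found else None

print(parse_username(None))              # None
print(parse_username('username=admin'))  # admin
```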

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1564829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369876] Re: Missing HttpOnly Attribute in Session Cookie

2016-04-04 Thread Matt Borland
Yeah, reference: https://docs.djangoproject.com/en/1.8/ref/settings
/#csrf-cookie-secure ; I'm pretty sure that is the preferred method of
dealing with cookie disclosure.

** Changed in: horizon
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1369876

Title:
  Missing HttpOnly Attribute in Session Cookie

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Affected URL: https://Ip_address/admin/
  Entity: csrftoken (Cookie)
  Risk: It is possible to steal or manipulate customer session and cookies, 
which might be used to impersonate a legitimate user, allowing the hacker to 
view or alter user records, and to perform transactions as that user.
  Causes: The web application sets session cookies without the HttpOnly 
attribute
  Recommend Fix: Add the 'HttpOnly' attribute to all session cookies.
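In Django (and hence a Horizon deployment's local_settings.py), the recommended fix maps to cookie settings like these; a sketch, with values to be chosen per deployment:

```python
# Session cookie: keep it out of reach of page JavaScript and off
# plain HTTP (Django enables HttpOnly for session cookies by default).
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = True

# CSRF cookie: 'secure' is the usual hardening step. Making it
# HttpOnly as well is a trade-off, since some AJAX setups read the
# token from the cookie.
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = False
```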

  The Test Requests and Responses:
  GET /admin/ HTTP/1.1
  Host: 9.5.29.52
  User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 
Firefox/24.0
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en-US,en;q=0.5
  Referer: https://9.5.29.52/
  Cookie: csrftoken=JPjBiDp6Ex6YDw3sgfZPCTPUwWKZdZTm; 
sessionid=oad3bpy15qm8ntml9wx604yr79cc6zpb
  Connection: keep-alive
  HTTP/1.1 200 OK
  Date: Fri, 12 Sep 2014 07:52:50 GMT
  Server: Apache
  Vary: Accept-Language,Cookie,Accept-Encoding
  X-Frame-Options: SAMEORIGIN
  Content-Language: en
  Keep-Alive: timeout=5, max=100
  Connection: Keep-Alive
  Transfer-Encoding: chunked
  Content-Type: text/html
  Set-Cookie: csrftoken=silTP6ARbLvXohF6YYFLlWHce0KZkjPy; expires=Fri, 
11-Sep-2015 07:52:52 GMT; Max-Age=31449600; Path=/; secure
  Set-Cookie: sessionid=ygq094phgr6og471j6n0asq7x6q37j6n; httponly; Path=/; 
secure
  
  
  
  
  [HTML response body trimmed: page markup for "Usage Overview - Cloud
  Management Dashboard", including navigation menus and an inline
  addHorizonLoadEvent script]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1369876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564872] Re: Not supported error message of nova baremetal-node-delete is incorrect

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300437
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f7a46c8d19bf0c6c4c1f2f7576b4d05c772a
Submitter: Jenkins
Branch:master

commit f7a46c8d19bf0c6c4c1f2f7576b4d05c772a
Author: Yuiko Takada 
Date:   Fri Apr 1 21:05:53 2016 +0900

Fix error message of nova baremetal-node-delete

When execute "nova baremetal-node-delete" command,
below error message is shown:
 ERROR (BadRequest): Command Not supported.
 Please use Ironic command port-create to perform this action. (HTTP 400)

Ironic command corresponds to nova baremetal-node-delete is
not port-create, but node-delete.
This patch set fixes this bug.

Change-Id: I065e1efdce7a82d25d6d68908b0b1c43e6be7000
Closes-bug: #1564872


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564872

Title:
  Not supported error message of nova baremetal-node-delete is incorrect

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When executing the "nova baremetal-node-delete" command, the error message 
below is shown:
  ERROR (BadRequest): Command Not supported. Please use Ironic command 
port-create to perform this action. (HTTP 400)

  port-create is incorrect; node-delete is correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1564872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-04-04 Thread Steve McLellan
** Also affects: searchlight
   Importance: Undecided
   Status: New

** Changed in: searchlight
   Importance: Undecided => Low

** Changed in: searchlight
   Status: New => Confirmed

** Changed in: searchlight
 Assignee: (unassigned) => Swapnil Kulkarni (coolsvap)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Committed
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  In Progress
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Confirmed
Status in shaker:
  In Progress
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  In Progress
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
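The two spellings behave identically with the stdlib logger; a small sketch:

```python
import logging

logging.basicConfig()
LOG = logging.getLogger("demo")

# Deprecated alias: in Python 3 this emits a DeprecationWarning
# (silenced by default) before delegating to warning().
LOG.warn("old spelling, flagged as deprecated")

# Preferred call, same output:
LOG.warning("new spelling")
```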

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565824] [NEW] config option generation doesn't work with itertools.chain

2016-04-04 Thread Markus Zoeller (markus_z)
Public bug reported:

Config options code like this doesn't generate output in the
sample.config file:

ALL_OPTS = itertools.chain(
   compute_opts,
   resource_tracker_opts,
   allocation_ratio_opts
   )


def register_opts(conf):
conf.register_opts(ALL_OPTS)


def list_opts():
return {'DEFAULT': ALL_OPTS}

The reason is that the generator created by "itertools.chain" doesn't
get reset after getting used in "register_opts". A simple complete
example:

import itertools

a = [1, 2]
b = [3, 4]

ab = itertools.chain(a, b)

print("printing 'ab' for the first time")
for i in ab:
    print(i)

print("printing 'ab' for the second time")
for i in ab:
    print(i)

The combined list 'ab' won't get printed a second time. The same thing
happens when the oslo.config generator wants to print the sample.config
file. This means we should use either:

ab = list(itertools.chain(a, b))

or

ab = a + b
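The single-pass behaviour of itertools.chain, and both proposed fixes, can be checked directly:

```python
import itertools

a = [1, 2]
b = [3, 4]

chained = itertools.chain(a, b)
assert list(chained) == [1, 2, 3, 4]
assert list(chained) == []  # exhausted: a second pass yields nothing

# Either fix materialises a real list, which can be iterated any number
# of times (e.g. once by register_opts, once by the config generator):
as_list = list(itertools.chain(a, b))
concat = a + b
assert as_list == concat == [1, 2, 3, 4]
```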

** Affects: nova
 Importance: High
 Assignee: Markus Zoeller (markus_z) (mzoeller)
 Status: Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Markus Zoeller (markus_z) (mzoeller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565824

Title:
  config option generation doesn't work with itertools.chain

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Config options code like this doesn't generate output in the
  sample.config file:

  ALL_OPTS = itertools.chain(
 compute_opts,
 resource_tracker_opts,
 allocation_ratio_opts
 )

  
  def register_opts(conf):
  conf.register_opts(ALL_OPTS)

  
  def list_opts():
  return {'DEFAULT': ALL_OPTS}

  The reason is that the generator created by "itertools.chain" doesn't
  get reset after getting used in "register_opts". A simple complete
  example:

  import itertools

  a = [1, 2]
  b = [3, 4]

  ab = itertools.chain(a, b)

  print("printing 'ab' for the first time")
  for i in ab:
      print(i)

  print("printing 'ab' for the second time")
  for i in ab:
      print(i)

  The combined list 'ab' won't get printed a second time. The same thing
  happens when the oslo.config generator wants to print the
  sample.config file. This means we should use either:

  ab = list(itertools.chain(a, b))

  or

  ab = a + b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398314] Re: The API doesn't return nullable fields when they're null

2016-04-04 Thread Kairat Kushaev
Won't fix due to Flavio's comment.

** Changed in: python-glanceclient
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1398314

Title:
  The API doesn't return nullable fields when they're null

Status in Glance:
  Fix Released
Status in python-glanceclient:
  Won't Fix

Bug description:
  The schema filter method removes all fields that are null from the
  object schema. This is causing incompatibilities from the client side
  and it also makes glance's responses inconsistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1398314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2016-04-04 Thread Kairat Kushaev
Fix released in glanceclient 0.17.3 as per Matt's comment:
https://review.openstack.org/#/c/246996/

** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in keystonemiddleware:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in openstack-ansible kilo series:
  Fix Released
Status in openstack-ansible liberty series:
  Fix Released
Status in openstack-ansible trunk series:
  Fix Released
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  Fix Released

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_

[Yahoo-eng-team] [Bug 1523863] Re: Tutorial for customising horizon

2016-04-04 Thread Rob Cresswell
I don't think this is a bug. Step 1 is an acceptable way to describe
this; foundation step sounds odd to me.

** Changed in: horizon
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523863

Title:
  Tutorial for customising horizon

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  The 'Step 1' in the title of the Branding Horizon page might look better
  if we renamed the title to 'foundation step'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565801] [NEW] [RFE] Add process monitor for haproxy

2016-04-04 Thread Nir Magnezi
Public bug reported:

Bug 1565511 aims to solve cases where the lbaas agent goes offline.
To have a complete high-availability solution for the lbaas agent with haproxy 
running in a namespace, we would also want to handle the case where the haproxy 
process itself stops.

This[1] neutron spec offers the following approach:  
"We propose monitoring those processes, and taking a configurable action, 
making neutron more resilient to external failures."
 
[1] 
http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html
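The spec's monitoring approach boils down to periodically probing the haproxy pid and taking an action when it is gone. A minimal, hypothetical liveness check (illustrative only, not neutron's actual ProcessMonitor code) could look like:

```python
import errno
import os

def process_alive(pid):
    """Probe a pid with signal 0: no signal is delivered, but the kernel
    reports whether the process still exists. This is the usual way
    agent-side process monitors check their external children."""
    try:
        os.kill(pid, 0)
    except OSError as err:
        if err.errno == errno.ESRCH:   # no such process
            return False
        if err.errno == errno.EPERM:   # exists, but owned by another user
            return True
        raise
    return True
```

A monitor loop would call this with the pid read from haproxy's pid file and, per the spec, take a configurable action (respawn, log, disable) when it returns False.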

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565801

Title:
  [RFE] Add process monitor for haproxy

Status in neutron:
  New

Bug description:
  Bug 1565511 aims to solve cases where the lbaas agent goes offline.
  To have a complete high-availability solution for the lbaas agent with
  haproxy running in a namespace, we would also want to handle the case
  where the haproxy process itself stops.

  This[1] neutron spec offers the following approach:  
  "We propose monitoring those processes, and taking a configurable action, 
making neutron more resilient to external failures."
   
  [1] 
http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565680] Re: conflict when we try to set router gateway

2016-04-04 Thread Akihiro Motoki
The router-gateway-set API operation is a PUT operation. 
In the case you reported, the external network is not changed,
so the neutron server determines there is no need to change the external network.

On the server side, there is no need to change the current behavior.


On the other hand, there is a discussion on the neutron CLI side:
https://bugs.launchpad.net/python-neutronclient/+bug/1422371
There are several opinions and no consensus yet.

** Changed in: neutron
   Status: New => Won't Fix

** Tags added: api l3-bgp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565680

Title:
  conflict when we try to set router gateway

Status in neutron:
  Won't Fix

Bug description:
  When the router is already attached to an external network and we try
  to set the same router to the same external network again, it gives a
  success message but does not change the allocated IP. In this case
  neutron should return a message like "the router gateway is already set
  to this external network".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565785] [NEW] SR-IOV PF passthrough device claiming/allocation does not work for physical functions devices

2016-04-04 Thread Nikola Đipanov
Public bug reported:

Enable PCI passthrough on a compute host (whitelist devices explained in
more detail in the docs), and create a network, subnet and a port that
represents a SR-IOV physical function passthrough:

$ neutron net-create --provider:physical_network=phynet 
--provider:network_type=flat sriov-net
$ neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subne
$ neutron port-create sriov-net --binding:vnic_type=direct-physical --name pf

After that try to boot an instance using the created port (provided the
pci_passthrough_whitelist was setup correctly) this should work:

$ boot --image xxx --flavor 1 --nic port-id=$PORT_ABOVE testvm

My test env has 2 PFs with 7 VFs each. After spawning an instance, the
PF gets marked as allocated, but none of the VFs do, even though they are
removed from the host (note that device_pools are correctly updated).
So after the instance was successfully booted we get

MariaDB [nova]> select count(*) from pci_devices where status="available" and 
deleted=0;
+----------+
| count(*) |
+----------+
|       15 |
+----------+

# This should be 8 - we are leaking 7 VFs belonging to the attached PF
that never get updated.

MariaDB [nova]> select pci_stats from compute_nodes;
| pci_stats |
{"nova_object.version": "1.1", "nova_object.changes": ["objects"],
 "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": [
   {"nova_object.version": "1.1",
    "nova_object.changes": ["count", "numa_node", "vendor_id", "product_id", "tags"],
    "nova_object.name": "PciDevicePool",
    "nova_object.data": {"count": 1, "numa_node": 0, "vendor_id": "8086",
                         "product_id": "1521",
                         "tags": {"dev_type": "type-PF", "physical_network": "phynet"}},
    "nova_object.namespace": "nova"},
   {"nova_object.version": "1.1",
    "nova_object.changes": ["count", "numa_node", "vendor_id", "product_id", "tags"],
    "nova_object.name": "PciDevicePool",
    "nova_object.data": {"count": 7, "numa_node": 0, "vendor_id": "8086",
                         "product_id": "1520",
                         "tags": {"dev_type": "type-VF", "physical_network": "phynet"}},
    "nova_object.namespace": "nova"}]},
 "nova_object.namespace": "nova"}

This is correct - shows 8 available devices

Once a new resource_tracker run happens we hit
https://bugs.launchpad.net/nova/+bug/1565721 so we stop updating based
on what is found on the host.

The root cause of this is (I believe) that we update PCI objects in the
local scope, but never call save() on those particular instances. So we
grab and update the status here:

https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/objects/pci_device.py#L339-L349

but never call save inside that method.

The save is eventually called here referencing completely different
instances that never see the update:

https://github.com/openstack/nova/blob/d57a4e8be9147bd79be12d3f5adccc9289a375b6/nova/compute/resource_tracker.py#L646
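The failure mode described above can be reduced to a tiny sketch: the status is mutated on one object instance while save() is invoked on a different instance representing the same device, so the update is never persisted. Class and field names here are illustrative stand-ins, not nova's actual objects:

```python
class PciDevice:
    """Stand-in for nova's PciDevice object; only what the sketch needs."""
    def __init__(self, address, status):
        self.address = address
        self.status = status
        self.saved_status = status

    def save(self):
        # Stand-in for the DB write: persists whatever this instance holds.
        self.saved_status = self.status

# Two instances for the same PCI address, as happens when the claim path
# and the resource tracker each load their own copy.
claimed = PciDevice("0000:04:00.0", "available")
tracked = PciDevice("0000:04:00.0", "available")

claimed.status = "allocated"   # updated in local scope; save() never called here
tracked.save()                 # save() runs on the copy that missed the update

# The "DB" still says available: the allocation was silently dropped.
assert claimed.saved_status == "available"
assert tracked.saved_status == "available"
```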

** Affects: nova
 Importance: High
 Status: New


** Tags: pci

** Changed in: nova
   Importance: Undecided => High

** Tags added: pci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565785

Title:
  SR-IOV PF passthrough device claiming/allocation does not work for
  physical functions devices

Status in OpenStack Compute (nova):
  New

Bug description:
  Enable PCI passthrough on a compute host (whitelist devices explained
  in more detail in the docs), and create a network, subnet and a port
  that represents a SR-IOV physical function passthrough:

  $ neutron net-create --provider:physical_network=phynet 
--provider:network_type=flat sriov-net
  $ neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subne
  $ neutron port-create sriov-net --binding:vnic_type=direct-physical --name pf

  After that try to boot an instance using the created port (provided
  the pci_passthrough_whitelist was setup correctly) this should work:

  $ boot --image xxx --flavor 1 --nic port-id=$PORT_ABOVE testvm

  My test env has 2 PFs with 7 VFs each. After spawning an instance, the
  PF gets marked as allocated, but none of the VFs do, even though they
  are removed from the host (note that device_pools are correctly
  updated).

  So after the instance was successfully booted we get

  MariaDB [nova]> select count(*) from pci_devices where status="available" and 
deleted=0;
  +----------+
  | count(*) |
  +----------+
  |       15 |
  +----------+

  # This should be 8 - we are leaking 7 VFs belonging to the attached PF
  that never get updated.

[Yahoo-eng-team] [Bug 1565752] [NEW] Too many PIPEs are created when subprocess.Open fails

2016-04-04 Thread Nguyen Truong Son
Public bug reported:

1. How to reproduce:

Set max process (soft, hard) for particular user

Example: modify file /etc/security/limits.conf
hunters hard    nproc   70
hunters soft    nproc   70

And then, start neutron-openvswitch-agent as this user. 
Try to start many other applications to use up all of the free process slots; 
the error log below will then be thrown.

In root user, check number of current open files of neutron-openvswitch-agent 
service.
# ps -ef | grep neutron-openvswitch
501  29401 1  2 Mar30 ?03:13:53 /usr/bin/python 
/usr/bin/neutron-openvswitch-agent ...
# lsof -p 29401
neutron-o 29401 openstack   10r  FIFO0,8  0t0 3849643462 pipe
neutron-o 29401 openstack   11w  FIFO0,8  0t0 3849643462 pipe
neutron-o 29401 openstack   12r  FIFO0,8  0t0 3849643463 pipe
neutron-o 29401 openstack   13w  FIFO0,8  0t0 3849643463 pipe
neutron-o 29401 openstack   14r  FIFO0,8  0t0 3849643464 pipe
neutron-o 29401 openstack   15w  FIFO0,8  0t0 3849643464 pipe
...

Too many PIPEs are created.

2. Summary:

Over the weekend, when the server runs at high load rotating logs or doing
something else, neutron-openvswitch-agent gets this error:

2016-04-04 18:05:33.942 7817 ERROR neutron.agent.common.ovs_lib 
[req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Unable to execute 
['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception: [Errno 11] 
Resource temporarily unavailable
2016-04-04 18:05:33.944 7817 ERROR neutron.agent.common.ovs_lib 
[req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Traceback (most recent 
call last):
File "/home/hunters/neutron-7.0.0/neutron/agent/common/ovs_lib.py", line 226, 
in run_ofctl
process_input=process_input)
File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 120, in 
execute
addl_env=addl_env)
File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 89, in 
create_process
stderr=subprocess.PIPE)
File "/home/hunters/neutron-7.0.0/neutron/common/utils.py", line 199, in 
subprocess_popen
close_fds=close_fds, env=env)
File 
"/home/hunters/neutron-7.0.0/.venv/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 53, in __init__
subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1223, in _execute_child
self.pid = os.fork()
OSError: [Errno 11] Resource temporarily unavailable

And then, the PIPEs are not closed. About 700 PIPEs are created. After 2
weeks, it throws the error "Too many open files" and then
neutron-openvswitch-agent stops.
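For illustration, the failure can be contained by catching the OSError that os.fork() raises when the nproc limit is hit; what the report shows leaking is the pipe fds Popen had already created before the fork failed. This is a hypothetical wrapper sketching that handling, not neutron's actual fix:

```python
import subprocess

def spawn(cmd):
    """Run cmd with pipes, surfacing EAGAIN from fork() as a clear error.

    On old Python 2.7 interpreters, Popen could leak the pipe fds it had
    already opened when os.fork() failed; later Pythons close them in the
    constructor's error path, which is the behavior this bug asks for.
    """
    try:
        proc = subprocess.Popen(cmd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
    except OSError as err:
        # Errno 11 (EAGAIN) here means the user's process limit was hit.
        raise RuntimeError("could not spawn %s (check the user's nproc "
                           "limit): %s" % (cmd, err))
    out, _ = proc.communicate()   # communicate() also closes the pipes
    return out
```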

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- How to reproduce:
+ 1. How to reproduce:
  
  Set max process (soft, hard) for particular user
  
  Example: modify file /etc/security/limits.conf
  hunters hard    nproc   70
  hunters soft    nproc   70
  
  And then, start neutron-openvswitch-agent with this user.
  
  In root user, check number of current open files of neutron-openvswitch-agent 
service.
  # ps -ef | grep neutron-openvswitch
  501  29401 1  2 Mar30 ?03:13:53 /usr/bin/python 
/usr/bin/neutron-openvswitch-agent ...
  # lsof -p 29401
  neutron-o 29401 openstack   10r  FIFO0,8  0t0 3849643462 pipe
  neutron-o 29401 openstack   11w  FIFO0,8  0t0 3849643462 pipe
  neutron-o 29401 openstack   12r  FIFO0,8  0t0 3849643463 pipe
  neutron-o 29401 openstack   13w  FIFO0,8  0t0 3849643463 pipe
  neutron-o 29401 openstack   14r  FIFO0,8  0t0 3849643464 pipe
  neutron-o 29401 openstack   15w  FIFO0,8  0t0 3849643464 pipe
  ...
  
  Too many PIPE are created.
+ 
+ 2. Summary:
  
  At weekend, when server runs at high load for rotating logs or something
  else, neutron-openvswitch-agent gets error:
  
  2016-04-04 18:05:33.942 7817 ERROR neutron.agent.common.ovs_lib 
[req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Unable to execute 
['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']. Exception: [Errno 11] 
Resource temporarily unavailable
  2016-04-04 18:05:33.944 7817 ERROR neutron.agent.common.ovs_lib 
[req-42b082b1-2fbf-48a2-b2f3-3b7d774141f0 - - - - -] Traceback (most recent 
call last):
  File "/home/hunters/neutron-7.0.0/neutron/agent/common/ovs_lib.py", line 226, 
in run_ofctl
  process_input=process_input)
  File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 120, in 
execute
  addl_env=addl_env)
  File "/home/hunters/neutron-7.0.0/neutron/agent/linux/utils.py", line 89, in 
create_process
  stderr=subprocess.PIPE)
  File "/home/hunters/neutron-7.0.0/neutron/common/utils.py", line 199, in 
subprocess_popen
  close_fds=close_fds, env=env)
  File 
"/home/hunters/neutron-7.0.0/.venv/local/lib/python2.7/site-packages/eventlet/green/subprocess.py",
 line 53, in __init__
  subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
  F

[Yahoo-eng-team] [Bug 1565753] [NEW] notification_format config option is not part of the sample config

2016-04-04 Thread Balazs Gibizer
Public bug reported:

The generated sample config [1]  does not include the
notification_format option introduced in Mitaka.

[1] http://docs.openstack.org/developer/nova/sample_config.html

** Affects: nova
 Importance: Undecided
 Assignee: Balazs Gibizer (balazs-gibizer)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565753

Title:
  notification_format config option is not part of the sample config

Status in OpenStack Compute (nova):
  New

Bug description:
  The generated sample config [1]  does not include the
  notification_format option introduced in Mitaka.

  [1] http://docs.openstack.org/developer/nova/sample_config.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550250] Re: migrate in-use status volume, the volume's "delete_on_termination" flag lost

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288433
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=43d00cbfa65fca29773b7f9cf9ed2db25359e827
Submitter: Jenkins
Branch:master

commit 43d00cbfa65fca29773b7f9cf9ed2db25359e827
Author: zhengyao1 
Date:   Fri Mar 4 21:27:23 2016 +0800

After migrate in-use volume the BDM information lost

This flag delete_on_termination is important for use,
but after the volume migration, this flag is the
hardcoded "false". It should be consistent with the
information on the migration before.

Closes-Bug: #1550250

Change-Id: Ifa1fb061df697f03893171a8c6ba96154ec8a29d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550250

Title:
  migrate in-use status volume, the volume's "delete_on_termination"
  flag lost

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Reproducing method as following:
  1. create a blank volume  named  "test_show"
  2. create a vm instance named test and attach volume "test_show".
  [root@2C5_10_DELL05 ~(keystone_admin)]# nova boot --flavor 1 --image 
fd8330b3-a307-4140-8fe0-01341b583e26 --block-device-mapping 
vdb=4ee8dc8e-9ebc-4f82-bab1-862ee7866f2f:::1 --nic 
net-id=5c8f7e7a-5a75-48eb-9c68-096278585c18 test
  
  +--------------------------------------+--------------------------------------------------+
  | Property                             | Value                                            |
  +--------------------------------------+--------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                           |
  | OS-EXT-AZ:availability_zone          | nova                                             |
  | OS-EXT-SRV-ATTR:host                 | -                                                |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
  | OS-EXT-SRV-ATTR:instance_name        | instance-063f                                    |
  | OS-EXT-STS:power_state               | 0                                                |
  | OS-EXT-STS:task_state                | scheduling                                       |
  | OS-EXT-STS:vm_state                  | building                                         |
  | OS-SRV-USG:launched_at               | -                                                |
  | OS-SRV-USG:terminated_at             | -                                                |
  | accessIPv4                           |                                                  |
  | accessIPv6                           |                                                  |
  | adminPass                            | 8SGyuuuESf8n                                     |
  | autostart                            | TRUE                                             |
  | boot_index_type                      |                                                  |
  | config_drive                         |                                                  |
  | created                              | 2016-02-26T09:15:43Z                             |
  | flavor                               | m1.tiny (1)                                      |
  | hostId                               |                                                  |
  | id                                   | 9010a596-d0e7-42e3-a472-d164f02c0e34             |
  | image                                | cirros (fd8330b3-a307-4140-8fe0-01341b583e26)    |
  | key_name                             | -                                                |
  | metadata                             | {}                                               |
  | move                                 | TRUE                                             |
  | name                                 | test                                             |
  | novnc                                | TRUE                                             |
  | os-extended-volumes:volumes_attached | [{"id": "4ee8dc8e-9ebc-4f82-bab1-862ee7866f2f"}] |
  | priority                             | 50                                               |
  | progress                             | 0                                                |
  | qos                                  |                                                  |
  | security_groups                      | default                                          |
  | status                               | BUILD                                            |
  | tenant_id                            | 181a578bc97642f2b9e153bec622f130                 |
  | updated                              | 2016-02-26T09:15:43Z                             |
  | user_id

[Yahoo-eng-team] [Bug 1411566] Re: Type and Code fields not marked mandatory in Access-Security tab

2016-04-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/245792
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d7e71487c3c9f270afc477e6fc43b8f923254521
Submitter: Jenkins
Branch:master

commit d7e71487c3c9f270afc477e6fc43b8f923254521
Author: Itxaka 
Date:   Mon Nov 16 15:24:58 2015 +0100

Set mandatory fields

icmp_code and icmp_type fields set to required.
Cleans both fields on rules that do not require them
so validation for other rules still work.

Closes-Bug: 1411566

Change-Id: Id88eaaf5f636854d19ead4e22df8adc625d21a4b
Signed-off-by: Itxaka 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1411566

Title:
  Type and Code fields not marked mandatory in Access-Security tab

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The 'Type' and 'Code' are the required fields for managing rules in
  Access and Security tab, but they do not have the asterisk marked
  against them.

  To replicate this, please follow -
  Projects -> Access and Security -> Add/Edit Rules (on any security group that 
has been created) -> 'Custom ICMP Rule'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1411566/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565746] [NEW] Use growpart even if it doesn't support --update

2016-04-04 Thread Hannes
Public bug reported:

cc_growpart only uses growpart if it supports --update. I have to spawn
some older linuxes (like ubuntu precise) that don't support this, and I
would like to have a recent cloud-init for them. My workaround is to use
growpart even if it doesn't support --update, but create
/var/run/reboot-required like the gdisk-resizer does. This way growing the
disk always works, but some machines require a reboot afterwards. Does it
make sense to you to pull this into trunk?
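The proposed workaround can be sketched as: run growpart regardless, and when --update is unsupported, drop the same reboot marker the gdisk resizer writes. The helper names and the --update probe below are illustrative, not the attached patch:

```python
import subprocess

REBOOT_MARKER = "/var/run/reboot-required"

def growpart_supports_update():
    """Probe growpart's help text for the --update flag (hypothetical check)."""
    try:
        out = subprocess.check_output(["growpart", "--help"],
                                      stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError):
        return False
    return b"--update" in out

def grow_partition(device, partnum):
    """Grow the partition; on old growpart, flag the machine for a reboot."""
    subprocess.check_call(["growpart", device, str(partnum)])
    if not growpart_supports_update():
        # Without --update the kernel may not see the new size until the
        # next boot, so leave the marker the gdisk resizer also writes.
        with open(REBOOT_MARKER, "w") as marker:
            marker.write("partition resized; reboot required\n")
```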

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "patch-growpart.diff"
   
https://bugs.launchpad.net/bugs/1565746/+attachment/4622883/+files/patch-growpart.diff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565746

Title:
  Use growpart even if it doesn't support --update

Status in cloud-init:
  New

Bug description:
  cc_growpart only uses growpart if it supports --update. I have to
  spawn some older linuxes (like ubuntu precise) that don't support
  this, and I would like to have a recent cloud-init for them. My
  workaround is to use growpart even if it doesn't support --update, but
  create /var/run/reboot-required like the gdisk-resizer does. This way
  growing the disk always works, but some machines require a reboot
  afterwards. Does it make sense to you to pull this into trunk?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1565746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565732] [NEW] glance_store run_tests.sh fails due to missing dependencies

2016-04-04 Thread Danny Al-Gaaf
Public bug reported:

Calling run_tests.sh from the glance_store repository fails with:


Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  '`
running testr
Non-zero exit code (2) from test listing.
error: testr failed (3)
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./glance_store/tests --list 
--- import errors ---
Failed to import test module: glance_store.tests.unit.test_cinder_store
Traceback (most recent call last):
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "glance_store/tests/unit/test_cinder_store.py", line 27, in <module>
from os_brick.initiator import connector
ImportError: No module named os_brick.initiator

Failed to import test module: glance_store.tests.unit.test_s3_store
Traceback (most recent call last):
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "glance_store/tests/unit/test_s3_store.py", line 22, in <module>
import boto.s3.connection
ImportError: No module named boto.s3.connection

Failed to import test module: glance_store.tests.unit.test_swift_store
Traceback (most recent call last):
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "glance_store/tests/unit/test_swift_store.py", line 35, in <module>
import swiftclient
ImportError: No module named swiftclient

Failed to import test module: glance_store.tests.unit.test_vmware_store
Traceback (most recent call last):
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
module = self._get_module_from_name(name)
  File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
__import__(name)
  File "glance_store/tests/unit/test_vmware_store.py", line 23, in <module>
from oslo_vmware import api
ImportError: No module named oslo_vmware


Ran 0 tests in 1.632s

OK


The root cause is that the created virtualenv is missing some packages:
they are defined in setup.cfg as extras for the optional stores, but are
not listed in test-requirements.txt.

** Affects: glance
 Importance: Undecided
 Assignee: Danny Al-Gaaf (danny-al-gaaf)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Danny Al-Gaaf (danny-al-gaaf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1565732

Title:
  glance_store run_tests.sh fails due to missing dependencies

Status in Glance:
  New

Bug description:
  Calling run_tests.sh from the glance_store repository fails with:

  
  Running `tools/with_venv.sh python setup.py testr --testr-args='--subunit 
--concurrency 0  '`
  running testr
  Non-zero exit code (2) from test listing.
  error: testr failed (3)
  running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./glance_store/tests --list 
  --- import errors ---
  Failed to import test module: glance_store.tests.unit.test_cinder_store
  Traceback (most recent call last):
File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "glance_store/tests/unit/test_cinder_store.py", line 27, in <module>
  from os_brick.initiator import connector
  ImportError: No module named os_brick.initiator

  Failed to import test module: glance_store.tests.unit.test_s3_store
  Traceback (most recent call last):
File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/develop/OpenStack/glance_store/.venv/lib/python2.7/site-packages/unitte

[Yahoo-eng-team] [Bug 1565722] [NEW] glance_store should support RBD cluster name

2016-04-04 Thread Danny Al-Gaaf
Public bug reported:

Like the cinder rbd driver, glance_store should support passing the ceph
cluster name to the Rados/RBD driver when initiating a connection. This
will be useful if you have non-default cluster names or e.g. multiple
clusters set up.

See similar issue with cinder in bug #1563889
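For context, the python-rados binding already accepts a cluster name when constructing the client, so the store-side change is mostly plumbing a config option through. A hypothetical helper sketching the idea (the function, parameter names, and defaults below are assumptions, not the glance_store API):

```python
def connect_rbd(conf_file, cluster_name="ceph", user="glance"):
    """Open a Rados connection honoring a non-default cluster name.

    rados.Rados accepts clustername=..., which is what lets a single host
    talk to, e.g., both a "ceph" and a "backup" cluster.
    """
    import rados  # deferred so the sketch loads without ceph installed

    client = rados.Rados(conffile=conf_file,
                         rados_id=user,
                         clustername=cluster_name)
    client.connect()
    return client
```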

** Affects: glance
 Importance: Undecided
 Assignee: Danny Al-Gaaf (danny-al-gaaf)
 Status: In Progress

** Description changed:

- As the cinder rbd driver the glance_store should support to pass the
- ceph cluster name to the Rados/RBD driver while initiating a connection.
+ As the cinder rbd driver the glance_store should support to pass the ceph 
cluster name to the Rados/RBD driver while initiating a connection. This 
+ will be useful if you have either non-default cluster names or e.g. multiple
+ clusters setup.
  
  See similar issue with cinder in bug #1563889

** Changed in: glance
 Assignee: (unassigned) => Danny Al-Gaaf (danny-al-gaaf)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1565722

Title:
  glance_store should support RBD cluster name

Status in Glance:
  In Progress

Bug description:
  Like the cinder rbd driver, glance_store should support passing the ceph
  cluster name to the Rados/RBD driver when initiating a connection. This
  will be useful if you have non-default cluster names or e.g. multiple
  clusters set up.

  See similar issue with cinder in bug #1563889

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1565722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565724] [NEW] Launch Instance layout breaks on long names in transfer table

2016-04-04 Thread Timur Sufiev
Public bug reported:

See the screenshot.

** Affects: horizon
 Importance: Medium
 Assignee: Rob Cresswell (robcresswell)
 Status: Confirmed

** Attachment added: "Screen Shot 2016-04-04 at 12.32.29.png"
   
https://bugs.launchpad.net/bugs/1565724/+attachment/4622869/+files/Screen%20Shot%202016-04-04%20at%2012.32.29.png

** Changed in: horizon
   Importance: Undecided => Medium

** Changed in: horizon
Milestone: None => newton-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1565724

Title:
  Launch Instance layout breaks on long names in transfer table

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  See the screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1565724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565721] [NEW] SR-IOV PF passthrough breaks resource tracking

2016-04-04 Thread Nikola Đipanov
Public bug reported:

Enable PCI passthrough on a compute host (whitelist devices explained in
more detail in the docs), and create a network, subnet and a port  that
represents a SR-IOV physical function passthrough:

$ neutron net-create --provider:physical_network=phynet 
--provider:network_type=flat sriov-net
$ neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subne
$ neutron port-create sriov-net --binding:vnic_type=direct-physical --name pf

After that try to boot an instance using the created port (provided the
pci_passthrough_whitelist was setup correctly) this should work:

$ boot --image xxx --flavor 1 --nic port-id=$PORT_ABOVE testvm

however, the next resource tracker run fails with:

2016-04-04 11:25:34.663 ERROR nova.compute.manager 
[req-d8095318-9710-48a8-a054-4581641c3bf3 None None] Error updating resources 
for node kilmainham-ghost.
2016-04-04 11:25:34.663 TRACE nova.compute.manager Traceback (most recent call 
last):
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6442, in 
update_available_resource_for_node
2016-04-04 11:25:34.663 TRACE nova.compute.manager 
rt.update_available_resource(context)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 458, in 
update_available_resource
2016-04-04 11:25:34.663 TRACE nova.compute.manager 
self._update_available_resource(context, resources)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
2016-04-04 11:25:34.663 TRACE nova.compute.manager return f(*args, **kwargs)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 493, in 
_update_available_resource
2016-04-04 11:25:34.663 TRACE nova.compute.manager 
self.pci_tracker.update_devices_from_hypervisor_resources(dev_json)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/manager.py", line 118, in 
update_devices_from_hypervisor_resources
2016-04-04 11:25:34.663 TRACE nova.compute.manager self._set_hvdevs(devices)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/manager.py", line 141, in _set_hvdevs
2016-04-04 11:25:34.663 TRACE nova.compute.manager 
self.stats.remove_device(existed)
2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/stats.py", line 138, in remove_device
2016-04-04 11:25:34.663 TRACE nova.compute.manager 
pool['devices'].remove(dev)
2016-04-04 11:25:34.663 TRACE nova.compute.manager ValueError: list.remove(x): 
x not in list

This kills the resource tracker's periodic run, meaning no further
resources get updated by the periodic task.
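The traceback bottoms out in a plain list.remove() inside the PCI stats pools. A minimal sketch (assumption: a simplified stand-in, not nova's actual pool-selection logic) of the failure mode, where the pool looked up for the physical function no longer contains the device object:

```python
# The PCI stats pools are keyed by device attributes; if the PF ends up
# being looked up in a pool that does not hold it (e.g. after it was
# claimed for an instance), list.remove() raises exactly as in the trace.
pool = {'count': 0, 'devices': []}          # pool selected for the PF's key
dev = {'address': '0000:06:00.0', 'dev_type': 'type-PF'}

try:
    pool['devices'].remove(dev)             # PF is tracked elsewhere
except ValueError as exc:
    error = str(exc)

print(error)                                # list.remove(x): x not in list
```

Because the exception escapes update_available_resource, the whole periodic task aborts, which is why resources stop being refreshed.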

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565721

Title:
  SR-IOV PF passthrough breaks resource tracking

Status in OpenStack Compute (nova):
  New

Bug description:
  Enable PCI passthrough on a compute host (whitelist devices explained
  in more detail in the docs), and create a network, subnet and a port
  that represents an SR-IOV physical function passthrough:

  $ neutron net-create --provider:physical_network=phynet 
--provider:network_type=flat sriov-net
  $ neutron subnet-create sriov-net 192.168.2.0/24 --name sriov-subnet
  $ neutron port-create sriov-net --binding:vnic_type=direct-physical --name pf

  After that, try to boot an instance using the created port (provided
  the pci_passthrough_whitelist was set up correctly); this should work:

  $ nova boot --image xxx --flavor 1 --nic port-id=$PORT_ABOVE testvm

  However, the next resource tracker run fails with:

  2016-04-04 11:25:34.663 ERROR nova.compute.manager 
[req-d8095318-9710-48a8-a054-4581641c3bf3 None None] Error updating resources 
for node kilmainham-ghost.
  2016-04-04 11:25:34.663 TRACE nova.compute.manager Traceback (most recent 
call last):
  2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6442, in 
update_available_resource_for_node
  2016-04-04 11:25:34.663 TRACE nova.compute.manager 
rt.update_available_resource(context)
  2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 458, in 
update_available_resource
  2016-04-04 11:25:34.663 TRACE nova.compute.manager 
self._update_available_resource(context, resources)
  2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 
inner
  2016-04-04 11:25:34.663 TRACE nova.compute.manager return f(*args, 
**kwargs)
  2016-04-04 11:25:34.663 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_track

[Yahoo-eng-team] [Bug 1565705] [NEW] iptables duplicate rule warning on ports with multiple security groups

2016-04-04 Thread Kevin Benton
Public bug reported:

If ports are members of multiple security groups, there may be duplicate
rules when it comes time to convert them to iptables rules (e.g. both
groups have a rule to allow TCP port 80). This results in warnings from
the iptables manager detecting duplicate rules that hint that there may
be a bug.

For example:

WARNING neutron.agent.linux.iptables_manager [req-
944a9996-062b-4588-9536-d5df779da344 - -] Duplicate iptables rule
detected. This may indicate a bug in the the iptables rule generation
code. Line: -A neutron-openvswi-oe4186b39-0 -j RETURN


This warning resulted from a port that was a member of two security groups that 
both allowed all EGRESS traffic.
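The duplicate comes from mechanically expanding each security group's rules into one iptables rule list. A minimal sketch (hypothetical helper names, not neutron's actual rule-generation code) of how two groups that both allow all egress collide, and how order-preserving de-duplication removes the cause of the warning:

```python
def expand_groups(groups):
    """Flatten the per-security-group rule lists into one iptables list."""
    rules = []
    for group in groups:
        rules.extend(group)
    return rules

def dedup(rules):
    """Drop exact duplicates while preserving first-seen order."""
    seen = set()
    return [r for r in rules if not (r in seen or seen.add(r))]

sg_a = ['-A neutron-openvswi-oe4186b39-0 -j RETURN']   # allow-all egress
sg_b = ['-A neutron-openvswi-oe4186b39-0 -j RETURN']   # same rule again
flat = expand_groups([sg_a, sg_b])
print(len(flat), '->', len(dedup(flat)))               # 2 -> 1
```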

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565705

Title:
  iptables duplicate rule warning on ports with multiple security groups

Status in neutron:
  New

Bug description:
  If ports are members of multiple security groups, there may be
  duplicate rules when it comes time to convert them to iptables rules
  (e.g. both groups have a rule to allow TCP port 80). This results in
  warnings from the iptables manager detecting duplicate rules that hint
  that there may be a bug.

  For example:

  WARNING neutron.agent.linux.iptables_manager [req-
  944a9996-062b-4588-9536-d5df779da344 - -] Duplicate iptables rule
  detected. This may indicate a bug in the the iptables rule generation
  code. Line: -A neutron-openvswi-oe4186b39-0 -j RETURN

  
  This warning resulted from a port that was a member of two security groups 
that both allowed all EGRESS traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565698] [NEW] novncproxy missed the vnc section in cli options

2016-04-04 Thread Allen Gao
Public bug reported:

$ nova-novncproxy --help
usage: nova-novncproxy [-h] [--cert CERT] [--config-dir DIR]
   [--config-file PATH] [--daemon] [--debug] [--key KEY]
   [--log-config-append PATH]
   [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR]
   [--log-file PATH] [--nodaemon] [--nodebug] [--norecord]
   [--nosource_is_ipv6] [--nossl_only] [--nouse-syslog]
   [--noverbose] [--nowatch-log-file] [--record]
   [--source_is_ipv6] [--ssl_only]
   [--syslog-log-facility SYSLOG_LOG_FACILITY]
   [--use-syslog] [--verbose] [--version]
   [--watch-log-file] [--web WEB]
   [--remote_debug-host REMOTE_DEBUG_HOST]
   [--remote_debug-port REMOTE_DEBUG_PORT]

optional arguments:
  -h, --help            show this help message and exit
  --cert CERT           SSL certificate file
  --config-dir DIR      Path to a config directory to pull *.conf files from.
                        This file set is sorted, so as to provide a
                        predictable parse order if individual options are
                        over-ridden. The set is parsed after the file(s)
                        specified via previous --config-file, arguments hence
                        over-ridden options in the directory take precedence.
  --config-file PATH    Path to a config file to use. Multiple config files
                        can be specified, with values in later files taking
                        precedence. Defaults to None.
  --daemon              Become a daemon (background process)
  --debug, -d           If set to true, the logging level will be set to DEBUG
                        instead of the default INFO level.
  --key KEY             SSL key file (if separate from cert)
  --log-config-append PATH, --log_config PATH
                        The name of a logging configuration file. This file is
                        appended to any existing logging configuration files.
                        For details about logging configuration files, see the
                        Python logging module documentation. Note that when
                        logging configuration files are used then all logging
                        configuration is set in the configuration file and
                        other logging configuration options are ignored (for
                        example, logging_context_format_string).
  --log-date-format DATE_FORMAT
                        Defines the format string for %(asctime)s in log
                        records. Default: None. This option is ignored if
                        log_config_append is set.
  --log-dir LOG_DIR, --logdir LOG_DIR
                        (Optional) The base directory used for relative
                        log_file paths. This option is ignored if
                        log_config_append is set.
  --log-file PATH, --logfile PATH
                        (Optional) Name of log file to send logging output to.
                        If no default is set, logging will go to stderr as
                        defined by use_stderr. This option is ignored if
                        log_config_append is set.
  --nodaemon            The inverse of --daemon
  --nodebug             The inverse of --debug
  --norecord            The inverse of --record
  --nosource_is_ipv6    The inverse of --source_is_ipv6
  --nossl_only          The inverse of --ssl_only
  --nouse-syslog        The inverse of --use-syslog
  --noverbose           The inverse of --verbose
  --nowatch-log-file    The inverse of --watch-log-file
  --record              Record sessions to FILE.[session_number]
  --source_is_ipv6      Source is ipv6
  --ssl_only            Disallow non-encrypted connections
  --syslog-log-facility SYSLOG_LOG_FACILITY
                        Syslog facility to receive log lines. This option is
                        ignored if log_config_append is set.
  --use-syslog          Use syslog for logging. Existing syslog format is
                        DEPRECATED and will be changed later to honor RFC5424.
                        This option is ignored if log_config_append is set.
  --verbose, -v         If set to false, the logging level will be set to
                        WARNING instead of the default INFO level.
  --version             show program's version number and exit
  --watch-log-file      Uses logging handler designed to watch file system.
                        When log file is moved or removed this handler will
                        open a new log file with specified path
                        instantaneously. It makes sense only if log_file
                        option is specified and Linux platform is used. This
                        option is ignored if log_config_append is set.
  --web WEB
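The help output above contains no options from the vnc section (e.g. novncproxy_host, novncproxy_port). A stdlib argparse sketch (assumption: nova actually uses oslo.config, which behaves analogously) of the underlying mechanism: options only appear in --help when their group is registered for that entrypoint, so an unregistered "vnc" group is silently absent:

```python
import argparse

parser = argparse.ArgumentParser(prog='nova-novncproxy')
# Register a "vnc" option group explicitly; without this registration
# step, none of these options would show up in format_help() -- which
# is the symptom reported above.
vnc = parser.add_argument_group('vnc')
vnc.add_argument('--novncproxy-host', default='0.0.0.0')
vnc.add_argument('--novncproxy-port', type=int, default=6080)

help_text = parser.format_help()
print('--novncproxy-port' in help_text)   # True
```

The likely fix in nova is therefore to include the vnc option group in the list of CLI option groups the novncproxy console script registers.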

[Yahoo-eng-team] [Bug 1565682] [NEW] OVS-DPDK failed to boot more than 1 instance on OVS-DPDK setup

2016-04-04 Thread Eran Kuris
Public bug reported:

vm failed

Description of problem:
On an OVS-DPDK setup (1 controller and 1 compute, data tenant/network type
vlan) I failed to boot instances. Creating 1 VM succeeds; from the second VM
on, VMs fail. When I created multiple VMs (Instance Count 4, for example),
a few of the instances booted as active and a few failed.
When using DPDK we should boot the VM with a flavor that uses hugepages:
$ nova flavor-create m1.medium_dpdk 6 2048 20 2
$ nova flavor-key m1.medium_dpdk set "hw:mem_page_size=large"
Attached all logs.
nic type: Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter 
driver: ixgbe
Version-Release number of selected component (if applicable):
[root@puma48 ~]# rpm -qa |grep neutro
openstack-neutron-openvswitch-7.0.1-6.el7ost.noarch
python-neutron-7.0.1-6.el7ost.noarch
openstack-neutron-common-7.0.1-6.el7ost.noarch
python-neutronclient-3.1.0-1.el7ost.noarch
openstack-neutron-7.0.1-6.el7ost.noarch
[root@puma48 ~]# rpm -qa |grep dpd
openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
dpdk-2.1.0-5.el7.x86_64

How reproducible:
always 

Steps to Reproduce:
1.https://docs.google.com/document/d/1K_ku6_08ooq46dFLiE7fAJ0ByFdPCb0W_q6kKqF3Y0o/edit
2.
3.

Actual results:


Expected results:


Additional info:

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova dpdk"
   
https://bugs.launchpad.net/bugs/1565682/+attachment/4622775/+files/nova%20dpdk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565682

Title:
  OVS-DPDK failed to boot more than 1 instance on OVS-DPDK setup

Status in OpenStack Compute (nova):
  New

Bug description:
  vm failed

  Description of problem:
  On an OVS-DPDK setup (1 controller and 1 compute, data tenant/network
  type vlan) I failed to boot instances. Creating 1 VM succeeds; from the
  second VM on, VMs fail. When I created multiple VMs (Instance Count 4,
  for example), a few of the instances booted as active and a few failed.
  When using DPDK we should boot the VM with a flavor that uses hugepages:
  $ nova flavor-create m1.medium_dpdk 6 2048 20 2
  $ nova flavor-key m1.medium_dpdk set "hw:mem_page_size=large"
  Attached all logs.
  nic type: Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter 
  driver: ixgbe
  Version-Release number of selected component (if applicable):
  [root@puma48 ~]# rpm -qa |grep neutro
  openstack-neutron-openvswitch-7.0.1-6.el7ost.noarch
  python-neutron-7.0.1-6.el7ost.noarch
  openstack-neutron-common-7.0.1-6.el7ost.noarch
  python-neutronclient-3.1.0-1.el7ost.noarch
  openstack-neutron-7.0.1-6.el7ost.noarch
  [root@puma48 ~]# rpm -qa |grep dpd
  openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
  dpdk-2.1.0-5.el7.x86_64

  How reproducible:
  always 

  Steps to Reproduce:
  
1.https://docs.google.com/document/d/1K_ku6_08ooq46dFLiE7fAJ0ByFdPCb0W_q6kKqF3Y0o/edit
  2.
  3.

  Actual results:

  
  Expected results:

  
  Additional info:

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565682/+subscriptions



[Yahoo-eng-team] [Bug 1565680] [NEW] conflict when we try to set router gateway

2016-04-04 Thread abdul NIzamudin
Public bug reported:

When a router is already attached to an external network and we try to set
the same external network as that router's gateway again, neutron gives a
success message but does not change the allocated IP. In this case neutron
should return a message such as: the router gateway is already set to this
external network.

** Affects: neutron
 Importance: Undecided
 Assignee: abdul NIzamudin (abdul-nizamuddin)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565680

Title:
  conflict when we try to set router gateway

Status in neutron:
  New

Bug description:
  When a router is already attached to an external network and we try to
  set the same external network as that router's gateway again, neutron
  gives a success message but does not change the allocated IP. In this
  case neutron should return a message such as: the router gateway is
  already set to this external network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565680/+subscriptions



[Yahoo-eng-team] [Bug 1565534] Re: Metering agent reverses iptable's NAT rules in POSTROUTING chain if starts after l3 agent

2016-04-04 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1544508 ***
https://bugs.launchpad.net/bugs/1544508

** This bug has been marked a duplicate of bug 1544508
   neutron-meter-agent - makes traffic between internal networks NATed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565534

Title:
  Metering agent reverses iptable's NAT rules in POSTROUTING chain if
  starts after l3 agent

Status in neutron:
  New

Bug description:
  The metering agent reverses iptables NAT rules in the POSTROUTING chain
  in the qrouter namespace if it starts after the l3 agent. The
  neutron-postrouting-bottom chain ends up prior to the
  neutron-l3-agent-POSTROUTING chain, resulting in east-west network
  traffic being address translated.

  Pre-condition:
  - enable metering agent and config it using driver 
"neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver"
  - enable l3 agent.
  - two tenant networks and one external network.

  How to reproduce:
  - create one router, connect two subnets in the above networks to the router, 
set the external network as the router's gateway.
  - create two instances: ins-A and ins-B. ins-A join one tenant network and 
ins-B join the other one.
  - associate an floating ip to ins-A.
  - restart metering agent.
  - ping ins-B's fixed ip from ins-A. 
  - capture icmp packets in ins-B and you will receive packets whose source ip 
is the floating ip of ins-A.
  - see iptable's nat rule and you will find the neutron-postrouting-bottom 
chain is prior to the neutron-l3-agent-POSTROUTING chain:
  sudo ip netns exec qrouter-ROUTER-ID iptables-save -t nat
  ...
  -A POSTROUTING -j neutron-meter-POSTROUTING
  -A POSTROUTING -j neutron-postrouting-bottom
  -A POSTROUTING -j neutron-l3-agent-POSTROUTING
  ...

  Expected behavior:
  - east-west network traffic should not be address translated. ins-B should 
receive packets whose source ip is the fixed ip of ins-A. 
  - the neutron-l3-agent-POSTROUTING chain should be prior to the 
neutron-postrouting-bottom chain.

  Affected versions:

  - I saw the issue into OpenStack Kilo, under Ubuntu 14.04. But
  according to the upstream code, the issue is still present into the
  master branch, into; neutron/agent/linux/iptables_manager.py, into
  function IptablesManager._modify_rules:

  our_chains_and_rules = our_chains + our_top_rules + our_bottom_rules

  # locate the position immediately after the existing chains to insert
  # our chains and rules
  rules_index = self._find_rules_index(new_filter)
  new_filter[rules_index:rules_index] = our_chains_and_rules
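  The ordering defect can be sketched as follows (assumption: a heavily
  simplified stand-in for the quoted _modify_rules logic, not neutron's
  actual code). The l3 agent's POSTROUTING jump must come before
  neutron-postrouting-bottom so its SNAT-exclusion rules run first:

```python
def insert_before_bottom(chain, rule):
    """Place an agent's POSTROUTING jump ahead of
    neutron-postrouting-bottom so its exclusion rules match first."""
    idx = next(i for i, r in enumerate(chain)
               if 'postrouting-bottom' in r)
    return chain[:idx] + [rule] + chain[idx:]

# State observed after restarting the metering agent (broken ordering
# if the l3 jump were simply appended at the end):
chain = [
    '-A POSTROUTING -j neutron-meter-POSTROUTING',
    '-A POSTROUTING -j neutron-postrouting-bottom',
]
fixed = insert_before_bottom(
    chain, '-A POSTROUTING -j neutron-l3-agent-POSTROUTING')
for rule in fixed:
    print(rule)
```

  With the expected ordering, east-west traffic hits the l3 agent's
  exclusion rules before the bottom chain's SNAT rule can translate it.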

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1565534/+subscriptions



[Yahoo-eng-team] [Bug 1552631] Re: [RFE] Bulk Floating IP allocation

2016-04-04 Thread Reedip
In keeping with the current transition of NeutronClient to OSC, I think
any change in neutronClient for this support would need a corresponding
change in OSC.


** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552631

Title:
  [RFE] Bulk Floating IP allocation

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in neutron:
  Triaged
Status in python-neutronclient:
  Confirmed
Status in python-openstackclient:
  New

Bug description:
  I needed to allocate 2 floating IPs to my project.
  Via GUI: 
  access and security -> Floating IPs -> Allocate IP to project. 

  I noticed that in order to allocate 2 FIPs, I need to execute
  "Allocate IP to project" twice.

  The customers have no option to allocate a range of FIPs with one
  action. They need to do it one by one.

  BR
  Alex

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1552631/+subscriptions



[Yahoo-eng-team] [Bug 1565638] [NEW] Cloud-init on Xenial won't deploy compressed content in "write_files" - says DecompressionError: Not a gzipped file

2016-04-04 Thread Oded Arbel
Public bug reported:

I'm trying to deploy an Amazon EC2 instance using cloud-init.

When I compress the files in the write_files section, the cloud-init
process doesn't write the files, and I get the following error in
cloud-init.log:

Apr  3 16:27:16 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: 
init-network/config-write-files: FAIL: running config-write-files with 
frequency once-per-instance
Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[WARNING]: Running module write-files 
()
 failed
Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[DEBUG]: Running module write-files 
() failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 393, in 
decomp_gzip
return decode_binary(gh.read())
  File "/usr/lib/python3.5/gzip.py", line 274, in read
return self._buffer.read(size)
  File "/usr/lib/python3.5/gzip.py", line 461, in read
if not self._read_gzip_header():
  File "/usr/lib/python3.5/gzip.py", line 409, in _read_gzip_header
raise OSError('Not a gzipped file (%r)' % magic)
OSError: Not a gzipped file (b"b'")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 735, in 
_run_modules
freq=freq)
  File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
results = functor(*args)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 39, in handle
write_files(name, files, log)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 74, in write_files
contents = extract_contents(f_info.get('content', ''), extractions)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write_files.py", 
line 98, in extract_contents
result = util.decomp_gzip(result, quiet=False)
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 400, in 
decomp_gzip
raise DecompressionError(six.text_type(e))
cloudinit.util.DecompressionError: Not a gzipped file (b"b'")

I've verified that the cloud-init data I'm submitting does encode the
files correctly by running this ruby code on the cloud-init YAML file:

> File.write("test.gz",
YAML.load(File.read("test.init"))["write_files"].first["content"])

Then:

$ file test.gz
test.gz: gzip compressed data, last modified: Mon Apr  4 06:55:53 2016, max 
compression, from Unix
$ python
Python 2.7.11+ (default, Mar 30 2016, 21:00:42) 
[GCC 5.3.1 20160330] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gzip
>>> with gzip.open('test.gz', 'rb') as f:
... file_content = f.read()
... print file_content
... 


I've never implemented this with trusty, so I'm not sure how cloud-init
on trusty handles that.
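The magic quoted in the traceback, b"b'", is a strong hint about the failure mode: those are the literal characters of a Python bytes repr, suggesting the gzip payload was passed through str() somewhere before decompression (assumption: this reproduction is illustrative, not the actual cloud-init code path):

```python
import gzip
import io

payload = gzip.compress(b'hello world')
# str() of bytes yields the text "b'\x1f\x8b...'", so after re-encoding,
# the first two bytes the gzip reader sees are b"b'" -- exactly the
# magic shown in the traceback above.
mangled = str(payload).encode()

try:
    gzip.GzipFile(fileobj=io.BytesIO(mangled)).read()
except OSError as exc:
    error = str(exc)

print(error)   # Not a gzipped file (b"b'")
```

This points at a bytes/str round-trip in the Python 3 path of cc_write_files / util.decomp_gzip rather than at the user data itself, which is consistent with the gzip file verifying fine outside cloud-init.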

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565638

Title:
  Cloud-init on Xenial won't deploy compressed content in "write_files"
  - says DecompressionError: Not a gzipped file

Status in cloud-init:
  New

Bug description:
  I'm trying to deploy an Amazon EC2 instance using cloud-init.

  When I'm compressing the files in the write_files section, the cloud-
  init process doesn't write the files, and I get the following error in
  the cloud-init.log :

  Apr  3 16:27:16 ubuntu [CLOUDINIT] handlers.py[DEBUG]: finish: 
init-network/config-write-files: FAIL: running config-write-files with 
frequency once-per-instance
  Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[WARNING]: Running module 
write-files ()
   failed
  Apr  3 16:27:16 ubuntu [CLOUDINIT] util.py[DEBUG]: Running module write-files 
() failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 393, in 
decomp_gzip
  return decode_binary(gh.read())
File "/usr/lib/python3.5/gzip.py", line 274, in read
  return self._buffer.read(size)
File "/usr/lib/python3.5/gzip.py", line 461, in read
  if not self._read_gzip_header():
File "/usr/lib/python3.5/gzip.py", line 409, in _read_gzip_header
  raise OSError('Not a gzipped file (%r)' % magic)
  OSError: Not a gzipped file (b"b'")

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 735, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 70, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 199, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_write