[Yahoo-eng-team] [Bug 1401779] [NEW] systemctl start neutron-ovs-cleanup' returned 6: Failed to issue method call: Unit neutron-ovs-cleanup.service failed to load: No such file or directory

2014-12-11 Thread meizhifang
Public bug reported:

After I uninstalled neutron and reinstalled OpenStack with packstack, a
failure occurred: "systemctl start neutron-ovs-cleanup' returned 6:
Failed to issue method call: Unit neutron-ovs-cleanup.service failed to
load: No such file or directory". The reason is that uninstalling the
openstack-neutron-openvswitch rpm does not disable
neutron-ovs-cleanup.service. I suggest dealing with the problem in
openstack-neutron.spec.
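One possible shape for the suggested spec-file fix, sketched here as an assumption (the real openstack-neutron.spec layout and scriptlet names may differ), is a %preun scriptlet that disables and stops the unit on full package removal:

```shell
# Sketch of a %preun scriptlet body for openstack-neutron-openvswitch
# (an assumption about the fix; the real openstack-neutron.spec may differ).
# In RPM scriptlets, $1 is the number of package instances remaining after
# the transaction: 0 means a full erase, 1 means an upgrade.
preun_neutron_ovs() {
    if [ "$1" -eq 0 ]; then
        # Full uninstall: forget the unit so a later reinstall/packstack
        # run does not trip over a stale neutron-ovs-cleanup.service.
        systemctl --no-reload disable neutron-ovs-cleanup.service >/dev/null 2>&1 || :
        systemctl stop neutron-ovs-cleanup.service >/dev/null 2>&1 || :
        echo "disabled"
    fi
}
```

On an upgrade ($1 = 1) the function intentionally does nothing, matching the usual RPM scriptlet convention.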

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401779

Title:
  systemctl start neutron-ovs-cleanup' returned 6: Failed to issue
  method call: Unit neutron-ovs-cleanup.service failed to load: No such
  file or directory

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After I uninstalled neutron and reinstalled OpenStack with packstack, a
  failure occurred: "systemctl start neutron-ovs-cleanup' returned 6:
  Failed to issue method call: Unit neutron-ovs-cleanup.service failed
  to load: No such file or directory". The reason is that uninstalling
  the openstack-neutron-openvswitch rpm does not disable
  neutron-ovs-cleanup.service. I suggest dealing with the problem in
  openstack-neutron.spec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401778] [NEW] 500 error returned while uploading image using multi filesystem store and 'filesystem_store_metadata_file' option enabled

2014-12-11 Thread Abhijeet Malawade
Public bug reported:

When we try to upload an image with the multi filesystem store enabled and
'filesystem_store_metadata_file'
(containing a list of all directories configured for the multi filesystem store)
is provided,
glance throws an 'HTTPInternalServerError (HTTP 500)' error.

- Glance Configuration:

1. /etc/glance/glance-api.conf

[DEFAULT]
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/metadata.json

[glance_store]
filesystem_store_datadirs = /var/lib/glance/images1/:1
filesystem_store_datadirs = /var/lib/glance/images2/:2


2. /etc/glance/metadata.json

[
{
"id": "f0781415-cf81-47cd-8860-b83f9c2a415c",
"mountpoint": "/var/lib/glance/images1/" 
},
{
"id": "5d2dd1db-8684-46bb-880f-b94a1942cfd2",
"mountpoint": "/var/lib/glance/images2/" 
}
]
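The "This must be a dictionary type" error further down in the log suggests the store driver hands back the full parsed list from the metadata file, while the backend validation accepts only a single dict. A minimal sketch of that failing check (an assumption about the code path; names are illustrative, not glance_store's actual functions):

```python
# Sketch (assumption, not glance_store's actual code) of the validation
# that produces "This must be a dictionary type": with multiple datadirs,
# the driver returns the whole list parsed from
# filesystem_store_metadata_file, but the backend insists on one dict.
import json

metadata_file_contents = """
[
  {"id": "f0781415-cf81-47cd-8860-b83f9c2a415c",
   "mountpoint": "/var/lib/glance/images1/"},
  {"id": "5d2dd1db-8684-46bb-880f-b94a1942cfd2",
   "mountpoint": "/var/lib/glance/images2/"}
]
"""

def check_location_metadata(metadata):
    """Mimic the backend check: only a dict (or None) is accepted."""
    if metadata is not None and not isinstance(metadata, dict):
        raise TypeError("The storage driver returned invalid metadata %r. "
                        "This must be a dictionary type" % (metadata,))

metadata = json.loads(metadata_file_contents)  # a list, not a dict
try:
    check_location_metadata(metadata)
    failed = False
except TypeError:
    failed = True  # reproduces the HTTP 500 path seen in glance-api.log
```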

3. 'df -ha' command result:

openstack@openstack:/opt/stack$ df -ha
Filesystem                   Size  Used Avail Use% Mounted on
proc                            0     0     0    - /proc
sysfs                           0     0     0    - /sys
tmpfs                        799M  600K  798M   1% /run
/dev/sda1                    236M   41M  183M  19% /boot
nfsd                            0     0     0    - /proc/fs/nfsd
10.69.4.173:/export/images2  443G   23G  398G   6% /var/lib/glance/images2
10.69.4.172:/export/images1  447G  8.7G  415G   3% /var/lib/glance/images1
openstack@openstack:/opt/stack$


- Steps to reproduce:

1. Create image:
glance --os-image-api-version 2 image-create --name Test123 --disk-format raw 
--container-format ami

2. Upload image data:
 
openstack@openstack-150:~$ glance --os-image-api-version 2 image-upload 
47d39050-cc7e-498a-a800-4faf80a72c93 < /home/openstack/workbook/test.py

HTTPInternalServerError (HTTP 500)
openstack@openstack-150:~$


- glance-api.log :

2014-12-11 22:16:59.586 3495 ERROR glance_store.backend 
[95987e95-dcae-4516-b57e-87fbd9135ff3 0080647f6a2145f8a40bace67654a058 
48f94106d3b24ca2a0a9e2951c505bf9 - - -] The storage driver 
 returned invalid metadata [{u'mountpoint': 
u'/opt/stack/data/glance/images/', u'id': 
u'f0781415-cf81-47cd-8860-b83f9c2a415c'}, {u'mountpoint': 
u'/opt/stack/data/glance/images1/', u'id': u'5d2dd1db-
8684-46bb-880f-b94a1942cfd2'}]. This must be a dictionary type
2014-12-11 22:16:59.591 3495 ERROR glance.api.v2.image_data 
[95987e95-dcae-4516-b57e-87fbd9135ff3 0080647f6a2145f8a40bace67654a058 
48f94106d3b24ca2a0a9e2951c505bf9 - - -] Failed to upload image data due to 
internal error
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data Traceback (most 
recent call last):
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/api/v2/image_data.py", line 74, in upload
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
image.set_data(data, size)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/domain/proxy.py", line 160, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
self.base.set_data(data, size)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/notifier.py", line 252, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
self.notifier.error('image.upload', msg)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in 
__exit
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
six.reraise(self.type_, self.value, self.tb)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/notifier.py", line 201, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
self.image.set_data(data, size)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/api/policy.py", line 176, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data return 
self.image.set_data(*args, **kwargs)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/quota/__init__.py", line 296, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
self.image.set_data(data, size=size)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/opt/stack/glance/glance/location.py", line 364, in set_data
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data 
context=self.context)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 357, in 
add_to_backend
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data return 
store_add_to_backend(image_id, data, size, store, context)
2014-12-11 22:16:59.591 3495 TRACE glance.api.v2.image_data File 
"/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 338, in 
store_add_to_backend
2014-12-11 22:16:59.591 3495 TRACE glanc

[Yahoo-eng-team] [Bug 1401773] [NEW] Non admin user cannot see port that he create in public network

2014-12-11 Thread Evgeny
Public bug reported:

When a non-admin user creates a router and sets the router as an
external gateway (via the API or the Horizon UI), a port is created in
the public network. It is really strange that the user cannot see this
port even though they created it.
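A plausible illustration of the surprise (assumed behaviour, not verified against Neutron's actual policy engine): if port visibility is keyed on the network's owner rather than the port's, the user's own gateway port on the admin-owned public network is filtered out of their listing:

```python
# Hypothetical visibility filter keyed on the *network* owner, to
# illustrate the reported behaviour; not Neutron's actual policy code.
def visible_ports(ports, tenant_id, is_admin=False):
    if is_admin:
        return ports
    # Filtering by network ownership hides ports the user created on
    # someone else's (e.g. the admin's public) network.
    return [p for p in ports if p["network_tenant_id"] == tenant_id]

ports = [{"id": "gw-port",
          "tenant_id": "demo",            # created by the demo user
          "network_tenant_id": "admin"}]  # on the admin's public network

# The demo user created the port, yet cannot list it:
assert visible_ports(ports, "demo") == []
```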

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401773

Title:
  Non admin user cannot see port that he create in public network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a non-admin user creates a router and sets the router as an
  external gateway (via the API or the Horizon UI), a port is created in
  the public network. It is really strange that the user cannot see this
  port even though they created it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401773/+subscriptions



[Yahoo-eng-team] [Bug 1401751] [NEW] updating ipv6 allocation pool start ip address made neutron-server hang

2014-12-11 Thread Jerry Zhao
Public bug reported:

neutron subnet-update --allocation-pool
start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::::fffe
ipv6


Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.024 21692 DEBUG neutron.api.v2.base 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Request body: {u'subnet': 
{u'allocation_pools': [{u'start': u'2001:470:1f0e:cb4::20', u'end': 
u'2001:470:1f0e:cb4::::fffe'}]}} prepare_request_body 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/api/v2/base.py:585
Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.055 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Performing IP validity checks 
on allocation pools _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:639
Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.058 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Checking for overlaps among 
allocation pools and gateway ip _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:675
Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.061 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Rebuilding availability ranges 
for subnet {'ip_version': 6L, u'allocation_pools': [{u'start': 
u'2001:470:1f0e:cb4::20', u'end': u'2001:470:1f0e:cb4::::fffe'}], 
'cidr': u'2001:470:1f0e:cb4::/64', 'id': 
u'5579d9bb-0d03-4d8e-ba61-9b2d8842983d'} _rebuild_availability_ranges 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:262


 wget 162.3.121.66:9696
--2014-12-12 04:24:18--  http://162.3.121.66:9696/
Connecting to 162.3.121.66:9696... connected.
HTTP request sent, awaiting response... 


After restarting the neutron-server service, neutron-server got back to
normal and other neutron commands still worked, but updating the subnet
allocation pool would reproduce the bug.
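The hang is consistent with the availability-range rebuild visiting individual addresses in the pool. A back-of-the-envelope sketch (an assumption about the cause, not Neutron's actual code, and using the stdlib ipaddress module for illustration) shows why any per-address loop over a /64 effectively never finishes:

```python
# Why a per-address walk over an IPv6 /64 pool never finishes
# (sketch of the suspected cause, using stdlib ipaddress).
import ipaddress

subnet = ipaddress.ip_network("2001:470:1f0e:cb4::/64")
start = ipaddress.ip_address("2001:470:1f0e:cb4::20")

# Addresses a naive availability-range rebuild would have to visit:
pool_size = int(subnet.broadcast_address) - int(start) + 1
assert pool_size > 2 ** 63  # ~1.8e19 addresses -- effectively unbounded

# Even at 10 million addresses per second, the loop runs for millennia:
years = pool_size / 1e7 / (3600 * 24 * 365)
```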

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401751

Title:
  updating ipv6 allocation pool start ip address made neutron-server
  hang

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron subnet-update --allocation-pool
  start=2001:470:1f0e:cb4::20,end=2001:470:1f0e:cb4::::fffe
  ipv6

  
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.024 21692 DEBUG neutron.api.v2.base 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Request body: {u'subnet': 
{u'allocation_pools': [{u'start': u'2001:470:1f0e:cb4::20', u'end': 
u'2001:470:1f0e:cb4::::fffe'}]}} prepare_request_body 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/api/v2/base.py:585
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.055 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Performing IP validity checks 
on allocation pools _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:639
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.058 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Checking for overlaps among 
allocation pools and gateway ip _validate_allocation_pools 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:675
  Dec 12 04:21:14 ci-overcloud-controller0-fm6zhh6u6uwd neutron-server: 
2014-12-12 04:21:14.061 21692 DEBUG neutron.db.db_base_plugin_v2 
[req-8e0c6b88-4beb-4b43-af6a-cab2824fa90c None] Rebuilding availability ranges 
for subnet {'ip_version': 6L, u'allocation_pools': [{u'start': 
u'2001:470:1f0e:cb4::20', u'end': u'2001:470:1f0e:cb4::::fffe'}], 
'cidr': u'2001:470:1f0e:cb4::/64', 'id': 
u'5579d9bb-0d03-4d8e-ba61-9b2d8842983d'} _rebuild_availability_ranges 
/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py:262

  
   wget 162.3.121.66:9696
  --2014-12-12 04:24:18--  http://162.3.121.66:9696/
  Connecting to 162.3.121.66:9696... connected.
  HTTP request sent, awaiting response... 



  After restarting the neutron-server service, neutron-server got back to
  normal and other neutron commands still worked, but updating the subnet
  allocation pool would reproduce the bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401751/+subscriptions


[Yahoo-eng-team] [Bug 1401728] [NEW] Routing updates lost when multiple IPs attached to router

2014-12-11 Thread Sean M. Collins
Public bug reported:

When attempting to run dual stacked networking at the gate
(https://review.openstack.org/#/c/140128/), IPv4 networking breaks, with
Tempest scenarios reporting no route to host errors for the floating IPs
that tempest attempts to SSH into.

The following errors are reported in the l3 agent log:

2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] Ignoring 
multiple IPs on router port db0953d3-4bd1-4106-9efc-c16cd9a3e922
2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] 'subnet'
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 341, in call
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 646, in process_router
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent 
self._set_subnet_info(ex_gw_port)
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 562, in 
_set_subnet_info
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent prefixlen = 
netaddr.IPNetwork(port['subnet']['cidr']).prefixlen
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent KeyError: 'subnet'
2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py", line 82, 
in _spawn_n_impl
func(*args, **kwargs)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 1537, in 
_process_router_update
self._process_router_if_compatible(router)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 1512, in 
_process_router_if_compatible
self.process_router(ri)
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 344, in call
self.logger(e)
  File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 
82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/neutron/neutron/common/utils.py", line 341, in call
return func(*args, **kwargs)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 646, in 
process_router
self._set_subnet_info(ex_gw_port)
  File "/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 562, in 
_set_subnet_info
prefixlen = netaddr.IPNetwork(port['subnet']['cidr']).prefixlen
KeyError: 'subnet'

http://logs.openstack.org/28/140128/4/check/check-tempest-dsvm-neutron-full/440ec4e/logs/screen-q-l3.txt.gz

Tempest reports no route to host:

2014-12-11 22:57:04.385 30680 WARNING tempest.common.ssh [-] Failed to
establish authenticated ssh connection to cirros@172.24.4.82 ([Errno
113] No route to host). Number attempts: 1. Retry after 2 seconds.

http://logs.openstack.org/28/140128/4/check/check-tempest-dsvm-neutron-full/440ec4e/logs/tempest.txt.gz
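The traceback points at _set_subnet_info assuming a single-subnet port. A minimal reconstruction of that assumption (sketched with the stdlib ipaddress module in place of netaddr; the port dict shape is assumed from the trace, not taken from the l3 agent's actual RPC payload):

```python
# Sketch of the failing assumption in _set_subnet_info: exactly one
# 'subnet' key per port (ipaddress stands in for netaddr here).
import ipaddress

def set_subnet_info(port):
    return ipaddress.ip_network(port["subnet"]["cidr"]).prefixlen

single = {"subnet": {"cidr": "172.24.4.0/24"}}
assert set_subnet_info(single) == 24

# Dual-stack port: the agent logs "Ignoring multiple IPs on router port"
# and the port arrives without a 'subnet' key -> KeyError, so the IPv4
# gateway is never configured and floating IPs become unreachable.
dual = {"subnets": [{"cidr": "172.24.4.0/24"},
                    {"cidr": "2001:db8::/64"}]}
try:
    set_subnet_info(dual)
    reproduced = False
except KeyError as e:
    reproduced = (str(e) == "'subnet'")
```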

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401728

Title:
  Routing updates lost when multiple IPs attached to router

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When attempting to run dual stacked networking at the gate
  (https://review.openstack.org/#/c/140128/), IPv4 networking breaks,
  with Tempest scenarios reporting no route to host errors for the
  floating IPs that tempest attempts to SSH into.

  The following errors are reported in the l3 agent log:

  2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] Ignoring 
multiple IPs on router port db0953d3-4bd1-4106-9efc-c16cd9a3e922
  2014-12-11 23:19:58.393 25977 ERROR neutron.agent.l3.agent [-] 'subnet'
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 341, in call
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 646, in process_router
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent 
self._set_subnet_info(ex_gw_port)
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent   File 
"/opt/stack/new/neutron/neutron/agent/l3/agent.py", line 562, in 
_set_subnet_info
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent prefixlen = 
netaddr.IPNetwork(port['subnet']['cidr']).prefixlen
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent KeyError: 'subnet'
  2014-12-11 23:19:58.393 25977 TRACE neutron.agent.l3.agent
  Traceback (most recent 

[Yahoo-eng-team] [Bug 1257683] Re: vmware VC driver doesn't attach swap storage

2014-12-11 Thread Thang Pham
This was fixed with the following patch -
https://review.openstack.org/#/c/109432/.

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257683

Title:
  vmware VC driver doesn't attach swap storage

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  With a VMware compute node (vCenter with 3 ESXi hosts in my case), no
  ephemeral storage is attached to the instance: neither swap nor
  ephemeral disks.

  I didn't find any errors/warnings when creating instances.

  Let me know if you need other information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257683/+subscriptions



[Yahoo-eng-team] [Bug 1401602] Re: db test directories missing after a recent commit

2014-12-11 Thread Kevin Benton
This is expected. The services were split from the main neutron repo.

See
https://github.com/openstack/neutron/commit/e55e71524f7431f3947f994e8552aab047e5b0cb

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401602

Title:
  db test directories missing after a recent commit

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  After the following commit to master,

  6bee8592b1bf661f0b247d804738c7202b37604c Imported Translations from
  Transifex

  The following directories are missing

  neutron/tests/unit/db/firewall
  neutron/tests/unit/db/loadbalancer
  neutron/tests/unit/db/vpn

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401602/+subscriptions



[Yahoo-eng-team] [Bug 1401721] [NEW] Update role using LDAP backend with same name fails

2014-12-11 Thread Brant Knudson
Public bug reported:


When the keystone server is configured to use the LDAP backend for assignments
and a role is updated to have the same name, the operation fails, saying that
you can't create a role because another role with the same name already exists.

The keystone server should just accept the request and ignore the change
rather than failing.

To recreate:

0. Start with a devstack install using LDAP for assignment backend

1. Get a token

$ curl -i \
  -H "Content-Type: application/json" \
  -d '
{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "admin",
  "domain": { "id": "default" },
  "password": "adminpwd"
}
  }
},
"scope": {
  "project": {
"name": "demo",
"domain": { "id": "default" }
  }
}
  }
}' \
  http://localhost:35357/v3/auth/tokens ; echo

$ TOKEN=...

2. List roles

$ curl \
-H "X-Auth-Token: $TOKEN" \
http://localhost:35357/v3/roles | python -m json.tool

$ ROLE_ID=36a9eede308d41e8a92effce2e46cc4a

3. Update a role with the same name.

$ curl -X PATCH \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{"role": {"name": "anotherrole"}}' \
http://localhost:35357/v3/roles/$ROLE_ID

{"error": {"message": "Cannot duplicate name {'id':
u'36a9eede308d41e8a92effce2e46cc4a', 'name': u'anotherrole'}", "code":
409, "title": "Conflict"}}

The operation should have worked.
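One way the backend could accept the no-op rename, sketched here as an assumption about the fix (not Keystone's actual code; the helper and data shapes are illustrative):

```python
# Hypothetical LDAP-backend role update that honours no-op renames,
# illustrating the suggested fix.
def update_role(existing_roles, role_id, new_name):
    current = existing_roles[role_id]
    if new_name == current["name"]:
        # Same name: accept the request and skip the conflict check,
        # instead of tripping over the role's own entry.
        return current
    if any(r["name"] == new_name for r in existing_roles.values()):
        raise ValueError("Cannot duplicate name %s" % new_name)
    current["name"] = new_name
    return current

roles = {"36a9eede308d41e8a92effce2e46cc4a": {"name": "anotherrole"}}
# Renaming to the same name succeeds instead of returning 409 Conflict:
assert update_role(roles, "36a9eede308d41e8a92effce2e46cc4a",
                   "anotherrole")["name"] == "anotherrole"
```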

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1401721

Title:
  Update role using LDAP backend with same name fails

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  When the keystone server is configured to use the LDAP backend for
assignments and a role is updated to have the same name, the operation fails,
saying that you can't create a role because another role with the same name
already exists.

  The keystone server should just accept the request and ignore the
  change rather than failing.

  To recreate:

  0. Start with a devstack install using LDAP for assignment backend

  1. Get a token

  $ curl -i \
-H "Content-Type: application/json" \
-d '
  { "auth": {
  "identity": {
"methods": ["password"],
"password": {
  "user": {
"name": "admin",
"domain": { "id": "default" },
"password": "adminpwd"
  }
}
  },
  "scope": {
"project": {
  "name": "demo",
  "domain": { "id": "default" }
}
  }
}
  }' \
http://localhost:35357/v3/auth/tokens ; echo

  $ TOKEN=...

  2. List roles

  $ curl \
  -H "X-Auth-Token: $TOKEN" \
  http://localhost:35357/v3/roles | python -m json.tool

  $ ROLE_ID=36a9eede308d41e8a92effce2e46cc4a

  3. Update a role with the same name.

  $ curl -X PATCH \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": {"name": "anotherrole"}}' \
  http://localhost:35357/v3/roles/$ROLE_ID

  {"error": {"message": "Cannot duplicate name {'id':
  u'36a9eede308d41e8a92effce2e46cc4a', 'name': u'anotherrole'}", "code":
  409, "title": "Conflict"}}

  The operation should have worked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1401721/+subscriptions



[Yahoo-eng-team] [Bug 1401674] [NEW] Fail to download object with a space in the file name

2014-12-11 Thread Ying Zuo
Public bug reported:

When downloading a file with a space in the file name, for example, test
123, the request will fail with a ClientException from the
swiftclient/client.py. The name of the file that Swift is trying to
retrieve is "test%2520123" and a 404 error is returned.

I am using icehouse/stable branch and was not able to reproduce on
Havana.
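"test%2520123" is the tell-tale of double percent-encoding: %25 is itself the encoding of '%', so the already-encoded "test%20123" was quoted a second time before reaching Swift. A quick reproduction (using Python 3's urllib purely for illustration):

```python
# Reproducing the double-encoded object name seen in the 404.
from urllib.parse import quote

name = "test 123"
once = quote(name)    # the correctly encoded object name
twice = quote(once)   # encoding it again yields the broken name
assert once == "test%20123"
assert twice == "test%2520123"  # what Swift was actually asked for
```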

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon icehouse

** Description changed:

  When downloading a file with a space in the file name, for example, test
- 123. The request will fail with a ClientException from the
+ 123, the request will fail with a ClientException from the
  swiftclient/client.py. The name of the file that Swift is trying to
  retrieve is "test%2520123" and a 404 error is returned.
  
  I am using icehouse/stable branch and was not able to reproduce on
  Havana.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1401674

Title:
  Fail to download object with a space in the file name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When downloading a file with a space in the file name, for example,
  test 123, the request will fail with a ClientException from the
  swiftclient/client.py. The name of the file that Swift is trying to
  retrieve is "test%2520123" and a 404 error is returned.

  I am using icehouse/stable branch and was not able to reproduce on
  Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1401674/+subscriptions



[Yahoo-eng-team] [Bug 1401435] Re: Security-group-name is case sensitive when booting instance with nova

2014-12-11 Thread Sean Dague
Case sensitivity is by design

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401435

Title:
  Security-group-name is case sensitive when booting instance with nova

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When booting an instance with nova-networking, the instance goes to
  the error state if the security group name is provided in mixed case
  or capital letters.

  That is to say, security group names are case sensitive.

  steps to replicate:

  1. stack@devstack:~$ nova secgroup-list
  +--------------------------------------+---------+-------------+
  | id                                   | name    | description |
  +--------------------------------------+---------+-------------+
  | 57597299-782e-4820-b814-b27c2f125ee2 | test    |             |
  | 9ae55da3-5246-4a28-b4d6-d45affe7b5d8 | default | default     |
  +--------------------------------------+---------+-------------+
  2. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups test 
vm_name

  vm_name instance will boot up in running state

  3. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups TEST vm_name_1

  The instance will queue with the scheduler but fail to boot.

  Expected result:

  1. The instance should boot up into the running state.
  2. Case sensitivity should not affect the state of the instance.
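Since the lookup is an exact string match (case sensitivity is by design, per the resolution above), "test" and "TEST" are simply different names. A minimal sketch of the distinction:

```python
# Exact-match security group lookup, mirroring the by-design behaviour.
groups = ["test", "default"]

def find_secgroup(name):
    return name in groups

assert find_secgroup("test") is True
assert find_secgroup("TEST") is False  # so the TEST boot fails later
```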

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401435/+subscriptions



[Yahoo-eng-team] [Bug 1286099] Re: UpdateProjectQuotas doesn't pay attention on disabled_quotas

2014-12-11 Thread Sean Dague
** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286099

Title:
  UpdateProjectQuotas doesn't pay attention on disabled_quotas

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Environment:
  - OpenStack Havana release (2013.2.1)
  - Neutron

  Steps to reproduce:
  - Go to admin -> projects
  - Try to update project quotas
  - Update fails with "Error: Modified project information and members, but 
unable to modify project quotas."

  Workaround:
  Comment out "security_group", "security_group_rule" in NEUTRON_QUOTA_FIELDS 
(openstack_dashboard/usage/quotas.py)
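The workaround above removes the fields statically; the fix implied by the title is to filter the update payload by disabled_quotas dynamically before calling Neutron. A sketch of that idea (assumed, not Horizon's actual code; field names taken from the quoted quotas.py):

```python
# Sketch: drop quota fields the target service does not recognise,
# instead of always sending the full NEUTRON_QUOTA_FIELDS set.
NEUTRON_QUOTA_FIELDS = ("network", "subnet", "port",
                        "router", "floatingip",
                        "security_group", "security_group_rule")

def build_quota_update(form_data, disabled_quotas):
    return {k: v for k, v in form_data.items()
            if k in NEUTRON_QUOTA_FIELDS and k not in disabled_quotas}

form = {"network": 10, "security_group": 10, "security_group_rule": 100}
payload = build_quota_update(form, {"security_group",
                                    "security_group_rule"})
# No unrecognised attributes left -> no HTTPBadRequest from Neutron:
assert payload == {"network": 10}
```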

  In neutron/server.log:

  2014-02-28 11:45:03.145 34093 ERROR neutron.api.v2.resource [-] update failed
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in 
resource
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/extensions/quotasv2.py", line 107, in 
update
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource 
EXTENDED_ATTRIBUTES_2_0[RESOURCE_COLLECTION])
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 555, in 
prepare_request_body
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource 
Controller._verify_attributes(res_dict, attr_info)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 602, in 
_verify_attributes
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPBadRequest(msg)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource HTTPBadRequest: 
Unrecognized attribute(s) 'security_group_rule, security_group'
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1286099/+subscriptions



[Yahoo-eng-team] [Bug 1400048] Re: nova list --hostname invalidhostname

2014-12-11 Thread Sean Dague
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** No longer affects: nova

** Changed in: python-novaclient
   Importance: Undecided => Wishlist

** Changed in: python-novaclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400048

Title:
  nova list --hostname invalidhostname

Status in Python client library for Nova:
  Confirmed

Bug description:
  Versions:

  rhel 7 
  python-nova-2014.1.3-9.el7ost.noarch
  openstack-nova-compute-2014.1.3-9.el7ost.noarch
  openstack-nova-novncproxy-2014.1.3-9.el7ost.noarch
  openstack-nova-common-2014.1.3-9.el7ost.noarch
  python-novaclient-2.17.0-2.el7ost.noarch

  FYI my setup is an HA deployment, but the same happens on non-HA.

  Description of bug:
  When running nova list --host   with an invalid hostname (no such server or 
typo in server name), we should get an error.  
  Today we get an empty table, alerting user to the fact that an invalid host 
name was given would be better IMHO :)

  
  [root@mace83935b075d6 ~(openstack_admin)]# nova list --host noSuchServer
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+
  [root@mace83935b075d6 ~(openstack_admin)]# hostname
  mace83935b075d6.example.com
  [root@mace83935b075d6 ~(openstack_admin)]# nova list --host mace83935b075d6.example.com
  +--------------------------------------+---------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name    | Status | Task State | Power State | Networks                       |
  +--------------------------------------+---------+--------+------------+-------------+--------------------------------+
  | 620ce9ae-2767-4f7b-a555-aa59fe10dd6b | tshefi3 | ERROR  | -          | NOSTATE     |                                |
  | 5b7750d1-8b0f-4fc1-8199-885317e2d5cf | tshefi4 | ACTIVE | -          | Running     | floating-362-main=10.35.184.24 |
  +--------------------------------------+---------+--------+------------+-------------+--------------------------------+

  Steps to reproduce:
  1. Boot up an instance
  2. nova list --host FakeServerName -> you get an empty table, with no notice of the invalid hostname
  3. nova list --host realServername -> you see the instance in the table, as expected

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1400048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401430] Re: compute create_test_server_group fails for each tempest api test in compute

2014-12-11 Thread Sean Dague
** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401430

Title:
  compute create_test_server_group fails for each tempest api test in
  compute

Status in Tempest:
  New

Bug description:
  nova version : 2014.2.1

  tempest compute api test cases fails throwing AttributeError.

  Sample failing compute test cases:
  tempest.api.compute.admin.test_hypervisor_negative.HypervisorAdminNegativeTestXML.create_test_server_group
  tempest.api.compute.admin.test_quotas.QuotaClassesAdminTestXML.create_test_server_group
  tempest.api.compute.admin.test_quotas.QuotasAdminTestXML.create_test_server_group
  tempest.api.compute.admin.test_hosts_negative.HostsAdminNegativeTestXML.create_test_server_group
  tempest.api.compute.admin.test_aggregates_negative.AggregatesAdminNegativeTestXML.create_test_server_group
  tempest.api.compute.admin.test_instance_usage_audit_log_negative.InstanceUsageAuditLogNegativeTestXML.create_test_server_group
  tempest.api.compute.admin.test_instance_usage_audit_log.InstanceUsageAuditLogTestXML.create_test_server_group
  tempest.api.compute.admin.test_security_groups.SecurityGroupsTestAdminXML.create_test_server_group
  tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestXML.create_test_server_group
  tempest.api.compute.admin.test_services.ServicesAdminTestXML.create_test_server_group

  Common Error:
  AttributeError: 'ServersClientXML' object has no attribute 'create_test_server_group'

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1401430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401664] [NEW] Update role using LDAP backend requires name

2014-12-11 Thread Brant Knudson
Public bug reported:


When updating a role and the keystone identity server is configured to use LDAP 
as the backend, you get a 500 error if the update doesn't have the name. For 
example, if you just disable a role, it fails with a 500 error.

0. Start with devstack configured to use LDAP assignment backend.

1. Get a token:

$ curl -i \
  -H "Content-Type: application/json" \
  -d '
{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "admin",
  "domain": { "id": "default" },
  "password": "adminpwd"
}
  }
},
"scope": {
  "project": {
"name": "demo",
"domain": { "id": "default" }
  }
}
  }
}' \
  http://localhost:35357/v3/auth/tokens ; echo

$ TOKEN=...

2. Pick a role.

$ curl \
-H "X-Auth-Token: $TOKEN" \
http://localhost:35357/v3/roles | python -m json.tool

$ ROLE_ID=36a9eede308d41e8a92effce2e46cc4a

3. Update without a name.

$ curl -X PATCH \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d '{"role": {"enabled": false}}' \
http://localhost:35357/v3/roles/$ROLE_ID

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'name' (Disable debug mode to suppress these
details.)", "code": 500, "title": "Internal Server Error"}}


The update operation should be successful.
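A sketch of the PATCH semantics the report expects (plain-dict illustration, not the actual keystone LDAP backend code): attributes missing from the request body, such as 'name', keep their stored values, so disabling a role never needs to touch 'name' at all.

```python
def apply_role_patch(stored_role, patch):
    """Merge a partial update into the stored role without requiring
    any particular attribute to be present in the patch."""
    updated = dict(stored_role)  # copy; don't mutate the stored record
    updated.update(patch)        # only the supplied attributes change
    return updated
```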

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1401664

Title:
  Update role using LDAP backend requires name

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  When updating a role and the keystone identity server is configured to use 
LDAP as the backend, you get a 500 error if the update doesn't have the name. 
For example, if you just disable a role, it fails with a 500 error.

  0. Start with devstack configured to use LDAP assignment backend.

  1. Get a token:

  $ curl -i \
-H "Content-Type: application/json" \
-d '
  { "auth": {
  "identity": {
"methods": ["password"],
"password": {
  "user": {
"name": "admin",
"domain": { "id": "default" },
"password": "adminpwd"
  }
}
  },
  "scope": {
"project": {
  "name": "demo",
  "domain": { "id": "default" }
}
  }
}
  }' \
http://localhost:35357/v3/auth/tokens ; echo

  $ TOKEN=...

  2. Pick a role.

  $ curl \
  -H "X-Auth-Token: $TOKEN" \
  http://localhost:35357/v3/roles | python -m json.tool

  $ ROLE_ID=36a9eede308d41e8a92effce2e46cc4a

  3. Update without a name.

  $ curl -X PATCH \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"role": {"enabled": false}}' \
  http://localhost:35357/v3/roles/$ROLE_ID

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'name' (Disable debug mode to suppress these
  details.)", "code": 500, "title": "Internal Server Error"}}

  
  The update operation should be successful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1401664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401656] [NEW] IPv6 Tempest tests fail with DVR

2014-12-11 Thread Armando Migliaccio
Public bug reported:

https://review.openstack.org/#/c/112336/ makes the DVR job fail

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: l3-dvr-backlog

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401656

Title:
  IPv6 Tempest tests fail with DVR

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  https://review.openstack.org/#/c/112336/ makes the DVR job fail

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images

2014-12-11 Thread Vish Ishaya
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368815

Title:
  qemu-img convert intermittently corrupts output images

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in QEMU:
  In Progress
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Fix Released
Status in qemu source package in Utopic:
  Fix Released
Status in qemu source package in Vivid:
  Fix Released

Bug description:
  ==
  Impact: occasional image corruption (any format on local filesystem)
  Test case: see the qemu-img command below
  Regression potential: this cherrypicks a patch from upstream to a 
not-insignificantly older qemu source tree.  While the cherrypick seems sane, 
it's possible that there are subtle interactions with the other delta.  I'd 
really like for a full qa-regression-test qemu testcase to be run against this 
package.
  ==

  -- Found in releases qemu-2.0.0, qemu-2.0.2, qemu-2.1.0. Tested on
  Ubuntu 14.04 using Ext4 filesystems.

  The command

    qemu-img convert -O raw inputimage.qcow2 outputimage.raw

  intermittently creates corrupted output images, when the input image
  is not yet fully synchronized to disk. While the issue has actually
  been discovered in operation of OpenStack nova, it can be
  reproduced "easily" on command line using

    cat $SRC_PATH > $TMP_PATH && $QEMU_IMG_PATH convert -O raw $TMP_PATH
  $DST_PATH && cksum $DST_PATH

  on filesystems exposing this behavior. (The difficult part of this
  exercise is to prepare a filesystem to reliably trigger this race. On
  my test machine some filesystems are affected while other aren't, and
  unfortunately I haven't found the relevant difference between them,
  yet. Possible it's timing issues completely out of userspace control
  ...)

  The root cause, however, is the same as in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00069.html

  and it can be solved the same way as suggested in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00102.html

  In qemu, file block/raw-posix.c use the FIEMAP_FLAG_SYNC, i.e change

  f.fm.fm_flags = 0;

  to

  f.fm.fm_flags = FIEMAP_FLAG_SYNC;

  As discussed in the thread mentioned above, retrieving a page cache
  coherent map of file extents is possible only after fsync on that
  file.

  See also

    https://bugs.launchpad.net/nova/+bug/1350766

  In that bug report filed against nova, fsync had been suggested to be
  performed by the framework invoking qemu-img. However, as the choice
  of fiemap -- implying this otherwise unneeded fsync of a temporary
  file  -- is not made by the caller but by qemu-img, I agree with the
  nova bug reviewer's objection to put it into nova. The fsync should
  instead be triggered by qemu-img utilizing the FIEMAP_FLAG_SYNC,
  specifically intended for that purpose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401647] [NEW] Huge pages: Compute driver fails to set appropriate page size when using flavor extra spec -- 'hw:mem_page_size=any'

2014-12-11 Thread Kashyap Chamarthy
Public bug reported:

Description of problem
--

From the proposed Nova specification "Virt driver large page allocation
for guest RAM"[*], setting the Nova flavor extra spec for huge pages
to 'any' (nova flavor-key m1.hugepages set hw:mem_page_size=any)
means: "leave policy upto the compute driver implementation to
decide. When seeing 'any' the libvirt driver might try to find large
pages, but fallback to small pages"
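The quoted 'any' policy can be sketched as follows (a hedged illustration of the intended fallback, not the actual libvirt driver code):

```python
def pick_page_size(available_kib, requested="any"):
    """Prefer the largest available large page; fall back to small
    (4 KiB) pages when no large page is available."""
    if requested == "any":
        large = [s for s in available_kib if s > 4]
        return max(large) if large else 4  # fallback, never an error
    return int(requested)
```

Under this reading, 'any' should never produce the "Unable to find any usable hugetlbfs mount for 4 KiB" error shown below.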

However, booting a guest with a Nova flavor defined with huge pages size
set to 'any', results in:

libvirtError: internal error: Unable to find any usable hugetlbfs
mount for 4 KiB


From Nova Conductor logs:

. . .
2014-12-11 13:06:34.738 ERROR nova.scheduler.utils [req-7812c740-ec60-461e-a6b7-66b4bd4359ee admin admin] [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] Error from last host: fedvm1 (node fedvm1):
[u'Traceback (most recent call last):\n',
 u'  File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2060, in _do_build_and_run_instance\n    filter_properties)\n',
 u'  File "/home/kashyapc/src/cloud/nova/nova/compute/manager.py", line 2200, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n',
 u'RescheduledException: Build of instance c8e1093b-81d6-4bc8-a319-7a8ea384c9fb was re-scheduled: internal error: Unable to find any usable hugetlbfs mount for 4 KiB\n']
. . .

 
[*] http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-large-pages.html#proposed-change


Version
---

Apply the virt-driver-large-pages patch series to Nova git, and test via
DevStack:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virt-driver-large-pages,n,z

$ git log | grep "commit\ " | head -8
commit c0c5d6a497c0e275e6f2037c1f7d45983a077cbc
commit 9d1d59bd82a7f2747487884d5880270bfdc9734a
commit eda126cce41fd5061b630a1beafbf5c37292946e
commit 6980502683bdcf514b386038ca0e0ef8226c27ca
commit b1ddc34efdba271f406a6db39c8dadcb8cc9
This commit also add a new exceptions MemoryPageSizeInvalid and
commit 2fcfc675aa04ef2760f0e763697c73b6d90a4fca
commit 567987035bc3ef685ea09ac2b82be55aa5e23ca5

$ git describe
2014.2-1358-gc0c5d6a


libvirt version: libvirt-1.2.11 (built from libvirt git)

$ git log | head -1 
commit a2a35d0164f4244b9c6f143f54e9bb9f3c9af7d3a
$ git describe
CVE-2014-7823-247-ga2a35d0


Steps to Reproduce
--

Test environment: I was testing Nova huge pages in a DevStack VM with KVM
nested virtualization, i.e. the Nova instances will be the nested guests.

Check if the 'hugetlbfs' is present in /proc filesystem:

$ cat /proc/filesystems  | grep hugetlbfs
nodev   hugetlbfs

Get the number of total huge pages:

$ grep HugePages_Total /proc/meminfo
HugePages_Total: 512

Get the number of free huge pages:

$ grep HugePages_Free /proc/meminfo
HugePages_Free:  512
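The two greps above can be folded into one small parser (a sketch over /proc/meminfo-style text; only the HugePages_* fields shown above are assumed):

```python
def parse_hugepages(meminfo_text):
    """Extract the HugePages_* counters from /proc/meminfo-style text."""
    stats = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            key, _, value = line.partition(":")
            stats[key.strip()] = int(value.split()[0])  # drop any unit
    return stats
```

On a live host it would be fed the contents of /proc/meminfo.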

Create flavor:

nova flavor-create m1.hugepages 999 2048 1 4

Set extra_spec values for NUMA and Huge pages, with value as 'any':

nova flavor-key m1.hugepages set hw:numa_nodes=1
nova flavor-key m1.hugepages set hw:mem_page_size=any

Enumerate the newly created flavor properties:

$ nova flavor-show m1.hugepages

+----------------------------+---------------------------------------------------+
| Property                   | Value                                             |
+----------------------------+---------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                             |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                 |
| disk                       | 1                                                 |
| extra_specs                | {"hw:mem_page_size": "any", "hw:numa_nodes": "1"} |
| id                         | 999                                               |
| name                       | m1.hugepages                                      |
| os-flavor-access:is_public | True                                              |
| ram                        | 2048                                              |
| rxtx_factor                | 1.0                                               |
| swap                       |                                                   |
| vcpus                      | 4                                                 |
+----------------------------+---------------------------------------------------+


Boot a guest with the above flavor:


Actual results
--


(1) Contextual error messages from Nova Compute log (screen-n-cpu.log):

. . .
2014-12-11 13:06:34.141 ERROR nova.compute.manager [-] [instance: c8e1093b-81d6-4bc8-a319-7a8ea384c9fb] Instance failed to spawn
2014-12-11 13:06:34.141 TRACE nova.compute.man

[Yahoo-eng-team] [Bug 1013594] Re: Update server name with invalid server name is not raising BadRequest

2014-12-11 Thread Joe Gordon
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1013594

Title:
  Update server name with invalid server name is not raising BadRequest

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Update server name with invalid server name

  Expected Result:
  When the name parameter of a server is updated with an invalid value, a
BadRequest exception is supposed to be raised

  Actual Result:
  Did not raise Bad Request

  Log:

  Update name of the server with name of server to already existing ...
  FAIL

  ==
  FAIL: Update name of the server with name of server to already existing
  --
  Traceback (most recent call last):
File 
"/home/openstack/tempest/tempest_harika/tempest_15thJune/tempest/tempest/tests/compute/test_servers_negative.py",
 line 442, in test_update_server_with_name_of_server_to_already_existing
  self.server['id'], name=server_detail['name'])
  AssertionError: BadRequest not raised

  
  Input given:

  new_name = 'update_!@#$%^&*()_+|={}<>?'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1013594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 975212] Re: Flavor self link not included in GET server response

2014-12-11 Thread Joe Gordon
** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/975212

Title:
  Flavor self link not included in GET server response

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I noticed that the flavor self link is missing from a server's GET
  response. Since we return the self link for the server's image, it
  seems odd for it to be missing for the flavor.

  Daryls-MacBook-Pro:zodiac dwalleck$ curl -i -H "X-Auth-Token: 
9ceb158daab24d5e813bbbcb7c2f503b" 
http://127.0.0.1:8774/v2/a7d84f9effeb47f59b1838d6ebc3aef7/servers/aef8ac3f-60d2-4e6c-9085-6ab01dd354a2
  HTTP/1.1 200 OK
  X-Compute-Request-Id: req-9554caec-c78e-46f5-a8c4-4dc6adc28789
  Content-Type: application/json
  Content-Length: 1371
  Date: Fri, 06 Apr 2012 14:47:05 GMT

  {"server": {"OS-EXT-STS:task_state": null, "addresses": {"private":
  [{"version": 4, "addr": "10.0.0.2"}]}, "links": [{"href":
  "http://127.0.0.1:8774/v2/a7d84f9effeb47f59b1838d6ebc3aef7/servers
  /aef8ac3f-60d2-4e6c-9085-6ab01dd354a2", "rel": "self"}, {"href":
  "http://127.0.0.1:8774/a7d84f9effeb47f59b1838d6ebc3aef7/servers
  /aef8ac3f-60d2-4e6c-9085-6ab01dd354a2", "rel": "bookmark"}], "image":
  {"id": "f0ab6f51-65c5-4375-8891-41498e5a0f4f", "links": [{"href":
  
"http://127.0.0.1:8774/a7d84f9effeb47f59b1838d6ebc3aef7/images/f0ab6f51-65c5-4375-8891-41498e5a0f4f";,
  "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", "OS-EXT-SRV-
  ATTR:instance_name": "instance-0001", "flavor": {"id": "1",
  "links": [{"href":
  "http://127.0.0.1:8774/a7d84f9effeb47f59b1838d6ebc3aef7/flavors/1";,
  "rel": "bookmark"}]}, "id": "aef8ac3f-60d2-4e6c-9085-6ab01dd354a2",
  "user_id": "18d53cdbbef0443c80c675fa5b77a935", "OS-DCF:diskConfig":
  "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-
  STS:power_state": 1, "config_drive": "", "status": "ACTIVE",
  "updated": "2012-04-06T14:45:53Z", "hostId":
  "70ca6c8a65a0bf557cbf611a2f67d351b20298a6f7985fb0fca5c591", "OS-EXT-
  SRV-ATTR:host": "devstack2", "key_name": "", "OS-EXT-SRV-
  ATTR:hypervisor_hostname": null, "name": "a1", "created":
  "2012-04-06T14:45:40Z", "tenant_id":
  "a7d84f9effeb47f59b1838d6ebc3aef7", "metadata": {}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/975212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1167073] Re: nova-network should increase nf_conntrack_max

2014-12-11 Thread Joe Gordon
** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1167073

Title:
  nova-network should increase nf_conntrack_max

Status in OpenStack Compute (Nova):
  Opinion
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  We ran into trouble when net.netfilter.nf_conntrack_count was
  exhausted by the system default of net.netfilter.nf_conntrack_max
  (65536).  As the typical use scenario for Nova can easily exhaust
  that, nova-network should probably set a more reasonable default, like
  2097152.
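A sketch of the operator-side workaround using the value suggested above (the drop-in file name is hypothetical); whether nova-network should raise the limit itself is what this report proposes:

```shell
# /etc/sysctl.d/60-nova-conntrack.conf (hypothetical drop-in name)
# Raise the connection-tracking table size from the 65536 default,
# which Nova workloads can easily exhaust.
net.netfilter.nf_conntrack_max = 2097152
```

Applied with `sysctl --system` (or at boot) on the nova-network host.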

  Version: 2012.2.1+stable-20121212-a99a802e-0ubuntu1.4~cloud0 (from
  cloud-archive)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1167073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 914897] Re: euca-authorize port ranges misleading error message

2014-12-11 Thread Joe Gordon
$  euca-authorize -P tcp -p 0-65535 -s 0.0.0.0/0 default


euca-authorize: error (InvalidParameterValue): Invalid port range 0:65535. Valid TCP ports should be between 1-65535


** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/914897

Title:
  euca-authorize port ranges misleading error message

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  NOTE: Opening based on forum thread
  http://forums.openstack.org/viewtopic.php?f=10&t=662

  version: 2011.3 (2011.3-nova-milestone-
  tarball:tarmac-20110922115702-k9nkvxqzhj130av2)

  euca-authorize -P tcp -p 0-65535 -s 0.0.0.0/0 default
  ApiError: [] Not enough parameters to build a valid rule

  compared to

  euca-authorize -P tcp -p 1-65535 -s 0.0.0.0/0 default

  which succeeds.

  The error message is misleading, "out of range" or similar would have
  been more helpful.
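A sketch of the clearer validation the report asks for, matching the error text the newer release emits in the comment at the top of this message (the helper name is illustrative):

```python
def validate_tcp_port_range(spec):
    """Reject out-of-range TCP ports with an explicit message instead
    of a generic 'Not enough parameters to build a valid rule'."""
    lo_s, _, hi_s = spec.partition("-")
    lo, hi = int(lo_s), int(hi_s or lo_s)
    if not (1 <= lo <= hi <= 65535):
        raise ValueError(
            "Invalid port range %d:%d. Valid TCP ports should be "
            "between 1-65535" % (lo, hi))
    return lo, hi
```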

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/914897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1027263] Re: Nova volume api volume list and volume detail list are the same

2014-12-11 Thread Joe Gordon
Not sure how this relates to nova anymore.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1027263

Title:
  Nova volume api volume list and volume detail list are the same

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  the output of a GET to /volumes and a GET to /volumes/detail is
  identical.  a GET to /volumes should be just a listing of the volume
  ids and the links to those specific volume details.  This should be
  similar to how a GET to /servers and /servers/detail works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1027263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 897140] Re: unassociated floating IPs not visible to admin

2014-12-11 Thread Joe Gordon
Is this still valid? This bug is several years old.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/897140

Title:
  unassociated floating IPs not visible to admin

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using the following command as admin does not show IPs allocated by
  the different users:

  nova floating-ip-list

  The list of allocated and associated IP addresses can be extracted
  from "nova list", but there's no easy way for an admin to list
  floating IP addresses not associated to any server by querying the
  nova API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/897140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 829609] Re: EC2 compatibility describe security group returns erroneous value for group ip permissions

2014-12-11 Thread Joe Gordon
Is this still valid? It hasn't been touched in years.

** Changed in: nova
   Status: Confirmed => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/829609

Title:
  EC2 compatibility describe security group returns erroneous value for
  group ip permissions

Status in pyjuju:
  Fix Released
Status in OpenStack Compute (Nova):
  Invalid
Status in txAWS: Twisted Amazon:
  Fix Released
Status in txaws package in Ubuntu:
  Fix Released

Bug description:
  When dealing with group to group authorization (including self group
  authorization), nova doesn't associate the correct port ranges to the
  group ip permission.

  ie.
  ec2.authorize_security_group(
  "ensemble-east",
  source_group_name="ensemble-east",
  source_group_owner_id=owner_id)

  results in very different output from euca-describe-groups vs. ec2
  -describe-group.

  ec2-describe-group reports

  GROUP   sg-a7351dce 619193117841    ensemble-east   Ensemble group for east
  PERMISSION  619193117841    ensemble-east   ALLOWS  tcp 1   65535   FROMUSER    619193117841    NAME ensemble-east   ID sg-a7351dce  ingress
  PERMISSION  619193117841    ensemble-east   ALLOWS  udp 1   65535   FROMUSER    619193117841    NAME ensemble-east   ID sg-a7351dce  ingress
  PERMISSION  619193117841    ensemble-east   ALLOWS  icmp    -1  -1  FROMUSER    619193117841    NAME ensemble-east   ID sg-a7351dce  ingress

  where as euca-describe-group

  GROUP   kapil_project   ensemble-internal   Ensemble group for internal
  PERMISSION  kapil_project   ensemble-internal   ALLOWS
  GRPNAME ensemble-internal

  The output of euca-describe-group isn't parseable by some tools, since
  it is also missing the port ranges. It's unclear whether this source
  group declaration for an ingress rule has worked correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju/+bug/829609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401626] [NEW] The L3 agent tries to catch an exception from processutils when processutils is not used

2014-12-11 Thread Terry Wilson
Public bug reported:

L3 agent imports the processutils module to catch exceptions that
wouldn't ever be thrown because the underlying execute() being called is
the one from neutron.agent.linux.utils which raises a RuntimeError on
failure.
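A minimal sketch of the mismatch (class and command names are illustrative stand-ins): an except clause for the processutils exception is dead code when the underlying execute() only ever raises RuntimeError.

```python
class ProcessExecutionError(Exception):
    """Stand-in for the processutils exception class."""

def execute(*cmd):
    """Stand-in for neutron.agent.linux.utils.execute, which per the
    report raises RuntimeError on failure."""
    raise RuntimeError("command failed: %r" % (cmd,))

def cleanup():
    try:
        execute("ip", "netns", "delete", "qrouter-x")
    except ProcessExecutionError:
        return "caught processutils error"   # unreachable with execute() above
    except RuntimeError:
        return "caught RuntimeError"
```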

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401626

Title:
  The L3 agent tries to catch an exception from processutils when
  processutils is not used

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  L3 agent imports the processutils module to catch exceptions that
  wouldn't ever be thrown because the underlying execute() being called
  is the one from neutron.agent.linux.utils which raises a RuntimeError
  on failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images

2014-12-11 Thread Launchpad Bug Tracker
This bug was fixed in the package qemu - 2.1+dfsg-4ubuntu6.2

---
qemu (2.1+dfsg-4ubuntu6.2) utopic-proposed; urgency=medium

  * Apply two patches to fix intermittent qemu-img corruption
(LP: #1368815)
- 501-block-raw-posix-fix-disk-corruption-in-try-fiemap
- 502-block-raw-posic-use-seek-hole-ahead-of-fiemap
 -- Serge Hallyn  Thu, 20 Nov 2014 16:33:09 -0600

** Changed in: qemu (Ubuntu Utopic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368815

Title:
  qemu-img convert intermittently corrupts output images

Status in OpenStack Compute (Nova):
  In Progress
Status in QEMU:
  In Progress
Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Trusty:
  Fix Released
Status in qemu source package in Utopic:
  Fix Released
Status in qemu source package in Vivid:
  Fix Released

Bug description:
  ==
  Impact: occasional image corruption (any format on local filesystem)
  Test case: see the qemu-img command below
  Regression potential: this cherrypicks a patch from upstream to a 
not-insignificantly older qemu source tree.  While the cherrypick seems sane, 
it's possible that there are subtle interactions with the other delta.  I'd 
really like for a full qa-regression-test qemu testcase to be run against this 
package.
  ==

  -- Found in releases qemu-2.0.0, qemu-2.0.2, qemu-2.1.0. Tested on
  Ubuntu 14.04 using Ext4 filesystems.

  The command

    qemu-img convert -O raw inputimage.qcow2 outputimage.raw

  intermittently creates corrupted output images, when the input image
  is not yet fully synchronized to disk. While the issue has actually
  been discovered in operation of OpenStack nova, it can be
  reproduced "easily" on command line using

    cat $SRC_PATH > $TMP_PATH && $QEMU_IMG_PATH convert -O raw $TMP_PATH
  $DST_PATH && cksum $DST_PATH

  on filesystems exposing this behavior. (The difficult part of this
  exercise is to prepare a filesystem to reliably trigger this race. On
  my test machine some filesystems are affected while other aren't, and
  unfortunately I haven't found the relevant difference between them,
  yet. Possible it's timing issues completely out of userspace control
  ...)

  The root cause, however, is the same as in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00069.html

  and it can be solved the same way as suggested in

    http://lists.gnu.org/archive/html/coreutils/2011-04/msg00102.html

  In qemu, file block/raw-posix.c use the FIEMAP_FLAG_SYNC, i.e change

  f.fm.fm_flags = 0;

  to

  f.fm.fm_flags = FIEMAP_FLAG_SYNC;

  As discussed in the thread mentioned above, retrieving a page cache
  coherent map of file extents is possible only after fsync on that
  file.

  See also

    https://bugs.launchpad.net/nova/+bug/1350766

  In that bug report filed against nova, fsync had been suggested to be
  performed by the framework invoking qemu-img. However, as the choice
  of fiemap -- implying this otherwise unneeded fsync of a temporary
  file  -- is not made by the caller but by qemu-img, I agree with the
  nova bug reviewer's objection to put it into nova. The fsync should
  instead be triggered by qemu-img utilizing the FIEMAP_FLAG_SYNC,
  specifically intended for that purpose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401602] [NEW] db test directories missing after a recent commit

2014-12-11 Thread Pauline Yeung
Public bug reported:

After the following commit to master:

6bee8592b1bf661f0b247d804738c7202b37604c Imported Translations from
Transifex

the following directories are missing:

neutron/tests/unit/db/firewall
neutron/tests/unit/db/loadbalancer
neutron/tests/unit/db/vpn

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401602

Title:
  db test directories missing after a recent commit

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After the following commit to master:

  6bee8592b1bf661f0b247d804738c7202b37604c Imported Translations from
  Transifex

  the following directories are missing:

  neutron/tests/unit/db/firewall
  neutron/tests/unit/db/loadbalancer
  neutron/tests/unit/db/vpn

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401600] [NEW] docs build fails for nova-specs

2014-12-11 Thread Matt Riedemann
Public bug reported:

I just cloned nova-specs and ran 'tox -e docs' on Ubuntu Trusty 14.04
(no devstack); it fails with this:

mriedem@ubuntu:~/git/nova-specs$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
mriedem@ubuntu:~/git/nova-specs$ tox -e docs
docs develop-inst-nodeps: /home/mriedem/git/nova-specs
docs runtests: PYTHONHASHSEED='2805332766'
docs runtests: commands[0] | python setup.py build_sphinx
running build_sphinx
Running Sphinx v1.1.3
loading pickled environment... not yet created
Using openstack theme from 
/home/mriedem/git/nova-specs/.tox/docs/local/lib/python2.7/site-packages/oslosphinx/theme
building [html]: all source files
updating environment: 7 added, 0 changed, 0 removed
reading sources... [100%] specs/kilo/template   

   
scanning /home/mriedem/git/nova-specs/doc/source for redirects...
   found redirects at 
/home/mriedem/git/nova-specs/doc/source/specs/kilo/redirects
Traceback (most recent call last):
  File "setup.py", line 22, in 
pbr=True)
  File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
  File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
  File 
"/home/mriedem/git/nova-specs/.tox/docs/local/lib/python2.7/site-packages/pbr/packaging.py",
 line 754, in run
self._sphinx_run()
  File 
"/home/mriedem/git/nova-specs/.tox/docs/local/lib/python2.7/site-packages/pbr/packaging.py",
 line 715, in _sphinx_run
app.build(force_all=self.all_files)
  File 
"/home/mriedem/git/nova-specs/.tox/docs/local/lib/python2.7/site-packages/sphinx/application.py",
 line 206, in build
self.emit('build-finished', err)
  File 
"/home/mriedem/git/nova-specs/.tox/docs/local/lib/python2.7/site-packages/sphinx/application.py",
 line 314, in emit
results.append(callback(self, *args))
  File "/home/mriedem/git/nova-specs/doc/source/redirect.py", line 49, in 
emit_redirects
process_directory(app.builder.srcdir)
  File "/home/mriedem/git/nova-specs/doc/source/redirect.py", line 44, in 
process_directory
process_directory(p)
  File "/home/mriedem/git/nova-specs/doc/source/redirect.py", line 44, in 
process_directory
process_directory(p)
  File "/home/mriedem/git/nova-specs/doc/source/redirect.py", line 47, in 
process_directory
process_redirect_file(app, path, ent)
  File "/home/mriedem/git/nova-specs/doc/source/redirect.py", line 23, in 
process_redirect_file
from_path, to_path = line.rstrip().split(' ')
ValueError: need more than 1 value to unpack
ERROR: InvocationError: '/home/mriedem/git/nova-specs/.tox/docs/bin/python 
setup.py build_sphinx'
___
 summary 
___
ERROR:   docs: commands failed


My virtualenv pip freeze output looks the same as what's in community
runs:

(docs)mriedem@ubuntu:~/git/nova-specs$ pip freeze
Jinja2==2.7.3
MarkupSafe==0.23
Pygments==2.0.1
Sphinx==1.1.3
argparse==1.2.1
cssselect==0.9.1
docutils==0.12
extras==0.0.3
feedformatter==0.4
fixtures==1.0.0
lxml==3.4.1
-e 
git://git.openstack.org/openstack/nova-specs@ea08644e90efb680f1eeba5731d01315b12e8d6f#egg=nova_specs-bp/db2-database
oslosphinx==2.3.0
pbr==0.10.0
pyquery==1.2.9
python-mimeparse==0.1.4
python-subunit==1.0.0
six==1.8.0
testrepository==0.0.20
testtools==1.5.0
unittest2==0.8.0
wsgiref==0.1.2
yasfb==0.5.1
(docs)mriedem@ubuntu:~/git/nova-specs$ 


http://logs.openstack.org/87/136487/6/check/gate-nova-specs-python27/d617099/console.html#_2014-12-11_16_08_01_111

I'm assuming I'm missing some native site-package dependency, like
sphinxcontrib or something, but I'd think that should be called out
somewhere if it's needed to build docs.
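
The actual failure is in redirect.py at line 23, where each redirect
line is unpacked into exactly two fields; any line that doesn't split
into exactly two parts on a single space (blank line, comment, or
double space) raises "ValueError: need more than 1 value to unpack". A
hedged sketch of a more tolerant parse (illustrative only, not the
project's actual fix):

```python
def parse_redirect_line(line):
    """Parse one 'from_path to_path' redirect line.

    The failing code does line.rstrip().split(' ') and unpacks into
    exactly two names. Splitting on arbitrary whitespace and skipping
    malformed lines avoids the ValueError seen in the traceback.
    """
    fields = line.split()  # split on any run of whitespace
    if len(fields) != 2:
        return None        # ignore blank, comment, or malformed lines
    from_path, to_path = fields
    return from_path, to_path
```

This would also explain why the failure depends on the content of the
redirects file rather than on missing site-packages.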

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401600

Title:
  docs build fails for nova-specs

Status in OpenStack Compute (Nova):
  New

Bug description:
  I just cloned nova-specs and ran 'tox -e docs' on Ubuntu Trusty 14.04
  (no devstack); it fails with this:

  mriedem@ubuntu:~/git/nova-specs$ git checkout master
  Switched to branch 'master'
  Your branch is up-to-date with 'origin/master'.
  mriedem@ubuntu:~/git/nova-specs$ tox -e docs
  docs develop-inst-nodeps: /home/mriedem/git/nova-specs
  docs runtests: PYTHONHASHSEED='2805332766'
  docs runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  Running Sphinx v1.1.3
  loading pickled environment... not yet created
  Using openstack theme from 
/hom

[Yahoo-eng-team] [Bug 1279347] Re: Horizon user with member role should not be provided with subnet creation option for external network

2014-12-11 Thread Timur Sufiev
Not reproducible on Kilo Devstack: a regular user created that way is
not able to see the public external network. Closing as 'Invalid'.
Feel free to reopen if it is reproduced in Juno/Icehouse.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1279347

Title:
  Horizon user with member role should not be provided with subnet
  creation option for external network

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Horizon user with member role should not be provided with subnet
  creation option for external network

  Steps to reproduce the issue:

  1- Log in to Horizon using an Administrator account and create a user with 
role as member
  2- Logout and login to horizon as normal user
  3- Click on networks tab, click on external network
  4- There you can see the "create subnet" button.
  5- Click on create subnet and provide the required details for subnet 
creation.
  6- Subnet creation fails as expected but for a better user experience, the 
option of creating subnet for external network should not be provided to user 
with member role.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1279347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401437] Re: nova passes incorrect authentication info to cinderclient

2014-12-11 Thread Matt Riedemann
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401437

Title:
  nova passes incorrect authentication info to cinderclient

Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Cinder:
  New

Bug description:
  There are multiple problems with the authentication information that
  nova/volume/cinder code passes to cinderclient:

  1. nova/volume/cinder.py passes the 'cinder endpoint publicURL' as
  the auth_url to cinderclient for credential authentication instead of
  the keystone auth_url. This happens here:

  get_cinder_client_version(context) sets the value for global CINDER_URL and 
passes it to
  c = cinder_client.Client(version,
   context.user_id,
   context.auth_token,
   project_id=context.project_id,
   auth_url=CINDER_URL,
   insecure=CONF.cinder.api_insecure,
   retries=CONF.cinder.http_retries,
   timeout=CONF.cinder.http_timeout,
   cacert=CONF.cinder.ca_certificates_file)

  c.client.auth_token = context.auth_token or '%s:%s' % (context.user_id,
 context.project_id)
  

  Under normal circumstances (i.e. in cases where the context has an
  auth_token), the auth_url is never used/required. So it is required
  only when the token expires and an attempt to do fresh authentication
  is made here:

  def _cs_request(self, url, method, **kwargs):
  auth_attempts = 0
  attempts = 0
  backoff = 1
  while True:
  attempts += 1
  if not self.management_url or not self.auth_token:
  self.authenticate()
  kwargs.setdefault('headers', {})['X-Auth-Token'] = self.auth_token
  if self.projectid:
  kwargs['headers']['X-Auth-Project-Id'] = self.projectid
  try:
  resp, body = self.request(self.management_url + url, method,
**kwargs)
  return resp, body
  except exceptions.BadRequest as e:
  if attempts > self.retries:
  raise
  except exceptions.Unauthorized:
  if auth_attempts > 0:
  raise
  self._logger.debug("Unauthorized, reauthenticating.")
  self.management_url = self.auth_token = None
  # First reauth. Discount this attempt.
  attempts -= 1
  auth_attempts += 1
  continue

  
  2. The cinderclient method in nova/volume/cinder.py passes
  context.auth_token instead of the password. Due to this, the
  HttpClient.password attribute is set to the auth token instead of the
  password.

  3. There are other problems around this, which are summarized below:

  cinderclient should really support a way of passing an auth_token in
  on the __init__ so it is explicitly supported for the caller to
  specify an auth_token, rather than forcing this hack that nova is
  currently using of setting the auth_token itself after creating the
  cinderclient instance. That's not strictly required, but it would be a
  much better design. At that point, cinderclient should also stop
  requiring the auth_url parameter (it currently raises an exception if
  that isn't specified) if an auth_token is specified and retries==0,
  since in that case the auth_url would never be used. Userid and
  password would also not be required in that case.
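
  A minimal sketch of that suggested constructor behaviour. The class
  name TokenClient and its parameters are hypothetical; cinderclient's
  real __init__ differs, and this only illustrates the proposed
  validation rules:

```python
class TokenClient(object):
    """Sketch of a client that explicitly accepts an auth_token.

    Proposed rules: auth_url is optional when an auth_token is given
    and retries == 0 (re-authentication can never happen); retries
    with only a token are rejected, since a retry would require real
    credentials to re-authenticate.
    """

    def __init__(self, management_url, auth_token=None,
                 auth_url=None, retries=0):
        if auth_token is None and auth_url is None:
            raise ValueError('need either auth_token or auth_url')
        if auth_token is not None and auth_url is None and retries:
            raise ValueError('retries require auth_url and credentials')
        self.management_url = management_url
        self.auth_token = auth_token
        self.auth_url = auth_url
        self.retries = retries
```

  Under this design, nova could pass the context's token directly
  instead of setting client.client.auth_token after construction.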

  nova needs to either start passing a valid userid and password and a
  valid auth_url so that retries will work, or stop setting retries to a
  non-zero number (it's using a conf setting to determine the number of
  retries, and the default is 3). If the decision is to get retries
  working, then we have to figure out what to pass for the userid and
  password. Nova won't know the end-user's user/password that correspond
  to the auth_token it initially uses, and we wouldn't want to be using
  a different user on retries than we do on the initial requests, so I
  don't think retries should be supported unless nova is going to make
  ALL requests with a service userid rather than with the end-user's
  userid... and I don't think that fits with the current OpenStack
  architecture. So that leaves us with not supporting retries. In that
  case, nova should still stop passing the auth_token in as the password
  so that someone doesn't stumble over that later when retry support is
  added. Similarly for the auth_url it should start passing the correct
  keystone auth_url, or at least make it clear tha

[Yahoo-eng-team] [Bug 1401520] [NEW] quota noop driver 'KeyError'

2014-12-11 Thread Max Lvov
Public bug reported:

root@node-19:~# nova quota-show 

  [134/569]
+-+---+
| Quota   | Limit |
+-+---+
| instances   | -1|
| cores   | -1|
| ram | -1|
| floating_ips| -1|
| fixed_ips   | -1|
| metadata_items  | -1|
| injected_files  | -1|
| injected_file_content_bytes | -1|
| injected_file_path_bytes| -1|
| key_pairs   | -1|
| security_groups | -1|
| security_group_rules| -1|
+-+---+

root@node-19:~# nova quota-update --instances 2 082cd5a6411e4acab173b8325290672e
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-7dd31dae-1d49-490e-a0f6-9c345304f60e)


from nova-api.log:
...
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack Traceback (most recent 
call last):
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", 
line 582, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return self.app(env, 
start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 917, in 
__call__
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack content_type, body, 
accept)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 983, in 
_process_stack
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 1070, in 
dispatch
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack return 
method(req=request, **action_args)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/quotas.py",
 line 135, in update
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack user_id=user_id)
2014-12-11 06:42:23.349 5277 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/quota.py", line 1179, in 
get_settable_quotas
2014-12-11 06:42:23.349 

[Yahoo-eng-team] [Bug 1286099] Re: UpdateProjectQuotas doesn't pay attention on disabled_quotas

2014-12-11 Thread Timur Sufiev
With the above fix in Nova, the Dashboard doesn't produce errors when
modifying data of a project with the Nova quota driver set to
NoopQuotaDriver, but the Dashboard still leads the user to think that
some quota values can be changed when they actually cannot.
Unfortunately, there is currently no way to query Nova for the type of
quota driver it uses, so this cannot be fixed in Horizon alone. Adding
Nova to this bug.

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
 Assignee: Timur Sufiev (tsufiev-x) => (unassigned)

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286099

Title:
  UpdateProjectQuotas doesn't pay attention on disabled_quotas

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  Environment:
  - OpenStack Havana release (2013.2.1)
  - Neutron

  Steps to reproduce:
  - Go to admin -> projects
  - Try to update project quotas
  - Update fails with "Error: Modified project information and members, but 
unable to modify project quotas."

  Workaround:
  Comment out "security_group", "security_group_rule" in NEUTRON_QUOTA_FIELDS 
(openstack_dashboard/usage/quotas.py)

  In neutron/server.log:

  2014-02-28 11:45:03.145 34093 ERROR neutron.api.v2.resource [-] update failed
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in 
resource
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/extensions/quotasv2.py", line 107, in 
update
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource 
EXTENDED_ATTRIBUTES_2_0[RESOURCE_COLLECTION])
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 555, in 
prepare_request_body
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource 
Controller._verify_attributes(res_dict, attr_info)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 602, in 
_verify_attributes
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource raise 
webob.exc.HTTPBadRequest(msg)
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource HTTPBadRequest: 
Unrecognized attribute(s) 'security_group_rule, security_group'
  2014-02-28 11:45:03.145 34093 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1286099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325736] Re: Security Group Rules can only be specified in one direction

2014-12-11 Thread Elena Ezhova
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325736

Title:
  Security Group Rules can only be specified in one direction

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Confirmed
Status in Python client library for Neutron:
  In Progress

Bug description:
  It might save users a lot of time if, instead of only offering an
  INGRESS and an EGRESS direction, they could specify a BOTH direction.
  Whenever someone needs to enter both an ingress and
  egress rule for the same port they have to enter it twice, remembering
  all of the information they need (since it can't be cloned). If they
  forget to flip the direction the second time from the default value,
  it'll error out as a duplicate and they'll have to try a third time.
  If they messed up the second rule, there's no edit, so they would have
  to delete it if they got a value wrong and do it all over again.

  It would be awesome if the UI allowed for specifying both an ingress
  and egress rule at the same time, even if all it did was create the
  ingress and egress rows and put them in the table, at least they'd be
  guaranteed to have the same configuration.
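
  The suggested UI behaviour can be sketched as a simple expansion
  step. Note that 'both' is not a real API direction value; this is a
  hypothetical client-side helper that emits the two rules the API
  actually accepts:

```python
def expand_direction(rule):
    """Expand a rule with direction 'both' into the separate ingress
    and egress rules, guaranteeing both get identical settings."""
    if rule.get('direction') != 'both':
        return [rule]
    # Copy the rule once per real direction, keeping all other fields.
    return [dict(rule, direction=d) for d in ('ingress', 'egress')]
```

  Creating both rows from one form submission would also avoid the
  duplicate-rule error described above.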

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1325736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401486] [NEW] Incorrect initialisation of tests for extended_availability_zones V21 API extension

2014-12-11 Thread Sergey Nikitin
Public bug reported:

In the test case ExtendedAvailabilityZoneTestV21 we initialize ALL API
extensions instead of just one (the extended_availability_zone extension).

here:
https://github.com/openstack/nova/blob/c3f3dc012ae3938b6f116491273a4eef0acfab83/nova/tests/unit/api/openstack/compute/contrib/test_extended_availability_zone.py#L96

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401486

Title:
  Incorrect initialisation of tests for extended_availability_zones V21
  API extension

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the test case ExtendedAvailabilityZoneTestV21 we initialize ALL
  API extensions instead of just one (the extended_availability_zone
  extension).

  here:
  
https://github.com/openstack/nova/blob/c3f3dc012ae3938b6f116491273a4eef0acfab83/nova/tests/unit/api/openstack/compute/contrib/test_extended_availability_zone.py#L96

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401480] [NEW] nova service-list show inconsistent output for disabled_reason

2014-12-11 Thread jichenjc
Public bug reported:

[root@compute1 ~]# nova service-list
++--+--+--+-+---++-+
| Id | Binary   | Host | Zone | Status  | State | Updated_at
 | Disabled Reason |
++--+--+--+-+---++-+
| 4  | nova-scheduler   | compute1 | internal | enabled | up| 
2014-12-11T10:25:17.00 | -   |
| 5  | nova-compute | compute1 | nova | enabled | up| 
2014-12-11T10:25:12.00 | -   |
| 6  | nova-consoleauth | compute1 | internal | enabled | down  | 
2014-12-10T07:39:27.00 | -   |
| 7  | nova-compute | compute2 | nova | enabled | down  | 
2014-12-10T07:39:25.00 | None|
++--+--+--+-+---++-+


The 'None' in the Disabled Reason column for id 7 is inconsistent with
the other rows; it should be '-' to avoid confusion.

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401480

Title:
  nova service-list show inconsistent output for disabled_reason

Status in OpenStack Compute (Nova):
  New

Bug description:
  [root@compute1 ~]# nova service-list
  
++--+--+--+-+---++-+
  | Id | Binary   | Host | Zone | Status  | State | Updated_at  
   | Disabled Reason |
  
++--+--+--+-+---++-+
  | 4  | nova-scheduler   | compute1 | internal | enabled | up| 
2014-12-11T10:25:17.00 | -   |
  | 5  | nova-compute | compute1 | nova | enabled | up| 
2014-12-11T10:25:12.00 | -   |
  | 6  | nova-consoleauth | compute1 | internal | enabled | down  | 
2014-12-10T07:39:27.00 | -   |
  | 7  | nova-compute | compute2 | nova | enabled | down  | 
2014-12-10T07:39:25.00 | None|
  
++--+--+--+-+---++-+

  
  The 'None' in the Disabled Reason column for id 7 is inconsistent
  with the other rows; it should be '-' to avoid confusion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1378388] Re: Performance regression uploading images to glance in juno

2014-12-11 Thread Erno Kuvaja
*** This bug is a duplicate of bug 1370247 ***
https://bugs.launchpad.net/bugs/1370247

** This bug has been marked a duplicate of bug 1370247
   glance_store rbd driver accidentally changed chunk size units

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1378388

Title:
  Performance regression uploading images to glance in juno

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in glance package in Ubuntu:
  Invalid
Status in python-glance-store package in Ubuntu:
  Fix Released

Bug description:
  Testing: 1:2014.2~rc1-0ubuntu1

  Uploads of standard ubuntu images to glance, backed by ceph, are 10x
  slower than on icehouse on the same infrastructure.  With icehouse I
  saw around 200MBps, with juno around 20MBps.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1378388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401469] [NEW] Active service should not be allowed to be deleted

2014-12-11 Thread jichenjc
Public bug reported:


We should not allow a service to be deleted while it is up. See the
following example: the service itself is still working, but we lose the
record of the service. We should give operators some flexibility (e.g.
refuse to delete active services unless a --force flag is given).


[root@compute1 ~]# nova service-list
++--+--+--+-+---++-+
| Id | Binary   | Host | Zone | Status  | State | Updated_at
 | Disabled Reason |
++--+--+--+-+---++-+
| 4  | nova-scheduler   | compute1 | internal | enabled | up| 
2014-12-11T10:06:47.00 | -   |
| 5  | nova-compute | compute1 | nova | enabled | up| 
2014-12-11T10:06:48.00 | None|
| 6  | nova-consoleauth | compute1 | internal | enabled | down  | 
2014-12-10T07:39:27.00 | -   |
| 7  | nova-compute | compute2 | nova | enabled | down  | 
2014-12-10T07:39:25.00 | None|
| 8  | nova-conductor   | compute1 | internal | enabled | up| - 
 | -   |
++--+--+--+-+---++-+
[root@compute1 ~]# nova service-delete 8
[root@compute1 ~]# nova service-list
++--+--+--+-+---++-+
| Id | Binary   | Host | Zone | Status  | State | Updated_at
 | Disabled Reason |
++--+--+--+-+---++-+
| 4  | nova-scheduler   | compute1 | internal | enabled | up| 
2014-12-11T10:08:47.00 | -   |
| 5  | nova-compute | compute1 | nova | enabled | up| 
2014-12-11T10:08:42.00 | None|
| 6  | nova-consoleauth | compute1 | internal | enabled | down  | 
2014-12-10T07:39:27.00 | -   |
| 7  | nova-compute | compute2 | nova | enabled | down  | 
2014-12-10T07:39:25.00 | None|
++--+--+--+-+---++-+
[root@compute1 ~]#


Error logs in the conductor, though they don't affect operations:

2014-12-11 05:07:43.149 ERROR nova.servicegroup.drivers.db [-] model server 
went away
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db Traceback (most 
recent call last):
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/servicegroup/drivers/db.py", line 99, in _report_state
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db 
service.service_ref, state_catalog)
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/conductor/api.py", line 180, in service_update
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db return 
self._manager.service_update(context, service, values)
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/utils.py", line 951, in wrapper
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db raise 
(e.exc_info[1], None, e.exc_info[2])
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db ServiceNotFound: 
Service 8 could not be found.
2014-12-11 05:07:43.149 TRACE nova.servicegroup.drivers.db
2014-12-11 05:07:43.143 ERROR nova.servicegroup.drivers.db [-] model server 
went away
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db Traceback (most 
recent call last):
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/servicegroup/drivers/db.py", line 99, in _report_state
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db 
service.service_ref, state_catalog)
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/conductor/api.py", line 180, in service_update
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db return 
self._manager.service_update(context, service, values)
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db   File 
"/opt/stack/nova/nova/utils.py", line 951, in wrapper
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db raise 
(e.exc_info[1], None, e.exc_info[2])
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db ServiceNotFound: 
Service 8 could not be found.
2014-12-11 05:07:43.143 TRACE nova.servicegroup.drivers.db
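
The guard proposed above could be sketched as follows. The --force
flag is hypothetical (it does not currently exist), and this helper is
illustrative rather than actual Nova code:

```python
def can_delete_service(service, force=False):
    """Return True if deleting this service record should proceed.

    Refuses when the service still reports state 'up', unless the
    operator explicitly forces the deletion.
    """
    return force or service.get('state') != 'up'
```

In the example above, deleting id 8 (state 'up') would then be
rejected unless forced, while deleting id 7 (state 'down') would
proceed as today.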

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401469

Title:
  Active service should not be allowed to be deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  we should not enable service to be dele

[Yahoo-eng-team] [Bug 1396976] Re: osprofiler configuration option is inconsistent with other projects using osprofiler

2014-12-11 Thread Erno Kuvaja
Marking as Opinion based on the discussion at the Cross Project meeting
on Tue 9.12.

The only other projects that have merged osprofiler are Heat and Cinder.

** Changed in: glance
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1396976

Title:
  osprofiler configuration option is inconsistent with other projects
  using osprofiler

Status in OpenStack Image Registry and Delivery Service (Glance):
  Opinion

Bug description:
  The option to enable/disable osprofiler in the Glance configuration
  files is inconsistent with other projects which use osprofiler, such
  as Heat, Cinder and others.

  In Glance, we have a profiler section:

  [profiler]
  # If False fully disable profiling feature. 
  enabled = True

  Other projects are similar, but instead of an option named 'enabled',
  they use 'profiler_enabled'.

  To make configuring osprofiler and Glance easier, this option should
  be consistent with other projects and use 'profiler_enabled'.
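  For illustration only, reading the proposed option name with the
  stdlib configparser (a hedged sketch; Glance actually reads its
  configuration through oslo.config, not configparser):

```python
import configparser

# Sketch: the option spelling this bug asks for ('profiler_enabled'),
# read from a [profiler] section as Heat and Cinder spell it.
cfg = configparser.ConfigParser()
cfg.read_string("""
[profiler]
profiler_enabled = True
""")

# fallback=False keeps profiling off when the option is absent.
enabled = cfg.getboolean("profiler", "profiler_enabled", fallback=False)
print(enabled)  # True
```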

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1396976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399782] Re: Python glance-client image-create validation error

2014-12-11 Thread Erno Kuvaja
** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1399782

Title:
  Python glance-client image-create validation error

Status in Python client library for Glance:
  New

Bug description:
  When using the python-glance client to create an image the schema
  validator fails when validating the locations object.

  Based on the format provided in the image schema:

  {
      "properties": {
          "locations": {
              "items": {
                  "required": ["url", "metadata"],
                  "type": "object",
                  "properties": {
                      "url": {
                          "type": "string",
                          "maxLength": 255
                      },
                      "metadata": {
                          "type": "object"
                      }
                  }
              },
              "type": "array",
              "description": "A set of URLs to access the image file kept in external store"
          }
      }
  }

  The locations attribute is an array of objects, each containing two
  attributes, url and metadata, e.g.:

  locations: [
      {
          url: 'image.url',
          metadata: {}
      }
  ]
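
  As a hedged illustration of the failure mode, a minimal stand-in
  check (hypothetical; the real client validates through
  jsonschema/warlock, not this helper) shows why a bare URL string is
  rejected for a locations entry:

```python
def valid_location(entry):
    """A locations item must be an object with 'url' and 'metadata'."""
    return (isinstance(entry, dict)
            and isinstance(entry.get("url"), str)
            and len(entry["url"]) <= 255
            and isinstance(entry.get("metadata"), dict))

# An object with both keys passes; a bare string is not of type 'object'.
ok = valid_location({"url": "image.url", "metadata": {}})
bad = valid_location("https://example.com/image.img")
print(ok, bad)  # True False
```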

  However, when trying to set an image location, the following validation error is raised:

  glance --debug --os-image-api-version 2 image-create --locations "https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img"

  Failed validating 'type' in schema['properties']['locations']['items']:
  {'properties': {'metadata': {'type': 'object'},
                  'url': {'maxLength': 255, 'type': 'string'}},
   'required': ['url', 'metadata'],
   'type': 'object'}

  On instance['locations'][0]:
  'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py", line 154, in create
      setattr(image, key, value)
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py", line 75, in __setattr__
      self.__setitem__(key, value)
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/warlock/model.py", line 50, in __setitem__
      raise exceptions.InvalidOperation(msg)
  warlock.exceptions.InvalidOperation: Unable to set 'locations' to '['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'. Reason: 'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img' is not of type 'object'

  Failed validating 'type' in schema['properties']['locations']['items']:
  {'properties': {'metadata': {'type': 'object'},
                  'url': {'maxLength': 255, 'type': 'string'}},
   'required': ['url', 'metadata'],
   'type': 'object'}

  On instance['locations'][0]:
  'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/shell.py", line 620, in main
      args.func(client, args)
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/shell.py", line 68, in do_image_create
      image = gc.images.create(**fields)
    File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/glanceclient/v2/images.py", line 156, in create
      raise TypeError(utils.exception_to_str(e))
  TypeError: Unable to set 'locations' to '['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'. Reason: 'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img' is not of type 'object'

  Failed validating 'type' in schema['properties']['locations']['items']:
  {'properties': {'metadata': {'type': 'object'},
                  'url': {'maxLength': 255, 'type': 'string'}},
   'required': ['url', 'metadata'],
   'type': 'object'}

  On instance['locations'][0]:
  'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img'
  Unable to set 'locations' to '['https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img']'. Reason: 'https://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-i386-disk1.img' is not of type 'object'

  Failed validating 'type' in schema['properties']['locations']['items']:
  {'properties': {'metadata': {'type': 'object'},
                  'url': {'maxLength': 255, 'type': 'string'}},
   'required': ['url

[Yahoo-eng-team] [Bug 1401457] [NEW] OVS tunnel UT wastes time on unnecessary time.sleep in daemon_loop

2014-12-11 Thread Robin Wang
Public bug reported:

Currently, in the unit test module
neutron.tests.unit.openvswitch.test_ovs_tunnel, there are 3
test_daemon_loop test cases. Each of them contains a 2-second wait
driven by "polling_interval". The wait is unnecessary for these test
cases but adds an extra 6 seconds to the unit test run.

As shown below, these are the top 3 slowest tests in test_ovs_tunnel.
Time cost {2.094s, 2.093s, 2.085s}, total 6.272s. With the patch, the
time cost drops to {0.022s, 0.090s, 0.023s}, total 0.135s.


* without patch **
Slowest Tests
Test id                                                                                          Runtime (s)
-----------------------------------------------------------------------------------------------  ----------
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_daemon_loop               2.094
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTest.test_daemon_loop                             2.093
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop                      2.085
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_tunnel_update             0.249
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_provision_local_vlan_flat        0.237


* with patch **
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_daemon_loop [0.021863s] ... ok
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTest.test_daemon_loop [0.090144s] ... ok
neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop [0.022620s] ... ok
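
The wait itself can be avoided by stubbing out the sleep; a minimal
sketch using unittest.mock (daemon_loop here is a hypothetical
stand-in for the agent loop, not the real neutron code):

```python
import time
from unittest import mock

def daemon_loop(polling_interval, iterations=3):
    """Toy stand-in for the agent loop exercised by test_daemon_loop:
    it sleeps polling_interval seconds between passes."""
    for _ in range(iterations):
        time.sleep(polling_interval)

# Patching time.sleep lets the test assert on the interval without
# actually waiting 3 x 2 seconds of wall-clock time.
with mock.patch.object(time, "sleep") as mock_sleep:
    daemon_loop(2, iterations=3)

assert mock_sleep.call_count == 3
mock_sleep.assert_called_with(2)
```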

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401457

Title:
   OVS tunnel UT wastes time on unnecessary time.sleep in daemon_loop

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, in the unit test module
  neutron.tests.unit.openvswitch.test_ovs_tunnel, there are 3
  test_daemon_loop test cases. Each of them contains a 2-second wait
  driven by "polling_interval". The wait is unnecessary for these test
  cases but adds an extra 6 seconds to the unit test run.

  As shown below, these are the top 3 slowest tests in test_ovs_tunnel.
  Time cost {2.094s, 2.093s, 2.085s}, total 6.272s. With the patch, the
  time cost drops to {0.022s, 0.090s, 0.023s}, total 0.135s.

  * without patch **
  Slowest Tests
  Test id                                                                                          Runtime (s)
  -----------------------------------------------------------------------------------------------  ----------
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_daemon_loop               2.094
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTest.test_daemon_loop                             2.093
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop                      2.085
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_tunnel_update             0.249
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_provision_local_vlan_flat        0.237

  * with patch **
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestUseVethInterco.test_daemon_loop [0.021863s] ... ok
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTest.test_daemon_loop [0.090144s] ... ok
  neutron.tests.unit.openvswitch.test_ovs_tunnel.TunnelTestWithMTU.test_daemon_loop [0.022620s] ... ok

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401437] [NEW] nova passes incorrect authentication info to cinderclient

2014-12-11 Thread Divya K Konoor
Public bug reported:

There are multiple problems with the authentication information that
nova/volume/cinder code passes to cinderclient:

1. nova/volume/cinder.py passes the cinder endpoint publicURL as the
auth_url to cinderclient for credential authentication, instead of the
keystone auth_url. This happens here:
get_cinder_client_version(context) sets the value for the global CINDER_URL and passes it to

    c = cinder_client.Client(version,
                             context.user_id,
                             context.auth_token,
                             project_id=context.project_id,
                             auth_url=CINDER_URL,
                             insecure=CONF.cinder.api_insecure,
                             retries=CONF.cinder.http_retries,
                             timeout=CONF.cinder.http_timeout,
                             cacert=CONF.cinder.ca_certificates_file)

    c.client.auth_token = context.auth_token or '%s:%s' % (context.user_id,
                                                           context.project_id)


Under normal circumstances (i.e. in cases where the context has an
auth_token), the auth_url is never used or required. It is needed only
when the token expires and an attempt to do fresh authentication is
made here:

    def _cs_request(self, url, method, **kwargs):
        auth_attempts = 0
        attempts = 0
        backoff = 1
        while True:
            attempts += 1
            if not self.management_url or not self.auth_token:
                self.authenticate()
            kwargs.setdefault('headers', {})['X-Auth-Token'] = self.auth_token
            if self.projectid:
                kwargs['headers']['X-Auth-Project-Id'] = self.projectid
            try:
                resp, body = self.request(self.management_url + url, method,
                                          **kwargs)
                return resp, body
            except exceptions.BadRequest as e:
                if attempts > self.retries:
                    raise
            except exceptions.Unauthorized:
                if auth_attempts > 0:
                    raise
                self._logger.debug("Unauthorized, reauthenticating.")
                self.management_url = self.auth_token = None
                # First reauth. Discount this attempt.
                attempts -= 1
                auth_attempts += 1
                continue


2. The cinderclient method in nova/volume/cinder.py passes context.auth_token
instead of the password. Due to this, the HttpClient.password attribute is set
to the auth token instead of the password.

3. There are other problems around this, summarized below:

cinderclient should really support a way of passing an auth_token in on
the __init__ so it is explicitly supported for the caller to specify an
auth_token, rather than forcing this hack that nova is currently using
of setting the auth_token itself after creating the cinderclient
instance. That's not strictly required, but it would be a much better
design. At that point, cinderclient should also stop requiring the
auth_url parameter (it currently raises an exception if that isn't
specified) if an auth_token is specified and retries==0, since in that
case the auth_url would never be used. Userid and password would also
not be required in that case.

nova needs to either start passing a valid userid and password and a
valid auth_url so that retries will work, or stop setting retries to a
non-zero number (it's using a conf setting to determine the number of
retries, and the default is 3). If the decision is to get retries
working, then we have to figure out what to pass for the userid and
password. Nova won't know the end-user's user/password that correspond
to the auth_token it initially uses, and we wouldn't want to be using a
different user on retries than we do on the initial requests, so I don't
think retries should be supported unless nova is going to make ALL
requests with a service userid rather than with the end-user's userid...
and I don't think that fits with the current OpenStack architecture. So
that leaves us with not supporting retries. In that case, nova should
still stop passing the auth_token in as the password so that someone
doesn't stumble over that later when retry support is added. Similarly
for the auth_url it should start passing the correct keystone auth_url,
or at least make it clear that it's passing an invalid auth_url so
someone doesn't stumble over that when trying to add retry support
later. And it definitely needs to stop setting retries to a non-zero
number.
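
The constructor behaviour suggested above can be sketched roughly as
follows (SketchClient is purely hypothetical and is not the actual
cinderclient API; it only illustrates the proposed validation rules):

```python
class SketchClient:
    """Hypothetical sketch: accept an auth_token explicitly, and only
    demand user/password/auth_url when retries (which need
    re-authentication) are requested."""

    def __init__(self, user=None, password=None, auth_url=None,
                 auth_token=None, retries=0):
        has_creds = bool(user and password and auth_url)
        if auth_token is None and not has_creds:
            raise ValueError("either auth_token or full credentials required")
        if retries > 0 and not has_creds:
            # Re-auth on token expiry needs real credentials, so a
            # token-only client must not advertise retry support.
            raise ValueError("retries require user/password/auth_url")
        self.auth_token = auth_token
        self.retries = retries

# A token-only client is fine as long as retries stay at 0.
c = SketchClient(auth_token="tok", retries=0)
assert c.auth_token == "tok"
```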

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401437

Title:
  nova passes incorrect authentication info to cinderclient

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are multiple pr

[Yahoo-eng-team] [Bug 1401435] [NEW] Security-group-name is case sensitive when booting instance with nova

2014-12-11 Thread Amandeep
Public bug reported:

When booting an instance with nova-networking, the instance goes into
error state if the security group name is provided in mixed case or
capital letters.

That is, security group names are case sensitive.

steps to replicate:

1. stack@devstack:~$ nova secgroup-list
+--------------------------------------+---------+-------------+
| id                                   | name    | description |
+--------------------------------------+---------+-------------+
| 57597299-782e-4820-b814-b27c2f125ee2 | test    |             |
| 9ae55da3-5246-4a28-b4d6-d45affe7b5d8 | default | default     |
+--------------------------------------+---------+-------------+

2. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups test vm_name

   vm_name instance will boot up in running state

3. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups TEST vm_name_1

   The instance will queue with the scheduler but fail to boot.

Expected Result:

1. Instance should boot up into running state.
2. Case sensitivity should not affect the state of the instance.
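
The expected behaviour amounts to a case-insensitive name lookup; a
minimal hypothetical sketch (find_secgroup is not nova code, just an
illustration of the matching rule):

```python
def find_secgroup(groups, name):
    """Case-insensitive lookup for a security group by name."""
    wanted = name.lower()
    return next((g for g in groups if g["name"].lower() == wanted), None)

# With this rule, 'TEST' resolves to the group created as 'test'.
groups = [{"name": "test"}, {"name": "default"}]
assert find_secgroup(groups, "TEST")["name"] == "test"
assert find_secgroup(groups, "missing") is None
```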

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401435

Title:
  Security-group-name is case sensitive when booting instance with nova

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting an instance with nova-networking, the instance goes into
  error state if the security group name is provided in mixed case or
  capital letters.

  That is, security group names are case sensitive.

  steps to replicate:

  1. stack@devstack:~$ nova secgroup-list
  +--------------------------------------+---------+-------------+
  | id                                   | name    | description |
  +--------------------------------------+---------+-------------+
  | 57597299-782e-4820-b814-b27c2f125ee2 | test    |             |
  | 9ae55da3-5246-4a28-b4d6-d45affe7b5d8 | default | default     |
  +--------------------------------------+---------+-------------+

  2. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups test vm_name

     vm_name instance will boot up in running state

  3. stack@devstack:~$ nova boot --image <> --flavor <> --security-groups TEST vm_name_1

     The instance will queue with the scheduler but fail to boot.

  Expected Result:

  1. Instance should boot up into running state.
  2. Case sensitivity should not affect the state of the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401430] [NEW] compute create_test_server_group fails for each tempest api test in compute

2014-12-11 Thread Abhishek Kumar
Public bug reported:

nova version : 2014.2.1

tempest compute api test cases fail, throwing AttributeError.

Sample compute test cases:
tempest.api.compute.admin.test_hypervisor_negative.HypervisorAdminNegativeTestXML.create_test_server_group
tempest.api.compute.admin.test_quotas.QuotaClassesAdminTestXML.create_test_server_group
tempest.api.compute.admin.test_quotas.QuotasAdminTestXML.create_test_server_group
tempest.api.compute.admin.test_hosts_negative.HostsAdminNegativeTestXML.create_test_server_group
tempest.api.compute.admin.test_aggregates_negative.AggregatesAdminNegativeTestXML.create_test_server_group
tempest.api.compute.admin.test_instance_usage_audit_log_negative.InstanceUsageAuditLogNegativeTestXML.create_test_server_group
tempest.api.compute.admin.test_instance_usage_audit_log.InstanceUsageAuditLogTestXML.create_test_server_group
tempest.api.compute.admin.test_security_groups.SecurityGroupsTestAdminXML.create_test_server_group
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestXML.create_test_server_group
tempest.api.compute.admin.test_services.ServicesAdminTestXML.create_test_server_group

Common Error:
AttributeError: 'ServersClientXML' object has no attribute 'create_test_server_group'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401430

Title:
  compute create_test_server_group fails for each tempest api test in
  compute

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova version : 2014.2.1

  tempest compute api test cases fail, throwing AttributeError.

  Sample compute test cases:
  
tempest.api.compute.admin.test_hypervisor_negative.HypervisorAdminNegativeTestXML.create_test_server_group
  
tempest.api.compute.admin.test_quotas.QuotaClassesAdminTestXML.create_test_server_group
  
tempest.api.compute.admin.test_quotas.QuotasAdminTestXML.create_test_server_group
  
tempest.api.compute.admin.test_hosts_negative.HostsAdminNegativeTestXML.create_test_server_group
  
tempest.api.compute.admin.test_aggregates_negative.AggregatesAdminNegativeTestXML.create_test_server_group
  
tempest.api.compute.admin.test_instance_usage_audit_log_negative.InstanceUsageAuditLogNegativeTestXML.create_test_server_group
  
tempest.api.compute.admin.test_instance_usage_audit_log.InstanceUsageAuditLogTestXML.create_test_server_group
  
tempest.api.compute.admin.test_security_groups.SecurityGroupsTestAdminXML.create_test_server_group
  
tempest.api.compute.admin.test_servers_negative.ServersAdminNegativeTestXML.create_test_server_group
  
tempest.api.compute.admin.test_services.ServicesAdminTestXML.create_test_server_group

  Common Error:
  AttributeError: 'ServersClientXML' object has no attribute 'create_test_server_group'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401424] [NEW] Enable test_migration

2014-12-11 Thread Ann Kamyshnikova
Public bug reported:

After the split, a number of tables that don't have any models were
left in the neutron database. The test should be improved to skip
these tables from checking.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401424

Title:
  Enable test_migration

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  After the split, a number of tables that don't have any models were
  left in the neutron database. The test should be improved to skip
  these tables from checking.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp