[Yahoo-eng-team] [Bug 1441453] [NEW] Image members CRUD doesn't generate notifications which will impact Catalog Index service by not having latest changes to Image memberships

2015-04-08 Thread Lakshmi N Sampath
Public bug reported:

Image members CRUD operations don't generate notifications, which impacts
the Catalog Index service by leaving it without the latest changes to image
memberships.

** Affects: glance
 Importance: Undecided
 Assignee: Lakshmi N Sampath (lakshmi-sampath)
 Status: In Progress


** Tags: kilo-rc-potential

** Changed in: glance
 Assignee: (unassigned) => Lakshmi N Sampath (lakshmi-sampath)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1441453

Title:
  Image members CRUD doesn't generate notifications which will impact
  Catalog Index service by not having latest changes to Image
  memberships

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  Image members CRUD operations don't generate notifications, which impacts
  the Catalog Index service by leaving it without the latest changes to
  image memberships.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1441453/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421863] Re: Can not find policy directory: policy.d spams the logs

2015-04-08 Thread yuntongjin
** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
 Assignee: (unassigned) => yuntongjin (yuntongjin)

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
 Assignee: (unassigned) => yuntongjin (yuntongjin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421863

Title:
  Can not find policy directory: policy.d spams the logs

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  In Progress
Status in devstack - openstack dev environments:
  In Progress
Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Triaged
Status in Oslo Policy:
  Fix Released
Status in Openstack Database (Trove):
  New

Bug description:
  This hits over 118 million times in 24 hours in Jenkins runs:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2FuIG5vdCBmaW5kIHBvbGljeSBkaXJlY3Rvcnk6IHBvbGljeS5kXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMzg2Njk0MTcxOH0=

  We can probably just change something in devstack to avoid this.
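  One devstack-side option, sketched (the path is an assumption; the idea
  is simply to make the configured policy-directory lookup succeed so the
  warning stops firing):

    # create the directory the policy code is looking for (path assumed)
    sudo mkdir -p /etc/nova/policy.d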

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1421863/+subscriptions



[Yahoo-eng-team] [Bug 1441522] [NEW] Downtime for one of nodes with memcached causes delays in Horizon

2015-04-08 Thread Vlad Okhrimenko
Public bug reported:

Horizon uses memcached servers for caching and it connects to all of
them directly. So if one of them is not responding, it may lead to
delays in Horizon operations.

Workaround:

1) Edit the /etc/openstack-dashboard/local_settings file and temporarily
remove the problem controller's IP:PORT from the LOCATION line in the
CACHES structure:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.3:11211;127.0.0.5:11211;127.0.0.6:11211',
    },
}

2) Restart apache web server
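For example (the service name varies by distribution; both commands shown
for reference):

    sudo service apache2 restart    # Debian/Ubuntu
    sudo systemctl restart httpd    # RHEL/CentOS/Fedora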

** Affects: horizon
 Importance: Undecided
 Assignee: Vlad Okhrimenko (vokhrimenko)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Vlad Okhrimenko (vokhrimenko)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441522

Title:
  Downtime for one of nodes with memcached causes delays in Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon uses memcached servers for caching and it connects to all of
  them directly. So if one of them is not responding, it may lead to
  delays in Horizon operations.

  Workaround:

  1) Edit the /etc/openstack-dashboard/local_settings file and temporarily
  remove the problem controller's IP:PORT from the LOCATION line in the
  CACHES structure:

  CACHES = {
      'default': {
          'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
          'LOCATION': '127.0.0.3:11211;127.0.0.5:11211;127.0.0.6:11211',
      },
  }

  2) Restart apache web server

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441522/+subscriptions



[Yahoo-eng-team] [Bug 1441576] [NEW] LoadBalancer not opening in horizon after network is deleted

2015-04-08 Thread senthilmageswaran
Public bug reported:

Load Balancer fails to open in Horizon after deleting the assigned
network.

Steps:
1) Create a network and a subnetwork
2) Create a pool in Load Balancer and assign a subnet
3) Delete the network assigned to the pool
4) The Load Balancer panel fails to open from Horizon.

But the Load Balancer panel opens again after deleting the
loadbalancer-pool from the CLI (see the example below).
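The CLI workaround mentioned above would look like this (LBaaS v1 client
command; the pool id is a placeholder):

    neutron lb-pool-delete <pool-id>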

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: horizon loadbalancer

** Attachment added: "Attached image contains the screen shot of horizon
error while opening LB"
   https://bugs.launchpad.net/bugs/1441576/+attachment/4369318/+files/lb_horizon_issue.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441576

Title:
  LoadBalancer not opening in horizon after network is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Load Balancer fails to open in Horizon after deleting the assigned
  network.

  Steps:
  1) Create a network and a subnetwork
  2) Create a pool in Load Balancer and assign a subnet
  3) Delete the network assigned to the pool
  4) The Load Balancer panel fails to open from Horizon.

  But the Load Balancer panel opens again after deleting the
  loadbalancer-pool from the CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441576/+subscriptions



[Yahoo-eng-team] [Bug 1441591] [NEW] Refactor project overview tests

2015-04-08 Thread Bradley Jones
Public bug reported:

Projectoverviewtests.py should use decorators for creating mock stubs
and have less duplicated code
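A minimal sketch of the decorator style being asked for, using Horizon's
existing test helper (the stubbed call, fixtures, and URL here are
illustrative assumptions, not the real test bodies):

    from django import http
    from mox import IsA  # noqa

    from openstack_dashboard import api
    from openstack_dashboard.test import helpers as test


    class ProjectOverviewTests(test.TestCase):

        # The decorator stubs api.nova.usage_get for this test only,
        # replacing repeated self.mox.StubOutWithMock() boilerplate.
        @test.create_stubs({api.nova: ('usage_get',)})
        def test_overview(self):
            api.nova.usage_get(IsA(http.HttpRequest), self.tenant.id,
                               IsA(object), IsA(object)) \
                .AndReturn(api.nova.NovaUsage(self.usages.first()))
            self.mox.ReplayAll()

            res = self.client.get('/project/')
            self.assertEqual(200, res.status_code)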

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441591

Title:
  Refactor project overview tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Projectoverviewtests.py should use decorators for creating mock
  stubs and have less duplicated code

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441591/+subscriptions



[Yahoo-eng-team] [Bug 1439827] Re: No results returned when listing images passing in the 'created_at' property as a filter

2015-04-08 Thread Kamil Rykowski
Not able to reproduce it. I have an image whose created_at field is set to
2015-04-08T07:57:08Z, and I can easily get it using

http://localhost:9292/v2/images?created_at=2015-04-08T07:57:08Z

or

http://localhost:9292/v2/images?created_at=2015-04-08T07:57:08

and even

http://localhost:9292/v2/images?created_at=2015-04-08-07:57:08

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1439827

Title:
  No results returned when listing images passing in the 'created_at'
  property as a filter

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Overview:
  When making the list images request, passing in a datetime for the created_at 
property, there are no results returned.

  Steps to reproduce:
  1) Create an image, note the image's created_at property
  2) Perform a list images request passing in the created_at property via
  GET /images?created_at=<datetime>
  3) Notice no results are returned

  Expected:
  Return a list of images with the specified created_at datetime

  Actual:
  No results are returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1439827/+subscriptions



[Yahoo-eng-team] [Bug 1438826] Re: Able to update an image passing id, file, location, schema, and self

2015-04-08 Thread Kamil Rykowski
** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1438826

Title:
  Able to update an image passing id, file, location, schema, and self

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Overview:
  A user is able to update an image via PATCH /images/<image_id> when
  passing in id, file, location, schema, and self. However, these are
  restricted image properties.

  Steps to reproduce:
  1) Create an image
  2) Update the image via PATCH /images/<image_id> and pass in id, file,
  location, schema, and self
  3) Notice the request is accepted and the image is updated

  Expected:
  Because these properties are restricted and should be generated
  automatically, a request that attempts to update them should not be
  allowed.

  Actual:
  The request is accepted and the image is updated
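  An example of the request from step 2 (the image id and value are
  placeholders; the JSON-patch media type is the one defined by the
  Images v2 API):

    PATCH /v2/images/<image_id> HTTP/1.1
    Content-Type: application/openstack-images-v2.1-json-patch

    [{"op": "replace", "path": "/id", "value": "<some-other-uuid>"}]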

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1438826/+subscriptions



[Yahoo-eng-team] [Bug 1441393] Re: keystone unit tests fail with pymongo 3.0

2015-04-08 Thread Alan Pevec
** Changed in: keystone
   Importance: Undecided => High

** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => High

** Changed in: keystone/juno
   Importance: Undecided => High

** Changed in: keystone/icehouse
   Status: New => Confirmed

** Changed in: keystone/juno
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441393

Title:
  Keystone and Ceilometer unit tests fail with pymongo 3.0

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Confirmed
Status in Ceilometer juno series:
  Confirmed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  In Progress
Status in Keystone juno series:
  In Progress

Bug description:
  
  pymongo 3.0 was released 2015-04-07. This causes keystone tests to fail:

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 357, in test_correct_read_preference
      region.set(random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 345, in test_incorrect_read_preference
      random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 168, in client
      self.api.get_cache_collection()
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'
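  For reference, a guarded version of the failing call, as a sketch (the
  pymongo 3.x branch is an assumption inferred from the AttributeError
  above, and the expected name format is assumed too):

    import pymongo

    def read_pref_from_name(name):
        # pymongo < 3.0 exposed this helper; it was removed in 3.0
        if hasattr(pymongo.read_preferences, 'mongos_enum'):
            return pymongo.read_preferences.mongos_enum(name)
        # pymongo >= 3.0: modes are attributes of pymongo.ReadPreference,
        # e.g. 'SECONDARY_PREFERRED' (name format assumed here)
        return getattr(pymongo.ReadPreference, name)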

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1441393/+subscriptions



[Yahoo-eng-team] [Bug 1441393] Re: keystone unit tests fail with pymongo 3.0

2015-04-08 Thread Alan Pevec
Ceilometer WIP fix https://review.openstack.org/171458

** Changed in: keystone/juno
   Status: Confirmed => In Progress

** Changed in: keystone/juno
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Summary changed:

- keystone unit tests fail with pymongo 3.0
+ Keystone and Ceilometer unit tests fail with pymongo 3.0

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: ceilometer/icehouse
   Importance: Undecided
   Status: New

** Also affects: ceilometer/juno
   Importance: Undecided
   Status: New

** Changed in: ceilometer
   Status: New => Confirmed

** Changed in: ceilometer/icehouse
   Status: New => Confirmed

** Changed in: ceilometer/juno
   Status: New => Confirmed

** Changed in: ceilometer
   Importance: Undecided => High

** Changed in: ceilometer/icehouse
   Importance: Undecided => High

** Changed in: ceilometer/juno
   Importance: Undecided => High

** Changed in: ceilometer
 Assignee: (unassigned) => ZhiQiang Fan (aji-zqfan)

** Changed in: ceilometer
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441393

Title:
  Keystone and Ceilometer unit tests fail with pymongo 3.0

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Confirmed
Status in Ceilometer juno series:
  Confirmed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  In Progress
Status in Keystone juno series:
  In Progress

Bug description:
  
  pymongo 3.0 was released 2015-04-07. This causes keystone tests to fail:

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 357, in test_correct_read_preference
      region.set(random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

  Traceback (most recent call last):
    File "keystone/tests/unit/test_cache_backend_mongo.py", line 345, in test_incorrect_read_preference
      random_key, "dummyValue10")
    ...
    File "keystone/common/cache/backends/mongo.py", line 168, in client
      self.api.get_cache_collection()
    File "keystone/common/cache/backends/mongo.py", line 363, in get_cache_collection
      self.read_preference = pymongo.read_preferences.mongos_enum(
  AttributeError: 'module' object has no attribute 'mongos_enum'

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1441393/+subscriptions



[Yahoo-eng-team] [Bug 1439861] Re: encrypted iSCSI volume attach fails to attach when multipath-tools installed

2015-04-08 Thread Duncan Thomas
Marked as 'incomplete' rather than 'invalid' since we're waiting on logs
to confirm one way or another

** Changed in: nova
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439861

Title:
  encrypted iSCSI volume attach fails to attach when multipath-tools
  installed

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  An error was occurring in a devstack setup with nova version:

  commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
  Merge "instance termination with update_dns_entries set fails"

  A volume-type encrypted with CryptsetupEncryptor was being used.  A
  volume was created using this volume-type and an attempt to attach it
  to an instance was made.  This error also occurred when using the
  LuksEncryptor for the volume-type.

  The following error occurred in n-cpu during attachment:

  Stack Trace:

  2015-04-02 13:39:54.397 ERROR nova.virt.block_device [req-a8220e7d-8d1e-459d-be1f-4ddd65b7ff66 admin admin] [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Driver failed to attach volume 81c5f69a-9b01-4fc0-a105-be9d3c966aaf at /dev/vdb
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Traceback (most recent call last):
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/virt/block_device.py", line 251, in attach
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     device_type=self['device_type'], encryption=encryption)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1064, in attach_volume
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     self._disconnect_volume(connection_info, disk_dev)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     six.reraise(self.type_, self.value, self.tb)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1051, in attach_volume
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     encryptor.attach_volume(context, **encryption)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 93, in attach_volume
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     self._open_volume(passphrase, **kwargs)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/volume/encryptors/cryptsetup.py", line 78, in _open_volume
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     check_exit_code=True, run_as_root=True)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/opt/stack/nova/nova/utils.py", line 206, in execute
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     return processutils.execute(*cmd, **kwargs)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 233, in execute
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5]     cmd=sanitized_cmd)
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] ProcessExecutionError: Unexpected error while running command.
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup create --key-file=- ip-10.50.3.20:3260-iscsi-iqn.2003-10.com.lefthandnetworks:vsa-12-721:853:volume-81c5f69a-9b01-4fc0-a105-be9d3c966aaf-lun-0 /dev/sdb
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Exit code: 5
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stdout: u''
  2015-04-02 13:39:54.397 TRACE nova.virt.block_device [instance: 41d0c192-a1ce-45eb-a5ff-bcb96ec0d8e5] Stderr: u'Cannot use device /dev/sdb which is in

[Yahoo-eng-team] [Bug 1441488] [NEW] stale db objects due to session reuse

2015-04-08 Thread YAMAMOTO Takashi
Public bug reported:

Much of the neutron code seems to be written without considering
the reuse of a session and its associated object cache.

It's a problem for UTs, where the admin context (and thus its session) is
reused heavily; here's an example of the failure [1].
But non-test code may be affected as well.

Switching to expire_on_commit=True might improve the situation (see the
sketch after the traceback below), but it doesn't entirely solve the
problem because some read-only queries are done without explicit
transactions.

[1] http://logs.openstack.org/82/158182/5/check/gate-neutron-python27/402b450/
Traceback (most recent call last):
  File "neutron/tests/unit/plugins/ml2/drivers/l2pop/test_mech_driver.py", line 864, in test_host_changed_twice
    mock.ANY, 'remove_fdb_entries', expected)
  File "/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py", line 831, in assert_called_with
    raise AssertionError('Expected call: %s\nNot called' % (expected,))
AssertionError: Expected call: _notification_fanout(ANY, 'remove_fdb_entries', {u'58de62e4-5001-485b-a334-b65c3da97745': {'segment_id': 1, 'ports': {'20.0.0.1': [('00:00:00:00:00:00', '0.0.0.0'), PortInfo(mac_address=u'12:34:56:78:fa:4a', ip_address=u'10.0.0.2')]}, 'network_type': 'vxlan'}})
Not called
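A minimal standalone illustration of the expire_on_commit setting mentioned
above (plain SQLAlchemy; the model and values are made up for the sketch):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = Column(Integer, primary_key=True)
        host = Column(String(64))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    # With expire_on_commit=False (the behavior the report describes),
    # objects cached on a long-lived session keep whatever attribute values
    # they had, even if the rows change elsewhere.  With
    # expire_on_commit=True, attributes are expired at commit time and
    # reloaded from the database on next access.
    Session = sessionmaker(bind=engine, expire_on_commit=True)
    session = Session()
    session.add(Port(id=1, host='host-a'))
    session.commit()
    port = session.query(Port).get(1)  # reloaded fresh after the commit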

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441488

Title:
  stale db objects due to session reuse

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Much of the neutron code seems to be written without considering
  the reuse of a session and its associated object cache.

  It's a problem for UTs, where the admin context (and thus its session) is
  reused heavily; here's an example of the failure [1].
  But non-test code may be affected as well.

  Switching to expire_on_commit=True might improve the situation,
  but it doesn't entirely solve the problem because some read-only
  queries are done without explicit transactions.

  [1] http://logs.openstack.org/82/158182/5/check/gate-neutron-python27/402b450/
  Traceback (most recent call last):
    File "neutron/tests/unit/plugins/ml2/drivers/l2pop/test_mech_driver.py", line 864, in test_host_changed_twice
      mock.ANY, 'remove_fdb_entries', expected)
    File "/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py", line 831, in assert_called_with
      raise AssertionError('Expected call: %s\nNot called' % (expected,))
  AssertionError: Expected call: _notification_fanout(ANY, 'remove_fdb_entries', {u'58de62e4-5001-485b-a334-b65c3da97745': {'segment_id': 1, 'ports': {'20.0.0.1': [('00:00:00:00:00:00', '0.0.0.0'), PortInfo(mac_address=u'12:34:56:78:fa:4a', ip_address=u'10.0.0.2')]}, 'network_type': 'vxlan'}})
  Not called

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441488/+subscriptions



[Yahoo-eng-team] [Bug 1441523] [NEW] changing flavor details on running instances will result in errors popping up for users

2015-04-08 Thread Matthias Runge
Public bug reported:

1. Install/use an all-in-one w/ demo project
2. As admin, create a flavor and assign it to the demo project
3. Log out as admin and log in as demo (must not have admin privs)
4. As demo, launch an instance on this flavor in the demo project
5. Log out as demo and log in as admin
6. As admin, change the amount of RAM for the flavor
7. Log out as admin, log in as demo
8. Check the instances page; the size should show "Not available" and there
should be an error in the upper right saying "Error: Unable to retrieve
instance size information."

The error is only shown for non-admin users.

What happens here:
when editing flavors, nova silently deletes the old flavor, creating a new
one. Running instances are not touched. The old flavor is marked as
deleted, and normal users cannot get specifics of that flavor any more.
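A hypothetical Horizon-side fallback for this situation (the helper name
and label are illustrative; novaclient's flavors.get and NotFound are real):

    from novaclient import exceptions as nova_exceptions

    def flavor_size_label(nova, flavor_id):
        """Degrade gracefully when the instance's flavor was deleted."""
        try:
            flavor = nova.flavors.get(flavor_id)
        except nova_exceptions.NotFound:
            # deleted flavors are hidden from normal users
            return "Not available"
        return "%s MB RAM" % flavor.ram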

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441523

Title:
  changing flavor details on running instances will result in errors
  popping up for users

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Install/use an all-in-one w/ demo project
  2. As admin, create a flavor and assign it to the demo project
  3. Log out as admin and log in as demo (must not have admin privs)
  4. As demo, launch an instance on this flavor in the demo project
  5. Log out as demo and log in as admin
  6. As admin, change the amount of RAM for the flavor
  7. Log out as admin, log in as demo
  8. Check the instances page; the size should show "Not available" and
  there should be an error in the upper right saying "Error: Unable to
  retrieve instance size information."

  The error is only shown for non-admin users.

  What happens here:
  when editing flavors, nova silently deletes the old flavor, creating a
  new one. Running instances are not touched. The old flavor is marked as
  deleted, and normal users cannot get specifics of that flavor any more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441523/+subscriptions



[Yahoo-eng-team] [Bug 1441546] [NEW] ChanceScheduler: there is no dealing with the force_host/force_node options

2015-04-08 Thread zhangtralon
Public bug reported:

The ChanceScheduler does not handle the force_host/force_node options,
which means those options don't work. (A sketch of the missing handling
follows.)
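A minimal sketch of what honoring force_hosts could look like in a
chance-style scheduler (the filter_properties keys follow the filter
scheduler's convention; everything else here is illustrative):

    import random

    def _filter_hosts(hosts, filter_properties):
        force_hosts = filter_properties.get('force_hosts', [])
        if force_hosts:
            # restrict the candidate set to the forced hosts
            hosts = [h for h in hosts if h in force_hosts]
        ignore_hosts = filter_properties.get('ignore_hosts', [])
        return [h for h in hosts if h not in ignore_hosts]

    def select_host(hosts, filter_properties):
        candidates = _filter_hosts(hosts, filter_properties)
        if not candidates:
            raise RuntimeError('No valid host was found.')
        return random.choice(candidates)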

** Affects: nova
 Importance: Undecided
 Assignee: zhangtralon (zhangchunlong1)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => zhangtralon (zhangchunlong1)

** Summary changed:

- ChanceScheduler: there is no deal with the force_host/force_node options 
+ ChanceScheduler: there is no dealing with the force_host/force_node options

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441546

Title:
  ChanceScheduler: there is no dealing with the force_host/force_node
  options

Status in OpenStack Compute (Nova):
  New

Bug description:
  The ChanceScheduler does not handle the force_host/force_node options,
  which means those options don't work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441546/+subscriptions



[Yahoo-eng-team] [Bug 1441511] [NEW] asterisk shown for admin state in network create and update forms

2015-04-08 Thread Masco Kaliyamoorthy
Public bug reported:

In network create and update forms the admin_state parameter is optional,
but the mandatory mark (asterisk) is shown.
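The usual Django-level fix, as a sketch (the form and field names are
illustrative; Horizon's actual forms live in openstack_dashboard and use
horizon.forms):

    from django import forms

    class CreateNetwork(forms.Form):
        # required=False is what suppresses the asterisk rendered next to
        # mandatory fields
        admin_state_up = forms.BooleanField(label="Admin State",
                                            required=False,
                                            initial=True)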

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441511

Title:
  asterisk shown for admin state in network create and update forms

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In network create and update forms the admin_state parameter is optional,
  but the mandatory mark (asterisk) is shown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441511/+subscriptions



[Yahoo-eng-team] [Bug 1441501] [NEW] gate-neutron-fwaas-python27 fails with ImportError: cannot import name test_db_plugin

2015-04-08 Thread Thierry Carrez
Public bug reported:

Currently on master all gate-neutron-fwaas-python27 tests fail with the
following error:

2015-04-08 03:05:00.445 | Failed to import test module: neutron_fwaas.tests.unit.db.firewall.test_db_firewall
2015-04-08 03:05:00.446 | Traceback (most recent call last):
2015-04-08 03:05:00.446 |   File "/home/jenkins/workspace/gate-neutron-fwaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 445, in _find_test_path
2015-04-08 03:05:00.446 |     module = self._get_module_from_name(name)
2015-04-08 03:05:00.446 |   File "/home/jenkins/workspace/gate-neutron-fwaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 384, in _get_module_from_name
2015-04-08 03:05:00.446 |     __import__(name)
2015-04-08 03:05:00.446 |   File "neutron_fwaas/tests/unit/db/firewall/test_db_firewall.py", line 35, in <module>
2015-04-08 03:05:00.446 |     from neutron_fwaas.tests import base
2015-04-08 03:05:00.446 |   File "neutron_fwaas/tests/base.py", line 18, in <module>
2015-04-08 03:05:00.447 |     from neutron.tests.unit import test_db_plugin
2015-04-08 03:05:00.447 | ImportError: cannot import name test_db_plugin

See example at
http://logs.openstack.org/39/169239/2/check/gate-neutron-fwaas-python27/0346944/

This blocks merging of version bumps for Kilo release.

** Affects: neutron
 Importance: Critical
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441501

Title:
  gate-neutron-fwaas-python27 fails with ImportError: cannot import name
  test_db_plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently on master all gate-neutron-fwaas-python27 tests fail with
  the following error:

  2015-04-08 03:05:00.445 | Failed to import test module: neutron_fwaas.tests.unit.db.firewall.test_db_firewall
  2015-04-08 03:05:00.446 | Traceback (most recent call last):
  2015-04-08 03:05:00.446 |   File "/home/jenkins/workspace/gate-neutron-fwaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 445, in _find_test_path
  2015-04-08 03:05:00.446 |     module = self._get_module_from_name(name)
  2015-04-08 03:05:00.446 |   File "/home/jenkins/workspace/gate-neutron-fwaas-python27/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 384, in _get_module_from_name
  2015-04-08 03:05:00.446 |     __import__(name)
  2015-04-08 03:05:00.446 |   File "neutron_fwaas/tests/unit/db/firewall/test_db_firewall.py", line 35, in <module>
  2015-04-08 03:05:00.446 |     from neutron_fwaas.tests import base
  2015-04-08 03:05:00.446 |   File "neutron_fwaas/tests/base.py", line 18, in <module>
  2015-04-08 03:05:00.447 |     from neutron.tests.unit import test_db_plugin
  2015-04-08 03:05:00.447 | ImportError: cannot import name test_db_plugin

  See example at
  http://logs.openstack.org/39/169239/2/check/gate-neutron-fwaas-python27/0346944/

  This blocks merging of version bumps for Kilo release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441501/+subscriptions



[Yahoo-eng-team] [Bug 1434370] Re: common/README still references openstack-common

2015-04-08 Thread Kamil Rykowski
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: manila
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1434370

Title:
  common/README still references openstack-common

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Manila:
  In Progress
Status in Openstack Database (Trove):
  In Progress

Bug description:
  The README under openstack/common references openstack-common, but the
  link (correctly) points to oslo-incubator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1434370/+subscriptions



[Yahoo-eng-team] [Bug 1434370] Re: common/README still references openstack-common

2015-04-08 Thread Kamil Rykowski
** Changed in: glance
 Assignee: Darren Birkett (darren-birkett) => Kamil Rykowski (kamil-rykowski)

** Changed in: glance
   Status: New => In Progress

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: cinder
   Status: New => In Progress

** Changed in: keystone
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1434370

Title:
  common/README still references openstack-common

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The README under openstack/common references openstack-common, but the
  link (correctly) points to oslo-incubator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1434370/+subscriptions



[Yahoo-eng-team] [Bug 1434370] Re: common/README still references openstack-common

2015-04-08 Thread Kamil Rykowski
** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

** Changed in: trove
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: heat
 Assignee: (unassigned) => Kamil Rykowski (kamil-rykowski)

** Changed in: trove
   Status: New => In Progress

** Changed in: heat
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1434370

Title:
  common/README still references openstack-common

Status in Cinder:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Openstack Database (Trove):
  In Progress

Bug description:
  The README under openstack/common references openstack-common, but the
  link (correctly) points to oslo-incubator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1434370/+subscriptions



[Yahoo-eng-team] [Bug 1441688] [NEW] [data processing] Unable to run Spark jobs

2015-04-08 Thread Chad Roberts
Public bug reported:

*high prio in my opinion*

It is currently not possible to run a Spark job via the data processing
UI.

On the launch form, there are 2 fields, input/output data source, which
have their default values set to (None, None), and they are hidden on
that form since they are not pertinent to Spark.

I think that the new django 1.7 validation strips the None value and
treats it as empty, and therefore not present, and generates an error for
those fields saying that they are required, so the form submission is
rejected.  This is not evident to the user since those fields remain
hidden.

I think a simple fix would be to use another value to indicate that no
data source is the choice (which is valid for some job types), as in the
sketch below.
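One way that fix could look, as a sketch (the form and field names are
illustrative, not the actual data processing forms):

    from django import forms

    NO_DATA_SOURCE = 'none'  # a real string survives Django 1.7 cleaning

    class JobLaunchForm(forms.Form):
        # A hidden field whose initial value is the Python object None
        # fails validation under Django 1.7, because None is treated as
        # "no value supplied".  A string sentinel keeps the field valid
        # while still meaning "no data source selected".
        input_source = forms.ChoiceField(
            choices=[(NO_DATA_SOURCE, 'None')],
            initial=NO_DATA_SOURCE,
            widget=forms.HiddenInput())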

** Affects: horizon
 Importance: Undecided
 Assignee: Chad Roberts (croberts)
 Status: New


** Tags: sahara

** Changed in: horizon
 Assignee: (unassigned) = Chad Roberts (croberts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441688

Title:
  [data processing] Unable to run Spark jobs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  *high prio in my opinion*

  It is currently not possible to run a Spark job via the data
  processing UI.

  On the launch form, there are 2 fields, input/output data source, which
  have their default values set to (None, None), and they are hidden on
  that form since they are not pertinent to Spark.

  I think that the new django 1.7 validation strips the None value and
  treats it as empty, and therefore not present, and generates an error
  for those fields saying that they are required, so the form submission
  is rejected.  This is not evident to the user since those fields
  remain hidden.

  I think a simple fix would be to use another value to indicate that no
  data source is the choice (which is valid for some job types).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441688/+subscriptions



[Yahoo-eng-team] [Bug 1441745] [NEW] Lots of gate failures with not enough hosts available

2015-04-08 Thread David Kranz
Public bug reported:

Thousands of matches in the last two days:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

The following is from this log file:

http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-neutron-full/1f66320/logs/screen-n-cond.txt.gz


For the few I looked at, there is an error in the n-cond log:

2015-04-08 07:20:15.207 WARNING nova.scheduler.utils [req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts
available.

--

That makes it sound like the problem is that the deployed devstack does
not have enough capacity. But right before that I see:

2015-04-08 07:20:15.014 ERROR nova.conductor.manager [req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 AggregatesAdminTestJSON-279542170] Instance update attempted for 'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
2015-04-08 07:20:15.149 ERROR nova.scheduler.utils [req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 AggregatesAdminTestJSON-279542170] [instance: 745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: devstack-trusty-rax-dfw-1769605.slave.openstack.org (node devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent call last):\n', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 2193, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 2336, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'unexpected update keyword \\\'availability_zone\\\'\\nTraceback (most recent call last):\\n\\n  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner\\n    return func(*args, **kwargs)\\n\\n  File "/opt/stack/new/nova/nova/conductor/manager.py", line 125, in instance_update\\n    raise KeyError("unexpected update keyword \\\'%s\\\'" % key)\\n\\nKeyError: u"unexpected update keyword \\\'availability_zone\\\'"\\n\'\n']

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with not enough hosts available

Status in OpenStack Compute (Nova):
  New

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils [req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

    File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
      return func(*args, **kwargs)

    File "/opt/stack/new/nova/nova/scheduler/manager.py", line 86, in select_destinations
      filter_properties)

    File "/opt/stack/new/nova/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
      raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 

[Yahoo-eng-team] [Bug 1441764] [NEW] Cleanup TODO in glance/gateway.py for elasticsearch being unavailable

2015-04-08 Thread Matt Riedemann
Public bug reported:

This came up in this change
https://review.openstack.org/#/c/171279/4/glance/gateway.py to fix bug
1441239.  We are going to merge that change with a debug level log
message to avoid a string freeze exception in kilo, but then we need to
use this bug to close the TODO when liberty opens up.

** Affects: glance
 Importance: Medium
 Status: Triaged


** Tags: search

** Tags added: search

** Changed in: glance
 Milestone: None => liberty-1

** Changed in: glance
   Status: New => Triaged

** Changed in: glance
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1441764

Title:
  Cleanup TODO in glance/gateway.py for elasticsearch being unavailable

Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged

Bug description:
  This came up in this change
  https://review.openstack.org/#/c/171279/4/glance/gateway.py to fix bug
  1441239.  We are going to merge that change with a debug level log
  message to avoid a string freeze exception in kilo, but then we need
  to use this bug to close the TODO when liberty opens up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1441764/+subscriptions



[Yahoo-eng-team] [Bug 1441632] [NEW] FWaaS: Remove checks for bash scripts

2015-04-08 Thread Paul Michali
Public bug reported:

Like the change in Neutron (14408244), remove the check for bash scripts.

** Affects: neutron
 Importance: Low
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Paul Michali (pcm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441632

Title:
  FWaaS: Remove checks for bash scripts

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Like the change in Neutron (14408244), remove the check for bash
  scripts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441632/+subscriptions



[Yahoo-eng-team] [Bug 1441576] Re: LoadBalancer not opening in horizon after network is deleted

2015-04-08 Thread Sam Betts
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441576

Title:
  LoadBalancer not opening in horizon after network is deleted

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Load Balancer fails to open in Horizon, after deleting the Assigned
  Network

  Steps:
  1)Create Network and Subnetwork
  2)Create pool in Load Balancer and assign a subnet
  3)Delete the Network assigned to the Pool
  4)load balancer failed to open from horizon.
 
  But load balancer opens after deleting the loadbalancer-pool from CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441576/+subscriptions



[Yahoo-eng-team] [Bug 1429740] Re: Using DVR - Instance with a floating IP can't reach other instances connected to a different network

2015-04-08 Thread Oleg Bondarev
Seems I was facing bug 1438969 while reproducing this one. Once the
corresponding fix was merged, I am not able to reproduce; connectivity
between subnets is fine regardless of floating IPs.

** Changed in: neutron
   Importance: High => Undecided

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429740

Title:
  Using DVR - Instance with a floating IP can't reach other instances
  connected to a different network

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
  A distributed router has interfaces connected to two private networks and
  to an external network.
  Instances without a floating IP connected to network A can reach other
  instances connected to network B,
  but instances with a floating IP connected to network A can't reach other
  instances connected to network B.

  Version
  ===
  openstack-neutron-2014.2.2-1.el7ost.noarch
  python-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
  openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429740/+subscriptions



[Yahoo-eng-team] [Bug 1387539] Re: Failed to hard reboot lxc instance booted from image

2015-04-08 Thread Vladik Romanovsky
*** This bug is a duplicate of bug 1370590 ***
https://bugs.launchpad.net/bugs/1370590

** This bug has been marked a duplicate of bug 1370590
   Libvirt _create_domain_and_network calls missing disk_info

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387539

Title:
  Failed to hard reboot lxc instance booted from image

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When hard rebooting an LXC instance, the operation failed with the
  following trace log:
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     payload)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/exception.py", line 71, in wrapped
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 300, in decorated_function
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     pass
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 286, in decorated_function
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 350, in decorated_function
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 328, in decorated_function
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 316, in decorated_function
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/nova/nova/compute/manager.py", line 2943, in reboot_instance
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher     self._set_instance_obj_error_state(context, instance)
  2014-10-30 07:11:54.389 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
  2014-10-30 07:11:54.389 TRACE

[Yahoo-eng-team] [Bug 1441789] [NEW] VPNaaS: Confirm OpenSwan -- StrongSwan interop

2015-04-08 Thread Paul Michali
Public bug reported:

Some early testing showed a problem getting a VPN IPsec connection up
and passing traffic when using StrongSwan on one end and OpenSwan on
the other end (with the same, default, configuration). It worked fine
when the same Swan flavor was used on each end.

Need to investigate whether or not this works and, if it does not
work, research the root cause.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441789

Title:
  VPNaaS: Confirm OpenSwan -- StrongSwan interop

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Early testing showed a problem getting a VPN IPSec connection up and
  passing traffic when using StrongSwan on one end and OpenSwan on the
  other (with the same, default configuration on both ends). It worked
  fine when the same Swan flavor was used on each end.

  We need to investigate whether or not this works, and if it does not
  work, research the root cause.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441790] [NEW] Simplify and modernize model_query()

2015-04-08 Thread Henry Gessau
Public bug reported:

From zzzeek on IRC, 2015-04-08:

this thing:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L486

this is in a few places.   the model_query() for neutron is broken up
into these three awkward phases, and several of these plugins put an
unnecessary and expensive OUTER JOIN on all queries

this should be an INNER JOIN and only when the filter_hook is actually
in use

now it's hard for me to change this b.c. everyone will be like, it works great 
and nobody uses that thing so who cares
but i really want to fix up how we build queries to be cleaner, using newer 
techniques

there's a quick change we can make right there that will probably correct
the outerjoin, we can do query.join() right in the
_ml2_port_result_filter_hook for now
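
For readers skimming this, here is a minimal, self-contained SQLAlchemy
sketch (hypothetical Port/PortBinding models, not Neutron's real schema)
contrasting the unconditional OUTER JOIN with an INNER JOIN applied only
inside the filter hook:

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)

class PortBinding(Base):
    __tablename__ = 'portbindings'
    id = Column(Integer, primary_key=True)
    port_id = Column(Integer, ForeignKey('ports.id'))
    host = Column(String(255))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(bind=engine)

# Today's pattern: an OUTER JOIN bolted onto every query up front,
# whether or not any filter hook needs the joined table.
q_outer = session.query(Port).outerjoin(
    PortBinding, Port.id == PortBinding.port_id)

# Suggested pattern: an INNER JOIN applied inside the filter hook,
# only when the hook actually filters on the joined table.
def port_result_filter_hook(query, host):
    return query.join(
        PortBinding, Port.id == PortBinding.port_id).filter(
        PortBinding.host == host)

q_inner = port_result_filter_hook(session.query(Port), 'compute-1')
print(q_outer)  # ... FROM ports LEFT OUTER JOIN portbindings ...
print(q_inner)  # ... FROM ports JOIN portbindings ... WHERE ...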

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441790

Title:
  Simplify and modernize model_query()

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  From zzzeek on IRC, 2015-04-08:

  this thing:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L486

  this is in a few places.   the model_query() for neutron is broken up
  into these three awkward phases, and several of these plugins put an
  unnecessary and expensive OUTER JOIN on all queries

  this should be an INNER JOIN and only when the filter_hook is actually
  in use

  now it's hard for me to change this b.c. everyone will be like, it works great 
  and nobody uses that thing so who cares
  but i really want to fix up how we build queries to be cleaner, using newer 
  techniques

  there's a quick change we can make right there that will probably
  correct the outerjoin, we can do query.join() right in the
  _ml2_port_result_filter_hook for now

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441790/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441788] [NEW] VPNaaS: Fedora support for StrongSwan

2015-04-08 Thread Paul Michali
Public bug reported:

In early testing, I was unable to create a VPN connection using Fedora
Release 12 and the StrongSwan driver.

I found out about these issues from the StrongSwan IRC folks:

A) Unlike Ubuntu, both StrongSwan and LibreSwan can be installed at once
B) Fedora uses the process name strongswan, whereas Ubuntu uses ipsec.  The 
ipsec process is for LibreSwan, under Fedora.
C) There may be some sensitivity to tabs in the config file, for Fedora (use 
only one?)


The StrongSwan folks also mentioned an issue where one needs a kernel 
with support for XFRM and namespaces. They stated that there can be problems 
with passing traffic. They indicated it was a kernel issue and therefore 
applied to all Swan flavors.

Research is needed to see if both Ubuntu and Fedora kernels have this
support. Currently, we don't see an issue with Ubuntu, but should verify
the kernel for all target operating systems.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441788

Title:
  VPNaaS: Fedora support for StrongSwan

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In early testing, I was unable to create a VPN connection using Fedora
  Release 12 and the StrongSwan driver.

  I found out about these issues from the StrongSwan IRC folks:

  A) Unlike Ubuntu, both StrongSwan and LibreSwan can be installed at once
  B) Fedora uses the process name strongswan, whereas Ubuntu uses ipsec.  
  The ipsec process is for LibreSwan, under Fedora.
  C) There may be some sensitivity to tabs in the config file, for Fedora (use 
  only one?)

  
  The StrongSwan folks also mentioned an issue where one needs a kernel 
  with support for XFRM and namespaces. They stated that there can be problems 
  with passing traffic. They indicated it was a kernel issue and therefore 
  applied to all Swan flavors.

  Research is needed to see if both Ubuntu and Fedora kernels have this
  support. Currently, we don't see an issue with Ubuntu, but should
  verify the kernel for all target operating systems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441793] [NEW] _create_subnet_from_implicit_pool assumes external network

2015-04-08 Thread Aaron Rosen
Public bug reported:

2015-04-08 11:28:07.968 ERROR neutron.api.v2.resource 
[req-61265bfc-9503-452d-80e2-022c449feee0 admin 
f4c311581f4d4d138d87012d00ae3b24] create failed
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 83, in resource
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 461, in create
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1348, in 
create_subnet
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource return 
self._create_subnet_from_implicit_pool(context, subnet)
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1285, in 
_create_subnet_from_implicit_pool
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource if network.external:
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource AttributeError: 'Network' 
object has no attribute 'external'
2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441793

Title:
  _create_subnet_from_implicit_pool assumes external network

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  2015-04-08 11:28:07.968 ERROR neutron.api.v2.resource 
[req-61265bfc-9503-452d-80e2-022c449feee0 admin 
f4c311581f4d4d138d87012d00ae3b24] create failed
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/api/v2/base.py, line 461, in create
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1348, in 
create_subnet
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource return 
self._create_subnet_from_implicit_pool(context, subnet)
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource   File 
/opt/stack/neutron/neutron/db/db_base_plugin_v2.py, line 1285, in 
_create_subnet_from_implicit_pool
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource if network.external:
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource AttributeError: 
'Network' object has no attribute 'external'
  2015-04-08 11:28:07.968 TRACE neutron.api.v2.resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441745] Re: Lots of gate failures with not enough hosts available

2015-04-08 Thread David Kranz
While it is true that this particular sub-case of the bug title has only
one patch responsible, there are many other patches shown in logstash
that could not possibly cause this problem but which experience it.  So
this seems to be a problem that can randomly impact any patch. Though it
may be difficult to find, it seems to me there is a bug here. The other
possibility is that tempest is trying to create too many VMs; I'm not
sure how many tiny VMs our devstack is expected to support.

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with not enough hosts available

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  New

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner
  return func(*args, **kwargs)

File /opt/stack/new/nova/nova/scheduler/manager.py, line 86, in 
select_destinations
  filter_properties)

File /opt/stack/new/nova/nova/scheduler/filter_scheduler.py, line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
  2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File /opt/stack/new/nova/nova/compute/manager.py, line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
/opt/stack/new/nova/nova/compute/manager.py, line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'uunexpected update 
keyword \\\'availability_zone\\\'\\nTraceback (most recent call last):\\n\\n  
File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner\\nreturn func(*args, **kwargs
 )\\n\\n  File /opt/stack/new/nova/nova/conductor/manager.py, line 125, in 
instance_update\\nraise KeyError(unexpected update keyword \\\'%s\\\' % 
key)\\n\\nKeyError: uunexpected update keyword 
\\\'availability_zone\\\'\\n\'\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441794] [NEW] Removing unused variables in javascripts

2015-04-08 Thread Thai Tran
Public bug reported:

Before we can enable a global check for unused variables, we need to
clean up the code first. This way, the gate won't barf when we try to
enable the global unused and undef jshint config.
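
As a concrete illustration, here is a sketch of the relevant .jshintrc
keys (per the jshint documentation; this is not horizon's actual config
file):

{
  "unused": true,   // warn on variables that are declared but never used
  "undef": true     // warn on use of variables that were never declared
}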

** Affects: horizon
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441794

Title:
  Removing unused variables in javascripts

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Before we can enable a global check for unused variables, we need to
  clean up the code first. This way, the gate won't barf when we try to
  enable the global unused and undef jshint config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440297] Re: Create Volume from Snapshot / Image creates empty volume

2015-04-08 Thread Gary W. Smith
Closing per the above discussion.

** Changed in: horizon
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1440297

Title:
  Create Volume from Snapshot / Image creates empty volume

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When creating a volume from snapshot or image in current version of
  horizon the operation apparantly succeeds but the created volume does
  not contain any data.

  Examining the request sent to Horizon shows that the snapshot_id / image_id 
is present.
  Examining the request to Cinder API shows that the snapshot_id / image_id is 
NOT present.

  I have tracked this down to the tests done in the handle fucntion in
  openstack_dashboard/dashboards/project/volumes/volumes/forms.py around
  line 309.

  The tests expects source_type (retrieved from
  data['volume_source_type']) to either be None or 'snapshot_source' /
  'image_source'. This is also supported by the
  prepare_source_fields_if_XXX functions.

  However, when the actual data reaches the function
  'volume_source_type' is set to '' (empty string) and not None.

  I am not sure if it is the test that is wrong or if the data gets
  changed unexpectedly on the way into the function by other parts of
  the code.

  Running on Ubuntu 14.04 LTS with packages from the Kilo Cloud archive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1440297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441821] [NEW] Jshint should ignore libraries

2015-04-08 Thread Thai Tran
Public bug reported:

When undef and unused get globally enabled, 3rd-party libraries will
fail. We need to add a .jshintignore file to ignore this folder (see the
sketch below).
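
A minimal sketch of such a file (one directory path per line; the path
here is only a guess at where horizon vendors its libraries):

horizon/static/horizon/lib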

** Affects: horizon
 Importance: High
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441821

Title:
  Jshint should ignore libraries

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When undef and unused get globally enabled, 3rd-party libraries will
  fail. We need to add a .jshintignore file to ignore this folder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441731] [NEW] [data processing] Jobs table sort order by id is nonsensical

2015-04-08 Thread Chad Roberts
Public bug reported:

I'm proposing this for Liberty.

Once you have run several jobs, the default sort order on the data
processing Jobs table really stops making sense.  It is currently sorted
by ID, which is just a long uuid.  This results in newly added jobs
being added to the middle of the list, which just seems odd to have as
the default behavior.

I propose sorting that table by job creation time, even if that field is
not displayed, to give the table some sense of tangible order.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441731

Title:
  [data processing] Jobs table sort order by id is nonsensical

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm proposing this for Liberty.

  Once you have run several jobs, the default sort order on the data
  processing Jobs table really stops making sense.  It is currently
  sorted by ID, which is just a long uuid.  This results in newly added
  jobs being added to the middle of the list, which just seems odd to
  have as the default behavior.

  I propose sorting that table by job creation time, even if that field
  is not displayed, to give the table some sense of tangible order.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441733] [NEW] pip install or python setup.py install should include httpd/keystone.py

2015-04-08 Thread Haneef Ali
Public bug reported:

Now the recommended way to install keystone is via apache.  But
httpd/keystone.py is not included when we do python setup.py install
in keystone. It should be included.
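
One hedged possibility is to ship the file as packaged data via pbr's
[files] section in setup.cfg; the target directory below is an
assumption for illustration, not the actual fix:

[files]
data_files =
    etc/keystone = httpd/keystone.py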

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441733

Title:
  pip install or python setup.py install should include
  httpd/keystone.py

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Now the recommended way to install keystone is via apache.  But
  httpd/keystone.py is not included when we do python setup.py install
  in keystone. It should be included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441083] Re: pkg_resources.DistributionNotFound: The 'argparse' distribution was not found and is required by oslo.config, python-keystoneclient, pysaml2

2015-04-08 Thread Dolph Mathews
Abandoning this as invalid since pip 6.1.1 handles argparse correctly
now.

** Changed in: oslo.config
   Status: In Progress = Invalid

** Changed in: python-keystoneclient
   Status: In Progress = Invalid

** Changed in: pysaml2
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441083

Title:
  pkg_resources.DistributionNotFound: The 'argparse' distribution was
  not found and is required by oslo.config, python-keystoneclient,
  pysaml2

Status in OpenStack Identity (Keystone):
  Invalid
Status in Oslo configuration management library:
  Invalid
Status in Python implementation of SAML2:
  Invalid
Status in Python client library for Keystone:
  Invalid
Status in OpenStack Command Line Client:
  Invalid

Bug description:
  Hi,

  When trying to install a fresh DevStack, I got issues with pip 6.1. First 
issue:
  https://bugs.launchpad.net/tempest/+bug/1440984

  I worked around the first issue, but then I got this issue:

  2015-04-07 10:08:34.084 | + /usr/bin/keystone-manage db_sync
  2015-04-07 10:08:34.239 | Traceback (most recent call last):
  2015-04-07 10:08:34.239 |   File "/usr/bin/keystone-manage", line 4, in <module>
  2015-04-07 10:08:34.239 |     __import__('pkg_resources').require('keystone==2015.1.dev143')
  2015-04-07 10:08:34.239 |   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3057, in <module>
  2015-04-07 10:08:34.239 |     working_set = WorkingSet._build_master()
  2015-04-07 10:08:34.240 |   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 639, in _build_master
  2015-04-07 10:08:34.240 |     ws.require(__requires__)
  2015-04-07 10:08:34.240 |   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 940, in require
  2015-04-07 10:08:34.240 |     needed = self.resolve(parse_requirements(requirements))
  2015-04-07 10:08:34.240 |   File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 827, in resolve
  2015-04-07 10:08:34.240 |     raise DistributionNotFound(req, requirers)
  2015-04-07 10:08:34.241 | pkg_resources.DistributionNotFound: The 'argparse' distribution was not found and is required by oslo.config, python-keystoneclient, pysaml2

  The problem is that newly released pip 6.1 doesn't want to install
  argparse because argparse is part of the Python standard library:

  fedora@myhost$ pip install argparse
  Skipping requirement: argparse because argparse is a stdlib package
  You must give at least one requirement to install (see "pip help install")

  Workaround: downgrade pip to 6.0.8 and install argparse using pip (pip
  install argparse).

  A better fix is maybe to make argparse optional in keystone
  requirements? It's now possible to add environment markers to
  dependencies. Example:

  futures; python_version < '2.7'

  See https://github.com/pypa/pip/pull/1472

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441745] Re: Lots of gate failures with not enough hosts available

2015-04-08 Thread Matt Riedemann
The KeyError is only happening on one change in the check queue:

http://goo.gl/FpqCku

https://review.openstack.org/#/c/163842/

So that patch is busted, it's not a gate bug.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with not enough hosts available

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner
  return func(*args, **kwargs)

File /opt/stack/new/nova/nova/scheduler/manager.py, line 86, in 
select_destinations
  filter_properties)

File /opt/stack/new/nova/nova/scheduler/filter_scheduler.py, line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
  2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File /opt/stack/new/nova/nova/compute/manager.py, line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
/opt/stack/new/nova/nova/compute/manager.py, line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'uunexpected update 
keyword \\\'availability_zone\\\'\\nTraceback (most recent call last):\\n\\n  
File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner\\nreturn func(*args, **kwargs
 )\\n\\n  File /opt/stack/new/nova/nova/conductor/manager.py, line 125, in 
instance_update\\nraise KeyError(unexpected update keyword \\\'%s\\\' % 
key)\\n\\nKeyError: uunexpected update keyword 
\\\'availability_zone\\\'\\n\'\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441776] [NEW] Get image file passing image id without an image file returns a 204 response

2015-04-08 Thread Luke Wollney
Public bug reported:

Overview:
Attempting a GET /images/image_id/file returns a 204 response. At one point 
this returned a 404, but that does not appear to be the case anymore.

Steps to reproduce:
1) Register a blank image via POST /images
2) Attempt a GET /images/image_id/file
3) Notice the response is a 204

Expected:
A 404 response should be returned

Actual:
A 204 response is returned
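
A reproduction sketch using python-requests (the endpoint and token are
placeholders for a local devstack, not values from the report):

import requests

GLANCE = 'http://localhost:9292/v2'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}

# 1) Register an image record without uploading any data
img = requests.post(GLANCE + '/images', headers=HEADERS,
                    json={'name': 'blank', 'disk_format': 'raw',
                          'container_format': 'bare'}).json()

# 2) Fetch the (nonexistent) image file
r = requests.get('%s/images/%s/file' % (GLANCE, img['id']),
                 headers=HEADERS)

# 3) Observe a 204 where a 404 is expected
print(r.status_code)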

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1441776

Title:
  Get image file passing image id without an image file returns a 204
  response

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Overview:
  Attempting a GET /images/image_id/file returns a 204 response. At one point 
this returned a 404, but that does not appear to be the case anymore.

  Steps to reproduce:
  1) Register a blank image via POST /images
  2) Attempt a GET /images/image_id/file
  3) Notice the response is a 204

  Expected:
  A 404 response should be returned

  Actual:
  A 204 response is returned

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1441776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441779] [NEW] Add unit tests for N1kv-neutron state sync

2015-04-08 Thread Saksham Varma
Public bug reported:

Add unit tests for verifying that N1kv-Neutron state sync would be
triggered on a state mismatch.

** Affects: neutron
 Importance: Undecided
 Assignee: Saksham Varma (sakvarma)
 Status: New


** Tags: n1kv

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441779

Title:
  Add unit tests for N1kv-neutron state sync

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Add unit tests for verifying that N1kv-Neutron state sync would be
  triggered on a state mismatch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441780] [NEW] Adding horizon to jshint global

2015-04-08 Thread Thai Tran
Public bug reported:

Some legacy functionality like horizon.alert is used in new Angular
work. Since we are doing jshint checks on the new Angular work, horizon
needs to be added to the global jshint config to suppress the error.
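
For illustration, a sketch of the relevant .jshintrc entry (per the
jshint documentation, where false marks the global as read-only; this is
not horizon's actual config):

{
  "globals": {
    "horizon": false
  }
}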

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441780

Title:
  Adding horizon to jshint global

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Some legacy functionality like horizon.alert is used in new Angular
  work. Since we are doing jshint checks on the new Angular work, horizon
  needs to be added to the global jshint config to suppress the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441783] [NEW] VPNaaS: Non-stacking functional jobs

2015-04-08 Thread Paul Michali
Public bug reported:

In a fashion similar to Neutron, the functional test jobs (dsvm-
functional, dsvm-functional-sswan) should be converted to only prepare
the environment, using DevStack, and not to actually stack (starting up
all the processes).

This will permit better performance for the functional tests and make
the jobs more consistent to what is done in the Neutron repo.

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Michali (pcm)
 Status: In Progress


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) = Paul Michali (pcm)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441783

Title:
  VPNaaS: Non-stacking functional jobs

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In a fashion similar to Neutron, the functional test jobs (dsvm-
  functional, dsvm-functional-sswan) should be converted to only prepare
  the environment, using DevStack, and not to actually stack (starting
  up all the processes).

  This will permit better performance for the functional tests and make
  the jobs more consistent to what is done in the Neutron repo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441745] Re: Lots of gate failures with not enough hosts available

2015-04-08 Thread David Kranz
** Changed in: tempest
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441745

Title:
  Lots of gate failures with not enough hosts available

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Thousands of matches in the last two days:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm8gdmFsaWQgaG9zdCB3YXMgZm91bmQuIFRoZXJlIGFyZSBub3QgZW5vdWdoIGhvc3RzIGF2YWlsYWJsZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI4NTA4MTY3MTcwfQ==

  The following is from this log file:

  http://logs.openstack.org/42/163842/8/check/check-tempest-dsvm-
  neutron-full/1f66320/logs/screen-n-cond.txt.gz

  
  For the few I looked at, there is an error in the n-cond log:

  2015-04-08 07:20:15.207 WARNING nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Failed to compute_task_build_instances: No 
valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner
  return func(*args, **kwargs)

File /opt/stack/new/nova/nova/scheduler/manager.py, line 86, in 
select_destinations
  filter_properties)

File /opt/stack/new/nova/nova/scheduler/filter_scheduler.py, line 80, in 
select_destinations
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts
  available.

  --

  That makes it sound like the problem is that the deployed devstack
  does not have enough capacity. But right before that I see:

  2015-04-08 07:20:15.014 ERROR nova.conductor.manager 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] Instance update attempted for 
'availability_zone' on 745aafcf-686d-4cf0-91c7-701e282f6d06
  2015-04-08 07:20:15.149 ERROR nova.scheduler.utils 
[req-a21c9875-efe1-407d-b08b-2b05b35b4642 AggregatesAdminTestJSON-325246720 
AggregatesAdminTestJSON-279542170] [instance: 
745aafcf-686d-4cf0-91c7-701e282f6d06] Error from last host: 
devstack-trusty-rax-dfw-1769605.slave.openstack.org (node 
devstack-trusty-rax-dfw-1769605.slave.openstack.org): [u'Traceback (most recent 
call last):\n', u'  File /opt/stack/new/nova/nova/compute/manager.py, line 
2193, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
/opt/stack/new/nova/nova/compute/manager.py, line 2336, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
745aafcf-686d-4cf0-91c7-701e282f6d06 was re-scheduled: u\'uunexpected update 
keyword \\\'availability_zone\\\'\\nTraceback (most recent call last):\\n\\n  
File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py, 
line 142, in inner\\nreturn func(*args, **kwargs
 )\\n\\n  File /opt/stack/new/nova/nova/conductor/manager.py, line 125, in 
instance_update\\nraise KeyError(unexpected update keyword \\\'%s\\\' % 
key)\\n\\nKeyError: uunexpected update keyword 
\\\'availability_zone\\\'\\n\'\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441827] [NEW] Cannot set per protocol remote_id_attribute

2015-04-08 Thread Adam Young
Public bug reported:

Set up Federation with SSSD.  It worked OK with

[federation]
remote_id_attribute=foo

but not with

[kerberos]
remote_id_attribute=foo

** Affects: keystone
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441827

Title:
  Cannot set per protocol remote_id_attribute

Status in OpenStack Identity (Keystone):
  Confirmed

Bug description:
  Set up Federation with SSSD.  It worked OK with

  [federation]
  remote_id_attribute=foo

  but not with

  [kerberos]
  remote_id_attribute=foo

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1046121] Re: dhcp should never be enabled for a router external net

2015-04-08 Thread Ryan Moats
waking this bug up because while the solution was to document, there
should be a pointer to the document in question so that the issue is not
brought up in the future.

** Changed in: neutron
   Status: Expired = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1046121

Title:
  dhcp should never be enabled for a router external net

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  it doesn't make sense in the existing model, as the router IPs and the
  floating IPs allocated from an external net never make DHCP requests.

  I don't believe there is any significant additional harm caused by
  this though, other than unneeded CPU churn from DHCP agent and
  dnsmasq, and a burned IP address allocated for a DHCP port.

  One tricky issue is that DHCP is enabled by default, so the question
  is whether we should fail if the user does not explicitly disable it
  when creating a network, or if we should just automatically set it to
  False.  Unfortunately, I don't think we can tell the difference
  between this value defaulting to true and it being explicitly set to
  true, so it seems that if we want to prevent it from being set to true
  in the API, we should require it to be set to False.  We also need to
  prevent it from being set to True on an update.

  Another option would be to ignore the value set in the API and just
  have the DHCP agent ignore networks for which router:external=True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1046121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441892] [NEW] template argument is ignored in get_injected_network_template

2015-04-08 Thread Mathieu Gagné
Public bug reported:

The template argument in get_injected_network_template is always
ignored.

See:
https://github.com/openstack/nova/blob/2015.1.0b3/nova/virt/netutils.py#L84-L85
And:
https://github.com/openstack/nova/blob/2015.1.0b3/nova/virt/netutils.py#L164
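
A toy illustration (not Nova's actual code) of the bug pattern, where
the argument is accepted but immediately overwritten so the caller's
choice never takes effect:

CONF_TEMPLATE = '/etc/nova/interfaces.template'  # stand-in for the CONF value

def get_injected_network_template(network_info, template=None):
    template = CONF_TEMPLATE  # BUG: clobbers whatever the caller passed
    return template

print(get_injected_network_template({}, template='/tmp/custom.template'))
# -> /etc/nova/interfaces.template, not the caller's template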

** Affects: nova
 Importance: Undecided
 Assignee: Mathieu Gagné (mgagne)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441892

Title:
  template argument is ignored in get_injected_network_template

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The template argument in get_injected_network_template is always
  ignored.

  See:
  
https://github.com/openstack/nova/blob/2015.1.0b3/nova/virt/netutils.py#L84-L85
  And:
  https://github.com/openstack/nova/blob/2015.1.0b3/nova/virt/netutils.py#L164

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441871] [NEW] Removing scope digest from login directive

2015-04-08 Thread Thai Tran
Public bug reported:

The call to digest is not needed since the $timeout service already does it.

** Affects: horizon
 Importance: Low
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1441871

Title:
  Removing scope digest from login directive

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The call to digest is not needed since the $timeout service already does it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1441871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441903] [NEW] rootwrap.d ln doesn't work for non iSCSI volumes

2015-04-08 Thread Walt Boring
Public bug reported:

The compute.filters line for ln doesn't allow for anything other than
iSCSI volumes.

It should allow for FC based volumes as well.

# nova/virt/libvirt/volume.py:
sginfo: CommandFilter, sginfo, root
sg_scan: CommandFilter, sg_scan, root
ln: RegExpFilter, ln, root, ln, --symbolic, --force, 
/dev/mapper/ip-.*-iscsi-iqn.*, /dev/disk/by-path/ip-.*-iscsi-iqn.*
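
For illustration only, a broadened filter could add a second entry along
these lines; the FC by-path regex is a guess and would need to match the
device names Nova actually links (this is not the merged fix):

ln_fc: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/disk/by-path/pci-.*-fc-.*, /dev/disk/by-path/pci-.*-fc-.*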

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441903

Title:
  rootwrap.d ln doesn't work for non iSCSI volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  The compute.filters line for ln doesn't allow for anything other than
  iSCSI volumes.

  It should allow for FC based volumes as well.

  # nova/virt/libvirt/volume.py:
  sginfo: CommandFilter, sginfo, root
  sg_scan: CommandFilter, sg_scan, root
  ln: RegExpFilter, ln, root, ln, --symbolic, --force, 
/dev/mapper/ip-.*-iscsi-iqn.*, /dev/disk/by-path/ip-.*-iscsi-iqn.*

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441931] [NEW] nova-network create ignores uuid request parameter

2015-04-08 Thread melanie witt
Public bug reported:

In nova-manage network create there is a --uuid option you can pass to
specify the uuid as shown in the API doc [1] but nova-network ignores
it. That is, when you try to create a network while specifying the uuid,
you will instead get a new network with a randomly generated uuid.

[1] http://developer.openstack.org/api-ref-compute-v2-ext.html#ext-os-
networks

** Affects: nova
 Importance: Undecided
 Assignee: melanie witt (melwitt)
 Status: New


** Tags: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441931

Title:
  nova-network create ignores uuid request parameter

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova-manage network create there is a --uuid option you can pass to
  specify the uuid as shown in the API doc [1] but nova-network ignores
  it. That is, when you try to create a network while specifying the
  uuid, you will instead get a new network with a randomly generated
  uuid.

  [1] http://developer.openstack.org/api-ref-compute-v2-ext.html#ext-os-
  networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441879] [NEW] POST body not json causing 500's

2015-04-08 Thread John Perkins
Public bug reported:

The body of a POST request may be something other than JSON. When this
occurs, a 500 is returned.
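
A reproduction sketch using python-requests (hypothetical endpoint and
token for a local devstack; a 400 would be the expected answer):

import requests

NEUTRON = 'http://localhost:9696/v2.0'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
           'Content-Type': 'application/json'}

# The body is deliberately not valid JSON.
r = requests.post(NEUTRON + '/networks', headers=HEADERS,
                  data='this is not json')
print(r.status_code)  # observed: 500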

** Affects: neutron
 Importance: Undecided
 Assignee: John Perkins (john-d-perkins)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = John Perkins (john-d-perkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441879

Title:
  POST body not json causing 500's

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The body of a POST request may be something other than JSON. When this
  occurs, a 500 is returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441950] [NEW] instance on source host can not be cleaned after evacuating

2015-04-08 Thread Eric Xie
Public bug reported:

1. Version
nova: 2014.1
hypervisor: rhel7 + libvirt + kvm

2. Description
After one instance was evacuated from hostA to hostB, this instance was 
deleted.
Then the 'nova-compute' service on hostA was started, and this was found in 
nova-compute.log:
2015-04-09 10:39:52.201 1977 WARNING nova.compute.manager [-] Found 0 in the 
database and 1 on the hypervisor.

3. Reproduce steps:
* Launch one instance INST on hostA
* Stop the 'nova-compute' service on hostA, and wait for it to go down (use 
'nova service-list')
* Evacuate INST to hostB
* After evacuated successfully, delete INST
* Start 'nova-compute' service on hostA

Expected results:
* INST on hostA's hypervisor should be destroyed

Actual result:
* INST was alive on hostA's hypervisor.

4. Tips
I checked the source and found (in nova/compute/manager.py):
def _destroy_evacuated_instances(self, context):
    ...
    # This filters out deleted instances. Would it be more correct to
    # check deleted instances as well?
    filters = {'deleted': False}
    local_instances = self._get_instances_on_driver(context, filters)
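
A toy illustration (plain dicts, not Nova objects) of why the
deleted=False filter leaks the instance in this scenario:

# DB state after the evacuation and delete: the row is marked deleted,
# but the domain still exists on hostA's hypervisor.
db_instances = [{'uuid': 'abc', 'host': 'hostB', 'deleted': True}]
on_hypervisor = ['abc']  # what hostA's driver still reports

# Mimic _get_instances_on_driver with filters={'deleted': False}:
seen = [i for i in db_instances
        if i['uuid'] in on_hypervisor and not i['deleted']]
print(seen)  # [] -> the evacuated-and-deleted instance is invisible,
             # so it is never destroyed on hostA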

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1441950

Title:
  instance on source host can not be cleaned after evacuating

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Version
  nova: 2014.1
  hypervisor: rhel7 + libvirt + kvm

  2. Description
  After one instance was evacuated from hostA to hostB, this instance was 
  deleted.
  Then the 'nova-compute' service on hostA was started, and this was found in 
  nova-compute.log:
  2015-04-09 10:39:52.201 1977 WARNING nova.compute.manager [-] Found 0 in the 
database and 1 on the hypervisor.

  3. Reproduce steps:
  * Launch one instance INST on hostA
  * Stop the 'nova-compute' service on hostA, and wait for it to go down (use 
  'nova service-list')
  * Evacuate INST to hostB
  * After evacuated successfully, delete INST
  * Start 'nova-compute' service on hostA

  Expected results:
  * INST on hostA's hypervisor should be destroyed

  Actual result:
  * INST was alive on hostA's hypervisor.

  4. Tips
  I checked the source and found (in nova/compute/manager.py):
  def _destroy_evacuated_instances(self, context):
      ...
      # This filters out deleted instances. Would it be more correct to
      # check deleted instances as well?
      filters = {'deleted': False}
      local_instances = self._get_instances_on_driver(context, filters)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1441950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440201] Re: reboot of a nova instance appears to leave attached volumes in a bad state

2015-04-08 Thread Mitsuhiro Tanino
Hi, Jay,

I confirmed this bug happens in my devstack environment using the LVM/iSCSI 
driver.
From my investigation of this issue, this might be a bug in Libvirt.

When Nova calls detach_volume() at step (3), Libvirt tries to remove the block 
device information from the instance's
xml file, and then disconnect_volume() is called to log out of the iSCSI 
session.

In disconnect_volume(), Nova checks the list of block devices from all VMs' 
xml files in _get_all_block_devices(), and
if some block devices are still related to the iSCSI session, Nova does not 
log out of that session.

Normally, Libvirt removes the block device info from the instance's xml 
quickly, but in this case I found that the block device
info was still present at disconnect_volume() time. I suppose rebooting the VM 
via Libvirt somehow interferes with removing the
block device info, so the iSCSI session logout was skipped. As a result, the 
iSCSI session remained.

After that, if the user tries to attach a volume again, Cinder creates a new 
user and password and passes these to Nova.
Nova tries to use the new user and password, but because the old iSCSI session 
still remains, the connection fails,
and you can see many iscsid error logs in /var/log/messages.

I think this affects not only LVM but also other iSCSI backend
storages.
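
A toy sketch of the logout decision described above (not Nova's actual
code): logout is skipped while any VM definition still references a
device that belongs to the session.

def should_logout(session_devices, devices_in_all_vm_xml):
    # Skip logout while any VM xml still lists a device from this session.
    return not any(dev in devices_in_all_vm_xml for dev in session_devices)

# In the race described above, the rebooted VM's xml still lists the
# device, so logout is skipped and the stale session survives:
dev = '/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.example-lun-1'
print(should_logout({dev}, {dev}))  # False -> no logout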

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1440201

Title:
  reboot of a nova instance appears to leave attached volumes in a bad
  state

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to recreate:

  1) attach a volume to a VM
  2) reboot VM
  3) detach the volume from VM,  right after the volume status changes from 
In-use to available, attach the volume again,  upper error occurs and the 
volume can't be attached to the VM any more.  BTW, in step3, detach the volume 
from VM,   wait 3 minutes or so,  attach the volume again,  can be attached 
successfully.

  This is being seen in Kilo using LVM/iSCSI.

  Looking at the logs it appears that something is happening during the
  reboot that leave the iscsi volume in a bad state.  Right after the
  reboot I see:

  2015-04-03 14:42:33.319 19415 INFO nova.compute.manager 
[req-4995e5ea-c845-4b5b-a6d4-8a214e0ea87f 157e5fddcca340a2a9ec6cd5a1216b4f 
df004f3072784994870e46ee997ea70f - - -] [instance: 
0235ba32-323d-4f59-91ea-2f2f7ef2bd04] Detach volume 
3cc769bb-d585-4812-a8fd-2888fedda58d from mountpoint /dev/vdb
  2015-04-03 14:42:56.268 19415 INFO nova.compute.manager 
[req-0037930a-9d20-4b8c-a150-74b0b4411530 157e5fddcca340a2a9ec6cd5a1216b4f 
df004f3072784994870e46ee997ea70f - - -] [instance: 
0235ba32-323d-4f59-91ea-2f2f7ef2bd04] Attaching volume 
3cc769bb-d585-4812-a8fd-2888fedda58d to /dev/vdb
  2015-04-03 14:43:01.647 19415 ERROR nova.virt.libvirt.driver 
[req-0037930a-9d20-4b8c-a150-74b0b4411530 157e5fddcca340a2a9ec6cd5a1216b4f 
df004f3072784994870e46ee997ea70f - - -] [instance: 
0235ba32-323d-4f59-91ea-2f2f7ef2bd04] Failed to attach volume at mountpoint: 
/dev/vdb

  After this I start seeing the following in the logs:

  2015-04-03 14:43:03.157 19415 INFO nova.scheduler.client.report 
[req-0037930a-9d20-4b8c-a150-74b0b4411530 157e5fddcca340a2a9ec6cd5a1216b4f 
df004f3072784994870e46ee997ea70f - - -] Compute_service record updated for 
('devo-n02-kvm.rch.kstart.ibm.com', 'devo-n02-kvm.rch.kstart.ibm.com')
  2015-04-03 14:43:03.638 19415 ERROR oslo_messaging.rpc.dispatcher 
[req-0037930a-9d20-4b8c-a150-74b0b4411530 157e5fddcca340a2a9ec6cd5a1216b4f 
df004f3072784994870e46ee997ea70f - - -] Exception during message handling: The 
supplied device (vdb) is busy.
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 430, in 
decorated_function
  2015-04-03 14:43:03.638 19415 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-03 

[Yahoo-eng-team] [Bug 1441373] Re: Creating a listener with empty or invalid tenant id doesn't raise an exception as expected

2015-04-08 Thread Aishwarya Thangappa
** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441373

Title:
  Creating a listener with empty or invalid tenant id doesn't raise an
  exception as expected

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
When creating a listener with an empty () or invalid ($%123)
tenant_id, the expectation is that it should raise a Bad Request
exception. But it does not.

Instead, it creates a listener with the default demo tenant.

  kilo1@ubuntu:~/devstack$ neutron lbaas-listener-create --loadbalancer lb1 
--protocol HTTP --protocol-port 8082 --name listener3 --tenant-id ##%DFG
  Created a new listener:
  +--++
  | Field| Value  |
  +--++
  | admin_state_up   | True   |
  | connection_limit | -1 |
  | default_pool_id  ||
  | default_tls_container_id ||
  | description  ||
  | id   | aa69dee8-9ce9-4b92-88e7-73467fbf008a   |
  | loadbalancers| {id: 176ab60b-7ae6-4064-a76e-7c77ddce4e79} |
  | name | listener3  |
  | protocol | HTTP   |
  | protocol_port| 8082   |
  | sni_container_ids||
  | tenant_id| 84af838927ce45c28d5f02d2e4fbcde2   |
  +--++

  
  kilo1@ubuntu:~/devstack$ . ./openrc admin admin
  kilo1@ubuntu:~/devstack$ keystone tenant-list
  +--+--+-+
  |id| name | enabled |
  +--+--+-+
  | 91bf6cb23f844df390621f89e6f8aec6 | ListenersTestJSON-1363551857 |   True  |
  | f68952a293d541d48ae8e3145e8fcfe2 | ListenersTestJSON-2002123060 |   True  |
  | b4592bbb0f9b4441be297b0f9ab59212 | ListenersTestJSON-385541868  |   True  |
  | ee7c2406f71e49bdaf5abe04e5e6ec17 | ListenersTestJSON-733077467  |   True  |
  | 161c1ceb43e7405ca4564120803f8966 |  MemberTestJSON-1004757508   |   True  |
  | 73acf1d36be2425e84562e98f06c2312 |  MemberTestJSON-1368048326   |   True  |
  | 6e8c8d8de6754e21bf047d737715f1ed |   MemberTestJSON-156426384   |   True  |
  | e85f06a8207949f4a97966952895e975 |  MemberTestJSON-2009939882   |   True  |
  | 394ca5799e054474913c4034af036ca3 |  MemberTestJSON-2099428934   |   True  |
  | 3f30074d878643b2bdb6e24915adadf5 |   MemberTestJSON-840725862   |   True  |
  | 3c1f71f1a5c446d199809bd2f21d87ff |admin |   True  |
  | a7bd6901a2f343ea8d75294454c06dc8 |   alt_demo   |   True  |
  | 84af838927ce45c28d5f02d2e4fbcde2 | demo |   True  |
  | f880fa9ad40740349cb501babb0e9ce3 |  invisible_to_admin  |   True  |
  | 49edea44e78f4147bf558f90c8d9b4c9 |   service|   True  |
  +--+--+-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp