[Yahoo-eng-team] [Bug 1414252] Re: Horizon throws unauthorized 403 error for cloud admin in domain setup

2015-07-31 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1414252

Title:
  Horizon throws unauthorized 403 error for cloud admin in domain setup

Status in Keystone:
  Expired

Bug description:
  I have a devstack running the following components:
  1. keystone
  2. heat
  3. nova
  4. horizon
  5. cinder
   
  For this OpenStack setup I wanted to enable the domain feature and define
admin boundaries. To enable domains, these changes were made:
  1. Changed the token format from PKI to UUID
  2. Added auth_version = v3.0 under the [auth_token:filter] section of the
api-paste.ini file of all the services
  3. Updated the endpoints to point to v3
  4. Restarted all the services
  5. Replaced the default keystone policy.json with policy.v3sample.json and
set the admin_domain_id to default

  In horizon's local_settings.py file:
  1. Set OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT to True
  2. Updated the endpoint to point to localhost:5000/v3

  After all these changes, when I try to log in to the default domain
  with admin credentials, I get "Unable to retrieve domain list" and
  "Unable to retrieve project list" errors on horizon's dashboard.
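Taken together, the local_settings.py changes above can be sketched as follows; the setting names are horizon's real ones, but the exact values are assumptions for a devstack host:

```python
# Hypothetical excerpt of horizon's local_settings.py for the
# multi-domain setup described above.
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Point horizon at the keystone v3 endpoint on the devstack host.
OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v3"

# Usually needed alongside multi-domain support (assumed here, not
# stated in the report):
OPENSTACK_API_VERSIONS = {"identity": 3}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
```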

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1414252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480514] [NEW] Removing an error instance fails when serial_console is enabled

2015-07-31 Thread lyanchih
Public bug reported:

When I fixed https://bugs.launchpad.net/nova/+bug/1478607
I found I couldn't remove error instances that had failed while configuring
the XML.

This is because of the following block:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L894

When nova tries to destroy an instance, it cleans up the related resources.
If serial console is enabled, nova tries to find the ports that were assigned
to the instance and release them.
But the instance failed during creation, so nova raises "instance not found".
The block looks like it handles the instance-not-found exception, but
_get_serial_ports_from_instance contains the yield keyword, so it is a
generator: it does not raise the exception immediately, only when the caller
iterates over the yielded items.
Therefore the instance-not-found exception is raised at L894 instead of L889.
You can check out the following sample code:
http://www.tutorialspoint.com/execute_python_online.php?PID=0Bw_CjBb95KQMU05ycERQdUFfcms
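The deferred raise described above can be reproduced with a minimal generator sketch (hypothetical names; this is not the nova code):

```python
def get_ports():
    # Stand-in for _get_serial_ports_from_instance: a lookup that fails.
    raise LookupError("instance not found")
    yield  # unreachable, but its presence makes this a generator function

ports = get_ports()   # no exception here: calling a generator function only
                      # creates the generator object (the try block at L889)
try:
    next(ports)       # the body runs, and raises, on first iteration (L894)
except LookupError as exc:
    print("raised during iteration:", exc)
```

So a try/except wrapped only around the call, as the driver does, cannot catch an exception that is deferred until iteration.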

** Affects: nova
 Importance: Undecided
 Assignee: lyanchih (lyanchih)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lyanchih (lyanchih)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480514

Title:
  Removing an error instance fails when serial_console is enabled

Status in OpenStack Compute (nova):
  New

Bug description:
  When I fixed https://bugs.launchpad.net/nova/+bug/1478607
  I found I couldn't remove error instances that had failed while
  configuring the XML.

  This is because of the following block:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L894

  When nova tries to destroy an instance, it cleans up the related resources.
  If serial console is enabled, nova tries to find the ports that were
  assigned to the instance and release them.
  But the instance failed during creation, so nova raises "instance not found".
  The block looks like it handles the instance-not-found exception, but
  _get_serial_ports_from_instance contains the yield keyword, so it is a
  generator: it does not raise the exception immediately, only when the
  caller iterates over the yielded items.
  Therefore the instance-not-found exception is raised at L894 instead of
  L889.
  You can check out the following sample code:
  http://www.tutorialspoint.com/execute_python_online.php?PID=0Bw_CjBb95KQMU05ycERQdUFfcms

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480514/+subscriptions



[Yahoo-eng-team] [Bug 1480305] [NEW] FloatingIPsTestJSON fails with DBDeadlock inserting into instance_extra

2015-07-31 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/78/193278/15/check/gate-tempest-dsvm-
full/7e16644/logs/screen-n-api.txt.gz?level=TRACE#_2015-07-31_01_52_20_792

2015-07-31 01:52:20.792 ERROR nova.api.openstack 
[req-ea281178-5948-47a1-815d-e97f12b2412b 
tempest-FloatingIPsTestJSON-1949129213 tempest-FloatingIPsTestJSON-96720078] 
Caught error: (pymysql.err.InternalError) (1213, u'Deadlock found when trying 
to get lock; try restarting transaction') [SQL: u'INSERT INTO instance_extra 
(created_at, updated_at, deleted_at, deleted, instance_uuid, numa_topology, 
pci_requests, flavor, vcpu_model) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)'] 
[parameters: (datetime.datetime(2015, 7, 31, 1, 52, 20, 772556), None, None, 0, 
'645dcef8-0852-40b5-ac1f-e422b7909e90', None, '[]', '{"new": null, "old": null, 
"cur": {"nova_object.version": "1.1", "nova_object.name": "Flavor", 
"nova_object.data": {"disabled": false, "root_gb": 0, "name": "m1.nano", 
"flavorid": "42", "deleted": false, "created_at": "2015-07-31T01:50:53Z", 
"ephemeral_gb": 0, "updated_at": null, "memory_mb": 64, "vcpus": 1, 
"extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, 
"deleted_at": null, "vcpu_weight": 0, "id": 6}, "nova_object.namespace": "nova"}}', 
None)]
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack Traceback (most recent 
call last):
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/__init__.py, line 128, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return 
req.get_response(self.application)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack application, 
catch_exc_info=False)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return resp(environ, 
start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py,
 line 434, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack response = 
req.get_response(self._app)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1317, in send
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack application, 
catch_exc_info=False)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1281, in 
call_application
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return resp(environ, 
start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return resp(environ, 
start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 136, in 
__call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack response = 
self.app(environ, start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return resp(environ, 
start_response)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2015-07-31 01:52:20.792 18304 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
2015-07-31 01:52:20.792 18304 ERROR 

[Yahoo-eng-team] [Bug 1480338] [NEW] neutron-fwaas: Enable python34 support

2015-07-31 Thread Kyle Mestery
Public bug reported:

The following files need to be addressed and fixed so that tox -epy34
completes successfully with all unit tests enabled:

neutron_fwaas/tests/unit/db/firewall/test_firewall_db.py
neutron_fwaas/tests/unit/extensions/test_firewall.py
neutron_fwaas/tests/unit/services/firewall/agents/vyatta/test_vyatta_utils.py
neutron_fwaas/tests/unit/services/firewall/drivers/vyatta/test_vyatta_fwaas.py
neutron_fwaas/tests/unit/services/firewall/drivers/cisco/test_csr_firewall_svc_helper.py
neutron_fwaas/tests/unit/services/firewall/freescale/test_fwaas_plugin.py

** Affects: neutron
 Importance: Low
 Status: New


** Tags: fwaas low-hanging-fruit

** Tags added: fwaas low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480338

Title:
  neutron-fwaas: Enable python34 support

Status in neutron:
  New

Bug description:
  The following files need to be addressed and fixed so that tox
  -epy34 completes successfully with all unit tests enabled:

  neutron_fwaas/tests/unit/db/firewall/test_firewall_db.py
  neutron_fwaas/tests/unit/extensions/test_firewall.py
  neutron_fwaas/tests/unit/services/firewall/agents/vyatta/test_vyatta_utils.py
  neutron_fwaas/tests/unit/services/firewall/drivers/vyatta/test_vyatta_fwaas.py
  
neutron_fwaas/tests/unit/services/firewall/drivers/cisco/test_csr_firewall_svc_helper.py
  neutron_fwaas/tests/unit/services/firewall/freescale/test_fwaas_plugin.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1480338/+subscriptions



[Yahoo-eng-team] [Bug 1480270] [NEW] Can't get endpoints with v2 in command line

2015-07-31 Thread Sunny Zheng
Public bug reported:

Reproducible Steps:

1.  Set up the latest devstack environment

2. Run the following commands
$ source ~/devstack/accrc/admin/admin
$ openstack endpoint list

The command returns nothing, even though the endpoint data does exist in the database.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480270

Title:
  Can't get endpoints with v2 in command line

Status in Keystone:
  New

Bug description:
  Reproducible Steps:

  1.  Set up the latest devstack environment

  2. Run the following commands
  $ source ~/devstack/accrc/admin/admin
  $ openstack endpoint list

  The command returns nothing, even though the endpoint data does exist in
  the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480270/+subscriptions



[Yahoo-eng-team] [Bug 1480326] [NEW] neutron-vpnaas: Enable python34 support

2015-07-31 Thread Kyle Mestery
Public bug reported:

The following files are currently failing and need to be fixed for tox
-epy34 to work in this repository:

test_plugin.py
test_netns_wrapper.py
test_cisco_ipsec.py (in device drivers)
test_cisco_csr_rest_client.py
test_vpn_db.py

** Affects: neutron
 Importance: Low
 Status: New


** Tags: low-hanging-fruit vpnaas

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: low-hanging-fruit vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480326

Title:
  neutron-vpnaas: Enable python34 support

Status in neutron:
  New

Bug description:
  The following files are currently failing and need to be fixed for
  tox -epy34 to work in this repository:

  test_plugin.py
  test_netns_wrapper.py
  test_cisco_ipsec.py (in device drivers)
  test_cisco_csr_rest_client.py
  test_vpn_db.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1480326/+subscriptions



[Yahoo-eng-team] [Bug 1480270] Re: Can't get endpoints with v2 in command line

2015-07-31 Thread Steve Martinelli
** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480270

Title:
  Can't get endpoints with v2 in command line

Status in Keystone:
  Invalid
Status in python-openstackclient:
  New

Bug description:
  Reproducible Steps:

  1.  Set up the latest devstack environment

  2. Run the following commands
  $ source ~/devstack/accrc/admin/admin
  $ openstack endpoint list

  The command returns nothing, even though the endpoint data does exist in
  the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480270/+subscriptions



[Yahoo-eng-team] [Bug 1267140] Re: The output of security group rules does not include egress rules.

2015-07-31 Thread Russell Bryant
Right, nova-network only supported ingress rules, so nova API matches
that.  If you want egress rules, you should use the Neutron API.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267140

Title:
  The output of security group rules does not include egress rules.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The output of security group rules does not include egress rules.

  Description of problem:
  ===
  The output of security group rules does not include egress rules.

  Version-Release number of selected component (if applicable):
  =
  Tested on RHEL
  Icehouse: python-nova-2014.1-0.5.b1.el6.noarch

  How reproducible:
  =
  Always

  Steps to Reproduce:
  ===
  1. Add an egress security group rule (I did it via horizon)
  2. via CLI: nova secgroup-list-rules <sec group name>

  Actual results:
  ===
  List of ingress rules.

  Expected results:
  =
  List of both ingress and egress rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267140/+subscriptions



[Yahoo-eng-team] [Bug 1480330] [NEW] Add .rst documentation for glance-swift.conf and using keystone v3

2015-07-31 Thread Stuart McLaren
Public bug reported:

This patch:

https://review.openstack.org/#/c/193422

Adds the ability to use keystone v3 when generating a token for the
swift backend.

(From memory) we're a little weak in documentation around using glance-
swift.conf. We should add some more documentation around that and the
new keystone v3 config options.

** Affects: glance
 Importance: Undecided
 Assignee: Stuart McLaren (stuart-mclaren)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Stuart McLaren (stuart-mclaren)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480330

Title:
  Add .rst documentation for glance-swift.conf and using keystone v3

Status in Glance:
  New

Bug description:
  This patch:

  https://review.openstack.org/#/c/193422

  Adds the ability to use keystone v3 when generating a token for the
  swift backend.

  (From memory) we're a little weak in documentation around using
  glance-swift.conf. We should add some more documentation around that
  and the new keystone v3 config options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480330/+subscriptions



[Yahoo-eng-team] [Bug 1480301] [NEW] [Sahara] is_proxy_gateway field remains unchanged after update

2015-07-31 Thread Andrey Pavlov
Public bug reported:

Field is_proxy_gateway of node group template objects doesn't change its
value after node group template update.

** Affects: horizon
 Importance: Undecided
 Assignee: Andrey Pavlov (apavlov-n)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Andrey Pavlov (apavlov-n)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480301

Title:
  [Sahara] is_proxy_gateway field remains unchanged after update

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Field is_proxy_gateway of node group template objects doesn't change
  its value after node group template update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480301/+subscriptions



[Yahoo-eng-team] [Bug 1456335] Re: neutron-vpn-netns-wrapper missing in Ubuntu Package

2015-07-31 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron-vpnaas - 2:7.0.0~b1-0ubuntu3

---
neutron-vpnaas (2:7.0.0~b1-0ubuntu3) wily; urgency=medium

  * d/neutron-vpn-agent.install: Install neutron-vpn-netns-wrapper
(LP: #1456335).
  * d/control: Add runtime dependency on conntrack (LP: #1447803).

 -- James Page james.p...@ubuntu.com  Fri, 24 Jul 2015 12:17:18 +0100

** Changed in: neutron-vpnaas (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456335

Title:
  neutron-vpn-netns-wrapper missing in Ubuntu Package

Status in neutron:
  Invalid
Status in neutron-vpnaas package in Ubuntu:
  Fix Released
Status in neutron-vpnaas source package in Vivid:
  New
Status in neutron-vpnaas package in Debian:
  New

Bug description:
  The executable neutron-vpn-netns-wrapper (path /usr/bin/neutron-vpn-
  netns-wrapper) in Ubuntu 14.04 packages is missing for OpenStack Kilo.

  I tried to enable VPNaaS with StrongSwan and it failed with this error 
message:
  2015-05-18 19:20:41.510 3254 TRACE 
neutron_vpnaas.services.vpn.device_drivers.ipsec Stderr: 
/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac neutron-vpn-netns-wrapper 
--mount_paths=/etc:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/etc,/var/run:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/var/run
 --cmd=ipsec,start (no filter matched)

  After copying the content of neutron-vpn-netns-wrapper from the Fedora
  repository VPNaaS with StrongSwan worked.

  The content of the vpn-netns-wrapper:

  #!/usr/bin/python2
  # PBR Generated from u'console_scripts'

  import sys

  from neutron_vpnaas.services.vpn.common.netns_wrapper import main

  
  if __name__ == "__main__":
      sys.exit(main())

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456335/+subscriptions



[Yahoo-eng-team] [Bug 1480334] [NEW] can't use $ in password for ldap authentication

2015-07-31 Thread Vasyl Saienko
Public bug reported:

keystone can't connect to the LDAP server if '$' is used in the password.

keystone.tld.conf

[identity]
driver = keystone.identity.backends.ldap.Identity

[assignment]
driver = keystone.assignment.backends.sql.Assignment

[ldap]
url=ldap://172.16.56.46:389
user=admin...@keystone.tld
password=Pa$$w0rd
suffix=dc=keystone,dc=tld
query_scope = sub

user_tree_dn=dc=keystone,dc=tld
user_objectclass=person
user_id_attribute=cn
#user_name_attribute=userPrincipalName
user_name_attribute=cn


use_pool = true
pool_size = 10
pool_retry_max = 3
pool_retry_delay = 0.1
pool_connection_timeout = -1
pool_connection_lifetime = 600


use_auth_pool = true
auth_pool_size = 100
auth_pool_connection_lifetime = 60

debug_level = 4095


Debug from log:
<15>Jul 31 14:00:04 node-1 keystone-all LDAP init: url=ldap://172.16.56.46:389
<15>Jul 31 14:00:04 node-1 keystone-all LDAP init: use_tls=False tls_cacertfile=None tls_cacertdir=None tls_req_cert=2 tls_avail=1
<15>Jul 31 14:00:04 node-1 keystone-all LDAP bind: who=CN=admin_ad,CN=Users,DC=keystone,DC=tld
<15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
<14>Jul 31 14:00:04 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 14:00:04] "OPTIONS / HTTP/1.0" 300 919 0.143915
<15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
<14>Jul 31 14:00:05 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 14:00:05] "OPTIONS / HTTP/1.0" 300 921 0.155419
<11>Jul 31 14:00:05 node-1 keystone-all {'info': '80090308: LdapErr: DSID-0C0903C5, comment: AcceptSecurityContext error, data 52e, v2580', 'desc': 'Invalid credentials'}

while I can connect to the server with ldapsearch
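One plausible mechanism, stated here as an assumption rather than a confirmed diagnosis: oslo.config interpolates `$` in option values (much like Python's string.Template), so `$$` collapses to a single `$` and the password actually sent in the LDAP bind differs from what was typed:

```python
from string import Template

# The value exactly as written in keystone.tld.conf:
raw = "Pa$$w0rd"

# Template-style interpolation treats "$$" as an escaped "$", so the
# bound password would be "Pa$w0rd", explaining 'Invalid credentials'.
print(Template(raw).substitute({}))  # -> Pa$w0rd

# Writing every literal "$" as "$$" survives the interpolation:
print(Template("Pa$$$$w0rd").substitute({}))  # -> Pa$$w0rd
```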

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480334

Title:
  can't use $ in password for ldap authentication

Status in Keystone:
  New

Bug description:
  keystone can't connect to the LDAP server if '$' is used in the password.

  keystone.tld.conf

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  [assignment]
  driver = keystone.assignment.backends.sql.Assignment

  [ldap]
  url=ldap://172.16.56.46:389
  user=admin...@keystone.tld
  password=Pa$$w0rd
  suffix=dc=keystone,dc=tld
  query_scope = sub

  user_tree_dn=dc=keystone,dc=tld
  user_objectclass=person
  user_id_attribute=cn
  #user_name_attribute=userPrincipalName
  user_name_attribute=cn

  
  use_pool = true
  pool_size = 10
  pool_retry_max = 3
  pool_retry_delay = 0.1
  pool_connection_timeout = -1
  pool_connection_lifetime = 600

  
  use_auth_pool = true
  auth_pool_size = 100
  auth_pool_connection_lifetime = 60

  debug_level = 4095

  
  Debug from log:
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP init: url=ldap://172.16.56.46:389
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP init: use_tls=False tls_cacertfile=None tls_cacertdir=None tls_req_cert=2 tls_avail=1
  <15>Jul 31 14:00:04 node-1 keystone-all LDAP bind: who=CN=admin_ad,CN=Users,DC=keystone,DC=tld
  <15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
  <14>Jul 31 14:00:04 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 14:00:04] "OPTIONS / HTTP/1.0" 300 919 0.143915
  <15>Jul 31 14:00:04 node-1 keystone-all arg_dict: {}
  <14>Jul 31 14:00:05 node-1 keystone-all 192.168.0.2 - - [31/Jul/2015 14:00:05] "OPTIONS / HTTP/1.0" 300 921 0.155419
  <11>Jul 31 14:00:05 node-1 keystone-all {'info': '80090308: LdapErr: DSID-0C0903C5, comment: AcceptSecurityContext error, data 52e, v2580', 'desc': 'Invalid credentials'}

  while I can connect to server with ldapsearch

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480334/+subscriptions



[Yahoo-eng-team] [Bug 1480341] [NEW] Creating an instance with a wrong flavor puts the instance in error state with no way to delete it

2015-07-31 Thread olmy0414
Public bug reported:

OS - CentOS Linux release 7.0.1406
Nova version - 2015.1.0-3.el7

When I try to create an instance with a wrong flavor, the instance is created 
in error state and I am not able to delete it; the instance hangs with Task 
State "deleting...".
Commands like nova reset-state (with or without --active) and nova 
force-delete don't work.
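The failure in the log below reduces to a simple size comparison; here is a sketch of that guard (a hypothetical helper, not nova's actual code):

```python
GiB = 1024 ** 3

def check_flavor_disk(virtual_size, root_gb):
    """Reject an image whose virtual size exceeds the flavor's root disk,
    mirroring the FlavorDiskTooSmall check in the traceback below."""
    if virtual_size > root_gb * GiB:
        raise ValueError("Flavor's disk is too small for requested image.")

# Numbers from the nova-compute.log: a 2361393152-byte image against a
# flavor whose root disk is 1 GiB (1073741824 bytes).
try:
    check_flavor_disk(2361393152, 1)
except ValueError as exc:
    print(exc)
```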

Some information from nova-compute.log during instance creating:

ERROR nova.virt.images [req-b7ae209d-db0d-4914-b6a3-cbe1140375ee - - - - -] 
/var/lib/nova/instances/_base/e0d7456a996be86b8092bb4f13d23468401363a9 virtual 
size 2361393152 larger than flavor root disk size 1073741824
ERROR nova.compute.manager [req-b7ae209d-db0d-4914-b6a3-cbe1140375ee - - - - -] 
[instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] Instance failed to spawn
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
Traceback (most recent call last):
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2442, in 
_build_resources
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
yield resources
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2314, in 
_build_and_run_instance
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
block_device_info=block_device_info)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2347, 
in spawn
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
admin_pass=admin_password)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2745, 
in _create_image
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
instance, size, fallback_from_host)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 5875, 
in _try_fetch_image_cache
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
size=size)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py, line 
231, in cache
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
*args, **kwargs)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py, line 
480, in create_image
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
prepare_template(target=base, max_size=size, *args, **kwargs)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py, line 
445, in inner
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
return f(*args, **kwargs)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py, line 
221, in fetch_func_sync
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
fetch_func(target=target, *args, **kwargs)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/libvirt/utils.py, line 501, 
in fetch_image
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
max_size=max_size)
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]   
File /usr/lib/python2.7/site-packages/nova/virt/images.py, line 119, in 
fetch_to_raw
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
raise exception.FlavorDiskTooSmall()
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] 
FlavorDiskTooSmall: Flavor's disk is too small for requested image.
TRACE nova.compute.manager [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b]
INFO nova.compute.manager [req-30c8a040-8c04-45cc-8754-29c343051c02 
05bbbe05d3ad4cbe93bf6fc66735007f 7ecbe7eabedc4c9783fe5ae54bb91a70 - - -] 
[instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] Terminating instance
INFO nova.virt.libvirt.driver [-] [instance: 
64a8556a-85e2-4ac8-b69e-f1b41771950b] During wait destroy, instance disappeared.
INFO nova.virt.libvirt.driver [req-b7ae209d-db0d-4914-b6a3-cbe1140375ee - - - - 
-] [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] Deleting instance files 
/var/lib/nova/instances/64a8556a-85e2-4ac8-b69e-f1b41771950b_del
INFO nova.virt.libvirt.driver [req-b7ae209d-db0d-4914-b6a3-cbe1140375ee - - - - 
-] [instance: 64a8556a-85e2-4ac8-b69e-f1b41771950b] Deletion of 

[Yahoo-eng-team] [Bug 1480393] [NEW] Artifacts: filtering by version ignores operators other than equality

2015-07-31 Thread Alexander Tivelkov
Public bug reported:

When the artifacts list is filtered by version (i.e. listing all the
artifacts having a given name and a specific range of versions), the API
ignores all the comparison operators: the gt:, ge:, le: and lt: prefixes
are accepted with the version value, but they are ignored and an equality
check is executed instead.
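What the API is expected to do can be sketched as a tiny parser (hypothetical code, not glance's implementation): split the operator prefix off the version value, defaulting to equality.

```python
import operator

# Comparison prefixes accepted on version filter values.
OPS = {"gt": operator.gt, "ge": operator.ge,
       "lt": operator.lt, "le": operator.le}

def parse_version_filter(value):
    """Split 'gt:2.0' into (operator.gt, '2.0'); bare values mean equality."""
    prefix, sep, rest = value.partition(":")
    if sep and prefix in OPS:
        return OPS[prefix], rest
    return operator.eq, value
```

The reported bug amounts to the prefix being discarded, so every filter falls through to the equality branch.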

** Affects: glance
 Importance: Undecided
 Assignee: Alexander Tivelkov (ativelkov)
 Status: New


** Tags: artifacts

** Changed in: glance
 Assignee: (unassigned) => Alexander Tivelkov (ativelkov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480393

Title:
  Artifacts: filtering by version ignores operators other than equality

Status in Glance:
  New

Bug description:
  When the artifacts list is being filtered by version (i.e. list all
  the artifacts having a given name and a specific version range), the
  API ignores all the comparison operators: the gt:, ge:, le: and lt:
  prefixes are accepted with the version value, but are ignored and an
  equality check is executed instead.
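The intended semantics can be sketched as follows. This is a hypothetical illustration of operator-prefixed version filters, not Glance's actual implementation; all helper names are assumptions:

```python
import operator

# Map of the operator prefixes named in this report to Python comparisons.
OPS = {"gt": operator.gt, "ge": operator.ge, "le": operator.le, "lt": operator.lt}

def parse_version_filter(value):
    """Split an optional operator prefix (e.g. 'gt:1.2.0') from the value."""
    for prefix, op in OPS.items():
        if value.startswith(prefix + ":"):
            return op, value[len(prefix) + 1:]
    return operator.eq, value  # no prefix means plain equality

def version_tuple(version):
    """Compare versions numerically, not lexically."""
    return tuple(int(part) for part in version.split("."))

def matches(artifact_version, filter_value):
    op, wanted = parse_version_filter(filter_value)
    return op(version_tuple(artifact_version), version_tuple(wanted))
```

Under these semantics a bare version value still means equality, while a prefixed one compares; the bug is that the prefixes are currently dropped and everything falls through to the equality branch.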

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480392] [NEW] Artifacts: filtering by range is not working as expected

2015-07-31 Thread Alexander Tivelkov
Public bug reported:

According to the spec, the list artifacts API call should be able to filter
artifacts by a range on some property, if the property supports comparison, e.g.
/v3/artifacts/some_type/?property=gt:1&property=lt:10
should return all the artifacts having the value of "property" greater than 1
and less than 10.
However this does not work: the "greater than 1" part is ignored and only the
last condition is applied.

** Affects: glance
 Importance: Undecided
 Assignee: Alexander Tivelkov (ativelkov)
 Status: New


** Tags: artifacts

** Changed in: glance
 Assignee: (unassigned) => Alexander Tivelkov (ativelkov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480392

Title:
  Artifacts: filtering by range is not working as expected

Status in Glance:
  New

Bug description:
  According to the spec, the list artifacts API call should be able to
  filter artifacts by a range on some property, if the property supports
  comparison, e.g.
  /v3/artifacts/some_type/?property=gt:1&property=lt:10
  should return all the artifacts having the value of "property" greater
  than 1 and less than 10.
  However this does not work: the "greater than 1" part is ignored and
  only the last condition is applied.
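The expected AND semantics for repeated filter parameters can be sketched like this (a hedged illustration; the helper and operator names are assumptions based on the report, not Glance code):

```python
def in_range(value, conditions):
    """Apply ALL operator-prefixed conditions, e.g. ["gt:1", "lt:10"]."""
    checks = {
        "gt": lambda v, bound: v > bound,
        "ge": lambda v, bound: v >= bound,
        "le": lambda v, bound: v <= bound,
        "lt": lambda v, bound: v < bound,
    }
    for condition in conditions:
        op, _, bound = condition.partition(":")
        if not checks[op](value, int(bound)):
            return False  # every condition must hold, not just the last one
    return True
```

The reported bug corresponds to keeping only the last element of `conditions` instead of folding over all of them.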

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423484] Re: dhcpv6-stateful: error message in contradiction to spec

2015-07-31 Thread Andreas Scheuring
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423484

Title:
  dhcpv6-stateful: error message in contradiction to spec

Status in neutron:
  Invalid

Bug description:
  The spec [1] points out how a subnet configured with ipv6 address mode
  dhcpv6-stateful and ra mode none should behave:
  "VM obtains IPv6 address and optional info from dnsmasq using DHCPv6
  stateful" [1]

  Now creating such a subnet and adding it as a router's interface
  resulted in the following error message:
  neutron subnet-create --name subnet_ipv6-network --enable-dhcp --ip-version 6 --ipv6-address-mode dhcpv6-stateful ipv6-network 2003::/64
  Created a new subnet:
  +-------------------+-----------------------------------------------------------+
  | Field             | Value                                                     |
  +-------------------+-----------------------------------------------------------+
  | allocation_pools  | {"start": "2003::2", "end": "2003::ffff:ffff:ffff:fffe"}  |
  | cidr              | 2003::/64                                                 |
  | dns_nameservers   |                                                           |
  | enable_dhcp       | True                                                      |
  | gateway_ip        | 2003::1                                                   |
  | host_routes       |                                                           |
  | id                | 4c3f8b16-633c-492c-964e-20cbd4f0b30a                      |
  | ip_version        | 6                                                         |
  | ipv6_address_mode | dhcpv6-stateful                                           |
  | ipv6_ra_mode      |                                                           |
  | name              | subnet_ipv6-network                                       |
  | network_id        | 2bbd0b0c-0809-43b8-a98f-4f552dcba4d3                      |
  | tenant_id         | 3ccda9db620a4d13940f9e79f12d5940                          |
  +-------------------+-----------------------------------------------------------+
  neutron router-interface-add router1 subnet_ipv6-network
  Bad router request: IPv6 subnet 4c3f8b16-633c-492c-964e-20cbd4f0b30a configured to receive RAs from an external router cannot be added to Neutron Router.

  --> This seems to be in contradiction to what is described in the spec!

  [1] https://review.openstack.org/#/c/101306/8/specs/juno/ipv6-radvd-
  ra.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472712] Re: Using SSL with rabbitmq prevents communication between nova-compute and conductor after latest nova updates

2015-07-31 Thread Liam Young
** Also affects: python-oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
   Status: Confirmed => Invalid

** Changed in: nova
   Status: New => Invalid

** Changed in: python-oslo.messaging (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472712

Title:
  Using SSL with rabbitmq prevents communication between nova-compute
  and conductor after latest nova updates

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Invalid
Status in python-oslo.messaging package in Ubuntu:
  Confirmed

Bug description:
  On the latest update of the Ubuntu OpenStack packages, it was
  discovered that the nova-compute/nova-conductor
  (1:2014.1.4-0ubuntu2.1) packages encountered a bug with using SSL to
  connect to rabbitmq.

  When this problem occurs, the compute node cannot connect to the
  controller, and this message is constantly displayed:

  WARNING nova.conductor.api [req-4022395c-9501-47cf-bf8e-476e1cc58772
  None None] Timed out waiting for nova-conductor. Is it running? Or did
  this service start before nova-conductor?

  Investigation revealed that having rabbitmq configured with SSL was
  the root cause of this problem.  This seems to have been introduced
  with the current version of the nova packages.   Rabbitmq was not
  updated as part of this distribution update, but the messaging library
  (python-oslo.messaging 1.3.0-0ubuntu1.1) was updated.   So the problem
  could exist in any of these components.

  Versions installed:
  Openstack version: Icehouse
  Ubuntu 14.04.2 LTS
  nova-conductor1:2014.1.4-0ubuntu2.1
  nova-compute1:2014.1.4-0ubuntu2.1
  rabbitmq-server  3.2.4-1
  openssl:amd64/trusty-security   1.0.1f-1ubuntu2.15

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480343] [NEW] Swap field in Flavor create/update dialog shouldn't be mandatory

2015-07-31 Thread Bellantuono Daniel
Public bug reported:

As in the Nova CLI, the Swap field should be optional during flavor
create/update; if Nova receives an empty value in the Swap field, it should set it to 0.

** Affects: horizon
 Importance: Undecided
 Assignee: Bellantuono Daniel (kelfen)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Bellantuono Daniel (kelfen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480343

Title:
  Swap field in Flavor create/update dialog shouldn't be mandatory

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As in the Nova CLI, the Swap field should be optional during flavor
  create/update; if Nova receives an empty value in the Swap field, it
  should set it to 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480319] [NEW] Mutable args and wrap_db_retry

2015-07-31 Thread Oleg Bondarev
Public bug reported:

wrap_db_retry may not work as expected if the wrapped function modifies
its mutable arguments during execution: in this case, on the second
attempt the function will be called with the modified args. Example:

def create_router(self, context, router):
    r = router['router']
    gw_info = r.pop(EXTERNAL_GW_INFO, None)
    tenant_id = self._get_tenant_id_for_create(context, r)
    with context.session.begin(subtransactions=True):
        router_db = self._create_router_db(context, r, tenant_id)
        if gw_info:
            self._update_router_gw_info(context, router_db['id'],
                                        gw_info, router=router_db)
        dict = self._make_router_dict(router_db)
        return dict

Because of the pop(), on a second attempt the router dict will not have
the gateway info, so the router will be created without it, silently and
surprisingly for users.

Just doing copy.deepcopy() inside wrap_db_retry will not work, as
arguments might be complex objects (like plugins) which do not support
deepcopy(). So this needs a more crafty fix. Otherwise wrap_db_retry
should be used carefully, checking that the wrapped function does not
modify mutable args.

Currently neutron uses wrap_db_retry at the API layer, which is not safe
given the described issue.
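The hazard is easy to reproduce in isolation. Below is a minimal, self-contained sketch; the decorator and function names are illustrative stand-ins for neutron's wrap_db_retry and create_router, not the actual code:

```python
import functools

def wrap_db_retry(func):
    """Retry once on a (simulated) transient DB error."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in (1, 2):
            try:
                return func(*args, **kwargs)
            except RuntimeError:
                if attempt == 2:
                    raise
    return wrapper

@wrap_db_retry
def create_router(router, _fail_once=[True]):
    # pop() mutates the caller's dict; the retry sees the key already gone.
    gw_info = router.pop("external_gateway_info", None)
    if _fail_once and _fail_once.pop():
        raise RuntimeError("simulated deadlock on first attempt")
    return gw_info

# The retry silently loses the gateway info:
assert create_router({"external_gateway_info": {"network_id": "n1"}}) is None
```

Passing a copy of the mutable argument into the wrapped call (when the argument supports copying) would avoid the loss, which is why a blanket deepcopy inside the wrapper is tempting but, as noted above, not always possible.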

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480319

Title:
  Mutable args and wrap_db_retry

Status in neutron:
  New

Bug description:
  wrap_db_retry may not work as expected if the wrapped function
  modifies its mutable arguments during execution: in this case, on the
  second attempt the function will be called with the modified args. Example:

  def create_router(self, context, router):
      r = router['router']
      gw_info = r.pop(EXTERNAL_GW_INFO, None)
      tenant_id = self._get_tenant_id_for_create(context, r)
      with context.session.begin(subtransactions=True):
          router_db = self._create_router_db(context, r, tenant_id)
          if gw_info:
              self._update_router_gw_info(context, router_db['id'],
                                          gw_info, router=router_db)
          dict = self._make_router_dict(router_db)
          return dict

  Because of the pop(), on a second attempt the router dict will not
  have the gateway info, so the router will be created without it,
  silently and surprisingly for users.

  Just doing copy.deepcopy() inside wrap_db_retry will not work, as
  arguments might be complex objects (like plugins) which do not support
  deepcopy(). So this needs a more crafty fix. Otherwise wrap_db_retry
  should be used carefully, checking that the wrapped function does not
  modify mutable args.

  Currently neutron uses wrap_db_retry at the API layer, which is not
  safe given the described issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1480319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480400] [NEW] Documentation error for properties

2015-07-31 Thread Niall Bunting
Public bug reported:

Document: http://developer.openstack.org/api-ref-image-v2.html

Under the image create drop down there is the following line:
properties (Optional)   plain | xsd:dict | Properties, if any, that are associated with the image.

This suggests that the properties are a dict and would look like this:
-d '{"name": "thename", "properties": {"myprop": "mydata"}}'

However, this is not the case: if you want to define custom properties
in a curl command, you do it by defining them like any other property, e.g.
-d '{"name": "thename", "myprop": "mydata"}'

The documentation is not clear about this distinction.
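The difference between the two request bodies can be sketched in Python (values are placeholders; this illustrates the report's claim about the flattened form, not official Glance documentation):

```python
import json

# What the "xsd:dict" wording suggests (and which, per this report, does
# NOT set a custom property named "myprop" in the v2 API):
nested = {"name": "thename", "properties": {"myprop": "mydata"}}

# What the report says actually works: custom properties are flattened
# onto the image object itself, next to the standard fields.
flat = {"name": "thename", "myprop": "mydata"}

request_body = json.dumps(flat)
```

In the flattened form there is no "properties" key at all; the custom key sits at the top level of the image object.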

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480400

Title:
  Documentation error for properties

Status in Glance:
  New

Bug description:
  Document: http://developer.openstack.org/api-ref-image-v2.html

  Under the image create drop down there is the following line:
  properties (Optional) plain | xsd:dict | Properties, if any, that are associated with the image.

  This suggests that the properties are a dict and would look like this:
  -d '{"name": "thename", "properties": {"myprop": "mydata"}}'

  However, this is not the case: if you want to define custom properties
  in a curl command, you do it by defining them like any other property, e.g.
  -d '{"name": "thename", "myprop": "mydata"}'

  The documentation is not clear about this distinction.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475297] Re: Unbind segment not working correctly

2015-07-31 Thread Henry Gessau
** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Importance: Critical => High

** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** Changed in: networking-cisco
   Status: New => In Progress

** Changed in: networking-cisco
   Importance: Undecided => Critical

** Changed in: networking-cisco
 Assignee: (unassigned) => Sam Betts (sambetts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475297

Title:
  Unbind segment not working correctly

Status in networking-cisco:
  In Progress
Status in neutron:
  In Progress

Bug description:
  A recent commit https://review.openstack.org/#/c/196908/21 changed the
  order of some of the calls in update_port and its causing a failure of
  segment unbind in the Cisco nexus driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1475297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470093] Re: The function _get_multipath_iqn get iqn is not complete

2015-07-31 Thread Davanum Srinivas (DIMS)
Nova has switched to os-brick; _get_multipath_iqn does not exist in the
nova code base any more.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470093

Title:
  The function _get_multipath_iqn get iqn is not complete

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. A SAN storage array has more than one IQN, so one multipath device
  can map to more than one IQN.
  2. The function is as follows:
  def _get_multipath_iqn(self, multipath_device):
      entries = self._get_iscsi_devices()
      for entry in entries:
          entry_real_path = os.path.realpath("/dev/disk/by-path/%s" % entry)
          entry_multipath = self._get_multipath_device_name(entry_real_path)
          if entry_multipath == multipath_device:
              return entry.split("iscsi-")[1].split("-lun")[0]
      return None
  So if multipath_device matches one device, the function returns, but it
  returns only one IQN. The issue is that the multipath device can
  contain several single devices, as in the following:

  [root@R4300G2-ctrl02 ~]# ll /dev/disk/by-path/
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sds
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.1:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53-lun-1 -> ../../sdl
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.1.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdo
  lrwxrwxrwx 1 root root  9 Jun 30 14:45 ip-172.12.2.2:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00-lun-1 -> ../../sdm
  So the device has two different IQNs
  (iqn.2099-01.cn.com.zte:usp.spr-4c:09:b4:00:00:00 and
  iqn.2099-01.cn.com.zte:usp.spr-a0:c0:00:00:00:53).
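A hedged sketch of a fix: collect every IQN whose by-path entry resolves to the multipath device, instead of returning the first one. The helper names and the injected resolver callable are assumptions for illustration, not nova's actual code:

```python
def iqn_from_entry(entry):
    """Extract the IQN portion from a /dev/disk/by-path entry name."""
    return entry.split("iscsi-")[1].split("-lun")[0]

def get_multipath_iqns(entries, multipath_device_of, multipath_device):
    """Return ALL distinct IQNs whose entry maps to multipath_device.

    multipath_device_of is a callable resolving an entry name to its
    multipath device (standing in for _get_multipath_device_name plus
    os.path.realpath in the original).
    """
    return sorted({iqn_from_entry(entry)
                   for entry in entries
                   if multipath_device_of(entry) == multipath_device})
```

With the ll output above, both `iqn.2099-01.cn.com.zte:usp.spr-a0:...` and `iqn.2099-01.cn.com.zte:usp.spr-4c:...` would be returned for the same multipath device.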

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480427] [NEW] Remove dup custom style imports

2015-07-31 Thread Shaoquan Chen
Public bug reported:

Previously @import "/custom/styles"; had a comment of "// Custom Style
Variables", which made it look like an scss file with only variables; in
that case it would have to be imported into each scss file that is
injected into _stylesheet.html directly. Confirmed with devs:
/custom/styles should only include custom styles, so it should be
imported only once.

Also, we need to make sure /custom/styles comes at the very bottom of
the combined css file.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480427

Title:
  Remove dup custom style imports

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Previously @import "/custom/styles"; had a comment of "// Custom Style
  Variables", which made it look like an scss file with only variables;
  in that case it would have to be imported into each scss file that is
  injected into _stylesheet.html directly. Confirmed with devs:
  /custom/styles should only include custom styles, so it should be
  imported only once.

  Also, we need to make sure /custom/styles comes at the very bottom of
  the combined css file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479981] Re: Openstackclient return wrong quota information

2015-07-31 Thread Hao Chen
** This bug is no longer a duplicate of bug 1420104
   quota set failed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479981

Title:
  Openstackclient return wrong quota information

Status in neutron:
  New
Status in python-openstackclient:
  New

Bug description:
  I tried to update the quota port limit for my project; neutronclient
  works well and I can get the right result using neutronclient. But
  when we run ```openstack quota show admin``` we found the quota port
  limit hadn't changed. Here is the testing process:

  
  layton-pistachio:/opt/openstack # neutron --insecure --os-project-id d3a77adc69004a6bbfe233cf7f08fdc1 --os-project-domain-name default --os-user-domain-name default quota-update --port 160

  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | health_monitor      | -1    |
  | member              | -1    |
  | network             | 10    |
  | pool                | 10    |
  | port                | 160   |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | vip                 | 10    |
  +---------------------+-------+
  layton-pistachio:/opt/openstack # neutron --insecure --os-project-id d3a77adc69004a6bbfe233cf7f08fdc1 --os-project-domain-name default --os-user-domain-name default quota-show
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | health_monitor      | -1    |
  | member              | -1    |
  | network             | 10    |
  | pool                | 10    |
  | port                | 160   |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | vip                 | 10    |
  +---------------------+-------+
  layton-pistachio:/opt/openstack # openstack quota show admin
  +----------------------+-------+
  | Field                | Value |
  +----------------------+-------+
  | backup_gigabytes     | 1000  |
  | backups              | 10    |
  | cores                | 20    |
  | fixed-ips            | -1    |
  | floating-ips         | 50    |
  | gigabytes            | 1000  |
  | health_monitor       | -1    |
  | injected-file-size   | 10240 |
  | injected-files       | 5     |
  | injected-path-size   | 255   |
  | instances            | 10    |
  | key-pairs            | 100   |
  | member               | -1    |
  | network              | 10    |
  | pool                 | 10    |
  | port                 | 50    |
  | project              | admin |
  | properties           | 128   |
  | ram                  | 51200 |
  | router               | 10    |
  | secgroup-rules       | 100   |
  | secgroups            | 10    |
  | server_group_members | 10    |
  | server_groups        | 10    |
  | snapshots            | 10    |
  | subnet               | 10    |
  | vip                  | 10    |
  | volumes              | 10    |
  +----------------------+-------+

  layton-pistachio:/opt/openstack # openstack project list
  +----------------------------------+---------+
  | ID                               | Name    |
  +----------------------------------+---------+
  | 1c30e840b3d1447ea3820d99cc38cd33 | service |
  | 55d10960d5f7447990e69ebf481ac97d | demo    |
  | d3a77adc69004a6bbfe233cf7f08fdc1 | admin   |
  +----------------------------------+---------+


  
  I checked the neutron database and the quota information was right:

  
  MariaDB [neutron]> select * from quotas;
  +--------------------------------------+----------------------------------+----------+-------+
  | id                                   | tenant_id                        | resource | limit |
  +--------------------------------------+----------------------------------+----------+-------+
  | 1c5586f5-7c71-4666-9162-bf29bcaa511d | d3a77adc69004a6bbfe233cf7f08fdc1 | port     |   160 |
  +--------------------------------------+----------------------------------+----------+-------+
  1 row in set (0.00 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479981] Re: Openstackclient return wrong quota information

2015-07-31 Thread Hao Chen
*** This bug is a duplicate of bug 1420104 ***
https://bugs.launchpad.net/bugs/1420104

This is not the same issue as bug #1420104. There are two main
differences:

1. Quota set can't change the port limit. In fact the ```openstack quota
set``` command can only change nova quota limits; updating the neutron
port quota can only be done by using neutronclient.
See the reference here:
https://wiki.openstack.org/wiki/OpenStackClient/Commands#Quota_3

2. The quota-update command ran successfully; we didn't get any error
message like in bug #1420104.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479981

Title:
  Openstackclient return wrong quota information

Status in neutron:
  New
Status in python-openstackclient:
  New

Bug description:
  I tried to update the quota port limit for my project; neutronclient
  works well and I can get the right result using neutronclient. But
  when we run ```openstack quota show admin``` we found the quota port
  limit hadn't changed. Here is the testing process:

  
  layton-pistachio:/opt/openstack # neutron --insecure --os-project-id d3a77adc69004a6bbfe233cf7f08fdc1 --os-project-domain-name default --os-user-domain-name default quota-update --port 160

  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | health_monitor      | -1    |
  | member              | -1    |
  | network             | 10    |
  | pool                | 10    |
  | port                | 160   |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | vip                 | 10    |
  +---------------------+-------+
  layton-pistachio:/opt/openstack # neutron --insecure --os-project-id d3a77adc69004a6bbfe233cf7f08fdc1 --os-project-domain-name default --os-user-domain-name default quota-show
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | health_monitor      | -1    |
  | member              | -1    |
  | network             | 10    |
  | pool                | 10    |
  | port                | 160   |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | vip                 | 10    |
  +---------------------+-------+
  layton-pistachio:/opt/openstack # openstack quota show admin
  +----------------------+-------+
  | Field                | Value |
  +----------------------+-------+
  | backup_gigabytes     | 1000  |
  | backups              | 10    |
  | cores                | 20    |
  | fixed-ips            | -1    |
  | floating-ips         | 50    |
  | gigabytes            | 1000  |
  | health_monitor       | -1    |
  | injected-file-size   | 10240 |
  | injected-files       | 5     |
  | injected-path-size   | 255   |
  | instances            | 10    |
  | key-pairs            | 100   |
  | member               | -1    |
  | network              | 10    |
  | pool                 | 10    |
  | port                 | 50    |
  | project              | admin |
  | properties           | 128   |
  | ram                  | 51200 |
  | router               | 10    |
  | secgroup-rules       | 100   |
  | secgroups            | 10    |
  | server_group_members | 10    |
  | server_groups        | 10    |
  | snapshots            | 10    |
  | subnet               | 10    |
  | vip                  | 10    |
  | volumes              | 10    |
  +----------------------+-------+

  layton-pistachio:/opt/openstack # openstack project list
  +----------------------------------+---------+
  | ID                               | Name    |
  +----------------------------------+---------+
  | 1c30e840b3d1447ea3820d99cc38cd33 | service |
  | 55d10960d5f7447990e69ebf481ac97d | demo    |
  | d3a77adc69004a6bbfe233cf7f08fdc1 | admin   |
  +----------------------------------+---------+


  
  I checked the neutron database and the quota information was right:

  
  MariaDB [neutron]> select * from quotas;
  +--------------------------------------+----------------------------------+----------+-------+
  | id                                   | tenant_id                        | resource | limit |
  +--------------------------------------+----------------------------------+----------+-------+
  | 1c5586f5-7c71-4666-9162-bf29bcaa511d | d3a77adc69004a6bbfe233cf7f08fdc1 | port     |   160 |
  +--------------------------------------+----------------------------------+----------+-------+
  1 row in set (0.00 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1480441] [NEW] Live migration doesn't retry on migration pre-check failure

2015-07-31 Thread Chris St. Pierre
Public bug reported:

When live migrating an instance, it is supposed to retry some
(configurable) number of times. It only retries if the host
compatibility and migration pre-checks raise nova.exception.Invalid,
though:

https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L167-L174

If, for instance, a destination hypervisor has run out of disk space it
will not raise an Invalid subclass, but rather MigrationPreCheckError,
which causes the retry loop to short-circuit. Nova should instead retry
as long as either Invalid or MigrationPreCheckError is raised.

This can be tricky to reproduce because it only occurs if a host raises
MigrationPreCheckError before a valid host is found, so it's dependent
upon the order in which the scheduler supplies possible destinations to
the conductor. In theory, though, it can be reproduced by bringing up a
number of hypervisors, exhausting the disk on one -- ideally the one
that the scheduler will return first -- and then attempting a live
migration. It will fail with something like:

$ nova live-migration --block-migrate stpierre-test-1
ERROR (BadRequest): Migration pre-check error: Unable to migrate
f44296dd-ffa6-4ec0-8256-c311d025d46c: Disk of instance is too
large(available on destination host:-38654705664 < need:1073741824)
(HTTP 400) (Request-ID: req-9951691a-c63c-4888-bec5-30a072dfe727)

Even when there are valid hosts to migrate to.
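The proposed behaviour can be sketched as follows. The exception classes are stubs standing in for nova.exception.Invalid and MigrationPreCheckError, and the selection loop is a simplified stand-in for the conductor's retry logic, not nova's actual code:

```python
class Invalid(Exception):
    """Stub for nova.exception.Invalid."""

class MigrationPreCheckError(Exception):
    """Stub for nova.exception.MigrationPreCheckError."""

def find_destination(hosts, precheck, max_retries=5):
    """Try candidate hosts, retrying on BOTH pre-check exception types."""
    attempts = 0
    for host in hosts:
        if attempts >= max_retries:
            break
        attempts += 1
        try:
            precheck(host)
        except (Invalid, MigrationPreCheckError):
            continue  # keep trying instead of aborting the whole migration
        return host
    raise RuntimeError("no valid host found")
```

Catching only `Invalid` here reproduces the reported short-circuit: a single host with too little disk raises `MigrationPreCheckError` and the loop aborts even though later candidates would pass.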

** Affects: nova
 Importance: Undecided
 Assignee: Chris St. Pierre (stpierre)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480441

Title:
  Live migration doesn't retry on migration pre-check failure

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When live migrating an instance, it is supposed to retry some
  (configurable) number of times. It only retries if the host
  compatibility and migration pre-checks raise nova.exception.Invalid,
  though:

  
https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L167-L174

  If, for instance, a destination hypervisor has run out of disk space
  it will not raise an Invalid subclass, but rather
  MigrationPreCheckError, which causes the retry loop to short-circuit.
  Nova should instead retry as long as either Invalid or
  MigrationPreCheckError is raised.

  This can be tricky to reproduce because it only occurs if a host
  raises MigrationPreCheckError before a valid host is found, so it's
  dependent upon the order in which the scheduler supplies possible
  destinations to the conductor. In theory, though, it can be reproduced
  by bringing up a number of hypervisors, exhausting the disk on one --
  ideally the one that the scheduler will return first -- and then
  attempting a live migration. It will fail with something like:

  $ nova live-migration --block-migrate stpierre-test-1
  ERROR (BadRequest): Migration pre-check error: Unable to migrate
  f44296dd-ffa6-4ec0-8256-c311d025d46c: Disk of instance is too
  large(available on destination host:-38654705664 < need:1073741824)
  (HTTP 400) (Request-ID: req-9951691a-c63c-4888-bec5-30a072dfe727)

  Even when there are valid hosts to migrate to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458013] Re: ec2 code uses requests to talk to keystone (not keystoneclient)

2015-07-31 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458013

Title:
  ec2 code uses requests to talk to keystone (not keystoneclient)

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  This code:
  
https://github.com/openstack/nova/blob/master/nova/api/ec2/__init__.py#L270-L288
  uses requests directly to talk to keystone, which means that the ssl
  option configuration is nonstandard. We should use the keystoneclient
  directly for consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1458013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480270] Re: Can't get endpoints with v2 in command line

2015-07-31 Thread Lin Hua Cheng
There is logic in /v2.0/endpoints that will only return endpoints
created through the v2.0 API (where legacy_endpoint_id is not None).

Endpoints in devstack are now created via the v3 API, and thus the
/v2.0/endpoints returns an empty list now.

This is working as designed.

** Changed in: python-openstackclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480270

Title:
  Can't get endpoints with v2 in command line

Status in Keystone:
  Invalid
Status in python-openstackclient:
  Invalid

Bug description:
  Reproducible Steps:

  1.  Set up the latest devstack environment

  2. Run the following commands
  $ source ~/devstack/accrc/admin/admin
  $ openstack endpoint list

  We can get nothing, but the endpoints data is real in db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480270/+subscriptions



[Yahoo-eng-team] [Bug 1480270] Re: Can't get endpoints with v2 in command line

2015-07-31 Thread Dolph Mathews
Although this is absolutely working as originally designed, it's
effectively broken. This bug report may also be a dupe?

Anyway, I think we (unfortunately) need to make a best guess to collapse
multiple interface-specific, completely independent v3 endpoints into v2
endpoints (where at least a public URL is required, and admin and internal
endpoints cannot exist on their own, according to the v2 spec).
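Such a best-guess collapse could be sketched roughly as follows (a hypothetical illustration, not keystone's code; field names such as service_id and interface follow the v3 endpoint schema):

```python
# Hypothetical sketch of collapsing interface-specific v3 endpoints into
# v2-style endpoints: group by (service_id, region), merge the per-interface
# URLs, and keep only groups that have a public URL, since the v2 format
# requires publicURL and admin/internal URLs cannot exist on their own.
def collapse_v3_to_v2(v3_endpoints):
    grouped = {}
    for ep in v3_endpoints:
        key = (ep["service_id"], ep.get("region"))
        grouped.setdefault(key, {})[ep["interface"]] = ep["url"]

    v2_endpoints = []
    for (service_id, region), urls in grouped.items():
        if "public" not in urls:
            continue  # no publicURL -> cannot be represented in v2
        v2_endpoints.append({
            "service_id": service_id,
            "region": region,
            "publicURL": urls["public"],
            "internalURL": urls.get("internal"),
            "adminURL": urls.get("admin"),
        })
    return v2_endpoints
```

Groups without a public URL are simply dropped, since the v2 format cannot represent them.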

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
   Status: Invalid => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480270

Title:
  Can't get endpoints with v2 in command line

Status in Keystone:
  Triaged
Status in python-openstackclient:
  Invalid

Bug description:
  Reproducible Steps:

  1.  Set up the latest devstack environment

  2. Run the following commands
  $ source ~/devstack/accrc/admin/admin
  $ openstack endpoint list

  We can get nothing, but the endpoints data is real in db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480270/+subscriptions



[Yahoo-eng-team] [Bug 1480106] [NEW] Move humanize and truncate functions to horizon.quota.js

2015-07-31 Thread Rajat Vig
Public bug reported:

As commented on the review for

https://review.openstack.org/#/c/199345

here

https://review.openstack.org/#/c/199345/16/horizon/static/framework/util
/tech-debt/helper-functions.js

the functions humanize and truncate are only used in horizon.quota.js
and should be located there.

Also, no code uses capitalize, which should be deleted.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480106

Title:
  Move humanize and truncate functions to horizon.quota.js

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As commented on the review for

  https://review.openstack.org/#/c/199345

  here

  https://review.openstack.org/#/c/199345/16/horizon/static/framework/util
  /tech-debt/helper-functions.js

  the functions humanize and truncate are only used in horizon.quota.js
  and should be located there.

  Also, no code uses capitalize, which should be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480106/+subscriptions



[Yahoo-eng-team] [Bug 1480119] [NEW] Replace tearDown with addCleanup in unit tests

2015-07-31 Thread Dave Chen
Public bug reported:

tearDown should be replaced by addCleanup in the unit tests to avoid
stale state if setUp fails or if the tearDown method itself fails.

There is a blueprint in the cinder project; its rationale is copied here for 
reference:
The Infra team has indicated that tearDown methods should be replaced with 
addCleanup in unit tests.
The reason is that all addCleanup methods will be executed even if one of them 
fails, while a failure in a tearDown method can leave the rest of the tearDown 
un-executed, which can leave stale state lying around.

Moreover, tearDown methods won't run if an exception is raised in the setUp
method, while addCleanup callbacks will still run in that case.

So, we should replace tearDown with addCleanup methods.

Since the tearDown method is not widely used in the keystone sub-project,
a bug is filed to track the change.


The link of the reference: 
https://blueprints.launchpad.net/cinder/+spec/replace-teardown-with-addcleanup

** Affects: keystone
 Importance: Undecided
 Assignee: Dave Chen (wei-d-chen)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Dave Chen (wei-d-chen)

** Description changed:

  tearDown should be replace by addCleanup  in the unit tests to avoid
  stale state if setUp fails or any failure in tearDown method.
  
  There is a bp in cinder project, just copy them here for reference,
  Infra team has indicated that tearDown methods should be replaced with 
addCleanup in unit tests.
  The reason is that all addCleanup methods will be executed even if one of 
them fails, while a failure in tearDown method can leave the rest of the 
tearDown un-executed, which can leave stale state laying around.
  
  Moreover, tearDown methods won't run if an exception raises in setUp
  method, while addCleanup will run in such case.
  
  So, we should replace tearDown with addCleanup methods.
  
+ Since there tearDown method is not used widely in keystone sub-project,
+ so just file a bug to track the change.
  
- Since there tearDown method is not used widely in keystone sub-project, so 
just file a bug to track the change.
+ 
+ The link of the reference: 
https://blueprints.launchpad.net/cinder/+spec/replace-teardown-with-addcleanup

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480119

Title:
  Replace tearDown with addCleanup in unit tests

Status in Keystone:
  In Progress

Bug description:
  tearDown should be replaced by addCleanup in the unit tests to avoid
  stale state if setUp fails or if the tearDown method itself fails.

  There is a blueprint in the cinder project; its rationale is copied here for 
reference:
  The Infra team has indicated that tearDown methods should be replaced with 
addCleanup in unit tests.
  The reason is that all addCleanup methods will be executed even if one of 
them fails, while a failure in a tearDown method can leave the rest of the 
tearDown un-executed, which can leave stale state lying around.

  Moreover, tearDown methods won't run if an exception is raised in the setUp
  method, while addCleanup callbacks will still run in that case.

  So, we should replace tearDown with addCleanup methods.

  Since the tearDown method is not widely used in the keystone sub-
  project, a bug is filed to track the change.

  
  The link of the reference: 
https://blueprints.launchpad.net/cinder/+spec/replace-teardown-with-addcleanup

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480119/+subscriptions



[Yahoo-eng-team] [Bug 1480127] [NEW] Display the primary project name of the user in user table

2015-07-31 Thread qiaomin032
Public bug reported:

In the users panel, it would make more sense for the table to display the
user's primary project name.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480127

Title:
  Display the primary project name of the user in user table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the users panel, it would make more sense for the table to display
  the user's primary project name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480127/+subscriptions



[Yahoo-eng-team] [Bug 1480129] [NEW] nova rbd driver features are hard-coded, it should be readable from ceph.conf

2015-07-31 Thread Vikhyat Umrao
Public bug reported:

In the nova rbd driver, rbd features are hard-coded.

rbd.RBD().clone(src_client.ioctx,
 image.encode('utf-8'),
 snapshot.encode('utf-8'),
 dest_client.ioctx,
 dest_name,
 features=rbd.RBD_FEATURE_LAYERING)


As the code above shows, RBD_FEATURE_LAYERING is used directly,
which restricts users to that single hard-coded feature.

The fix should allow users to opt in to upcoming features that have not
yet become default: users specify features in ceph.conf, and nova reads
the feature information from ceph.conf.

The fix should be something like:

Read rbd_default_features from ceph.conf for rbd feature configuration,
falling back to layering if nothing is found.

** Affects: nova
 Importance: Undecided
 Assignee: Vikhyat Umrao (vumrao)
 Status: In Progress


** Tags: ceph

** Changed in: nova
 Assignee: (unassigned) => Vikhyat Umrao (vumrao)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480129

Title:
  nova rbd driver features are hard-coded, it should be readable from
  ceph.conf

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In the nova rbd driver, rbd features are hard-coded.

  rbd.RBD().clone(src_client.ioctx,
   image.encode('utf-8'),
   snapshot.encode('utf-8'),
   dest_client.ioctx,
   dest_name,
   features=rbd.RBD_FEATURE_LAYERING)

  
  As the code above shows, RBD_FEATURE_LAYERING is used directly,
  which restricts users to that single hard-coded feature.

  The fix should allow users to opt in to upcoming features that have
  not yet become default: users specify features in ceph.conf, and nova
  reads the feature information from ceph.conf.

  The fix should be something like:

  Read rbd_default_features from ceph.conf for rbd feature configuration,
  falling back to layering if nothing is found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480129/+subscriptions



[Yahoo-eng-team] [Bug 1480131] [NEW] Volume_Attachment_ID uses Volume_ID

2015-07-31 Thread Maurice Schreiber
Public bug reported:

Version: Kilo Stable

Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
I receive the volume_id instead of the volume_attachment_id.

Example:

curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1/os-volume_attachments

{"volumeAttachments": [{"device": "/dev/vdb", "serverId":
"56293904-9384-48f8-9329-c961056583f1", "id":
"a75bec42-77b5-42ff-90e5-e505af14b84a", "volumeId":
"a75bec42-77b5-42ff-90e5-e505af14b84a"}]}


Having a look at the database directly, I see the real volume_attachment_id:

select (id, volume_id, instance_uuid) from volume_attachment where
volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

(9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)


Cinder API gets it right, though.


Further Impact:
Horizon uses the returned volume_attachment_id to query  for volume_details.
That is wrong and only works now because of the broken nova behaviour.
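With the IDs from this report, the expected mapping is small; a hypothetical helper (not nova's actual code) would look like:

```python
# Hypothetical sketch of the expected behaviour: the "id" field of an
# os-volume_attachments entry should carry the attachment's own id,
# not the volume id.
def attachment_to_api_dict(row, device):
    return {
        "id": row["id"],                  # the attachment id from the DB row
        "volumeId": row["volume_id"],     # the volume id, a separate value
        "serverId": row["instance_uuid"],
        "device": device,
    }
```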

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480131

Title:
  Volume_Attachment_ID uses Volume_ID

Status in OpenStack Compute (nova):
  New

Bug description:
  Version: Kilo Stable

  Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
  I receive the volume_id instead of the volume_attachment_id.

  Example:

  curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
  https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1/os-volume_attachments

  {"volumeAttachments": [{"device": "/dev/vdb", "serverId":
  "56293904-9384-48f8-9329-c961056583f1", "id":
  "a75bec42-77b5-42ff-90e5-e505af14b84a", "volumeId":
  "a75bec42-77b5-42ff-90e5-e505af14b84a"}]}

  
  Having a look at the database directly, I see the real volume_attachment_id:

  select (id, volume_id, instance_uuid) from volume_attachment where
  volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

  (9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
  90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)

  
  Cinder API gets it right, though.

  
  Further Impact:
  Horizon uses the returned volume_attachment_id to query  for volume_details.
  That is wrong and only works now because of the broken nova behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480131/+subscriptions



[Yahoo-eng-team] [Bug 1480131] Re: Volume_Attachment_ID uses Volume_ID

2015-07-31 Thread Maurice Schreiber
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480131

Title:
  Volume_Attachment_ID uses Volume_ID

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Version: Kilo Stable

  Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
  I receive the volume_id instead of the volume_attachment_id.

  Example:

  curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
  https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1/os-volume_attachments

  {"volumeAttachments": [{"device": "/dev/vdb", "serverId":
  "56293904-9384-48f8-9329-c961056583f1", "id":
  "a75bec42-77b5-42ff-90e5-e505af14b84a", "volumeId":
  "a75bec42-77b5-42ff-90e5-e505af14b84a"}]}

  
  Having a look at the database directly, I see the real volume_attachment_id:

  select (id, volume_id, instance_uuid) from volume_attachment where
  volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

  (9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
  90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)

  
  Cinder API gets it right, though.

  
  Further Impact:
  Horizon uses the returned volume_attachment_id to query  for volume_details.
  That is wrong and only works now because of the broken nova behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480131/+subscriptions



[Yahoo-eng-team] [Bug 1473944] Re: login instance by vnc failled

2015-07-31 Thread Markus Zoeller
@gs-opencos-zte:

Sounds like an issue with facter from PuppetLabs which configures
OpenStack. Their issue tracker is at [1] and not at Launchpad, so I
cannot add it as affected project. OpenStack doesn't have control over
facter. I think it makes sense to open an issue at [1] and close this
one as Invalid. If you think this is wrong and OpenStack has to fix
something, reopen this bug by setting it to New and add an
explanation.

[1]
https://tickets.puppetlabs.com/browse/FACT/?selectedTab=com.atlassian.jira.jira-projects-plugin:issues-panel

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473944

Title:
  login instance by vnc failled

Status in OpenStack Compute (nova):
  Invalid

Bug description:
When I install OpenStack Kilo with facter 2.4.4, logging in to my instance 
via VNC fails, because nova.conf contains the configuration 
vncserver_proxyclient_address=SBCJ3TFG; SBCJ3TFG is my host name, but 
pinging SBCJ3TFG fails.
[root@SBCJ3TFG manifests]# facter|grep fqdn
fqdn => SBCJ3TFG

   I found the code in nova_compute.pp as follows:

  if ($::fqdn == '' or $::fqdn =~ /localhost/) {
    # For cases where FQDNs have not been correctly set
    $vncproxy_server = choose_my_ip(hiera('HOST_LIST'))
  } else {
    $vncproxy_server = $::fqdn
  }

The intent described in the comment is not fully realized by the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1473944/+subscriptions



[Yahoo-eng-team] [Bug 1480264] [NEW] Error message indents are different on login page

2015-07-31 Thread Timur Sufiev
Public bug reported:

When 
1) the session expires => the user is redirected to the login page, where
2) they provide incorrect credentials,
3) they observe that the indents differ between error messages (see 
screenshot).

** Affects: horizon
 Importance: Undecided
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress


** Tags: ux

** Tags added: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480264

Title:
  Error message indents are different on login page

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When 
  1) the session expires => the user is redirected to the login page, where
  2) they provide incorrect credentials,
  3) they observe that the indents differ between error messages (see 
screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480264/+subscriptions



[Yahoo-eng-team] [Bug 1480480] [NEW] keystone v3 example policy file should allow domain admin to get it's current domain

2015-07-31 Thread Dan Nguyen
Public bug reported:

The example keystone v3 policy file should allow a domain admin to get
its own domain.

https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L32


-"identity:get_domain": "rule:cloud_admin",
+"identity:get_domain": "rule:cloud_admin or rule:admin_and_matching_domain_id",


From horizon this will give the Domain Admin a read-only view of the Domain 
containing the following data:

Name | Description | Domain ID | Enabled
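Applied to policy.v3cloudsample.json, the resulting fragment would read roughly as below (assuming the admin_and_matching_domain_id helper rule is defined as in the sample file):

```json
{
    "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
    "identity:get_domain": "rule:cloud_admin or rule:admin_and_matching_domain_id"
}
```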

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1480480

Title:
  keystone v3 example policy file should allow domain admin to  get it's
  current domain

Status in Keystone:
  New

Bug description:
  The example keystone v3 policy file should allow a domain admin to get
  its own domain.

  
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L32

  
  -"identity:get_domain": "rule:cloud_admin",
  +"identity:get_domain": "rule:cloud_admin or rule:admin_and_matching_domain_id",

  
  From horizon this will give the Domain Admin a read-only view of the Domain 
containing the following data:

  Name | Description | Domain ID | Enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1480480/+subscriptions



[Yahoo-eng-team] [Bug 1475831] Re: injected_file_content_bytes should be changed to injected-file-size

2015-07-31 Thread Nikola Đipanov
I think Alex was saying that this needs to be fixed in the
openstack-client, not Nova client. Nova client does the right thing for
what the server expects; it's the unified client that gets it wrong.

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475831

Title:
  injected_file_content_bytes should be changed to injected-file-size

Status in OpenStack Compute (nova):
  Invalid
Status in python-openstackclient:
  New

Bug description:
  In nova and novaclient, injected_file_content_bytes should be changed
  to injected_file_size.

  Because

  (1)
  nova/quota.py
  nova/compute/api.py

  please use 'grep -r injected_file_content_bytes' to find all occurrences

  (2)
  novaclient/v2/shell.py

  3877 _quota_resources = ['instances', 'cores', 'ram',
  3878 'floating_ips', 'fixed_ips', 'metadata_items',
  3879 'injected_files', 'injected_file_content_bytes',
  3880 'injected_file_path_bytes', 'key_pairs',
  3881 'security_groups', 'security_group_rules',
  3882 'server_groups', 'server_group_members']

  (3)
  python-openstackclient/openstackclient/common/quota.py

   30 COMPUTE_QUOTAS = {
   31 'cores': 'cores',
   32 'fixed_ips': 'fixed-ips',
   33 'floating_ips': 'floating-ips',
   34 'injected_file_content_bytes': 'injected-file-size',
   35 'injected_file_path_bytes': 'injected-path-size',
   36 'injected_files': 'injected-files',
   37 'instances': 'instances',
   38 'key_pairs': 'key-pairs',
   39 'metadata_items': 'properties',
   40 'ram': 'ram',
   41 'security_group_rules': 'secgroup-rules',
   42 'security_groups': 'secgroups',
   43 }

  (4).
  
http://docs.openstack.org/developer/python-openstackclient/command-objects/quota.html

  os quota set
  # Compute settings
  [--cores num-cores]
  [--fixed-ips num-fixed-ips]
  [--floating-ips num-floating-ips]
  [--injected-file-size injected-file-bytes]
  [--injected-files num-injected-files]
  [--instances num-instances]
  [--key-pairs num-key-pairs]
  [--properties num-properties]
  [--ram ram-mb]

  # Volume settings
  [--gigabytes new-gigabytes]
  [--snapshots new-snapshots]
  [--volumes new-volumes]
  [--volume-type volume-type]

  project

  so when you use
  stack@openstack:~$ openstack quota set --injected-file-size 11 testproject_dx
  No quotas updated
  stack@openstack:~$

  If this bug is solved, it, together with the fix to
  https://bugs.launchpad.net/keystone/+bug/1420104, resolves both issues.

  So the bug is related to nova and novaclient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475831/+subscriptions



[Yahoo-eng-team] [Bug 1480156] [NEW] Image NotFound and NoUniqueMatch raises same CommandError in openstackclient part

2015-07-31 Thread Marek Aufart
Public bug reported:

Exceptions are not structured correctly (or at least not in a friendly
way) when using the find_resource method from the openstackclient API
bundled with python-glanceclient.

A CommandError exception is raised for both NotFound and NoUniqueMatch.
The error message is different, which is good, but it would be better to
raise a more descriptive exception too.

Related code https://github.com/openstack/python-
glanceclient/blob/master/glanceclient/openstack/common/apiclient/utils.py#L72-L100
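A friendlier contract could look roughly like this (hypothetical exception classes and matching logic, not the actual apiclient code):

```python
# Hypothetical sketch: raise distinct exception types for the two failure
# modes so callers can handle "nothing matched" and "too many matched"
# separately instead of parsing a generic CommandError message.
class NotFound(Exception):
    pass


class NoUniqueMatch(Exception):
    pass


def find_resource(resources, name_or_id):
    matches = [r for r in resources
               if name_or_id in (r.get("name"), r.get("id"))]
    if not matches:
        raise NotFound("No resource with a name or ID of %r exists."
                       % name_or_id)
    if len(matches) > 1:
        raise NoUniqueMatch("More than one resource matches %r; "
                            "use an ID to be more specific." % name_or_id)
    return matches[0]
```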

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480156

Title:
  Image NotFound and NoUniqueMatch raises same CommandError in
  openstackclient part

Status in Glance:
  New

Bug description:
  Exceptions are not structured correctly (or at least not in a friendly
  way) when using the find_resource method from the openstackclient API
  bundled with python-glanceclient.

  A CommandError exception is raised for both NotFound and NoUniqueMatch.
  The error message is different, which is good, but it would be better
  to raise a more descriptive exception too.

  Related code https://github.com/openstack/python-
  
glanceclient/blob/master/glanceclient/openstack/common/apiclient/utils.py#L72-L100

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480156/+subscriptions



[Yahoo-eng-team] [Bug 1480161] [NEW] in the dir /var/log/qemu files too much to create vm failed

2015-07-31 Thread hiyonger-ZTE_TECS
Public bug reported:

After creating and destroying VMs about 864112 times, there are too many 
log files, and no more VMs can be created.
The libvirt error log:
2015-07-23 16:25:26.670+: 2280: error : qemuProcessWaitForMonitor:1915 : 
internal error: process exited while connecting to monitor: 
/var/log/qemu/d142f7e7-6ebb-4343-bd3c-d69c6d4e3627: Permission denied

-rw-r--r-- 1 root root 115 Jul 31 16:15 96dc7a02-a91c-4c46-b225-7f1c0cac8197
[root@slot13 qemu]# ll | wc
  864112  7776912 66535835
[root@slot13 qemu]# pwd
/var/log/qemu
[root@slot13 qemu]#

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480161

Title:
  in the dir /var/log/qemu files too much  to create vm failed

Status in OpenStack Compute (nova):
  New

Bug description:
  After creating and destroying VMs about 864112 times, there are too many 
log files, and no more VMs can be created.
  The libvirt error log:
  2015-07-23 16:25:26.670+: 2280: error : qemuProcessWaitForMonitor:1915 : 
internal error: process exited while connecting to monitor: 
/var/log/qemu/d142f7e7-6ebb-4343-bd3c-d69c6d4e3627: Permission denied

  -rw-r--r-- 1 root root 115 Jul 31 16:15 96dc7a02-a91c-4c46-b225-7f1c0cac8197
  [root@slot13 qemu]# ll | wc
864112  7776912 66535835
  [root@slot13 qemu]# pwd
  /var/log/qemu
  [root@slot13 qemu]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480161/+subscriptions



[Yahoo-eng-team] [Bug 1480196] [NEW] Request-id is not getting returned if glance throws 500 error

2015-07-31 Thread Abhijeet Malawade
Public bug reported:

If glance throws an Internal Server Error (500) for some reason,
the 'request-id' is not returned in the response headers.

The request-id is required to analyse logs effectively on failure, so it
should be returned in the headers.

For ex. -

image-create api returns 500 error if property name exceeds 255 characters
(fix for this issue is in progress : https://review.openstack.org/#/c/203948/)

curl command:

$ curl -g -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-
meta-container_format: ami' -H 'x-image-meta-property-
:
jskg' -H 'Accept: */*' -H 'X-Auth-Token:
b94bd7b3a0fb4fada73fe170fe7d49cb' -H 'Connection: keep-alive' -H 'x
-image-meta-is_public: None' -H 'User-Agent: python-glanceclient' -H
'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format:
ami' http://10.69.4.173:9292/v1/images

HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 0
Date: Fri, 31 Jul 2015 08:27:31 GMT
Connection: close

Here request-id is not part of response header.
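One way to guarantee the header even on failures is a thin WSGI wrapper around the application; the sketch below is a hedged illustration, not glance's actual middleware:

```python
# Hypothetical sketch: ensure X-Openstack-Request-Id is attached to every
# response, even when the wrapped application fails with an unhandled error.
import uuid


def request_id_middleware(app):
    def wrapper(environ, start_response):
        req_id = "req-" + str(uuid.uuid4())
        environ["openstack.request_id"] = req_id

        def start_with_req_id(status, headers, exc_info=None):
            headers = list(headers) + [("X-Openstack-Request-Id", req_id)]
            return start_response(status, headers, exc_info)

        try:
            return app(environ, start_with_req_id)
        except Exception:
            # Unhandled failure: still emit the header along with the 500.
            start_with_req_id("500 Internal Server Error",
                              [("Content-Type", "text/plain")])
            return [b""]
    return wrapper
```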

** Affects: glance
 Importance: Undecided
 Assignee: Abhijeet Malawade (abhijeet-malawade)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Abhijeet Malawade (abhijeet-malawade)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480196

Title:
  Request-id is not getting returned if glance throws 500 error

Status in Glance:
  New

Bug description:
  If glance throws an Internal Server Error (500) for some reason,
  the 'request-id' is not returned in the response headers.

  The request-id is required to analyse logs effectively on failure, so it
  should be returned in the headers.

  For ex. -

  image-create api returns 500 error if property name exceeds 255 characters
  (fix for this issue is in progress : https://review.openstack.org/#/c/203948/)

  curl command:

  $ curl -g -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-
  meta-container_format: ami' -H 'x-image-meta-property-
  
:
  jskg' -H 'Accept: */*' -H 'X-Auth-Token:
  b94bd7b3a0fb4fada73fe170fe7d49cb' -H 'Connection: keep-alive' -H 'x
  -image-meta-is_public: None' -H 'User-Agent: python-glanceclient' -H
  'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format:
  ami' http://10.69.4.173:9292/v1/images

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Fri, 31 Jul 2015 08:27:31 GMT
  Connection: close

  Here request-id is not part of response header.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480196/+subscriptions



[Yahoo-eng-team] [Bug 1480191] [NEW] User can send 'request-id' from headers to glance api

2015-07-31 Thread Abhijeet Malawade
Public bug reported:

A user can send an 'X-Openstack-Request-Id' header while calling any glance 
api.
Glance uses this 'X-Openstack-Request-Id' from the headers for logging and 
also adds the same request-id to the response headers.

A user can send any value (e.g. a long string) as the 'X-Openstack-Request-Id' 
header to the glance service;
because of this, the log file can get filled with invalid (or long) 
request-ids.

IMO glance should not take the 'request-id' sent by the user; it should
always create its own (valid) 'request-id'.


1. curl command to send 'X-Openstack-Request-Id' header image-list api:

$ curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*'
-H 'Connection: keep-alive' -H 'X-Auth-Token:
63282e92e8e64be2a89587cfaada3554' -H 'X-Openstack-Request-Id: testing--
123456'
http://10.69.4.173:9292/v2/images?limit=1

HTTP/1.1 200 OK
Content-Length: 856
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-testing--123456
Date: Fri, 31 Jul 2015 07:13:39 GMT
Connection: keep-alive

{"images": [{"status": "active", "name": "cirros-0.3.4-x86_64-uec",
"tags": [], "kernel_id": "a03839e1-95db-459c-97d0-711daab0",
"container_format": "ami", "created_at": "2015-07-31T07:01:03Z",
"ramdisk_id": "90c59147-1afe-4ede-b1da-435ab1ef98f6", "disk_format":
"ami", "updated_at": "2015-07-31T07:01:04Z", "visibility": "public",
"self": "/v2/images/26b712f3-22a9-45fb-aa8f-f9851d55e71d", "min_disk":
0, "protected": false, "id": "26b712f3-22a9-45fb-aa8f-f9851d55e71d",
"size": 25165824, "file": "/v2/images/26b712f3-22a9-45fb-aa8f-
f9851d55e71d/file", "checksum": "eb9139e4942121f22bbc2afc0400b2a4",
"owner": "632960b4c18c4257bb404d0047be922c", "virtual_size": null,
"min_ram": 0, "schema": "/v2/schemas/image"}], "next":
"/v2/images?marker=26b712f3-22a9-45fb-aa8f-f9851d55e71d&limit=1",
"schema": "/v2/schemas/images", "first": "/v2/images?limit=1"}

2. glance-api service logs:

2015-07-31 00:13:39.612 DEBUG oslo_policy.policy [testing--123456 bd66b0b7b1c04d738cd5d79c5619fd2d 0d0dc11c6b1649068eb4c4068791a602] Reloaded policy file: /etc/glance/policy.json from (pid=27225) _load_policy_file /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:436
2015-07-31 00:13:39.613 DEBUG oslo_policy.policy [testing--123456 bd66b0b7b1c04d738cd5d79c5619fd2d 0d0dc11c6b1649068eb4c4068791a602] Reloaded policy file: /etc/glance/policy.json from (pid=27225) _load_policy_file /usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:436
2015-07-31 00:13:39.652 INFO eventlet.wsgi.server [testing--123456 bd66b0b7b1c04d738cd5d79c5619fd2d 0d0dc11c6b1649068eb4c4068791a602] 10.69.4.173 - - [31/Jul/2015 00:13:39] "GET /v2/images?limit=1 HTTP/1.1" 200 1161 0.042854

** Affects: glance
 Importance: Undecided
 Assignee: Abhijeet Malawade (abhijeet-malawade)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Abhijeet Malawade (abhijeet-malawade)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480191

Title:
  User can send 'request-id' from headers to glance api

Status in Glance:
  New

Bug description:
  A user can send an 'X-Openstack-Request-Id' header while calling any glance
  API. Glance uses this 'X-Openstack-Request-Id' value from the headers for
  logging and also echoes the same request-id in the response headers.

  A user can send any value (e.g. an arbitrarily long string) as the
  'X-Openstack-Request-Id' header to the glance service, so the log file can
  get filled with invalid (or overly long) request-ids.

  IMO glance should not take the 'request-id' sent by the user; it should
  always create its own (valid) 'request-id'.

  
  1. curl command to send 'X-Openstack-Request-Id' header image-list api:

  $ curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' \
    -H 'Connection: keep-alive' \
    -H 'X-Auth-Token: 63282e92e8e64be2a89587cfaada3554' \
    -H 'X-Openstack-Request-Id: testing--123456' \
    http://10.69.4.173:9292/v2/images?limit=1

  HTTP/1.1 200 OK
  Content-Length: 856
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-testing--123456
  Date: Fri, 31 Jul 2015 07:13:39 GMT
  Connection: keep-alive

  {"images": [{"status": "active", "name": "cirros-0.3.4-x86_64-uec",
  "tags": [], "kernel_id":

[Yahoo-eng-team] [Bug 1480204] [NEW] cancel button is missing in security group's add rule modal

2015-07-31 Thread Masco Kaliyamoorthy
Public bug reported:

In security group's add rule modal, cancel button is missing.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480204

Title:
  cancel button is missing in security group's add rule modal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In security group's add rule modal, cancel button is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480226] [NEW] SAWarning: The IN-predicate on tags.tag was invoked with an empty sequence

2015-07-31 Thread Sergey Nikitin
Public bug reported:

When the 'to_delete' list of instance tags in the db method
instance_tag_set() is empty, warnings are printed in the nova logs:

SAWarning: The IN-predicate on tags.tag was invoked with an empty
sequence. This results in a contradiction, which nonetheless can be
expensive to evaluate. Consider alternative strategies for improved
performance.

The fix is to not query the DB in that case.
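The guard can be sketched like this (hypothetical code, not Nova's actual implementation; the function name, table name, and `execute` callback are assumptions) — return early before building the IN clause when the list is empty:

```python
# Hypothetical sketch of the fix: skip the DELETE ... WHERE tag IN (...)
# query entirely when there is nothing to delete, which also avoids the
# empty IN-predicate warning from SQLAlchemy.
def delete_tags(execute, instance_uuid, to_delete):
    if not to_delete:
        return 0  # nothing to do; don't emit an empty IN-predicate
    placeholders = ", ".join("?" for _ in to_delete)
    sql = ("DELETE FROM tags WHERE resource_id = ? AND tag IN (%s)"
           % placeholders)
    return execute(sql, [instance_uuid] + list(to_delete))
```

The early return means the database round-trip (and the contradictory `WHERE tag IN ()` predicate) never happens for an empty list.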

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480226

Title:
  SAWarning: The IN-predicate on tags.tag was invoked with an empty
  sequence

Status in OpenStack Compute (nova):
  New

Bug description:
  When the 'to_delete' list of instance tags in the db method
  instance_tag_set() is empty, warnings are printed in the nova logs:

  SAWarning: The IN-predicate on tags.tag was invoked with an empty
  sequence. This results in a contradiction, which nonetheless can be
  expensive to evaluate. Consider alternative strategies for improved
  performance.

  The fix is to not query the DB in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480226/+subscriptions



[Yahoo-eng-team] [Bug 1480222] [NEW] hw:mem_page_size=2MB|1GB unsupported

2015-07-31 Thread Emma Foley
Public bug reported:

The spec "Virt driver large pages allocation for guest RAM" (
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
/virt-driver-large-pages.html ) was marked as complete for Kilo.

However, the option to use the standard hugepage sizes of 2MB and 1GB is
not supported.

The flavor extra spec key hw:mem_page_size=2MB|1GB is not supported.
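Supporting these values amounts to normalising the suffixed strings into the kilobyte page sizes the virt driver works with. A minimal sketch of that conversion (illustrative only, not Nova's actual implementation; the function name and unit table are assumptions):

```python
# Hypothetical sketch: translate values such as "2MB" or "1GB" from
# hw:mem_page_size into page sizes in KB. Bare numbers are assumed to
# already be in KB, matching the spec's plain numeric form.
_UNITS = {"KB": 1, "MB": 1024, "GB": 1024 * 1024}

def page_size_kb(value):
    value = value.strip().upper()
    for suffix, factor in _UNITS.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)
```

For example, "2MB" would normalise to 2048 KB and "1GB" to 1048576 KB, matching the bare numeric forms the key already accepts.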

** Affects: nova
 Importance: Undecided
 Assignee: Emma Foley (emma-l-foley)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Emma Foley (emma-l-foley)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480222

Title:
  hw:mem_page_size=2MB|1GB unsupported

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The spec "Virt driver large pages allocation for guest RAM" (
  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
  /virt-driver-large-pages.html ) was marked as complete for Kilo.

  However, the option to use the standard hugepage sizes of 2MB and 1GB is
  not supported.

  The flavor extra spec key hw:mem_page_size=2MB|1GB is not supported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480222/+subscriptions
