[Yahoo-eng-team] [Bug 1444798] Re: Fail to create group in multiple domains environment (Horizon kilo-3)

2015-04-15 Thread Lin Hua Cheng
*** This bug is a duplicate of bug 1437479 ***
https://bugs.launchpad.net/bugs/1437479

** This bug has been marked a duplicate of bug 1437479
   Groups create has a bad request url

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444798

Title:
  Fail to create group in multiple domains environment (Horizon kilo-3)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After activating the multiple-domains option in a Horizon kilo-3
  environment, the "Create Group" dialog does not work properly.
  Problems:
  1. No input field verification
  2. Fails to create a group

  Note: please see the attached local_settings.py for configuration details.

  Perform the following actions to reproduce problem No. 1:
  1. Log in as admin
  2. Display the Identity - Groups panel
  3. Click the "Create Group" button
  4. After the dialog is displayed, click the "Create Group" button.
  5. The dialog closes prematurely without performing input field
  verification, and an error message is displayed.
  ("Create Group Dialog - Input Field Verification Not Working.jpg")

  Perform the following actions to reproduce problem No. 2:
  1. Log in as admin
  2. Display the Identity - Groups panel
  3. Click the "Create Group" button
  4. After the dialog is displayed, enter a proper name for the new group
  and click the "Create Group" button.
  5. The dialog closes but the new group is not created. Moreover, an error
  message is displayed.
  ("Create Group Dialog - Failed to create.jpg")

  Please refer to "Proposed Fix.jpg" for the proposed solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444798] [NEW] Fail to create group in multiple domains environment (Horizon kilo-3)

2015-04-15 Thread Dixon Siu
Public bug reported:

After activating the multiple-domains option in a Horizon kilo-3 environment,
the "Create Group" dialog does not work properly.
Problems:
1. No input field verification
2. Fails to create a group

Note: please see the attached local_settings.py for configuration details.

Perform the following actions to reproduce problem No. 1:
1. Log in as admin
2. Display the Identity - Groups panel
3. Click the "Create Group" button
4. After the dialog is displayed, click the "Create Group" button.
5. The dialog closes prematurely without performing input field verification,
and an error message is displayed.
("Create Group Dialog - Input Field Verification Not Working.jpg")

Perform the following actions to reproduce problem No. 2:
1. Log in as admin
2. Display the Identity - Groups panel
3. Click the "Create Group" button
4. After the dialog is displayed, enter a proper name for the new group and
click the "Create Group" button.
5. The dialog closes but the new group is not created. Moreover, an error
message is displayed.
("Create Group Dialog - Failed to create.jpg")

Please refer to "Proposed Fix.jpg" for the proposed solution.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Configuration file, screen capture of the bug, proposed 
fix"
   
https://bugs.launchpad.net/bugs/1444798/+attachment/4376795/+files/InvestigationResults.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444798

Title:
  Fail to create group in multiple domains environment (Horizon kilo-3)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After activating the multiple-domains option in a Horizon kilo-3
  environment, the "Create Group" dialog does not work properly.
  Problems:
  1. No input field verification
  2. Fails to create a group

  Note: please see the attached local_settings.py for configuration details.

  Perform the following actions to reproduce problem No. 1:
  1. Log in as admin
  2. Display the Identity - Groups panel
  3. Click the "Create Group" button
  4. After the dialog is displayed, click the "Create Group" button.
  5. The dialog closes prematurely without performing input field
  verification, and an error message is displayed.
  ("Create Group Dialog - Input Field Verification Not Working.jpg")

  Perform the following actions to reproduce problem No. 2:
  1. Log in as admin
  2. Display the Identity - Groups panel
  3. Click the "Create Group" button
  4. After the dialog is displayed, enter a proper name for the new group
  and click the "Create Group" button.
  5. The dialog closes but the new group is not created. Moreover, an error
  message is displayed.
  ("Create Group Dialog - Failed to create.jpg")

  Please refer to "Proposed Fix.jpg" for the proposed solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444797] [NEW] ovs_lib: get_port_tag_dict race with port removal

2015-04-15 Thread YAMAMOTO Takashi
Public bug reported:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlVuYWJsZSB0byBleGVjdXRlIFwiIEFORCBtZXNzYWdlOiBcIi0tY29sdW1ucz1uYW1lLHRhZ1wiIEFORCBtZXNzYWdlOiBcIm92cy12c2N0bDogbm8gcm93XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTE1OTgwNzI3OX0=

http://logs.openstack.org/45/160245/42/check/check-tempest-dsvm-neutron-dvr/09be830/logs/screen-q-agt.txt.gz
2015-04-15 10:05:53.275 DEBUG neutron.agent.linux.utils [req-3c5b23d8-3c6f-4145-bdc2-c21c106b4192 None None] 
Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 'list-ports', 'br-int']
Exit code: 0
Stdin:
Stdout: patch-tun\nqr-5a91d525-f6\nqr-a84affc0-ff\nqvo34a97bc9-9a\nsg-1e0fedbc-a9\nsg-29d238f1-d0\nsg-591a0885-ba\nsg-d27f8240-60\ntap2566d7d2-51\ntap3704a993-85\ntapd8d2b093-c5

Stderr:  execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:134
2015-04-15 10:05:53.275 DEBUG neutron.agent.linux.utils [req-3c5b23d8-3c6f-4145-bdc2-c21c106b4192 None None] Running command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=name,tag', 'list', 'Port', 'patch-tun', 'qr-5a91d525-f6', 'qr-a84affc0-ff', 'qvo34a97bc9-9a', 'sg-1e0fedbc-a9', 'sg-29d238f1-d0', 'sg-591a0885-ba', 'sg-d27f8240-60', 'tap2566d7d2-51', 'tap3704a993-85', 'tapd8d2b093-c5'] execute_rootwrap_daemon /opt/stack/new/neutron/neutron/agent/linux/utils.py:100
2015-04-15 10:05:53.276 3624 DEBUG neutron.agent.linux.ovsdb_monitor [-] Output received from ovsdb monitor: {"data":[["b3d74d22-a37d-4cdc-a85c-e14437a46cc8","delete","sg-591a0885-ba",75]],"headings":["row","action","name","ofport"]} _read_stdout /opt/stack/new/neutron/neutron/agent/linux/ovsdb_monitor.py:44
2015-04-15 10:05:53.280 DEBUG neutron.agent.linux.utils [req-3c5b23d8-3c6f-4145-bdc2-c21c106b4192 None None] 
Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=name,tag', 'list', 'Port', u'patch-tun', u'qr-5a91d525-f6', u'qr-a84affc0-ff', u'qvo34a97bc9-9a', u'sg-1e0fedbc-a9', u'sg-29d238f1-d0', u'sg-591a0885-ba', u'sg-d27f8240-60', u'tap2566d7d2-51', u'tap3704a993-85', u'tapd8d2b093-c5']
Exit code: 1
Stdin:
Stdout:
Stderr: ovs-vsctl: no row "sg-591a0885-ba" in table Port
 execute /opt/stack/new/neutron/neutron/agent/linux/utils.py:134
2015-04-15 10:05:53.281 ERROR neutron.agent.ovsdb.impl_vsctl [req-3c5b23d8-3c6f-4145-bdc2-c21c106b4192 None None] Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=name,tag', 'list', 'Port', u'patch-tun', u'qr-5a91d525-f6', u'qr-a84affc0-ff', u'qvo34a97bc9-9a', u'sg-1e0fedbc-a9', u'sg-29d238f1-d0', u'sg-591a0885-ba', u'sg-d27f8240-60', u'tap2566d7d2-51', u'tap3704a993-85', u'tapd8d2b093-c5'].
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl Traceback (most recent call last):
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl   File "/opt/stack/new/neutron/neutron/agent/ovsdb/impl_vsctl.py", line 63, in run_vsctl
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl     log_fail_as_error=False).rstrip()
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl   File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 137, in execute
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl     raise RuntimeError(m)
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl RuntimeError: 
2015-04-15 10:05:53.281 3624 TRACE neutron.agent.ovsdb.impl_vsctl Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=name,tag', 'list', 'Port', u'patch-tun', u'qr-5a91d525-f6', u'qr-a84affc0-ff', u'qvo34a97bc9-9a', u'sg-1e0fedbc-a9', u'sg-29d238f1-d0', u'sg-591a0885-ba', u'sg-d27f8240-60', u'tap2566d7d2-51', u'tap3704a993-85', u'tapd8d2b093-c5']
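
For illustration, a minimal sketch of a tag lookup that tolerates ports
disappearing between the port listing and the tag query (the run_vsctl helper
below is hypothetical, not the actual ovs_lib API):

    # Hedged sketch: query ports one at a time and skip ports that vanish,
    # instead of failing the whole batch on "ovs-vsctl: no row ...".
    def get_port_tag_dict(bridge, run_vsctl):
        tags = {}
        for name in run_vsctl(['list-ports', bridge]).split():
            try:
                out = run_vsctl(['--columns=tag', 'list', 'Port', name])
            except RuntimeError:
                # The port was removed concurrently; ignore it.
                continue
            tags[name] = out.strip()
        return tags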

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444797

Title:
  ovs_lib: get_port_tag_dict race with port removal

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlVuYWJsZSB0byBleGVjdXRlIFwiIEFORCBtZXNzYWdlOiBcIi0tY29sdW1ucz1uYW1lLHRhZ1wiIEFORCBtZXNzYWdlOiBcIm92cy12c2N0bDogbm8gcm93XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyOTE1OTgwNzI3OX0=

  
http://logs.openstack.org/45/160245/42/check/check-tempest-dsvm-neutron-dvr/09be830/logs/screen-q-agt.txt.gz
  2015-04-15 10:05:53.275 DEBUG neutron.agent.linux.utils [req-3c5b23d8-3c6f-4145-bdc2-c21c106b4192 None None] 
  Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 'list-ports', 'br-int']

[Yahoo-eng-team] [Bug 1444765] Re: admin's tenant_id is not the same with load_balancer's tenant_id in tempest tests

2015-04-15 Thread Madhusudhan Kandadai
** Package changed: ifupdown (Ubuntu) => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444765

Title:
  admin's tenant_id is not the same with load_balancer's tenant_id in
  tempest tests

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Here is the scenario:

  This happens only WITH tempest tests, by inheriting the necessary
  class from 'tempest/api/neutron':

  (a) When creating a loadbalancer as a non-admin user, I can see that the
  'tenant_id' is equal to loadbalancer.get('tenant_id'). This sounds
  good to me and requires no attention.

  i.e.,

  credentials = cls.isolated_creds.get_primary_creds()
  mgr = tempest_clients.Manager(credentials=credentials)
  auth_provider = mgr.get_auth_provider(credentials)
  client_args = [auth_provider, 'network', 'regionOne']

  (b) Whereas, when I create a loadbalancer using admin credentials,
  the tenant_id does NOT equal loadbalancer.get('tenant_id'). In general it
  SHOULD be equal.

  i.e,.

  credentials_admin = cls.isolated_creds.get_admin_creds()
  mgr_admin = tempest_clients.Manager(credentials=credentials_admin)
  auth_provider_admin = mgr_admin.get_auth_provider(credentials_admin)
  client_args = [auth_provider_admin, 'network', 'regionOne']

  Not sure why this is happening; the expected behavior is that a user
  (either admin or non-admin) should be able to create a loadbalancer
  for the default tenant, and that 'tenant_id' should be equal to the
  admin's 'tenant_id'. There are other test cases, especially for the
  'admin' role, that succeed and behave properly.
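
  For illustration only, the check the admin test is effectively making looks
  roughly like the sketch below (the create call and its arguments are
  hypothetical, not the exact tempest client code):

      # Hedged sketch of the failing comparison in the admin case.
      load_balancer = load_balancers_client.create_load_balancer(
          vip_subnet_id=subnet_id)  # hypothetical call/arguments
      assert credentials_admin.tenant_id == load_balancer.get('tenant_id'), (
          "admin's tenant_id should match the load balancer's tenant_id")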

  Details about the code can be found at
  
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/base.py

  For the exact testcase:

  (a) For admin_testcase:  see line 55 - 61: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_admin.py
  (b) For non_admin testcase:  see line 259 -266: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_non_admin.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444765] [NEW] admin's tenant_id is not the same with load_balancer's tenant_id in tempest tests

2015-04-15 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Here is the scenario:

This happens only WITH tempest tests, by inheriting the necessary
class from 'tempest/api/neutron':

(a) When creating a loadbalancer as a non-admin user, I can see that the
'tenant_id' is equal to loadbalancer.get('tenant_id'). This sounds good
to me and requires no attention.

i.e.,

credentials = cls.isolated_creds.get_primary_creds()
mgr = tempest_clients.Manager(credentials=credentials)
auth_provider = mgr.get_auth_provider(credentials)
client_args = [auth_provider, 'network', 'regionOne']

(b) Whereas, when I create a loadbalancer using admin credentials, the
tenant_id does NOT equal loadbalancer.get('tenant_id'). In general it
SHOULD be equal.

i.e,.

credentials_admin = cls.isolated_creds.get_admin_creds()
mgr_admin = tempest_clients.Manager(credentials=credentials_admin)
auth_provider_admin = mgr_admin.get_auth_provider(credentials_admin)
client_args = [auth_provider_admin, 'network', 'regionOne']

Not sure why this is happening; the expected behavior is that a user
(either admin or non-admin) should be able to create a loadbalancer for
the default tenant, and that 'tenant_id' should be equal to the admin's
'tenant_id'. There are other test cases, especially for the 'admin' role,
that succeed and behave properly.

Details about the code can be found at
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/base.py

For the exact testcase:

(a) For admin_testcase:  see line 55 - 61: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_admin.py
(b) For non_admin testcase:  see line 259 -266: 
https://review.openstack.org/#/c/171832/7/neutron_lbaas/tests/tempest/v2/api/test_load_balancers_non_admin.py

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
admin's tenant_id is not the same with load_balancer's tenant_id in tempest 
tests
https://bugs.launchpad.net/bugs/1444765
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444776] [NEW] Strongswan vpnaas driver does not support rhel platform

2015-04-15 Thread Wei Hu
Public bug reported:

VPNaaS can already use strongswan as its driver, but only on Ubuntu; it does
not support RHEL or Fedora.
Compared with Ubuntu, RHEL uses a different CLI command and a different
strongswan configuration directory.
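
For illustration, a rough sketch of platform-dependent handling (the RHEL
command name and configuration path below are assumptions, not taken from
the VPNaaS code; verify them against the actual packaging):

    # Hedged sketch: pick the strongswan CLI and config dir per platform.
    import platform

    def strongswan_platform_settings():
        distro = platform.linux_distribution()[0].lower()
        if 'ubuntu' in distro or 'debian' in distro:
            return {'binary': 'ipsec', 'config_dir': '/etc/'}
        # Assumed layout for RHEL/Fedora/CentOS.
        return {'binary': 'strongswan', 'config_dir': '/etc/strongswan/'}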

** Affects: neutron
 Importance: Undecided
 Assignee: Wei Hu (huwei-xtu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Wei Hu (huwei-xtu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444776

Title:
  Strongswan vpnaas driver does not support rhel platform

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  VPNaaS can already use strongswan as its driver, but only on Ubuntu; it
  does not support RHEL or Fedora.
  Compared with Ubuntu, RHEL uses a different CLI command and a different
  strongswan configuration directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444767] [NEW] scrubber edge cases orphan objects and records

2015-04-15 Thread Jesse J. Cook
Public bug reported:

The scrubber can leave orphaned objects and db records in error / edge
cases. This is because of the order in which it updates the DB and object
store. Recommended solution (a rough Python sketch follows):

For each image that has status pending_delete:
    For each image location that has status pending_delete:
        Delete the object in the object store
        If error other than object not found, continue
        Mark image location status as deleted
    If all image locations are deleted, mark image as deleted
    Else if no image locations are marked as pending_delete, change status
    to something else??? # I suppose it's possible an image_location would
    still be active or some other non-deleted status. I don't think we want
    to orphan the image_location by marking the image deleted in this case.
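
For illustration, a rough Python rendering of the ordering recommended above
(all helper names such as db.get_locations and store.delete are hypothetical,
not the actual glance scrubber interfaces):

    # Hedged sketch of the proposed scrub ordering.
    def scrub_image(image, db, store):
        for loc in db.get_locations(image, status='pending_delete'):
            try:
                store.delete(loc.uri)
            except store.NotFound:
                pass                  # already gone: safe to mark deleted
            except Exception:
                continue              # other error: retry on the next run
            db.set_location_status(loc, 'deleted')

        locations = db.get_locations(image)
        if all(l.status == 'deleted' for l in locations):
            db.set_image_status(image, 'deleted')
        elif not any(l.status == 'pending_delete' for l in locations):
            # Some location is still active or in another non-deleted state;
            # don't mark the image deleted and orphan that location.
            db.set_image_status(image, 'error')   # placeholder status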

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1444767

Title:
  scrubber edge cases orphan objects and records

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The scrubber can leave orphaned objects and db records in error / edge
  cases. This is because of the order in which it updates the DB and object
  store. Recommended solution:

  For each image that has status pending_delete:
      For each image location that has status pending_delete:
          Delete the object in the object store
          If error other than object not found, continue
          Mark image location status as deleted
      If all image locations are deleted, mark image as deleted
      Else if no image locations are marked as pending_delete, change status
      to something else??? # I suppose it's possible an image_location would
      still be active or some other non-deleted status. I don't think we want
      to orphan the image_location by marking the image deleted in this case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1444767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444086] Re: cloud-init removes SharedConfig.XML on new instances of Azure

2015-04-15 Thread Scott Moser
** No longer affects: cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1444086

Title:
  cloud-init removes SharedConfig.XML on new instances of Azure

Status in walinuxagent package in Ubuntu:
  Fix Released

Bug description:
  The Azure DS is removing /var/lib/waagent/SharedConfig.xml on first
  boot, causing walinuxagent to fail.

  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Attempting to remove 
/var/lib/waagent/SharedConfig.xml
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[INFO]: removed stale 
file(s) in '/var/lib/waagent': ['/var/lib/waagent/SharedConfig.xml']
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Writing to 
/var/lib/waagent/ovf-env.xml - wb: [384] 1632 bytes
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command hostname 
with allowed return codes [0] (shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['hostname', 'daily-vivid-0414-1ce494fec4'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: pubhname: 
publishing hostname [phostname=ubuntu hostname=daily-vivid-0414-1ce494fec4 
policy=True interface=eth0]
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command ['sh', 
'-xc', 'i=$interface; x=0; ifdown $i || x=$?; ifup $i || x=$?; exit $x'] with 
allowed return codes [0] (shell=False, capture=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: publishing hostname took 
2.119 seconds (2.12)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking agent: 
['service', 'walinuxagent', 'start']
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:47:27 ubuntu [CLOUDINIT] util.py[DEBUG]: waiting for files took 
60.959 seconds
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: Did not find 
files, but going on: {'/var/lib/waagent/SharedConfig.xml'}
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: 
SharedConfig.xml missing, using static instance-id

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/walinuxagent/+bug/1444086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444753] Re: Tenant_Id needs valiation for Neutron-LBaas-API

2015-04-15 Thread Hong Hui Xiao
I happened to do some investigation into this kind of issue.  From the
bug history, this issue will be closed as invalid. tenant_id can only be
passed in by an admin; Neutron assumes the admin knows what he is doing,
and allows the admin to do anything.

Please see the following bugs for reference.

https://bugs.launchpad.net/neutron/+bug/1200585
https://bugs.launchpad.net/neutron/+bug/1185206
https://bugs.launchpad.net/neutron/+bug/1067620
https://bugs.launchpad.net/neutron/+bug/1290408
https://bugs.launchpad.net/neutron/+bug/1338885
https://bugs.launchpad.net/neutron/+bug/1398992
https://bugs.launchpad.net/neutron/+bug/1440705
https://bugs.launchpad.net/neutron/+bug/1441373
https://bugs.launchpad.net/neutron/+bug/1440700

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444753

Title:
  Tenant_Id needs valiation for Neutron-LBaas-API

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Based on the description in the following link
  https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor

  tenant_id: only required if the caller has an admin role and wants to
  create a Health Monitor for another tenant.

  While in the tempest Admin API test, we found that when we pass an
  empty tenant id or an invalid tenant id, we are still able to create
  services like health monitors, etc. We believe that this is a bug;
  since tenant_id is a UUID, it should not be invalid or empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444753] [NEW] Tenant_Id needs valiation for Neutron-LBaas-API

2015-04-15 Thread min wang
Public bug reported:

Based on the description in the following link
https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor

tenant_id: only required if the caller has an admin role and wants to
create a Health Monitor for another tenant.

While in the tempest Admin API test, we found that when we pass an empty
tenant id or an invalid tenant id, we are still able to create
services like health monitors, etc. We believe that this is a bug; since
tenant_id is a UUID, it should not be invalid or empty.
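
For reference, a minimal validation along the lines the report suggests
(a sketch only, not the Neutron-LBaaS API code):

    # Hedged sketch: reject empty or non-UUID tenant_id values.
    import uuid

    def is_valid_tenant_id(tenant_id):
        if not tenant_id:
            return False
        try:
            uuid.UUID(str(tenant_id))  # accepts UUIDs with or without dashes
            return True
        except ValueError:
            return False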

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: bagpipe-l2 => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444753

Title:
  Tenant_Id needs valiation for Neutron-LBaas-API

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Based on the description in the following link
  https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor

  tenant_id: only required if the caller has an admin role and wants to
  create a Health Monitor for another tenant.

  While in the tempest Admin API test, we found that when we pass an
  empty tenant id or an invalid tenant id, we are still able to create
  services like health monitors, etc. We believe that this is a bug;
  since tenant_id is a UUID, it should not be invalid or empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444753] [NEW] Tenant_Id needs valiation for Neutron-LBaas-API

2015-04-15 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Based on the description in the following link
https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Create_a_Health_Monitor

tenant_id: only required if the caller has an admin role and wants to
create a Health Monitor for another tenant.

While in the tempest Admin API test, we found that when we pass an empty
tenant id or an invalid tenant id, we are still able to create
services like health monitors, etc. We believe that this is a bug; since
tenant_id is a UUID, it should not be invalid or empty.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Tenant_Id needs valiation for Neutron-LBaas-API
https://bugs.launchpad.net/bugs/1444753
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444748] [NEW] Remove unused policy rule for trust

2015-04-15 Thread Lin Hua Cheng
Public bug reported:

There are a couple of policy rules for trust that are unused; even if the
user updates the rules, it won't impact the policy check.

We should clean this up to avoid confusion.

This is step #1 in fixing the policy rules for trust. See:
https://bugs.launchpad.net/keystone/+bug/1280084

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444748

Title:
  Remove unused policy rule for trust

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There are a couple of policy rules for trust that are unused; even if the
  user updates the rules, it won't impact the policy check.

  We should clean this up to avoid confusion.

  This is step #1 in fixing the policy rules for trust. See:
  https://bugs.launchpad.net/keystone/+bug/1280084

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444747] [NEW] add rpc version alias for compute task rpc api

2015-04-15 Thread Alex Xu
Public bug reported:

There isn't an RPC version alias for the compute task API. If there is any
change to the compute task API, we need to provide a version cap for it to
enable rolling upgrades.
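
For context, a sketch of what such an alias table plus version cap looks like
(the alias values below are placeholders, not the real nova version history):

    # Hedged sketch of the alias pattern used by nova RPC API clients.
    VERSION_ALIASES = {
        'juno': '1.10',   # placeholder numbers
        'kilo': '1.11',
    }

    def get_version_cap(upgrade_level):
        # An operator can set e.g. upgrade_levels.conductor = "juno" and get
        # the matching numeric cap; a raw version string passes through.
        return VERSION_ALIASES.get(upgrade_level, upgrade_level)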

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444747

Title:
  add rpc version alias for compute task rpc api

Status in OpenStack Compute (Nova):
  New

Bug description:
  There isn't an RPC version alias for the compute task API. If there is
  any change to the compute task API, we need to provide a version cap for
  it to enable rolling upgrades.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444745] [NEW] Add RPC version alias for conductor and scheduler in kilo

2015-04-15 Thread Alex Xu
Public bug reported:

The RPC version aliases need to be updated for the conductor and scheduler.
Those aliases are needed for rolling upgrades.

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New


** Tags: kilo-rc-potential

** Summary changed:

- Add RPC version alias for kilo
+ Add RPC version alias for conductor and scheduler in kilo

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

** Tags added: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444745

Title:
  Add RPC version alias for conductor and scheduler in kilo

Status in OpenStack Compute (Nova):
  New

Bug description:
  The RPC version aliases need to be updated for the conductor and scheduler.
  Those aliases are needed for rolling upgrades.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370226] Re: LibvirtISCSIVolumeDriver cannot find volumes that include pci-* in the /dev/disk/by-path device

2015-04-15 Thread Walt Boring
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370226

Title:
  LibvirtISCSIVolumeDriver cannot find volumes that include pci-* in the
  /dev/disk/by-path device

Status in OpenStack Compute (Nova):
  Fix Released
Status in Volume discovery and local storage management lib:
  Triaged

Bug description:
  I am currently unable to attach iSCSI volumes to our system because
  the path that is expected by the LibvirtISCSIVolumeDriver doesn't
  match what is being created in /dev/disk/by-path:

  2014-09-16 01:33:22.533 24304 DEBUG nova.openstack.common.lockutils 
[req-f466db73-0a7c-4e1f-85ad-473c688d0a68 None] Semaphore / lock released 
"connect_volume" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:328
  2014-09-16 01:33:22.534 24304 ERROR nova.virt.block_device 
[req-f466db73-0a7c-4e1f-85ad-473c688d0a68 None] [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] Driver failed to attach volume 
97e38815-c934-48a7-b343-880c5a9bf4b8 at /dev/vdd
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] Traceback (most recent call last):
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 252, in 
attach
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] device_type=self['device_type'], 
encryption=encryption)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1283, in 
attach_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] conf = 
self._connect_volume(connection_info, disk_info)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1237, in 
_connect_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] return 
driver.connect_volume(connection_info, disk_info)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 
325, in inner
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] return f(*args, **kwargs)
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 295, in 
connect_volume
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] % (host_device))
  2014-09-16 01:33:22.534 24304 TRACE nova.virt.block_device [instance: 
097e5a6a-ed49-4914-a0ed-5d58959594c9] NovaException: iSCSI device not found at 
/dev/disk/by-path/ip-10.90.50.10:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node2-lun-4

  The paths that are being created, however, are  of the following
  format:

  [root@abba-n09 rules.d]# ll /dev/disk/by-path/
  total 0
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-0
 -> ../../sdc
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-1
 -> ../../sdd
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-2
 -> ../../sde
  lrwxrwxrwx. 1 root root  9 Sep 16 10:56 
pci-:0c:00.2-ip-10.90.50.11:3260-iscsi-iqn.1986-03.com.ibm:2145.abbav3700.node1-lun-3
 -> ../../sdf
  lrwxrwxrwx. 1 root root  9 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0 -> 
../../sda
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part1 
-> ../../sda1
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part2 
-> ../../sda2
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part3 
-> ../../sda3
  lrwxrwxrwx. 1 root root 10 Sep 10 18:46 pci-:16:00.0-scsi-0:2:0:0-part4 
-> ../../sda4
  lrwxrwxrwx. 1 root root  9 Sep 10 18:46 pci-:16:00.0-scsi-0:2:1:0 -> 
../../sdb
  [root@abba-n09 rules.d]#

  When the devices are created, the physical location of the HBA is
  included:
  0c:00.2 Mass storage controller: Emulex Corporation OneConnect 10Gb iSCSI 
Initiator (be3) (rev 02)

  Looking at the code, I see that the LibvirtISERVolumeDriver actually
  does the check that accounts for this
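
  For illustration, the kind of glob-based lookup that tolerates the pci-*
  prefix looks roughly like this (a sketch only; the actual driver code may
  differ):

      # Hedged sketch: match both the plain and the pci-*-prefixed names.
      import glob

      def find_iscsi_device(portal, iqn, lun):
          pattern = ('/dev/disk/by-path/*ip-%s-iscsi-%s-lun-%s'
                     % (portal, iqn, lun))
          matches = glob.glob(pattern)
          return matches[0] if matches else None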

[Yahoo-eng-team] [Bug 1444532] Re: nova-scheduler doesnt reconnect to databases when started and database is down

2015-04-15 Thread Alejandro Comisario
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444532

Title:
  nova-scheduler doesnt reconnect to databases when started and database
  is down

Status in OpenStack Compute (Nova):
  New
Status in Ubuntu:
  New

Bug description:
  In the Juno release (Ubuntu packages), when you start nova-scheduler while
  the database is down, the service never reconnects; the stack trace is as
  follows:

  
  AUDIT nova.service [-] Starting scheduler node (version 2014.2.2)
  ERROR nova.openstack.common.threadgroup [-] (OperationalError) (2003, "Can't 
connect to MySQL server on '10.128.30.11' (111)") None None
  TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
  TRACE nova.openstack.common.threadgroup x.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  TRACE nova.openstack.common.threadgroup return self.thread.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, in 
wait
  TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
  TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in 
main
  TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 490, 
in run_service
  TRACE nova.openstack.common.threadgroup service.start()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 169, in start
  TRACE nova.openstack.common.threadgroup self.host, self.binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 161, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup binary=binary, topic=None)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 949, in wrapper
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
  TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 279, in 
service_get_all_by
  TRACE nova.openstack.common.threadgroup result = 
self.db.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/api.py", line 136, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup return 
IMPL.service_get_by_args(context, host, binary)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 125, in 
wrapper
  TRACE nova.openstack.common.threadgroup return f(*args, **kwargs)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 490, in 
service_get_by_args
  TRACE nova.openstack.common.threadgroup result = model_query(context, 
models.Service).\
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 213, in 
model_query
  TRACE nova.openstack.common.threadgroup session = kwargs.get('session') 
or get_session(use_slave=use_slave)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 101, in 
get_session
  TRACE nova.openstack.common.threadgroup facade = _create_facade_lazily()
  TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 91, in 
_create_facade_lazily
  TRACE nova.openstack.common.threadgroup _ENGINE_FACADE = 
db_session.EngineFacade.from_config(CONF)
  TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
795, in from_config
  TRACE nova.openstack.common.threadgroup 

[Yahoo-eng-team] [Bug 1382440] Re: Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2015-04-15 Thread Walt Boring
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382440

Title:
  Detaching multipath volume doesn't work properly when using different
  targets with same portal for each multipath device

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in Volume discovery and local storage management lib:
  New

Bug description:
  Overview:
  On Icehouse (2014.1.2) with "iscsi_use_multipath=true", detaching an iSCSI
  multipath volume doesn't work properly. When we use different targets (IQNs)
  associated with the same portal for each multipath device, all of
  the targets will be deleted via disconnect_volume().

  This problem is not yet fixed in upstream. However, the attached patch
  fixes this problem.

  Steps to Reproduce:

  We can easily reproduce this issue without any special storage
  system in the following Steps:

1. configure "iscsi_use_multipath=True" in nova.conf on compute node.
2. configure "volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver"
   in cinder.conf on cinder node.
2. create an instance.
3. create 3 volumes and attach them to the instance.
4. detach one of these volumes.
5. check "multipath -ll" and "iscsiadm --mode session".

  Detail:

  This problem was introduced with the following patch which modified
  attaching and detaching volume operations for different targets
  associated with different portals for the same multipath device.

commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
Author: Xing Yang 
Date:   Date: Mon Jan 6 17:27:28 2014 -0500

  Fixed a problem in iSCSI multipath

  We found out that:

  > # Do a discovery to find all targets.
  > # Targets for multiple paths for the same multipath device
  > # may not be the same.
  > out = self._run_iscsiadm_bare(['-m',
  >   'discovery',
  >   '-t',
  >   'sendtargets',
  >   '-p',
  >   iscsi_properties['target_portal']],
  >   check_exit_code=[0, 255])[0] \
  > or ""
  >
  > ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
  ...
  > # If no other multipath device attached has the same iqn
  > # as the current device
  > if not in_use:
  > # disconnect if no other multipath devices with same iqn
  > self._disconnect_mpath(iscsi_properties, ips_iqns)
  > return
  > elif multipath_device not in devices:
  > # delete the devices associated w/ the unused multipath
  > self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)

  When we use different targets (IQNs) associated with the same portal for
  each multipath device, ips_iqns contains all targets on the compute node
  from the result of "iscsiadm -m discovery -t sendtargets -p ".
  Then, _delete_mpath() deletes all of the targets in ips_iqns
  via /sys/block/sdX/device/delete.
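
  (For illustration, restricting ips_iqns to the IQN actually being detached
  would look roughly like the sketch below; this is not the nova code itself.)

      # Hedged sketch: keep only the (ip, iqn) pairs of the volume being
      # detached so unrelated sessions and devices are left alone.
      def filter_ips_iqns(ips_iqns, target_iqn):
          return [(ip, iqn) for ip, iqn in ips_iqns if iqn == target_iqn]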

  For example, we create an instance and attach 3 volumes to the
  instance:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 23:0:0:1 sdd 8:48 active ready running
330010001 dm-5 IET,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 21:0:0:1 sdb 8:16 active ready running
330020001 dm-6 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 22:0:0:1 sdc 8:32 active ready running

  Then we detach one of these volumes:

# nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-
  ba88-4fe2-a570-9e35c4880d12

  As a result of detaching the volume, the compute node remains 3 iSCSI sessions
  and the instance fails to access the attached multipath devices:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 
iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19]

[Yahoo-eng-team] [Bug 1367189] Re: multipath not working with Storwize backend if CHAP enabled

2015-04-15 Thread Walt Boring
** Also affects: os-brick
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367189

Title:
  multipath not working with Storwize backend if CHAP enabled

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in Volume discovery and local storage management lib:
  New

Bug description:
  If I try to attach a volume to a VM while multipath is enabled in
  nova and CHAP is enabled in the Storwize backend, it fails:

  2014-09-09 11:37:14.038 22944 ERROR nova.virt.block_device 
[req-f271874a-9720-4779-96a8-01575641a939 a315717e20174b10a39db36b722325d6 
76d25b1928e7407392a69735a894c7fc] [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Driver failed to attach volume 
c460f8b7-0f1d-4657-bdf7-e142ad34a132 at /dev/vdb
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Traceback (most recent call last):
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 239, in 
attach
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] device_type=self['device_type'], 
encryption=encryption)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1235, in 
attach_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] disk_info)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1194, in 
volume_driver_method
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return method(connection_info, *args, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return f(*args, **kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 280, in 
connect_volume
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=[0, 255])[0] \
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 579, in 
_run_iscsiadm_bare
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] check_exit_code=check_exit_code)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 165, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] return processutils.execute(*cmd, 
**kwargs)
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] cmd=' '.join(cmd))
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] ProcessExecutionError: Unexpected error 
while running command.
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Command: sudo nova-rootwrap 
/etc/nova/rootwrap.conf iscsiadm -m discovery -t sendtargets -p 
192.168.1.252:3260
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Exit code: 5
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stdout: ''
  2014-09-09 11:37:14.038 22944 TRACE nova.virt.block_device [instance: 
108a81d0-eeb5-49a8-b3eb-e593f44bf897] Stderr: 'iscsiadm: Connection to 
Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O error, failed to 
receive a PDU\niscsiadm: retrying discovery login to 192.168.1.252\niscsiadm: 
Connection to Discovery Address 192.168.1.252 closed\niscsiadm: Login I/O 
error, failed to receive a PDU\niscsi

[Yahoo-eng-team] [Bug 1444086] Re: cloud-init removes SharedConfig.XML on new instances of Azure

2015-04-15 Thread Launchpad Bug Tracker
This bug was fixed in the package walinuxagent - 2.0.12-0ubuntu2

---
walinuxagent (2.0.12-0ubuntu2) vivid; urgency=medium

  * Fixed systemd unit file which caused SharedConfig.xml to be deleted by
    Cloud-init (LP: #1444086).
 -- Ben Howard  Wed, 15 Apr 2015 10:59:38 -0600

** Changed in: walinuxagent (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1444086

Title:
  cloud-init removes SharedConfig.XML on new instances of Azure

Status in Init scripts for use on cloud images:
  Confirmed
Status in walinuxagent package in Ubuntu:
  Fix Released

Bug description:
  The Azure DS is removing /var/lib/waagent/SharedConfig.xml on first
  boot, causing walinuxagent to fail.

  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Attempting to remove 
/var/lib/waagent/SharedConfig.xml
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[INFO]: removed stale 
file(s) in '/var/lib/waagent': ['/var/lib/waagent/SharedConfig.xml']
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Writing to 
/var/lib/waagent/ovf-env.xml - wb: [384] 1632 bytes
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command hostname 
with allowed return codes [0] (shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['hostname', 'daily-vivid-0414-1ce494fec4'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: pubhname: 
publishing hostname [phostname=ubuntu hostname=daily-vivid-0414-1ce494fec4 
policy=True interface=eth0]
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command ['sh', 
'-xc', 'i=$interface; x=0; ifdown $i || x=$?; ifup $i || x=$?; exit $x'] with 
allowed return codes [0] (shell=False, capture=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: publishing hostname took 
2.119 seconds (2.12)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking agent: 
['service', 'walinuxagent', 'start']
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:47:27 ubuntu [CLOUDINIT] util.py[DEBUG]: waiting for files took 
60.959 seconds
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: Did not find 
files, but going on: {'/var/lib/waagent/SharedConfig.xml'}
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: 
SharedConfig.xml missing, using static instance-id

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1444086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444725] [NEW] logout url is hard coded / wrong

2015-04-15 Thread Eric Peterson
Public bug reported:

There is some javascript which hard codes the logout url to be 
"/auth/logout" at:

https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/angular/horizon.js#L29

This is incorrect in a number of distributions / deployments where Horizon
resides under a path such as "/dashboard" or "/horizon".

** Affects: horizon
 Importance: Medium
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444725

Title:
  logout url is hard coded / wrong

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  There is some javascript which hard codes the logout url to be 
  "/auth/logout" at:

  
https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/angular/horizon.js#L29

  This is incorrect in a number of distributions / deployments where
  Horizon resides under a path such as "/dashboard" or "/horizon".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444683] [NEW] Still can't delete an VM in bad status from CLI/API on Kilo release

2015-04-15 Thread Alfred Shen
Public bug reported:

When a VM is in a bad status, there's no effective way to delete it from the
CLI/API, even after reset-state. The only known option is to delete it from
the MySQL tables. In the Kilo release the following Nova tables need to be
touched, which is inconvenient and could pose a risk. I would suggest coming
up with a tool to clean up a dysfunctional VM without touching the MySQL
tables.

Entries to be deleted, in the following order (a rough sketch follows the
list):

nova.instance_info_caches
nova.block_device_mapping
nova.instance_actions_events
nova.instance_actions
nova.instance_system_metadata
nova.instances
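
A minimal, illustrative cleanup sketch (not an official Nova tool) of how
the deletes above could be scripted; the connection URL and the join
through instance_actions are assumptions, so verify against your own
schema and take a backup first:

```
# Hypothetical cleanup sketch -- NOT an official Nova tool.
from sqlalchemy import create_engine, text

DB_URL = "mysql+pymysql://nova:secret@controller/nova"  # assumed URL


def purge_instance(uuid):
    # Order matters: child tables first, nova.instances last.
    statements = [
        "DELETE FROM instance_info_caches WHERE instance_uuid = :uuid",
        "DELETE FROM block_device_mapping WHERE instance_uuid = :uuid",
        # instance_actions_events is keyed by action_id, not instance_uuid.
        "DELETE FROM instance_actions_events WHERE action_id IN "
        "  (SELECT id FROM instance_actions WHERE instance_uuid = :uuid)",
        "DELETE FROM instance_actions WHERE instance_uuid = :uuid",
        "DELETE FROM instance_system_metadata WHERE instance_uuid = :uuid",
        "DELETE FROM instances WHERE uuid = :uuid",
    ]
    engine = create_engine(DB_URL)
    with engine.begin() as conn:  # one transaction for the whole cleanup
        for stmt in statements:
            conn.execute(text(stmt), {"uuid": uuid})

# purge_instance("<instance uuid>")
```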

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444683

Title:
  Still can't delete a VM in a bad status from CLI/API on Kilo release

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a VM is in a bad status, there's no effective way to delete it
  from CLI/API even after reset-state. The only known option is to
  delete it from MySQL tables. In the Kilo release the following Nova
  tables need to be touched, which is inconvenient and could pose a
  risk. I would suggest coming up with a tool to clean up a
  dysfunctional VM without touching MySQL tables.

  Entries to be deleted, in the following order:

  nova.instance_info_caches
  nova.block_device_mapping
  nova.instance_actions_events
  nova.instance_actions
  nova.instance_system_metadata
  nova.instances

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444630] [NEW] nova-compute should stop handling virt lifecycle events when it's shutting down

2015-04-15 Thread Matt Riedemann
Public bug reported:

This is a follow on to bug 1293480 and related to bug 1408176 and bug
1443186.

There can be a race when rebooting a compute host where libvirt is
shutting down guest VMs and sending STOPPED lifecycle events up to nova
compute, which then tries to stop them via the stop API. This sometimes
works and sometimes doesn't - the compute service can go down with a
vm_state of ACTIVE and task_state of powering-off, which isn't resolved
on host reboot.

Sometimes the stop API completes and the instance is stopped with
power_state=4 (shutdown) in the nova database.  When the host comes back
up and libvirt restarts, it starts up the guest VMs, which sends the
STARTED lifecycle event and nova handles that; but because the vm_state
in the nova database is STOPPED and the power_state is 1 (running) from
the hypervisor, nova thinks it started up unexpectedly and stops it:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

So nova shuts the running guest down.

Actually the block in:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

conflicts with the statement in power_state.py:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/power_state.py?id=2015.1.0rc1#n19

"The hypervisor is always considered the authority on the status
of a particular VM, and the power_state in the DB should be viewed as a
snapshot of the VMs's state in the (recent) past."

Anyway, that's a different issue but the point is when nova-compute is
shutting down it should stop accepting lifecycle events from the
hypervisor (virt driver code) since it can't really reliably act on them
anyway - we can leave any sync up that needs to happen in init_host() in
the compute manager.
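
As a rough illustration of the idea (the class and method names below are
assumptions, not the actual Nova patch), the compute manager could set a
flag in its shutdown path and drop virt events once it is set:

```
# Hypothetical sketch of gating lifecycle events during service shutdown.
import threading


class ComputeManagerSketch(object):
    def __init__(self):
        self._shutting_down = threading.Event()

    def cleanup_host(self):
        # Called while the service is stopping: stop acting on virt events.
        self._shutting_down.set()

    def handle_lifecycle_event(self, event):
        if self._shutting_down.is_set():
            # Drop the event; init_host() re-syncs power state with the
            # hypervisor on the next service start anyway.
            return
        self._sync_instance_power_state(event)

    def _sync_instance_power_state(self, event):
        pass  # placeholder for the real sync logic
```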

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: compute kilo-backport-potential libvirt

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Tags added: kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444630

Title:
  nova-compute should stop handling virt lifecycle events when it's
  shutting down

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  This is a follow on to bug 1293480 and related to bug 1408176 and bug
  1443186.

  There can be a race when rebooting a compute host where libvirt is
  shutting down guest VMs and sending STOPPED lifecycle events up to
  nova compute, which then tries to stop them via the stop API. This
  sometimes works and sometimes doesn't - the compute service can go
  down with a vm_state of ACTIVE and task_state of powering-off, which
  isn't resolved on host reboot.

  Sometimes the stop API completes and the instance is stopped with
  power_state=4 (shutdown) in the nova database.  When the host comes
  back up and libvirt restarts, it starts up the guest VMs, which sends
  the STARTED lifecycle event and nova handles that; but because the
  vm_state in the nova database is STOPPED and the power_state is 1
  (running) from the hypervisor, nova thinks it started up unexpectedly
  and stops it:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  So nova shuts the running guest down.

  Actually the block in:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2015.1.0rc1#n6145

  conflicts with the statement in power_state.py:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/power_state.py?id=2015.1.0rc1#n19

  "The hypervisor is always considered the authority on the status
  of a particular VM, and the power_state in the DB should be viewed as a
  snapshot of the VMs's state in the (recent) past."

  Anyway, that's a different issue but the point is when nova-compute is
  shutting down it should stop accepting lifecycle events from the
  hypervisor (virt driver code) since it can't really reliably act on
  them anyway - we can leave any sync up that needs to happen in
  init_host() in the compute manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444584] [NEW] Jshint should ignore legacy code before enabling undef or unused

2015-04-15 Thread Thai Tran
Public bug reported:

Ignoring jshint on legacy JS code

Once undef and unused are enabled, legacy code will throw over 365
errors. Remember that undef checks for undefined variables while unused
checks for unused variables; both are good for quality control.

We have two options going forward:
1. Clean up legacy code
2. Ignore legacy code

Since the legacy code is stable enough, cleaning it up carries a higher
risk of regression and does not improve the code or add value. We should
ignore the legacy code based on this reasoning.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444584

Title:
  Jshint should ignore legacy code before enabling undef or unused

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Ignoring jshint on legacy JS code

  Once undef and unused are enabled, legacy code will throw over 365
  errors. Remember that undef checks for undefined variables while
  unused checks for unused variables; both are good for quality control.

  We have two options going forward:
  1. Clean up legacy code
  2. Ignore legacy code

  Since the legacy code is stable enough, cleaning it up carries a
  higher risk of regression and does not improve the code or add value.
  We should ignore the legacy code based on this reasoning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444581] [NEW] Rebuild of a volume-backed instance fails

2015-04-15 Thread Roman Podoliaka
Public bug reported:

If you try to rebuild a volume-backed instance, it fails.

malor@ubuntu:~/devstack$ nova image-list
+--+-+++
| ID   | Name| 
Status | Server |
+--+-+++
| 889d4783-de7f-4277-a2ff-46e6542a7c54 | cirros-0.3.2-x86_64-uec | 
ACTIVE ||
| 867aa81c-ddc3-45e3-9067-b70166c9b2e3 | cirros-0.3.2-x86_64-uec-kernel  | 
ACTIVE ||
| b8db9175-1368-4b45-a914-3ba5edcc044a | cirros-0.3.2-x86_64-uec-ramdisk | 
ACTIVE ||
| b92c34bb-91ee-426f-b945-8e341a0c8bdb | testvm  | 
ACTIVE ||
+--+-+++
malor@ubuntu:~/devstack$ neutron net-list
+--+-+--+
| id   | name| subnets  
|
+--+-+--+
| 243fbe0a-3be7-453b-9884-7df837461769 | private | 
dde8f0cc-6db0-4d6e-8e72-470535791055 10.0.0.0/24 |
| f4e4436c-27e5-4411-83d2-611f8d9af45c | public  | 
d753749d-0e14-4927-b9a3-cfccd6d21e09 |
+--+-+--+

Steps to reproduce:

1) build a volume-backed instance

nova boot --flavor m1.nano --nic net-id=243fbe0a-
3be7-453b-9884-7df837461769 --block-device
source=image,id=889d4783-de7f-4277-a2ff-
46e6542a7c54,dest=volume,size=1,shutdown=preserve,bootindex=0  demo

2) rebuild it with a new image

nova rebuild demo b92c34bb-91ee-426f-b945-8e341a0c8bdb


Expected result:

   instance is rebuilt using the new image and is in ACTIVE state

Actual result:

  instance is in ERROR state

  Traceback from nova-compute http://paste.openstack.org/show/204014/

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Summary changed:

- Rebuild of volume-backed instance fails
+ Rebuild of a volume-backed instance fails

** Tags added: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444581

Title:
  Rebuild of a volume-backed instance fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you try to rebuild a volume-backed instance, it fails.

  malor@ubuntu:~/devstack$ nova image-list
  
+--+-+++
  | ID   | Name| 
Status | Server |
  
+--+-+++
  | 889d4783-de7f-4277-a2ff-46e6542a7c54 | cirros-0.3.2-x86_64-uec | 
ACTIVE ||
  | 867aa81c-ddc3-45e3-9067-b70166c9b2e3 | cirros-0.3.2-x86_64-uec-kernel  | 
ACTIVE ||
  | b8db9175-1368-4b45-a914-3ba5edcc044a | cirros-0.3.2-x86_64-uec-ramdisk | 
ACTIVE ||
  | b92c34bb-91ee-426f-b945-8e341a0c8bdb | testvm  | 
ACTIVE ||
  
+--+-+++
  malor@ubuntu:~/devstack$ neutron net-list
  
+--+-+--+
  | id   | name| subnets
  |
  
+--+-+--+
  | 243fbe0a-3be7-453b-9884-7df837461769 | private | 
dde8f0cc-6db0-4d6e-8e72-470535791055 10.0.0.0/24 |
  | f4e4436c-27e5-4411-83d2-611f8d9af45c | public  | 
d753749d-0e14-4927-b9a3-cfccd6d21e09 |
  
+--+-+--+

  Steps to reproduce:

  1) build a volume-backed instance

  nova boot --flavor m1.nano --nic net-id=243fbe0a-
  3be7-453b-9884-7df837461769 --block-device
  source=image,id=889d4783-de7f-4277-a2ff-
  46e6542a7c54,dest=volume,size=1,shutdown=preserve,bootindex=0  demo

  2) rebuild it with a new image

  nova rebuild demo b92c34bb-91ee-426f-b945-8e341a0c8bdb

  
  Expected result:

 instance is rebuilt using the new image and is in ACTIVE state

  Actual result:

instance is in ERROR state

Traceback from nova-compute http://paste.openstack.org/show/204014/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-t

[Yahoo-eng-team] [Bug 1444559] [NEW] Create a volume should handle OverQuota exception

2015-04-15 Thread jichenjc
Public bug reported:

We can get exception.OverQuota from the cinder layer,
but we don't handle it in the API layer when creating a volume;
this may lead to a 500 error.
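
A minimal sketch of the kind of handling meant here (the function names
are illustrative, not the actual Nova API code):

```
# Hypothetical sketch: translate OverQuota into a clean HTTP error.
import webob.exc

from nova import exception


def create_volume(req, body):
    try:
        volume = _do_create_volume(body)  # placeholder for the cinder call
    except exception.OverQuota as e:
        # Return 403 instead of letting the exception surface as a 500.
        raise webob.exc.HTTPForbidden(explanation=str(e))
    return {'volume': volume}


def _do_create_volume(body):
    pass  # placeholder
```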

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444559

Title:
  Create a volume should handle OverQuota exception

Status in OpenStack Compute (Nova):
  New

Bug description:
  We can get exception.OverQuota from the cinder layer,
  but we don't handle it in the API layer when creating a volume;
  this may lead to a 500 error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444542] [NEW] create a volume with invalid snapshot should report 404 instead of 500

2015-04-15 Thread jichenjc
Public bug reported:

In the v2.1 API, when creating a volume through os-volumes,
if the snapshot ID is invalid we should report a 404, but currently
exception.SnapshotNotFound is not correctly handled.
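
A minimal sketch of the expected mapping (the helper names are
illustrative, not the actual plugin code):

```
# Hypothetical sketch: map SnapshotNotFound to a 404 instead of a 500.
import webob.exc

from nova import exception


def create_volume_from_snapshot(snapshot_id, body):
    try:
        snapshot = _get_snapshot(snapshot_id)  # placeholder lookup
    except exception.SnapshotNotFound as e:
        raise webob.exc.HTTPNotFound(explanation=str(e))
    return {'volume': {'snapshot': snapshot, 'body': body}}


def _get_snapshot(snapshot_id):
    # Simulate the invalid-snapshot case for the sketch.
    raise exception.SnapshotNotFound(snapshot_id=snapshot_id)
```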

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444542

Title:
  create a volume with invalid snapshot should report 404 instead of 500

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the v2.1 API, when creating a volume through os-volumes,
  if the snapshot ID is invalid we should report a 404, but currently
  exception.SnapshotNotFound is not correctly handled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444086] Re: cloud-init removes SharedConfig.XML on new instances of Azure

2015-04-15 Thread Ben Howard
** Changed in: cloud-init (Ubuntu)
 Assignee: (unassigned) => Ben Howard (utlemming)

** Package changed: cloud-init (Ubuntu) => walinuxagent (Ubuntu)

** Changed in: walinuxagent (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1444086

Title:
  cloud-init removes SharedConfig.XML on new instances of Azure

Status in Init scripts for use on cloud images:
  Confirmed
Status in walinuxagent package in Ubuntu:
  Confirmed

Bug description:
  The Azure DS is removing /var/lib/waagent/SharedConfig.xml on first
  boot, causing walinuxagent to fail.

  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Attempting to remove 
/var/lib/waagent/SharedConfig.xml
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[INFO]: removed stale 
file(s) in '/var/lib/waagent': ['/var/lib/waagent/SharedConfig.xml']
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Writing to 
/var/lib/waagent/ovf-env.xml - wb: [384] 1632 bytes
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command hostname 
with allowed return codes [0] (shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['hostname', 'daily-vivid-0414-1ce494fec4'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: pubhname: 
publishing hostname [phostname=ubuntu hostname=daily-vivid-0414-1ce494fec4 
policy=True interface=eth0]
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:24 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command ['sh', 
'-xc', 'i=$interface; x=0; ifdown $i || x=$?; ifup $i || x=$?; exit $x'] with 
allowed return codes [0] (shell=False, capture=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 11 bytes from 
/proc/uptime
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: publishing hostname took 
2.119 seconds (2.12)
  Apr 14 11:46:26 ubuntu [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking agent: 
['service', 'walinuxagent', 'start']
  Apr 14 11:46:26 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Apr 14 11:47:27 ubuntu [CLOUDINIT] util.py[DEBUG]: waiting for files took 
60.959 seconds
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: Did not find 
files, but going on: {'/var/lib/waagent/SharedConfig.xml'}
  Apr 14 11:47:27 ubuntu [CLOUDINIT] DataSourceAzure.py[WARNING]: 
SharedConfig.xml missing, using static instance-id

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1444086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444532] [NEW] nova-scheduler doesnt reconnect to databases when started and database is down

2015-04-15 Thread Alejandro Comisario
Public bug reported:

In the Juno release (Ubuntu packages), when you start nova-scheduler
while the database is down, the service never reconnects; the stacktrace
is as follows:


AUDIT nova.service [-] Starting scheduler node (version 2014.2.2)
ERROR nova.openstack.common.threadgroup [-] (OperationalError) (2003, "Can't 
connect to MySQL server on '10.128.30.11' (111)") None None
TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
TRACE nova.openstack.common.threadgroup x.wait()
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
TRACE nova.openstack.common.threadgroup return self.thread.wait()
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, in 
wait
TRACE nova.openstack.common.threadgroup return self._exit_event.wait()
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
TRACE nova.openstack.common.threadgroup return hubs.get_hub().switch()
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in 
switch
TRACE nova.openstack.common.threadgroup return self.greenlet.switch()
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in 
main
TRACE nova.openstack.common.threadgroup result = function(*args, **kwargs)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 490, 
in run_service
TRACE nova.openstack.common.threadgroup service.start()
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 169, in start
TRACE nova.openstack.common.threadgroup self.host, self.binary)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 161, in 
service_get_by_args
TRACE nova.openstack.common.threadgroup binary=binary, topic=None)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 949, in wrapper
TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 
139, in inner
TRACE nova.openstack.common.threadgroup return func(*args, **kwargs)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 279, in 
service_get_all_by
TRACE nova.openstack.common.threadgroup result = 
self.db.service_get_by_args(context, host, binary)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/api.py", line 136, in 
service_get_by_args
TRACE nova.openstack.common.threadgroup return 
IMPL.service_get_by_args(context, host, binary)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 125, in 
wrapper
TRACE nova.openstack.common.threadgroup return f(*args, **kwargs)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 490, in 
service_get_by_args
TRACE nova.openstack.common.threadgroup result = model_query(context, 
models.Service).\
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 213, in 
model_query
TRACE nova.openstack.common.threadgroup session = kwargs.get('session') or 
get_session(use_slave=use_slave)
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 101, in 
get_session
TRACE nova.openstack.common.threadgroup facade = _create_facade_lazily()
TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 91, in 
_create_facade_lazily
TRACE nova.openstack.common.threadgroup _ENGINE_FACADE = 
db_session.EngineFacade.from_config(CONF)
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
795, in from_config
TRACE nova.openstack.common.threadgroup 
retry_interval=conf.database.retry_interval)
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
711, in __init__
TRACE nova.openstack.common.threadgroup **engine_kwargs)
TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/session.py", line 
386, in create_engine
TRACE nova.openstack.common.threadgroup connection_trace=connection_trace
TRACE nova.opensta

[Yahoo-eng-team] [Bug 1444310] Re: keystone token response contains InternalURL for non admin user

2015-04-15 Thread Dolph Mathews
The internal URL is not intended to be obscured from users, but rather
is intended to provide a public API interface on a faster / more
efficient network interface (depending on the deployment). If users can
reach the internal endpoint (such as for glance), then they can likely
save bandwidth charges, etc.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444310

Title:
  keystone token response contains InternalURL for non admin user

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  keystone token responses contain both the InternalURL and adminURL
  for a non-admin user (demo).

  This information should not be exposed to a non-admin user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444505] [NEW] Deprecated templatetags in Django 1.7

2015-04-15 Thread Rob Cresswell
Public bug reported:

There are a few deprecated templatetags that we are still using in
Horizon. These can be seen in the logs:

WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.  
WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Description changed:

  There are a few deprecated templatetags that we are still using in
  Horizon. These can be seen in the logs:
  
- WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior. 
- WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the n
- on-autoescaping version is deprecated. Load it from the `future` tag library 
to start using the new behavior. 
+ WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
+ WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.  
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444505

Title:
  Deprecated templatetags in Django 1.7

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are a few deprecated templatetags that we are still using in
  Horizon. These can be seen in the logs:

  WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `cycle` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.  
  WARNING:py.warnings:RemovedInDjango18Warning: 'The `firstof` template tag is 
changing to escape its arguments; the non-autoescaping version is deprecated. 
Load it from the `future` tag library to start using the new behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444469] Re: keystone should clean up expired tokens

2015-04-15 Thread Dolph Mathews
Docs:

  http://docs.openstack.org/admin-guide-cloud/content/flushing-expired-
tokens-from-token-database-table.html

In addition, Fernet tokens, introduced in Kilo, do not need to be
persisted to the database, and will leave your token table completely
empty:

  http://docs.openstack.org/developer/keystone/configuration.html#uuid-
pki-pkiz-or-fernet

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444469

Title:
  keystone should clean up expired tokens

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  As of Icehouse, at least, keystone doesn't ever clean up expired
  tokens.  After a few years, my keystone database is ridiculously huge,
  causing query timeouts and such.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444497] [NEW] Instance doesn't get an address via DHCP (nova-network) because of issue with live migration

2015-04-15 Thread Timofey Durakov
Public bug reported:

When an instance is migrated to another compute node, its dhcp lease is not 
removed from the first compute node, even after instance termination.
If a new instance gets the same IP that was used by a previous instance 
created on the first compute node, where the dhcp lease for this IP remains, 
then dnsmasq refuses the DHCP request for that IP address from the new 
instance with a different MAC.

Steps to reproduce:
Scenario:
1. Create cluster (CentOS, nova-network with Flat-DHCP , Ceph for 
images and volumes)
2. Add 1 node with controller and ceph OSD roles
3. Add 2 node with compute and ceph OSD roles
4. Deploy the cluster

5. Create a VM
6. Wait until the VM got IP address via DHCP (in VM console log)
7. Migrate the VM to another compute node.
8. Terminate the VM.

9. Repeat stages from 5 to 8 several times (in my case - 4..6 times 
was enough) until a new instance stops receiving IP address via DHCP.
10. Check dnsmasq-dhcp.log (/var/log/daemon.log on the compute 
node) for messages like :
=
2014-11-09T20:28:29.671344+00:00 warning: not using configured address 10.0.0.2 
because it is leased to fa:16:3e:65:70:be

This means that:
   I. An instance was created on the compute node-1 and got a dhcp lease:
 nova-dhcpbridge.log
2014-11-09 20:12:03.811 27360 DEBUG nova.dhcpbridge [-] Called 'add' for mac 
'fa:16:3e:65:70:be' with ip '10.0.0.2' main 
/usr/lib/python2.6/site-packages/nova/cmd/dhcpbridge.py:135

  II. When the instance was migrating from compute node-1 to node-3, 
'dhcp_release' was not performed on compute node-1, please check the time range 
in the logs : 2014-11-09 20:14:36-37
 Running.log (node-1)
2014-11-09T20:14:36.647588+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf conntrack -D -r 10.0.0.2
### But there is missing a command like: sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

  III. On the compute node-3, DHCP lease was added and it was successfully 
removed when the instance was terminated:
 Running.log (node-3)
2014-11-09T20:15:17.250243+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

  IV. When an another instance got the same address '10.0.0.2' and was created 
on node-1, it didn't get IP address via DHCP:
 Running.log (node-1)
2014-11-09T20:28:29.671344+00:00 warning: not using configured address 10.0.0.2 
because it is leased to fa:16:3e:65:70:be

** Affects: nova
 Importance: Undecided
 Assignee: Timofey Durakov (tdurakov)
 Status: In Progress


** Tags: live-migration nova-network

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Timofey Durakov (tdurakov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444497

Title:
  Instance doesn't get an address via DHCP (nova-network) because of
  issue with live migration

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When an instance is migrated to another compute node, its dhcp lease is not 
removed from the first compute node, even after instance termination.
  If a new instance gets the same IP that was used by a previous instance 
created on the first compute node, where the dhcp lease for this IP remains, 
then dnsmasq refuses the DHCP request for that IP address from the new 
instance with a different MAC.

  Steps to reproduce:
  Scenario:
  1. Create cluster (CentOS, nova-network with Flat-DHCP , Ceph for 
images and volumes)
  2. Add 1 node with controller and ceph OSD roles
  3. Add 2 node with compute and ceph OSD roles
  4. Deploy the cluster

  5. Create a VM
  6. Wait until the VM got IP address via DHCP (in VM console log)
  7. Migrate the VM to another compute node.
  8. Terminate the VM.

  9. Repeat stages from 5 to 8 several times (in my case - 4..6 
times was enough) until a new instance stops receiving IP address via DHCP.
  10. Check dnsmasq-dhcp.log (/var/log/daemon.log on the compute 
node) for messages like :
  =
  2014-11-09T20:28:29.671344+00:00 warning: not using configured address 
10.0.0.2 because it is leased to fa:16:3e:65:70:be

  This means that:
 I. An instance was created on the compute node-1 and got a dhcp lease:
   nova-dhcpbridge.log
  2014-11-09 20:12:03.811 27360 DEBUG nova.dhcpbridge [-] Called 'add' for mac 
'fa:16:3e:65:70:be' with ip '10.0.0.2' main 
/usr/lib/python2.6/site-packages/nova/cmd/dhcpbridge.py:135

II. When the instance was migrating fr

[Yahoo-eng-team] [Bug 1377161] Re: If volume-attach API is failed, Block Device Mapping record will remain

2015-04-15 Thread Mitsuhiro Tanino
** Changed in: cinder
   Status: In Progress => Invalid

** Changed in: python-cinderclient
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377161

Title:
  If volume-attach API is failed, Block Device Mapping record will
  remain

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Cinder:
  Invalid

Bug description:
  I executed the volume-attach API (nova V2 API) when RabbitMQ was down.
  As a result, the volume-attach API failed and the volume's status is 
still available.
  However, a block device mapping record remains in the nova DB.
  This condition is inconsistent.

  The remaining block device mapping record may also cause some problems.
  (I'm researching now.)

  I used openstack juno-3.

  
--
  * Before executing volume-attach API:

  $ nova list
  
+--++++-++
  | ID   | Name   | Status | Task State | Power 
State | Networks   |
  
+--++++-++
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -  | 
Running | private=10.0.0.104 |
  
+--++++-++
  $ cinder list
  
+--+---+--+--+-+--+-+
  |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to |
  
+--+---+--+--+-+--+-+
  | e93478bf-ee37-430f-93df-b3cf26540212 | available | None |  1   |
 None|  false   | |
  
+--+---+--+--+-+--+-+
  devstack@ubuntu-14-04-01-64-juno3-01:~$

  mysql> select * from block_device_mapping where instance_uuid = 
'0b529526-4c8d-4650-8295-b7155a977ba7';
  
+-+-++-+-+---+-+---+-+---+-+--+-+-+--+--+-+--++--+
  | created_at  | updated_at  | deleted_at | id  | device_name 
| delete_on_termination | snapshot_id | volume_id | volume_size | no_device | 
connection_info | instance_uuid| deleted | source_type 
| destination_type | guest_format | device_type | disk_bus | boot_index | 
image_id |
  
+-+-++-+-+---+-+---+-+---+-+--+-+-+--+--+-+--++--+
  | 2014-10-02 18:36:08 | 2014-10-02 18:36:10 | NULL   | 145 | /dev/vda
| 1 | NULL| NULL  |NULL |  NULL | 
NULL| 0b529526-4c8d-4650-8295-b7155a977ba7 |   0 | image   
| local| NULL | disk| NULL |  0 | 
c1d264fd-c559-446e-9b94-934ba8249ae1 |
  
+-+-++-+-+---+-+---+-+---+-+--+-+-+--+--+-+--++--+
  1 row in set (0.00 sec)

  * After executing volume-attach API:
  $ nova list --all-t
  
+--++++-++
  | ID   | Name   | Status | Task State | Power 
State | Networks   |
  
+--++++-++
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -  | 
Running | private=10.0.0.104 |
  
+--++++-++
  $ cinder list
  
+--+---+--+--+-+--+-+
  |  ID  |   Status  | Di

[Yahoo-eng-team] [Bug 1444469] [NEW] keystone should clean up expired tokens

2015-04-15 Thread Andrew Bogott
Public bug reported:

As of Icehouse, at least, keystone doesn't ever clean up expired tokens.
After a few years, my keystone database is ridiculously huge, causing
query timeouts and such.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444469

Title:
  keystone should clean up expired tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  As of Icehouse, at least, keystone doesn't ever clean up expired
  tokens.  After a few years, my keystone database is ridiculously huge,
  causing query timeouts and such.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444456] [NEW] Performance leak in "metering" dashboard

2015-04-15 Thread Ilya Tyaptin
Public bug reported:

Some types of requests to ceilometer in the metering dashboard are not
optimized, so we have a performance problem in environments with a huge
amount of ceilometer data.

Main bottlenecks which I see (a small sketch follows this list):
* Using a meter-list to get a "unit" for a metric. The meter list fetches all 
available meters in the environment, which may be thousands of records. It is 
better to use a sample-list for the metric with "limit" 1, which fetches a 
single sample for the metric.
* A 7-day default period for collecting statistics. On a huge amount of data 
this may freeze horizon and the ceilometer API; it is better to collect 
one-day statistics by default.
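
A rough sketch of the cheaper lookup (assuming python-ceilometerclient;
the credentials and meter name here are placeholders):

```
# Hypothetical sketch: fetch one sample instead of listing every meter.
from ceilometerclient import client as ceilo_client

ceilometer = ceilo_client.get_client(
    2,                                   # API version
    os_username='admin',                 # placeholder credentials
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://controller:5000/v2.0')

# One sample is enough to read the unit for a given meter.
samples = ceilometer.samples.list(meter_name='cpu_util', limit=1)
unit = samples[0].counter_unit if samples else None
print(unit)
```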

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444456

Title:
  Performance leak in "metering" dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Some types of requests to ceilometer in the metering dashboard are not
  optimized, so we have a performance problem in environments with a
  huge amount of ceilometer data.

  Main bottlenecks which I see:
  * Using a meter-list to get a "unit" for a metric. The meter list fetches all 
available meters in the environment, which may be thousands of records. It is 
better to use a sample-list for the metric with "limit" 1, which fetches a 
single sample for the metric.
  * A 7-day default period for collecting statistics. On a huge amount of data 
this may freeze horizon and the ceilometer API; it is better to collect 
one-day statistics by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444446] [NEW] VMware: resizing an instance that has no root disk fails

2015-04-15 Thread Gary Kotton
Public bug reported:

2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 1278, in 
_resize_create_ephemerals
2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher ds_ref = 
vmdk.device.backing.datastore
2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'NoneType' object has no attribute 'backing'
2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
2015-04-13 21:25:51.442 7852 ERROR oslo.messaging._drivers.common [-] Returning 
exception 'NoneType' object has no attribute 'backing' to caller
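
The trace shows vmdk.device coming back as None for an instance without a
root disk; a minimal guard sketch (not the actual fix) that avoids the
AttributeError:

```
# Hypothetical guard sketch for _resize_create_ephemerals-style code.
def get_datastore_ref(vmdk, fallback_ds_ref=None):
    if vmdk.device is None:
        # No root disk: there is no backing to inspect, so fall back to a
        # datastore chosen elsewhere (assumed parameter).
        return fallback_ds_ref
    return vmdk.device.backing.datastore
```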

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444446

Title:
  VMware: resizing an instance that has no root disk fails

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 1278, in 
_resize_create_ephemerals
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher ds_ref = 
vmdk.device.backing.datastore
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'NoneType' object has no attribute 'backing'
  2015-04-13 21:25:51.437 7852 TRACE oslo.messaging.rpc.dispatcher 
  2015-04-13 21:25:51.442 7852 ERROR oslo.messaging._drivers.common [-] 
Returning exception 'NoneType' object has no attribute 'backing' to caller

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324130] Re: flakey test: test_add_copy_from_image_authorized_upload_image_authorized

2015-04-15 Thread Kamil Rykowski
The log file is not available anymore and the bug report mentioned by
Eddie which fixes the
`test_add_copy_from_image_authorized_upload_image_authorized` has been
released for Kilo RC1.

@Thomas: Feel free to reopen the bug if the problem still occurs and set
the bug status back to ''New''.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1324130

Title:
  flakey test:
  test_add_copy_from_image_authorized_upload_image_authorized

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Full log here: http://logs.openstack.org/79/46479/19/check/gate-
  glance-python27/eef1666/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1324130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444439] [NEW] Resource tracker: unable to start nova compute

2015-04-15 Thread Gary Kotton
Public bug reported:

After a failure of the resize and a deletion of the instance, I am
unable to restart nova-compute due to the exception below. The instance
was deleted via the nova API.

The DB is as follows:
mysql> select * from migrations;
+-+-+++--+--+---++--+--+--+--+--+-+
| created_at  | updated_at  | deleted_at | id | source_compute  
 | dest_compute | dest_host | status | instance_uuid
| old_instance_type_id | new_instance_type_id | source_node  | 
dest_node| deleted |
+-+-+++--+--+---++--+--+--+--+--+-+
| 2015-04-15 09:44:02 | 2015-04-15 09:44:08 | NULL   |  1 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | post-migrating | 
42264e24-1385-41f1-8dfc-120a1891ab05 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 09:48:13 | 2015-04-15 10:19:48 | NULL   |  2 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | reverted   | 
fcab4bde-d93e-4d79-ae35-9d1306da10a4 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:23:56 | 2015-04-15 10:24:03 | NULL   |  3 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | post-migrating | 
d074bbc0-b912-4c85-a02b-aabf56d45f0b |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:27:45 | 2015-04-15 10:28:16 | NULL   |  4 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | reverted   | 
21e59c96-fa2f-45e3-9070-e982a2dafea6 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:28:43 | 2015-04-15 10:29:16 | NULL   |  5 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirming | 
21e59c96-fa2f-45e3-9070-e982a2dafea6 |   10 |   
11 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:35:15 | 2015-04-15 10:53:16 | NULL   |  6 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirmed  | 
4abd75b5-bb91-4ce7-a928-2a96941ea9cb |   10 |   
14 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:35:39 | 2015-04-15 10:53:17 | NULL   |  7 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | confirmed  | 
5e01bddb-3978-4f6f-a4d3-6d24ed31afa4 |   14 |   
10 | domain-c167(DVS) | domain-c167(DVS) |   0 |
| 2015-04-15 10:55:01 | 2015-04-15 10:55:02 | NULL   |  8 | 
Ubuntu1404Server | Ubuntu1404Server | 10.160.94.173 | migrating  | 
20017567-5c83-4918-b269-525169009026 |   10 |   
15 | domain-c167(DVS) | domain-c167(DVS) |   0 |
+-+-+++--+--+---++--+--+--+--+--+-+
8 rows in set (0.00 sec)

2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup Traceback (most 
recent call last):
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 145, in wait
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup x.wait()
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/threadgroup.py", line 47, in wait
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in 
wait
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in 
switch
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
2015-04-15 04:47:04.821 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in 
main
2

[Yahoo-eng-team] [Bug 1444421] [NEW] Launch instance fails with nova network

2015-04-15 Thread Matthias Runge
Public bug reported:

git checkout from kilo rc1:

I have deployed a system with nova network instead of neutron.

While trying to launch an instance, I'm getting:
Internal Server Error: /project/instances/launch
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
164, in get_response
response = response.render()
  File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
158, in render
self.content = self.rendered_content
  File "/usr/lib/python2.7/site-packages/django/template/response.py", line 
135, in rendered_content
content = template.render(context, self._request)
  File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
return self.template.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, in 
render
return self._render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, in 
_render
return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in 
render
bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in 
render_node
return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 
576, in render
return self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in 
render
bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in 
render_node
return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", line 
56, in render
result = self.nodelist.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, in 
render
bit = self.render_node(node, context)
  File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, in 
render_node
return node.render(context)
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 
217, in render
nodelist.append(node.render(context))
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 
322, in render
match = condition.eval(context)
  File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 
933, in eval
return self.value.resolve(context, ignore_failures=True)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 647, in 
resolve
obj = self.var.resolve(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 787, in 
resolve
value = self._resolve_lookup(context)
  File "/usr/lib/python2.7/site-packages/django/template/base.py", line 847, in 
_resolve_lookup
current = current()
  File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 439, in 
has_required_fields
return any(field.required for field in self.action.fields.values())
  File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 368, in 
action
context)
  File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 707, in __init__
super(SetNetworkAction, self).__init__(request, *args, **kwargs)
  File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 138, in 
__init__
self._populate_choices(request, context)
  File "/home/mrunge/work/horizon/horizon/workflows/base.py", line 151, in 
_populate_choices
bound_field.choices = meth(request, context)
  File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py",
 line 721, in populate_network_choices
return instance_utils.network_field_data(request)
  File 
"/home/mrunge/work/horizon/openstack_dashboard/dashboards/project/instances/utils.py",
 line 97, in network_field_data
if not networks:
UnboundLocalError: local variable 'networks' referenced before assignment


Fun fact: this only occurs when using admin credentials; with a regular user, 
this doesn't happen.

The error message shown in horizon is: Error: Invalid service catalog
service: network
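
The UnboundLocalError suggests 'networks' is only assigned inside a branch
that is skipped when the network service lookup fails; a minimal sketch of
the usual remedy (initialize the variable before the conditional code;
fetch_networks is a placeholder):

```
# Hypothetical sketch of the pattern behind the UnboundLocalError.
def network_field_data(request):
    networks = []  # initialize up front so the later check is always safe
    try:
        networks = fetch_networks(request)  # placeholder for the API call
    except Exception:
        # In the buggy version the assignment above is missing, so
        # 'networks' is undefined when the call raises.
        pass
    if not networks:
        return [("", "No networks available")]
    return [(n["id"], n["name"]) for n in networks]


def fetch_networks(request):
    raise RuntimeError("Invalid service catalog service: network")
```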

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: kilo-backport-potential

** Changed in: horizon
Milestone: None => ongoing

** Changed in: horizon
Milestone: ongoing => next

** Changed in: horizon
Milestone: next => liberty-1

** Tags added: kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444421

Title:
  Launch instance fails with nova network

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  git checkout from kilo rc1:

  I have deployed a system with nova network instead of neutron.

  While trying to launch an inst

[Yahoo-eng-team] [Bug 1444422] [NEW] Glyphicon-eye position in User Credentials dialog

2015-04-15 Thread Timur Sufiev
Public bug reported:

Steps:
1. Login as admin user.
2. Navigate to Horizon page: 
http:///horizon/project/access_and_security/
3. Click on "View Credentials" button.
4. Check position of eye icon.
Actual: icon above "EC2 Secret Key" field.
Expected: should be at the right end of field.

** Affects: horizon
 Importance: Low
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1444422

Title:
  Glyphicon-eye position in User Credentials dialog

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps:
  1. Login as admin user.
  2. Navigate to Horizon page: 
http:///horizon/project/access_and_security/
  3. Click on "View Credentials" button.
  4. Check position of eye icon.
  Actual: icon above "EC2 Secret Key" field.
  Expected: should be at the right end of field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1444422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408480] Re: PciDevTracker passes context module instead of instance

2015-04-15 Thread Alan Pevec
While this is Low by itself, it is required for Juno backport of bug
1415768

** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => In Progress

** Changed in: nova/juno
   Importance: Undecided => High

** Changed in: nova/juno
 Assignee: (unassigned) => Nikola Đipanov (ndipanov)

** Tags removed: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408480

Title:
  PciDevTracker passes context module instead of instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  Currently, the code in the PciDevTracker.__init__() method of
  nova/pci/manager.py reads:

  ```
  def __init__(self, node_id=None):
      """Create a pci device tracker.

      If a node_id is passed in, it will fetch pci devices information
      from database, otherwise, it will create an empty devices list
      and the resource tracker will update the node_id information later.
      """

      super(PciDevTracker, self).__init__()
      self.stale = {}
      self.node_id = node_id
      self.stats = stats.PciDeviceStats()
      if node_id:
          self.pci_devs = list(
              objects.PciDeviceList.get_by_compute_node(context, node_id))
      else:
          self.pci_devs = []
      self._initial_instance_usage()
  ```

  The problem is that in the call to
  `objects.PciDeviceList.get_by_compute_node(context, node_id)`, there
  is no local value for the 'context' parameter, so as a result, the
  context module defined in the imports is what is passed.

  Instead, the parameter should be changed to
  `context.get_admin_context()`.
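
  A minimal sketch of the change described above (using nova's admin
  context helper rather than the imported module):

  ```
  # Sketch of the fix: pass a real admin RequestContext, not the module.
  from nova import context as nova_context
  from nova import objects


  def load_pci_devs(node_id):
      admin_ctxt = nova_context.get_admin_context()
      return list(
          objects.PciDeviceList.get_by_compute_node(admin_ctxt, node_id))
  ```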

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415768] Re: the pci device assigned to instance is inconsistent with DB record when restarting nova-compute

2015-04-15 Thread Alan Pevec
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => In Progress

** Changed in: nova/juno
   Importance: Undecided => High

** Changed in: nova/juno
 Assignee: (unassigned) => Nikola Đipanov (ndipanov)

** Tags removed: juno-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415768

Title:
  the pci device assigned to instance is inconsistent with DB record
  when restarting nova-compute

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  After restarting the nova-compute process, I found that the pci device
  assigned to the instance in libvirt.xml differed from the record in the
  'pci_devices' DB table.

  Every time nova-compute was restarted, pci_tracker.allocations was
  reset to an empty dict. It no longer contained the pci devices that had
  already been allocated to instances, so some pci devices could be
  reallocated to those instances and recorded in the DB, possibly
  inconsistent with libvirt.xml.

  In other words, nova-compute would reallocate pci devices for any
  instance with a pci request when restarting.
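
  A minimal sketch of one way to avoid the reset, assuming the tracker
  rebuilds its allocation map from the devices already recorded as claimed
  in the DB instead of starting from an empty dict (the helper name and
  fields are illustrative, not the nova source):

  ```
  import collections

  def rebuild_allocations(pci_devs):
      """Rebuild the instance -> devices map from persisted PCI devices so a
      restart does not hand an already-allocated device to another instance."""
      allocations = collections.defaultdict(list)
      for dev in pci_devs:
          if dev.status == 'allocated' and dev.instance_uuid:
              allocations[dev.instance_uuid].append(dev)
      return allocations
  ```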

  See details:
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/resource_tracker.py#n347

  This is a probabilistic problem and cannot always be reproduced. If the
  instance has many pci devices, it happens more often.

  This bug was encountered on kilo master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444397] [NEW] single allowed address pair rule can exhaust entire ipset space

2015-04-15 Thread Kevin Benton
Public bug reported:

The hash type used by the ipsets is 'ip', which expands a CIDR into
every member address (e.g. 10.100.0.0/16 becomes 65k entries). The
allowed address pairs extension allows CIDRs, so a single allowed address
pair set can exhaust the entire ipset and break the security group rules
for a tenant.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress


** Tags: kilo-rc-potential

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Tags added: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444397

Title:
  single allowed address pair rule can exhaust entire ipset space

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The hash type used by the ipsets is 'ip', which expands a CIDR into
  every member address (e.g. 10.100.0.0/16 becomes 65k entries). The
  allowed address pairs extension allows CIDRs, so a single allowed
  address pair set can exhaust the entire ipset and break the security
  group rules for a tenant.
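
  A quick illustration of the scale involved, using only the Python standard
  library (the CIDR is just an example):

  ```
  import ipaddress

  # With the 'ip' hash type every member address is stored individually, so
  # one /16 allowed-address-pair entry expands to 65,536 set members, which
  # already equals ipset's default maxelem of 65536.
  cidr = ipaddress.ip_network("10.100.0.0/16")
  print(cidr.num_addresses)  # 65536
  ```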

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375252] Re: Hostname change is not preserved across reboot on Azure Ubuntu VMs

2015-04-15 Thread Dan Watkins
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: walinuxagent (Ubuntu)
   Status: In Progress => Invalid

** Changed in: cloud-init
   Status: New => In Progress

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1375252

Title:
  Hostname change is not preserved across reboot on Azure Ubuntu VMs

Status in Init scripts for use on cloud images:
  In Progress
Status in cloud-init package in Ubuntu:
  New
Status in walinuxagent package in Ubuntu:
  Invalid

Bug description:
  Whilst a hostname change is immediately effective using the hostname
  or hostnamectl commands, and changing the hostname this way is
  propagated to the hostname field in the Azure dashboard, upon
  rebooting the Ubuntu VM the hostname reverts to the Virtual Machine
  name as displayed in the Azure dashboard.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: walinuxagent 2.0.8-0ubuntu1~14.04.0
  ProcVersionSignature: Ubuntu 3.13.0-36.63-generic 3.13.11.6
  Uname: Linux 3.13.0-36-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.4
  Architecture: amd64
  Date: Mon Sep 29 12:48:56 2014
  ProcEnviron:
   TERM=xterm
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_GB.UTF-8
   SHELL=/bin/bash
  SourcePackage: walinuxagent
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.waagent.conf: 2014-09-29T09:37:10.758660

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1375252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444359] [NEW] Can't add Nova security group by ID to server

2015-04-15 Thread Ran Ziv
Public bug reported:

Nova security groups are supposedly name-unique, while Neutron security groups
are not necessarily so.
I've noticed that while it's possible to add Neutron security groups to a
server by using the security group ID, it's impossible to do so for a Nova
security group (it only works by name).

This is problematic when one wants to add a security group to a server
when they don't know (or care) which type of security group it is, since
if they use ID it won't work for Nova security groups, and using name
might run into ambiguity with Neutron security groups.

The solution should be allowing Nova security groups to be added to a
server by ID as well.


P.S.:
I've found this bug:
https://bugs.launchpad.net/nova/+bug/1161472
The one I'm writing about sounds to me like the next step in this.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444359

Title:
  Can't add Nova security group by ID to server

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova security groups are supposedly name-unique, while Neutron security
  groups are not necessarily so.
  I've noticed that while it's possible to add Neutron security groups to a
  server by using the security group ID, it's impossible to do so for a Nova
  security group (it only works by name).

  This is problematic when one wants to add a security group to a server
  when they don't know (or care) which type of security group it is,
  since if they use ID it won't work for Nova security groups, and using
  name might run into ambiguity with Neutron security groups.

  The solution should be allowing Nova security groups to be added to a
  server by ID as well.
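
  A minimal sketch of the behaviour described above using python-novaclient
  (credentials, endpoint and IDs are placeholders):

  ```
  from novaclient import client

  # Placeholder credentials; adjust for a real cloud.
  nova = client.Client("2", "demo", "secret", "demo",
                       "http://controller:5000/v2.0")
  server = nova.servers.get("INSTANCE_UUID")

  # Adding by name works for both nova-network and Neutron security groups.
  nova.servers.add_security_group(server, "default")

  # Passing an ID only resolves when the group is a Neutron security group;
  # for a nova-network group the same call fails, which is the gap this bug
  # asks to close.
  nova.servers.add_security_group(server, "SECURITY_GROUP_ID")
  ```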

  
  P.S.:
  I've found this bug:
  https://bugs.launchpad.net/nova/+bug/1161472
  The one I'm writing about sounds to me like the next step in this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349345] Re: Neutron does not contain checks for running migration upgrade->downgrade->upgrade

2015-04-15 Thread Ann Kamyshnikova
This bug is no longer valid, as we do not have downgrades in migrations.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Ann Kamyshnikova (akamyshnikova) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349345

Title:
  Neutron does not contain checks for running migration
  upgrade->downgrade->upgrade

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  A large number of existing migrations had problems with downgrade
  because developers forgot to test it. To make this easier, a test
  should be created that runs each migration through
  upgrade->downgrade->upgrade.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1349345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444310] [NEW] keystone token response contains InternalURL for non admin user

2015-04-15 Thread Attila Fazekas
Public bug reported:

Keystone token responses contain both the internalURL and adminURL for a
non-admin user (demo).

This information should not be exposed to a non-admin user.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1444310

Title:
  keystone token response contains InternalURL for non admin user

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone token responses contain both the internalURL and adminURL for
  a non-admin user (demo).

  This information should not be exposed to a non-admin user.
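
  For illustration, a trimmed sketch of a v2 service catalog endpoint as it
  appears in the token response (all addresses are placeholders); the concern
  is that the last two URLs are returned to an unprivileged user:

  ```
  endpoint = {
      "region": "RegionOne",
      "publicURL": "http://203.0.113.10:8774/v2/TENANT_ID",
      "internalURL": "http://10.0.0.10:8774/v2/TENANT_ID",  # internal address exposed
      "adminURL": "http://10.0.0.10:8774/v2/TENANT_ID",     # admin endpoint exposed
  }
  ```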

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1444310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444300] [NEW] nova-compute service doesn't restart if resize operation fails

2015-04-15 Thread Rajesh Tailor
Public bug reported:

If an instance is being resized and the user tries to delete it, the instance
is deleted successfully. After the deletion, the greenthread that was resizing
the instance raises an InstanceNotFound error, which is caught in
errors_out_migration and then raises "KeyError: 'migration'".
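
A minimal, generic sketch (not the nova source) of the failure mode: the
decorator looks up 'migration' in kwargs, so a migration passed positionally
produces the KeyError described above.

```
import functools

def errors_out_migration(function):
    """Toy decorator illustrating the KeyError described above."""
    @functools.wraps(function)
    def decorated(self, context, *args, **kwargs):
        try:
            return function(self, context, *args, **kwargs)
        except Exception:
            # Raises KeyError: 'migration' when the caller passed the
            # migration positionally instead of as a keyword argument.
            migration = kwargs['migration']
            migration.status = 'error'
            raise
    return decorated
```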

Now if the user tries to restart the n-cpu service, it fails with an
InstanceNotFound error.

Steps to reproduce:
1. Create instance
2. Resize instance
3. Delete instance while resize is in progress (scp/rsync process is running)
4. Instance is deleted successfully and instance files are cleaned from source 
compute node
5. When the scp/rsync process completes, it throws an "InstanceNotFound" error
and the migration remains in 'migrating' status. After catching the
InstanceNotFound error, the errors_out_migration decorator throws "KeyError:
'migration'" because 'migration' is expected as a keyword argument but is
passed positionally.
The following error is thrown:

2015-04-14 23:29:12.466 ERROR nova.compute.manager 
[req-2b4e3718-a1fa-4603-bd9e-6c9481f75e16 demo demo] [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Setting instance vm_state to ERROR
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Traceback (most recent call last):
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/compute/manager.py", line 6358, in 
_error_out_instance_on_exception
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] yield
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/compute/manager.py", line 3984, in resize_instance
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] timeout, retry_interval)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 6318, in 
migrate_disk_and_power_off
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] shared_storage)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] six.reraise(self.type_, self.value, 
self.tb)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 6313, in 
migrate_disk_and_power_off
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] libvirt_utils.copy_image(from_path, 
img_path, host=dest)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/virt/libvirt/utils.py", line 327, in copy_image
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] execute('scp', src, dest)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/opt/stack/nova/nova/virt/libvirt/utils.py", line 55, in execute
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] return utils.execute(*args, **kwargs)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File "/opt/stack/nova/nova/utils.py", 
line 206, in execute
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] return processutils.execute(*cmd, 
**kwargs)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 
238, in execute
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] cmd=sanitized_cmd)
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] ProcessExecutionError: Unexpected error 
while running command.
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Command: scp 
/opt/stack/data/nova/instances/1f24201e-4eac-4dc4-9532-8fb863949a09_resize/disk.config
 
10.69.4.172:/opt/stack/data/nova/instances/1f24201e-4eac-4dc4-9532-8fb863949a09/disk.config
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Exit code: 1
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: 
1f24201e-4eac-4dc4-9532-8fb863949a09] Stdout: u''
2015-04-14 23:29:12.466 TRACE nova.compute.manager [instance: