[Yahoo-eng-team] [Bug 1362451] [NEW] ignore .idea folder in glance

2014-08-28 Thread ling-yun
Public bug reported:

If we use JetBrains PyCharm as a Python development tool, PyCharm
automatically generates its configuration folder, named .idea, in the
root directory of the code. Many projects, such as nova and cinder,
already ignore the .idea folder. The glance project should ignore it as
well.

** Affects: glance
 Importance: Undecided
 Assignee: ling-yun (zengyunling)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => ling-yun (zengyunling)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362451

Title:
  ignore .idea folder in glance

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If we use JetBrains PyCharm as a Python development tool, PyCharm
  automatically generates its configuration folder, named .idea, in the
  root directory of the code. Many projects, such as nova and cinder,
  already ignore the .idea folder. The glance project should ignore it
  as well.
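  The fix is a one-line addition to glance's .gitignore, sketched here on
  the assumption that it matches the pattern nova and cinder already use:

```
# JetBrains PyCharm project settings
.idea
```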

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1362451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362454] [NEW] ignore .idea folder in neutron

2014-08-28 Thread ling-yun
Public bug reported:

If we use JetBrains PyCharm as a Python development tool, PyCharm
automatically generates its configuration folder, named .idea, in the
root directory of the code. Many projects, such as nova and cinder,
already ignore the .idea folder. The neutron project should ignore it as
well.

** Affects: neutron
 Importance: Undecided
 Assignee: ling-yun (zengyunling)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => ling-yun (zengyunling)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362454

Title:
  ignore .idea folder in neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If we use JetBrains PyCharm as a Python development tool, PyCharm
  automatically generates its configuration folder, named .idea, in the
  root directory of the code. Many projects, such as nova and cinder,
  already ignore the .idea folder. The neutron project should ignore it
  as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362466] [NEW] iptables metering removes wrong labels on update

2014-08-28 Thread Angus Lees
Public bug reported:

If a router is removed from the list passed to update_routers(), the
iptables_driver removes the labels for the last(?) router passed, not
the one removed.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362466

Title:
  iptables metering removes wrong labels on update

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If a router is removed from the list passed to update_routers(), the
  iptables_driver removes the labels for the last(?) router passed, not
  the one removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362471] [NEW] Cannot delete flavor extra spec in either the v2 or v3 API

2014-08-28 Thread Alex Xu
Public bug reported:

Reproduce as below:

$ nova flavor-show 1
+----------------------------+--------------+
| Property                   | Value        |
+----------------------------+--------------+
| OS-FLV-DISABLED:disabled   | False        |
| OS-FLV-EXT-DATA:ephemeral  | 0            |
| disk                       | 1            |
| extra_specs                | {a: 5, b: 1} |
| id                         | 1            |
| name                       | m1.tiny      |
| os-flavor-access:is_public | True         |
| ram                        | 512          |
| rxtx_factor                | 1.0          |
| swap                       |              |
| vcpus                      | 1            |
+----------------------------+--------------+


$ nova flavor-key 1 unset a

$ nova flavor-show 1
+----------------------------+--------------+
| Property                   | Value        |
+----------------------------+--------------+
| OS-FLV-DISABLED:disabled   | False        |
| OS-FLV-EXT-DATA:ephemeral  | 0            |
| disk                       | 1            |
| extra_specs                | {a: 5, b: 1} |
| id                         | 1            |
| name                       | m1.tiny      |
| os-flavor-access:is_public | True         |
| ram                        | 512          |
| rxtx_factor                | 1.0          |
| swap                       |              |
| vcpus                      | 1            |
+----------------------------+--------------+


This is due to the way the flavor extra spec is deleted:

flavor = objects.Flavor.get_by_flavor_id(context, flavor_id)
del flavor.extra_specs[id]
flavor.save()

'del flavor.extra_specs[id]' does not trigger the property setter, so the
flavor object never knows that extra_specs has changed.

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362471

Title:
  Cannot delete flavor extra spec in either the v2 or v3 API

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Reproduce as below:

  $ nova flavor-show 1
  +----------------------------+--------------+
  | Property                   | Value        |
  +----------------------------+--------------+
  | OS-FLV-DISABLED:disabled   | False        |
  | OS-FLV-EXT-DATA:ephemeral  | 0            |
  | disk                       | 1            |
  | extra_specs                | {a: 5, b: 1} |
  | id                         | 1            |
  | name                       | m1.tiny      |
  | os-flavor-access:is_public | True         |
  | ram                        | 512          |
  | rxtx_factor                | 1.0          |
  | swap                       |              |
  | vcpus                      | 1            |
  +----------------------------+--------------+

  
  $ nova flavor-key 1 unset a

  $ nova flavor-show 1
  +----------------------------+--------------+
  | Property                   | Value        |
  +----------------------------+--------------+
  | OS-FLV-DISABLED:disabled   | False        |
  | OS-FLV-EXT-DATA:ephemeral  | 0            |
  | disk                       | 1            |
  | extra_specs                | {a: 5, b: 1} |
  | id                         | 1            |
  | name                       | m1.tiny      |
  | os-flavor-access:is_public | True         |
  | ram                        | 512          |
  | rxtx_factor                | 1.0          |
  | swap                       |              |
  | vcpus                      | 1            |
  +----------------------------+--------------+


  This is due to the way the flavor extra spec is deleted:

  flavor = objects.Flavor.get_by_flavor_id(context, flavor_id)
  del flavor.extra_specs[id]
  flavor.save()

  'del flavor.extra_specs[id]' does not trigger the property setter, so
  the flavor object never knows that extra_specs has changed.
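  The setter bypass can be reproduced with a plain Python property. The
  class below is an illustrative stand-in for the real Flavor object, and
  `changed_fields` is an assumed name for its change tracking:

```python
class Flavor:
    """Illustrative stand-in: records which fields were assigned."""

    def __init__(self):
        self._extra_specs = {'a': '5', 'b': '1'}
        self.changed_fields = set()

    @property
    def extra_specs(self):
        return self._extra_specs

    @extra_specs.setter
    def extra_specs(self, value):
        self._extra_specs = value
        self.changed_fields.add('extra_specs')


flavor = Flavor()
del flavor.extra_specs['a']       # mutates the dict in place; setter never runs
assert flavor.changed_fields == set()

specs = dict(flavor.extra_specs)  # copy, modify the copy...
del specs['b']
flavor.extra_specs = specs        # ...then reassign, which fires the setter
assert flavor.changed_fields == {'extra_specs'}
```

  Rebinding the attribute, rather than mutating the dict it returns, is
  what lets the object mark extra_specs dirty before save().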

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362480] [NEW] Datacenter moid should be a value not a tuple

2014-08-28 Thread Yang Yu
Public bug reported:

In edge_appliance_driver.py, a trailing comma is added when setting the
datacenter moid, so the datacenter moid value is turned into a tuple,
which is wrong.

if datacenter_moid:
    edge['datacenterMoid'] = datacenter_moid,  # <=== should remove the ','
return edge

** Affects: neutron
 Importance: Low
 Assignee: Yang Yu (yuyangbj)
 Status: New


** Tags: vmware

** Changed in: neutron
 Assignee: (unassigned) => Yang Yu (yuyangbj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362480

Title:
  Datacenter moid should be a value not a tuple

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In edge_appliance_driver.py, a trailing comma is added when setting the
  datacenter moid, so the datacenter moid value is turned into a tuple,
  which is wrong.

  if datacenter_moid:
      edge['datacenterMoid'] = datacenter_moid,  # <=== should remove the ','
  return edge
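  The effect of the stray comma is easy to demonstrate in isolation (the
  moid value below is made up):

```python
datacenter_moid = 'datacenter-21'  # illustrative moid value
edge = {}

edge['datacenterMoid'] = datacenter_moid,   # buggy: trailing comma builds a tuple
assert edge['datacenterMoid'] == ('datacenter-21',)

edge['datacenterMoid'] = datacenter_moid    # fixed: plain assignment
assert edge['datacenterMoid'] == 'datacenter-21'
```

  In Python the comma, not the parentheses, is what creates a tuple, so a
  trailing comma after any assignment silently wraps the value.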

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326955] Re: v1 API GET on image member not implemented

2014-08-28 Thread Zhi Yan Liu
** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1326955

Title:
  v1 API GET on image member not implemented

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  Despite the client having a call `glanceclient.image_members.get(image_id,
  member_id)` [1], the GET call on /v1/image/<uuid>/members/<id> is not
  implemented in the v1 API and returns a 405: Method Not Allowed error.

  I suspect that this was an unintentional omission. The method is
  listed in the router, but in image_members the comment indicates that
  the 405 response is intentional. [2,3] It shouldn't be hard for me to
  implement the fix, but I want to make sure that there wasn't an
  intentional reason for leaving the API call out.

  [1] 
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/image_members.py#L34
  [2] 
https://github.com/openstack/glance/blob/master/glance/api/v1/router.py#L71
  [3] 
https://github.com/openstack/glance/blob/master/glance/api/v1/members.py#L105

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1326955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362488] [NEW] nova api should have an option to override novncproxy_base_url which is a compute option

2014-08-28 Thread Ishant Tyagi
Public bug reported:

novncproxy_base_url is the only option in my compute conf that points
to the controller's public network; all the others point to the
controller's private IP.

If the public IP address of the controller changes, I need to update
every host with the new URL.

Nova compute currently uses novncproxy_base_url only to construct the
access URL, which it returns to nova-api. This option could therefore be
overridden in nova-api; then I could just update the nova-api conf when
the controller's public IP changes, instead of changing the conf file on
every compute node.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362488

Title:
  nova api should have an option to override novncproxy_base_url which is
  a compute option

Status in OpenStack Compute (Nova):
  New

Bug description:
  novncproxy_base_url is the only option in my compute conf that points
  to the controller's public network; all the others point to the
  controller's private IP.

  If the public IP address of the controller changes, I need to update
  every host with the new URL.

  Nova compute currently uses novncproxy_base_url only to construct the
  access URL, which it returns to nova-api. This option could therefore
  be overridden in nova-api; then I could just update the nova-api conf
  when the controller's public IP changes, instead of changing the conf
  file on every compute node.
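  A sketch of the deployment being described, with illustrative address
  and port: today this line must appear in nova.conf on every compute
  node; the request is for nova-api to be able to honor it instead.

```
[DEFAULT]
# Public-facing URL embedded in the console access URL returned to clients
novncproxy_base_url = http://198.51.100.10:6080/vnc_auto.html
```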

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362513] [NEW] libvirt: connect_volume scans all LUNs, so it will be very slow with a large number of volumes

2014-08-28 Thread Shen Wang
Public bug reported:

Tested OpenStack version: Icehouse 2014.1; the master branch still has this issue.
Host version: CentOS 6, 2.6.32-431.el6.x86_64

I have done some work to test the performance of LUN scanning with
multipath, using the same approach Nova does.
In my test, the host was connected to almost 900 LUNs.
First, I used 'iscsiadm' with '--rescan' to discover LUNs.
Second, I used 'multipath -r' to construct multipath devices.
These two steps scan all of the LUNs and cost more than 2 minutes.

According to connect_volume in nova/virt/libvirt/volume.py:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252,
Nova also uses these two steps to detect a new multipath volume, and they
scan all of the LUNs, including all the others that are already
connected. So if a host has a large number of LUNs connected to it,
connect_volume will be very slow.

I think connect_volume needn't scan all of the LUNs; it only needs to
scan the LUN specified by connection_info.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362513

Title:
  libvirt: connect_volume scans all LUNs, so it will be very slow with a
  large number of volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tested OpenStack version: Icehouse 2014.1; the master branch still has this issue.
  Host version: CentOS 6, 2.6.32-431.el6.x86_64

  I have done some work to test the performance of LUN scanning with
  multipath, using the same approach Nova does.
  In my test, the host was connected to almost 900 LUNs.
  First, I used 'iscsiadm' with '--rescan' to discover LUNs.
  Second, I used 'multipath -r' to construct multipath devices.
  These two steps scan all of the LUNs and cost more than 2 minutes.

  According to connect_volume in nova/virt/libvirt/volume.py:
  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L252,
  Nova also uses these two steps to detect a new multipath volume, and
  they scan all of the LUNs, including all the others that are already
  connected. So if a host has a large number of LUNs connected to it,
  connect_volume will be very slow.

  I think connect_volume needn't scan all of the LUNs; it only needs to
  scan the LUN specified by connection_info.
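  The contrast can be sketched as follows, with illustrative helper names
  (not Nova's actual API): the two-step full rescan described above
  versus a targeted scan of the single channel:target:LUN from
  connection_info via the SCSI host's sysfs scan file:

```python
def full_rescan_cmds(target_iqn):
    # What the report measures: rescan every LUN on the session, then
    # rebuild multipath maps; the cost grows with the total LUN count.
    return [['iscsiadm', '-m', 'node', '-T', target_iqn, '--rescan'],
            ['multipath', '-r']]


def targeted_scan(host_no, channel, target, lun):
    # Scan a single channel:target:lun by writing "c t l" to the SCSI
    # host's sysfs scan file, leaving all other LUNs untouched.
    path = '/sys/class/scsi_host/host%d/scan' % host_no
    payload = '%d %d %d' % (channel, target, lun)
    return path, payload


path, payload = targeted_scan(3, 0, 0, 42)
assert path == '/sys/class/scsi_host/host3/scan'
assert payload == '0 0 42'
```

  The targeted write touches one device per invocation, so its cost stays
  constant as the number of attached LUNs grows.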

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362488] Re: nova api should have an option to override novncproxy_base_url which is a compute option

2014-08-28 Thread Ishant Tyagi
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362488

Title:
  nova api should have an option to override novncproxy_base_url which is
  a compute option

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  novncproxy_base_url is the only option in my compute conf that points
  to the controller's public network; all the others point to the
  controller's private IP.

  If the public IP address of the controller changes, I need to update
  every host with the new URL.

  Nova compute currently uses novncproxy_base_url only to construct the
  access URL, which it returns to nova-api. This option could therefore
  be overridden in nova-api; then I could just update the nova-api conf
  when the controller's public IP changes, instead of changing the conf
  file on every compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359035] Re: Unable to feel significance of nova net-create

2014-08-28 Thread Ghanshyam Mann
Confirmed that nova net-create & neutron net-create create networks
successfully when nova-network & neutron are enabled, respectively.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359035

Title:
  Unable to feel significance of nova net-create

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I create a network using -

  #nova net-create Test-net 10.0.0.0/8

  the following error is received -

  ERROR (ClientException): Create networks failed (HTTP 503) (Request-
  ID: req-00cca4f8-ec13-44b0-99ac-05573c1da49b)

  The nova-api logs are as follows:

  2014-08-20 10:21:26.412 ERROR 
nova.api.openstack.compute.contrib.os_tenant_networks 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Create networks failed
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks Traceback (most recent 
call last):
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/os_tenant_networks.py, 
line 184, in create
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks label=label, **kwargs)
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/network/base_api.py, line 97, in create
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks raise 
NotImplementedError()
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks NotImplementedError
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
  2014-08-20 10:21:26.439 INFO nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] HTTP exception thrown: 
Create networks failed
  2014-08-20 10:21:26.440 DEBUG nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Returning 503 to user: 
Create networks failed __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1200
  2014-08-20 10:21:26.440 INFO nova.osapi_compute.wsgi.server 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] 10.0.9.49 POST 
/v2/6a1118be3e51427384bcebade69e1703/os-tenant-networks HTTP/1.1 status: 503 
len: 278 time: 0.1678212

  A similar bug was raised earlier:
  https://bugs.launchpad.net/nova/+bug/1172173

  But if one cannot create a network using the nova net-create CLI, as in
  the bug reported above, then what is the significance of having this
  CLI?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362528] [NEW] cirros starts with file system in read only mode

2014-08-28 Thread Salvatore Orlando
Public bug reported:

Query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

The VM boots incorrectly, the SSH service does not start, and the
connection fails.

http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-pg-
full/603e3c6/console.html#_2014-08-26_08_59_39_951


Only observed with neutron, 1 gate hit in 7 days.
No hint about the issue in syslog or libvirt logs.

** Affects: neutron
 Importance: Medium
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362528

Title:
  cirros starts with file system in read only mode

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.

  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-
  pg-full/603e3c6/console.html#_2014-08-26_08_59_39_951

  
  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359140] Re: NotImplementedError during nova network-create

2014-08-28 Thread Ghanshyam Mann
Confirmed that nova net-create & neutron net-create create networks
successfully when nova-network & neutron are enabled, respectively.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359140

Title:
  NotImplementedError during nova network-create

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using a devstack development environment.

  
  My localrc file is as follows:

  ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
  DEST=/opt/stack
  disable_service n-net
  enable_service tempest
  API_RATE_LIMIT=False
  VOLUME_BACKING_FILE_SIZE=4G
  VIRT_DRIVER=libvirt
  SWIFT_REPLICAS=1
  export OS_NO_CACHE=True
  SCREEN_LOGDIR=/opt/stack/screen-logs
  SYSLOG=True
  SKIP_EXERCISES=boot_from_volume,client-env
  ROOTSLEEP=0
  ACTIVE_TIMEOUT=60
  Q_USE_SECGROUP=True
  BOOT_TIMEOUT=90
  ASSOCIATE_TIMEOUT=60
  ADMIN_PASSWORD=Password
  MYSQL_PASSWORD=Password
  RABBIT_PASSWORD=Password
  SERVICE_PASSWORD=Password
  SERVICE_TOKEN=tokentoken
  SWIFT_HASH=Password

  I am trying to create a network using the nova network-create CLI.

  I tried the following command:

  #[raies@localhost devstack]$ nova network-create --fixed-range-v4 
192.168.1.0/8 test-net-1
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-da64fd03-3637-4a80-9b6f-28e6d477e60d)

  
  The traces from the n-api logs are as follows:

  
   _http_log_response 
/opt/stack/python-keystoneclient/keystoneclient/session.py:196
  2014-08-20 15:37:57.212 DEBUG nova.api.openstack.wsgi 
[req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Action: 'create', body: 
{network: {cidr: 192.168.1.0/8, label: test-net-1}} _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:931
  2014-08-20 15:37:57.212 DEBUG nova.api.openstack.wsgi 
[req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Calling method 'bound 
method NetworkController.create of 
nova.api.openstack.compute.contrib.os_networks.NetworkController object at 
0x37eb650' (Content-type='application/json', Accept='application/json') 
_process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:936
  2014-08-20 15:37:57.213 DEBUG nova.api.openstack.compute.contrib.os_networks 
[req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Creating network with 
label test-net-1 create 
/opt/stack/nova/nova/api/openstack/compute/contrib/os_networks.py:129
  2014-08-20 15:37:57.213 ERROR nova.api.openstack 
[req-da64fd03-3637-4a80-9b6f-28e6d477e60d admin admin] Caught error:
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 124, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/request.py, line 1296, in send
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/request.py, line 1260, in 
call_application
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py, line 565, 
in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
self._app(env, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/routes/middleware.py, line 131, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 
/usr/lib/python2.7/site-packages/webob/dec.py, line 144, in __call__
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-08-20 15:37:57.213 17128 TRACE nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1362557] [NEW] Performance of list_projects_for_user impacting keystone

2014-08-28 Thread Henry Nash
Public bug reported:

The assignment call list_projects_for_user() is commonly used - not
least every time you issue a scoped token.  In a test configuration,
this method was consuming 36% of all keystone clock time.  This call
searches the assignments table (which has one row for every assignment)
by actor_id.  Although actor_id is part of a composite primary key, when
used alone it is like any other non-indexed column.

Adding an index on actor_id would significantly improve performance, and
since reads of this table are probably much more frequent than new role
assignments being added, this seems like a good trade-off.

Such an index would also improve the performance of get
role_assignments when used to get the role assignments for a user, which
seems a likely common usage pattern.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362557

Title:
  Performance of list_projects_for_user impacting keystone

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The assignment call list_projects_for_user() is commonly used - not
  least every time you issue a scoped token.  In a test configuration,
  this method was consuming 36% of all keystone clock time.  This call
  searches the assignments table (which has one row for every
  assignment) by actor_id.  Although actor_id is part of a composite
  primary key, when used alone it is like any other non-indexed column.

  Adding an index on actor_id would significantly improve performance,
  and since reads of this table are probably much more frequent than new
  role assignments being added, this seems like a good trade-off.

  Such an index would also improve the performance of get
  role_assignments when used to get the role assignments for a user,
  which seems a likely common usage pattern.
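  The effect can be sketched with the stdlib sqlite3 module and a
  simplified assignment table; the column set and index name here are
  assumptions for illustration, not Keystone's actual schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE assignment (
        type TEXT, actor_id TEXT, target_id TEXT, role_id TEXT,
        PRIMARY KEY (type, actor_id, target_id, role_id)
    )
""")
# In this sketch actor_id is not the leading column of the composite
# primary key, so a lookup by actor_id alone cannot use it; add a
# dedicated index.
conn.execute("CREATE INDEX ix_actor_id ON assignment (actor_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM assignment WHERE actor_id = ?",
    ('user-1',)).fetchall()
# The plan's detail column should now mention the new index rather than
# a full table scan.
assert any('ix_actor_id' in row[-1] for row in plan)
```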

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362593] [NEW] str validators may need a default length limit

2014-08-28 Thread Wei Wang
Public bug reported:

As the title says, our str validators have max_length as a parameter,
but its default is None.

So when a very long str is input, we get an internal server error,
which is very ugly.

** Affects: neutron
 Importance: Undecided
 Assignee: Wei Wang (damon-devops)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Wei Wang (damon-devops)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362593

Title:
  str validators may need a default length limit

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As the title says, our str validators have max_length as a parameter,
  but its default is None.

  So when a very long str is input, we get an internal server error,
  which is very ugly.
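  A minimal sketch of the proposed behavior, following the convention
  that a validator returns None when the value is valid and an error
  message otherwise; the 255 default is an assumption, not something the
  report specifies:

```python
DEFAULT_MAX_LEN = 255  # assumed default cap; the report only asks for *a* default


def validate_string(data, max_length=DEFAULT_MAX_LEN):
    # Returns None when valid, an error message string otherwise.
    if not isinstance(data, str):
        return "'%s' is not a string" % (data,)
    if max_length is not None and len(data) > max_length:
        return ("'%s...' exceeds maximum length of %d"
                % (data[:16], max_length))
    return None


assert validate_string('net-1') is None           # normal input passes
assert validate_string('x' * 100000) is not None  # oversized input rejected
```

  Rejecting the value in the validator turns what is today an internal
  server error into a clean validation failure.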

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362596] [NEW] API Test Failure: tempest.api.compute.servers.test_server_actions.[ServerActionsTestJSON, ServerActionsTestXML]

2014-08-28 Thread Kyle Mestery
Public bug reported:

See here: http://logs.openstack.org/99/115799/7/gate/gate-tempest-dsvm-
neutron-pg/6394ec4/console.html

2014-08-28 09:30:46.102 | ==
2014-08-28 09:30:46.103 | Failed 2 tests - output below:
2014-08-28 09:30:46.103 | ==
2014-08-28 09:30:46.103 | 
2014-08-28 09:30:46.103 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
2014-08-28 09:30:46.104 | 
--
2014-08-28 09:30:46.104 | 
2014-08-28 09:30:46.104 | Captured traceback:
2014-08-28 09:30:46.105 | ~~~
2014-08-28 09:30:46.105 | Traceback (most recent call last):
2014-08-28 09:30:46.105 |   File 
tempest/api/compute/servers/test_server_actions.py, line 59, in setUpClass
2014-08-28 09:30:46.106 | cls.server_id = cls.rebuild_server(None)
2014-08-28 09:30:46.106 |   File tempest/api/compute/base.py, line 354, 
in rebuild_server
2014-08-28 09:30:46.106 | resp, server = 
cls.create_test_server(wait_until='ACTIVE', **kwargs)
2014-08-28 09:30:46.106 |   File tempest/api/compute/base.py, line 254, 
in create_test_server
2014-08-28 09:30:46.107 | raise ex
2014-08-28 09:30:46.107 | BuildErrorException: Server 
de02306b-a65c-47ef-86ee-64afc61794e3 failed to build and is in ERROR status
2014-08-28 09:30:46.107 | Details: {u'message': u'No valid host was found. 
', u'code': 500, u'created': u'2014-08-28T08:48:40Z'}
2014-08-28 09:30:46.108 | 
2014-08-28 09:30:46.108 | 
2014-08-28 09:30:46.108 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestXML)
2014-08-28 09:30:46.109 | 
-
2014-08-28 09:30:46.109 | 
2014-08-28 09:30:46.109 | Captured traceback:
2014-08-28 09:30:46.110 | ~~~
2014-08-28 09:30:46.110 | Traceback (most recent call last):
2014-08-28 09:30:46.110 |   File 
tempest/api/compute/servers/test_server_actions.py, line 59, in setUpClass
2014-08-28 09:30:46.110 | cls.server_id = cls.rebuild_server(None)
2014-08-28 09:30:46.111 |   File tempest/api/compute/base.py, line 354, 
in rebuild_server
2014-08-28 09:30:46.111 | resp, server = 
cls.create_test_server(wait_until='ACTIVE', **kwargs)
2014-08-28 09:30:46.111 |   File tempest/api/compute/base.py, line 254, 
in create_test_server
2014-08-28 09:30:46.112 | raise ex
2014-08-28 09:30:46.112 | BuildErrorException: Server 
e966b5a5-6d8b-4fa2-8b90-cd28e5445a2a failed to build and is in ERROR status
2014-08-28 09:30:46.112 | Details: {'message': 'No valid host was found. ', 
'code': '500', 'details': 'None', 'created': '2014-08-28T08:48:44Z'}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362596

Title:
  API Test Failure:
  
tempest.api.compute.servers.test_server_actions.[ServerActionsTestJSON,ServerActionsTestXML]

Status in OpenStack Compute (Nova):
  New

Bug description:
  See here: http://logs.openstack.org/99/115799/7/gate/gate-tempest-
  dsvm-neutron-pg/6394ec4/console.html

  2014-08-28 09:30:46.102 | ==
  2014-08-28 09:30:46.103 | Failed 2 tests - output below:
  2014-08-28 09:30:46.103 | ==
  2014-08-28 09:30:46.103 | 
  2014-08-28 09:30:46.103 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
  2014-08-28 09:30:46.104 | 
--
  2014-08-28 09:30:46.104 | 
  2014-08-28 09:30:46.104 | Captured traceback:
  2014-08-28 09:30:46.105 | ~~~
  2014-08-28 09:30:46.105 | Traceback (most recent call last):
  2014-08-28 09:30:46.105 |   File 
tempest/api/compute/servers/test_server_actions.py, line 59, in setUpClass
  2014-08-28 09:30:46.106 | cls.server_id = cls.rebuild_server(None)
  2014-08-28 09:30:46.106 |   File tempest/api/compute/base.py, line 354, 
in rebuild_server
  2014-08-28 09:30:46.106 | resp, server = 
cls.create_test_server(wait_until='ACTIVE', **kwargs)
  2014-08-28 09:30:46.106 |   File tempest/api/compute/base.py, line 254, 
in create_test_server
  2014-08-28 09:30:46.107 | raise ex
  2014-08-28 09:30:46.107 | BuildErrorException: Server 
de02306b-a65c-47ef-86ee-64afc61794e3 failed to build and is in ERROR status
  2014-08-28 09:30:46.107 | Details: {u'message': u'No valid host was 
found. ', u'code': 500, u'created': u'2014-08-28T08:48:40Z'}
  2014-08-28 09:30:46.108 | 
  2014-08-28 09:30:46.108 | 
  2014-08-28 09:30:46.108 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestXML)
  2014-08-28 09:30:46.109 | 

[Yahoo-eng-team] [Bug 1362595] [NEW] move_vhds_into_sr - invalid cookie

2014-08-28 Thread Bob Ball
Public bug reported:

When moving VHDs on the filesystem, a coalesce may be in progress. As a
result, the VHD file is not valid when it is copied, since it is being
actively changed - and the VHD cookie is invalid.
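
A cheap sanity check for the invalid footer seen in the error below can
be sketched as follows (illustrative only, not the nova/XenAPI plugin
code; it assumes the standard VHD layout, where the footer occupies the
final 512 bytes and begins with the magic cookie "conectix"):

```python
import os
import tempfile


def vhd_footer_ok(path):
    # The VHD footer is the last 512 bytes of the file and starts with
    # the 8-byte magic cookie b"conectix"; a footer clobbered by a
    # concurrent coalesce fails this check.
    with open(path, "rb") as f:
        f.seek(-512, os.SEEK_END)
        return f.read(8) == b"conectix"


# Simulate a VHD with an intact footer.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 1024 + b"conectix" + b"\0" * 504)
    good = f.name

print(vhd_footer_ok(good))  # True
```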

Seen in XenServer CI: http://dd6b71949550285df7dc-
dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/36/109836/4/23874/run_tests.log

2014-08-28 12:26:37.538 | Traceback (most recent call last):
2014-08-28 12:26:37.543 |   File 
tempest/api/compute/servers/test_server_actions.py, line 251, in 
test_resize_server_revert
2014-08-28 12:26:37.550 | 
self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')
2014-08-28 12:26:37.556 |   File 
tempest/services/compute/json/servers_client.py, line 179, in 
wait_for_server_status
2014-08-28 12:26:37.563 | raise_on_error=raise_on_error)
2014-08-28 12:26:37.570 |   File tempest/common/waiters.py, line 77, in 
wait_for_server_status
2014-08-28 12:26:37.577 | server_id=server_id)
2014-08-28 12:26:37.583 | BuildErrorException: Server 
e58677ac-dd72-4f10-9615-cb6763f34f50 failed to build and is in ERROR status
2014-08-28 12:26:37.589 | Details: {u'message': 
u'[\'XENAPI_PLUGIN_FAILURE\', \'move_vhds_into_sr\', \'Exception\', VDI 
\'/var/run/sr-mount/16f5c980-eeb6-0fd3-e9b1-dec616309984/os-images/instancee58677ac-dd72-4f10-9615-cb6763f34f50/535cd7f2-80a5-463a-935c-9c4f52ba0ecf.vhd\'
 has an invalid footer: \' invalid cook', u'code': 500, u'created': 
u'2014-08-28T11:57:01Z'}

** Affects: nova
 Importance: Medium
 Assignee: John Garbutt (johngarbutt)
 Status: In Progress


** Tags: xenserver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362595

Title:
  move_vhds_into_sr - invalid cookie

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When moving VHDs on the filesystem, a coalesce may be in progress. As
  a result, the VHD file is not valid when it is copied, since it is
  being actively changed - and the VHD cookie is invalid.

  Seen in XenServer CI: http://dd6b71949550285df7dc-
  
dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/36/109836/4/23874/run_tests.log

  2014-08-28 12:26:37.538 | Traceback (most recent call last):
  2014-08-28 12:26:37.543 |   File 
tempest/api/compute/servers/test_server_actions.py, line 251, in 
test_resize_server_revert
  2014-08-28 12:26:37.550 | 
self.client.wait_for_server_status(self.server_id, 'VERIFY_RESIZE')
  2014-08-28 12:26:37.556 |   File 
tempest/services/compute/json/servers_client.py, line 179, in 
wait_for_server_status
  2014-08-28 12:26:37.563 | raise_on_error=raise_on_error)
  2014-08-28 12:26:37.570 |   File tempest/common/waiters.py, line 77, in 
wait_for_server_status
  2014-08-28 12:26:37.577 | server_id=server_id)
  2014-08-28 12:26:37.583 | BuildErrorException: Server 
e58677ac-dd72-4f10-9615-cb6763f34f50 failed to build and is in ERROR status
  2014-08-28 12:26:37.589 | Details: {u'message': 
u'[\'XENAPI_PLUGIN_FAILURE\', \'move_vhds_into_sr\', \'Exception\', VDI 
\'/var/run/sr-mount/16f5c980-eeb6-0fd3-e9b1-dec616309984/os-images/instancee58677ac-dd72-4f10-9615-cb6763f34f50/535cd7f2-80a5-463a-935c-9c4f52ba0ecf.vhd\'
 has an invalid footer: \' invalid cook', u'code': 500, u'created': 
u'2014-08-28T11:57:01Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356519] Re: purge-props does not honor property protections

2014-08-28 Thread Michael Turek
I realized I had not been assigning the proper role to my user. The
feature appears to be working properly. Updated bug status to 'Invalid'.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1356519

Title:
  purge-props does not honor property protections

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  When removing all properties of an image using --purge-props,
  protected properties are also removed. I saw this behavior using v1
  glance-client and v1 api. The documentation for protected properties
  states:

  Property protections will still be honoured if 'X-glance-registry-
  Purge-props' is set to 'True'. That is, if you request to modify
  properties with this header set to `True`, you will not be able to
  delete or update properties for which you do not have the relevant
  permissions. Properties which are not included in the request and for
  which you do have delete permissions will still be removed.

  This does not seem to happen. My hope is to restore/create this
  functionality.

  So far, I'm convinced that this is a glance issue rather than a
  glance-client issue. If it turns out to be otherwise, I will adjust
  the bug report accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1356519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362618] [NEW] Old confusing policies in policy.json

2014-08-28 Thread Salvatore Orlando
Public bug reported:

The following policies have not been used since grizzly:

subnets:private:read: rule:admin_or_owner,
subnets:private:write: rule:admin_or_owner,
subnets:shared:read: rule:regular_user,
subnets:shared:write: rule:admin_only,

Keeping them confuses users and leads them to think this syntax for
specifying policies is still valid.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362618

Title:
  Old confusing policies in policy.json

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following policies have not been used since grizzly:

  subnets:private:read: rule:admin_or_owner,
  subnets:private:write: rule:admin_or_owner,
  subnets:shared:read: rule:regular_user,
  subnets:shared:write: rule:admin_only,

  Keeping them confuses users and leads them to think this syntax for
  specifying policies is still valid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362623] [NEW] Check for the max personality files fails

2014-08-28 Thread b.sudhakoushik
Public bug reported:

I tried to create a nova server injecting 7 files, even though the max
number of files allowed for that tenant, as reported by nova
absolute-limits, is 6.

Ideally it should throw an error saying the limit was exceeded.

nova --version 2.18.1.32

steps i followed

1)  nova  boot  --flavor m1.tiny --image c60462a7-07e3-4703-bbab-
baeaa6b7a2fb --file /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
/home/cirros/abctxt=/home/ubuntu/bsk/abc1   bsk1

Server got created

2) nova absolute-limits
+-+---+
| Name| Value |
+-+---+
| maxServerMeta   | 128   |
| maxPersonality  | 6 |
| maxImageMeta| 128   |
| maxPersonalitySize  | 10240 |
| maxTotalRAMSize | 51200 |
| maxSecurityGroupRules   | 20|
| maxTotalKeypairs| 100   |
| totalRAMUsed| 4736  |
| maxSecurityGroups   | 10|
| totalFloatingIpsUsed| 0 |
| totalInstancesUsed  | 10|
| totalSecurityGroupsUsed | 1 |
| maxTotalFloatingIps | 10|
| maxTotalInstances   | 15|
| totalCoresUsed  | 10|
| maxTotalCores   | 20|
+-+---+

I even checked the limits via nova quota-class-show for the tenant
'demo' I am working on.

3) nova quota-class-show demo
+-+---+
| Quota   | Limit |
+-+---+
| instances   | 10|
| cores   | 20|
| ram | 51200 |
| floating_ips| 10|
| fixed_ips   | -1|
| metadata_items  | 128   |
| injected_files  | 5 |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes| 255   |
| key_pairs   | 100   |
| security_groups | 10|
| security_group_rules| 20|
+-+---+
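
The behaviour the reporter expects can be sketched as a pre-create
check (the names and the exception type here are hypothetical, not
nova's actual quota code):

```python
# Illustrative sketch of a pre-boot personality-file quota check:
# the API should reject the request up front rather than build the
# server with more injected files than maxPersonality allows.
class OverLimit(Exception):
    pass


def check_injected_files(files, max_personality):
    if len(files) > max_personality:
        raise OverLimit(
            "Personality file limit exceeded: %d > %d"
            % (len(files), max_personality))


files = [("/home/cirros/abctxt", "contents")] * 7
try:
    check_injected_files(files, max_personality=6)
except OverLimit as e:
    print(e)  # the boot request should fail here, not succeed
```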

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: absolute-limits nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362623

Title:
  Check for the max personality files fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried to create a nova server injecting 7 files, even though the max
  number of files allowed for that tenant, as reported by nova
  absolute-limits, is 6.

  Ideally it should throw an error saying the limit was exceeded.

  nova --version 2.18.1.32

  steps i followed

  1)  nova  boot  --flavor m1.tiny --image c60462a7-07e3-4703-bbab-
  baeaa6b7a2fb --file /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1 --file
  /home/cirros/abctxt=/home/ubuntu/bsk/abc1   bsk1

  Server got created

  2) nova absolute-limits
  +-+---+
  | Name| Value |
  +-+---+
  | maxServerMeta   | 128   |
  | maxPersonality  | 6 |
  | maxImageMeta| 128   |
  | maxPersonalitySize  | 10240 |
  | maxTotalRAMSize | 51200 |
  | maxSecurityGroupRules   | 20|
  | maxTotalKeypairs| 100   |
  | totalRAMUsed| 4736  |
  | maxSecurityGroups   | 10|
  | totalFloatingIpsUsed| 0 |
  | totalInstancesUsed  | 10|
  | totalSecurityGroupsUsed | 1 |
  | maxTotalFloatingIps | 10|
  | maxTotalInstances   | 15|
  | totalCoresUsed  | 10|
  | maxTotalCores   | 20|
  +-+---+

  I even checked the limits via nova quota-class-show for the tenant
  'demo' I am working on.

  3) nova quota-class-show demo
  +-+---+
  | Quota   | Limit |
  +-+---+
  | instances   | 10|
  | cores   | 20|
  | ram | 51200 |
  | floating_ips| 10|
  | fixed_ips   | -1|
  | metadata_items  | 128   |
  | injected_files  | 5 |
  | injected_file_content_bytes | 10240 |
  | injected_file_path_bytes| 255   |
  | key_pairs   | 100   |
  | security_groups | 10|
  | security_group_rules| 20|
  +-+---+

To manage 

[Yahoo-eng-team] [Bug 1362630] [NEW] keystone catalog command line fails with 'NoneType' object has no attribute 'has_service_catalog'

2014-08-28 Thread Игор Миловановић
Public bug reported:

I am running keystone on Icehouse, with client version 0.7.1.
I source the admin rc file
with OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
and run:

$ keystone catalog

This fails with:

'NoneType' object has no attribute 'has_service_catalog'

If I unset OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT, keystone catalog succeeds, 
but...
running:

$ keystone tenant-list

hangs and times out, as my installation (Fuel 5.0.1 based) has its
adminURL on an internal, non-accessible network (I am running from the
public network, on a remote host)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362630

Title:
  keystone catalog command line fails with  'NoneType' object has no
  attribute 'has_service_catalog'

Status in OpenStack Identity (Keystone):
  New

Bug description:
  I am running keystone on Icehouse, with client version 0.7.1.
  I source the admin rc file
  with OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
  and run:

  $ keystone catalog

  This fails with:

  'NoneType' object has no attribute 'has_service_catalog'

  If I unset OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT, keystone catalog 
succeeds, but...
  running:

  $ keystone tenant-list

  hangs and times out, as my installation (Fuel 5.0.1 based) has its
  adminURL on an internal, non-accessible network (I am running from the
  public network, on a remote host)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362676] [NEW] Hyper-V agent doesn't create stateful security group rules

2014-08-28 Thread Claudiu Belu
Public bug reported:

Hyper-V agent does not create stateful security group rules (ACLs),
meaning it doesn't allow any response traffic to pass through.

For example, the following security group rule:
{direction: ingress, remote_ip_prefix: null, protocol: tcp, 
port_range_max: 22,  port_range_min: 22, ethertype: IPv4}
This allows inbound tcp traffic on port 22, but since the Hyper-V agent
does not add this rule as stateful, the reply traffic is never received
unless a matching egress security group rule is also added explicitly.
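
The difference can be modelled in a few lines (a toy model only, not
Hyper-V agent code): a stateless filter consults only the configured
rules, so the SSH reply flow is dropped, while a stateful filter also
admits replies to connections it is already tracking:

```python
# Toy model of stateful vs. stateless filtering, for illustration.
rules = [{"direction": "ingress", "protocol": "tcp", "port": 22}]


def allowed(packet, rules, stateful, established_flows):
    # A packet passes if some configured rule matches it directly...
    if any(r["direction"] == packet["direction"]
           and r["protocol"] == packet["protocol"]
           and r["port"] == packet["port"] for r in rules):
        return True
    # ...or, for a stateful filter only, if it is a reply belonging
    # to a connection that was already admitted.
    return stateful and (packet["flow"] in established_flows)


# The SSH reply travels egress, so no ingress rule matches it.
reply = {"direction": "egress", "protocol": "tcp", "port": 22,
         "flow": "ssh-session-1"}
print(allowed(reply, rules, False, {"ssh-session-1"}))  # False: dropped
print(allowed(reply, rules, True, {"ssh-session-1"}))   # True: admitted
```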

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Claudiu Belu (cbelu)

** Description changed:

  Hyper-V agent does not create stateful security group rules (ACLs),
- which doesn't allow any traffic response to pass through.
+ meaning it doesn't allow any response traffic to pass through.
  
  For example, the following security group rule:
  {direction: ingress, remote_ip_prefix: null, protocol: tcp, 
port_range_max: 22,  port_range_min: 22, ethertype: IPv4}
  Allows tcp  inbound traffic through port 22, but since the Hyper-V agent does 
not add this rule as stateful, the reply traffic never received, unless 
specifically added an egress security group rule as well.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362676

Title:
  Hyper-V agent doesn't create stateful security group rules

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hyper-V agent does not create stateful security group rules (ACLs),
  meaning it doesn't allow any response traffic to pass through.

  For example, the following security group rule:
  {direction: ingress, remote_ip_prefix: null, protocol: tcp, 
port_range_max: 22,  port_range_min: 22, ethertype: IPv4}
  This allows inbound tcp traffic on port 22, but since the Hyper-V
  agent does not add this rule as stateful, the reply traffic is never
  received unless a matching egress security group rule is also added
  explicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362678] [NEW] multi-domain has problems with LDAP identity on default domain

2014-08-28 Thread Marcus Klein
Public bug reported:

What I try to achieve:

I want to authenticate all users of the default domain against our
company's central LDAP server. This works pretty well.

For Heat I need a user store that is writable. Our central LDAP server
cannot be written to from OpenStack. Therefore I configured the heat
domain with SQL identity.

This all works up to the point when the heat domain admin needs to be
authorized. That authorization request is always processed with the
LDAP identity backend. I don't know whether the domain is missing from
the keystone V3 API authorization request, or whether keystone does not
route the request correctly to the SQL identity backend. To clarify
this, I opened this bug; Steven Hardy encouraged me to do so.

/etc/keystone/keystone.conf:

[identity]
default_domain_id=default
domain_specific_drivers_enabled=true
domain_config_dir=/etc/keystone/domains
driver = keystone.identity.backends.ldap.Identity

[ldap]
url=ldap://ldap2.open-xchange.com:389
suffix=dc=open-xchange,dc=com
etc.

/etc/keystone/domains/keystone.heat.conf:

[identity]
driver = keystone.identity.backends.sql.Identity

[ldap]

/etc/heat/heat.conf:
deferred_auth_method=trusts
trusts_delegated_roles=heat_stack_owner
heat_stack_user_role=heat_stack_user
stack_user_domain=a904d890e0de47dc9f2090c20bb1f45c
stack_domain_admin=heat_domain_admin
stack_domain_admin_password=

openstack --os-token $OS_TOKEN --os-url=http://contorller:5000/v3 
--os-identity-api-version=3 domain list
+--+-+-+--+
| ID   | Name    | Enabled | Description
  |
+--+-+-+--+
| a904d890e0de47dc9f2090c20bb1f45c | heat    | True    | Owns users and 
projects created by heat  |
| default  | Default | True    | Owns users and tenants 
(i.e. projects) available on Identity API v2. |
+--+-+-+--+

openstack --os-token $OS_TOKEN --os-url=http://controller:5000/v3 
--os-identity-api-version=3 user create --password  --domain 
a904d890e0de47dc9f2090c20bb1f45c --description Manages users and projects 
created by heat heat_domain_admin
+-+-+
| Field   | Value   
    |
+-+-+
| description | Manages users and projects created by heat  
    |
| domain_id   | a904d890e0de47dc9f2090c20bb1f45c
    |
| enabled | True
    |
| id  | 38877ca5daed4c9fbbb6c853d3d88e36
    |
| links   | {u'self': 
u'http://controller-test:5000/v3/users/38877ca5daed4c9fbbb6c853d3d88e36'} |
| name    | heat_domain_admin   
    |
+-+-+

openstack --os-token $OS_TOKEN --os-url=http://controller:5000/v3 --os-
identity-api-version=3 role add --user 38877ca5daed4c9fbbb6c853d3d88e36
--domain a904d890e0de47dc9f2090c20bb1f45c admin

Everything set up according to:
http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-1-trusts.html
http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-2-stack.html

I tested this using this example stack: https://github.com/openstack
/heat-templates/blob/master/hot/software-config/example-templates
/example-deploy-sequence.yaml

Then I get the following authentication problem in keystone:
2014-08-28 13:20:40.172 4915 INFO eventlet.wsgi.server [-] 10.20.31.200 - - 
[28/Aug/2014 13:20:40] POST /v3/auth/tokens HTTP/1.1 201 12110 0.163805
2014-08-28 13:20:40.326 4915 DEBUG keystone.middleware.core [-] Auth token not 
in the request header. Will not build auth context. process_request 
/usr/lib/python2.7/dist-packages/keystone/middleware/core.py:271
2014-08-28 13:20:40.329 4915 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.7/dist-packages/keystone/common/wsgi.py:181
2014-08-28 13:20:40.355 4915 DEBUG keystone.notifications [-] CADF Event: 
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': 
{'typeURI': 'service/security/account/user', 'host': {'agent': 
'python-keystoneclient', 'a
ddress': '10.20.31.200'}, 'id': 
'openstack:d7c2f1ec-aae3-4fe5-8721-a82ca842eca3', 'name': 

[Yahoo-eng-team] [Bug 1362699] [NEW] amd64 is not a valid arch

2014-08-28 Thread Jim Rollenhagen
Public bug reported:

'amd64' is not in the list of valid architectures, this should be
canonicalized to 'x86_64'.
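
The requested canonicalization amounts to an alias lookup before
validation; a sketch (the alias table and valid-architecture set here
are illustrative, not nova's actual lists):

```python
# Illustrative arch canonicalization: map known aliases onto the
# canonical name before validating against the accepted set.
ARCH_ALIASES = {
    "amd64": "x86_64",   # the case from this bug report
    "x64": "x86_64",
    "i686": "i386",
}
VALID_ARCHS = {"i386", "x86_64", "armv7l", "ppc64"}


def canonicalize_arch(name):
    arch = ARCH_ALIASES.get(name.lower(), name.lower())
    if arch not in VALID_ARCHS:
        raise ValueError("Architecture name '%s' is not recognised" % name)
    return arch


print(canonicalize_arch("amd64"))  # x86_64
```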

** Affects: nova
 Importance: Undecided
 Assignee: Jim Rollenhagen (jim-rollenhagen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362699

Title:
  amd64 is not a valid arch

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  'amd64' is not in the list of valid architectures, this should be
  canonicalized to 'x86_64'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362704] [NEW] Volume Type Extra Specs needs modal dialog

2014-08-28 Thread Richard Hagarty
Public bug reported:

Currently, when a user wants to create an Extra Spec, the panel changes
to show 2 input fields - key and value.

This deviates from other, similar Horizon panels, where the new fields
are presented in a modal dialog.

Actually, I think this is what was intended because if you look at the
code in volume_types/extras/tables.py:ExtraSpecCreate(), the class it
specifies is ajax-modal.

But a trailing comma was left off, invalidating the call.  Simply adding
a trailing comma to this call will result in a modal dialog being
displayed.
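
The reason the missing comma matters: in Python, parentheses alone do
not make a tuple, so the classes attribute becomes a plain string and
iterating it yields single characters instead of the "ajax-modal" CSS
class. A quick demonstration:

```python
# Parentheses without a trailing comma are just grouping, not a tuple.
without_comma = ("ajax-modal")    # a parenthesised string
with_comma = ("ajax-modal",)      # a one-element tuple

print(type(without_comma).__name__)  # str
print(type(with_comma).__name__)     # tuple

# Joining (as template code might when building a class attribute)
# shows the breakage: the string is iterated character by character.
print(" ".join(without_comma))       # a j a x - m o d a l
print(" ".join(with_comma))          # ajax-modal
```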

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362704

Title:
  Volume Type Extra Specs needs modal dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, when a user wants to create an Extra Spec, the panel
  changes to show 2 input fields - key and value.

  This deviates from other, similar Horizon panels, where the new
  fields are presented in a modal dialog.

  Actually, I think this is what was intended because if you look at the
  code in volume_types/extras/tables.py:ExtraSpecCreate(), the class it
  specifies is ajax-modal.

  But a trailing comma was left off, invalidating the call.  Simply
  adding a trailing comma to this call will result in a modal dialog
  being displayed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362672] Re: Volume stuck in deleting state cannot be deleted.

2014-08-28 Thread Clark Boylan
I have removed openstack-ci from the bug as this appears to be an
interaction with nova and cinder (which I have added to the bug).

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362672

Title:
  Volume stuck in deleting state cannot be deleted.

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  [root@ctrl01 test_cloud]# nova volume-snapshot-list
  
+--+--+--+--+--+
  | ID   | Volume ID
| Status   | Display Name | Size |
  
+--+--+--+--+--+
  | 89053e9b-d35a-47d2-98dd-4b031ce4c6b4 | 505fd31d-7b33-4afa-ad0d-e6fb1475a994 
| deleting | FuncTests_Python_XCloudAPI_VolumeSnapshot_IQIG8P | 2|
  | 9925ed11-8c0e-4979-be61-1f156ed4ba2c | 5f51859e-001d-4ceb-9e70-3a371233615b 
| deleting | FuncTests_Python_XCloudAPI_VolumeSnapshot_IQIG8P | 2|
  
+--+--+--+--+--+
  [root@pdc-ostck-ctrl01 test_cloud]# nova volume-snapshot-delete 
89053e9b-d35a-47d2-98dd-4b031ce4c6b4
  ERROR: Invalid snapshot: Volume Snapshot status must be available or error 
(HTTP 400) (Request-ID: req-735926a0-e306-4635-9e51-49d7b27614cf)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319023] Re: TestDashboardBasicOps.test_basic_scenario fails with HTTP Error 500: INTERNAL SERVER ERROR

2014-08-28 Thread Brant Knudson
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1319023

Title:
  TestDashboardBasicOps.test_basic_scenario  fails with HTTP Error 500:
  INTERNAL SERVER ERROR

Status in OpenStack Dashboard (Horizon):
  New
Status in Tempest:
  New

Bug description:
  You can see the full failure at
  http://logs.openstack.org/31/92831/1/check/check-grenade-
  dsvm/ed47c7e/console.html

  excerpted:

  2014-05-13 04:21:42.780 | {2} 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario
 [24.934203s] ... FAILED
  2014-05-13 04:21:42.780 | 
  2014-05-13 04:21:42.780 | Captured traceback:
  2014-05-13 04:21:42.780 | ~~~
  2014-05-13 04:21:42.780 | Traceback (most recent call last):
  2014-05-13 04:21:42.780 |   File tempest/test.py, line 126, in wrapper
  2014-05-13 04:21:42.780 | return f(self, *func_args, **func_kwargs)
  2014-05-13 04:21:42.780 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 75, in test_basic_scenario
  2014-05-13 04:21:42.780 | self.user_login()
  2014-05-13 04:21:42.780 |   File 
tempest/scenario/test_dashboard_basic_ops.py, line 66, in user_login
  2014-05-13 04:21:42.781 | self.opener.open(req, 
urllib.urlencode(params))
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
406, in open
  2014-05-13 04:21:42.781 | response = meth(req, response)
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
519, in http_response
  2014-05-13 04:21:42.781 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
438, in error
  2014-05-13 04:21:42.781 | result = self._call_chain(*args)
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
378, in _call_chain
  2014-05-13 04:21:42.781 | result = func(*args)
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
625, in http_error_302
  2014-05-13 04:21:42.781 | return self.parent.open(new, 
timeout=req.timeout)
  2014-05-13 04:21:42.781 |   File /usr/lib/python2.7/urllib2.py, line 
406, in open
  2014-05-13 04:21:42.782 | response = meth(req, response)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
519, in http_response
  2014-05-13 04:21:42.782 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
438, in error
  2014-05-13 04:21:42.782 | result = self._call_chain(*args)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
378, in _call_chain
  2014-05-13 04:21:42.782 | result = func(*args)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
625, in http_error_302
  2014-05-13 04:21:42.782 | return self.parent.open(new, 
timeout=req.timeout)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
406, in open
  2014-05-13 04:21:42.782 | response = meth(req, response)
  2014-05-13 04:21:42.782 |   File /usr/lib/python2.7/urllib2.py, line 
519, in http_response
  2014-05-13 04:21:42.783 | 'http', request, response, code, msg, hdrs)
  2014-05-13 04:21:42.783 |   File /usr/lib/python2.7/urllib2.py, line 
444, in error
  2014-05-13 04:21:42.783 | return self._call_chain(*args)
  2014-05-13 04:21:42.783 |   File /usr/lib/python2.7/urllib2.py, line 
378, in _call_chain
  2014-05-13 04:21:42.783 | result = func(*args)
  2014-05-13 04:21:42.783 |   File /usr/lib/python2.7/urllib2.py, line 
527, in http_error_default
  2014-05-13 04:21:42.783 | raise HTTPError(req.get_full_url(), code, 
msg, hdrs, fp)
  2014-05-13 04:21:42.783 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR
  2014-05-13 04:21:42.783 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1319023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362784] [NEW] Should remove duplicated get_port_from_device method in various plugins

2014-08-28 Thread Akihiro Motoki
Public bug reported:

Various plugins have get_port_from_devices (or get_port_and_sgs) for the
Security Group server-side RPC callback. However, these methods are 99%
the same; the only differences are:
- "== port_id" vs ".startswith(port_id)", and
- how to get port_id from the passed device (a netdev name for
linuxbridge and a port id for OVS)

Such code duplication should be removed.
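
A minimal sketch of one way to factor out the duplication (the names and
signatures here are illustrative, not Neutron's actual plugin API): a shared
helper is parameterized by the two things that actually differ, a function
that derives the port id from the device string and a prefix-vs-equality
match flag.

```python
# Hypothetical consolidation sketch -- names are illustrative only.

def make_get_port_from_device(extract_port_id, prefix_match=False):
    """Build a plugin-specific port lookup from two small knobs:
    how to derive the port id from the device string, and whether
    ports are matched by prefix or by exact equality."""
    def get_port_from_device(ports, device):
        port_id = extract_port_id(device)
        for port in ports:
            matched = (port['id'].startswith(port_id) if prefix_match
                       else port['id'] == port_id)
            if matched:
                return port
        return None
    return get_port_from_device

# OVS-style: the device *is* the port id, matched by prefix.
ovs_lookup = make_get_port_from_device(lambda dev: dev, prefix_match=True)
# Linux Bridge-style: the device is a netdev name like "tap<port-id>".
lb_lookup = make_get_port_from_device(lambda dev: dev[len('tap'):])
```

Each plugin would then keep only its one-line extraction rule instead of a
full copy of the lookup loop.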

** Affects: neutron
 Importance: Low
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: sg-fw

** Changed in: neutron
 Assignee: amo (amo) = Akihiro Motoki (amotoki)

** Changed in: neutron
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362784

Title:
  Should remove duplicated get_port_from_device method in various
  plugins

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Various plugins have get_port_from_devices (or get_port_and_sgs) for the
  Security Group server-side RPC callback. However, these methods are 99%
  the same; the only differences are:
  - "== port_id" vs ".startswith(port_id)", and
  - how to get port_id from the passed device (a netdev name for
  linuxbridge and a port id for OVS)

  Such code duplication should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362799] [NEW] Hard reboot escalation regression

2014-08-28 Thread Johannes Erdfelt
Public bug reported:

Nova used to allow a hard reboot when an instance is already being soft
rebooted. However, with commit cc0be157d005c5588fe5db779fc30fefbf22b44d,
this is no longer allowed.

This is because two new task states were introduced, REBOOT_PENDING and
REBOOT_STARTED (and corresponding values for hard reboots). A soft
reboot now spends most of it's time in REBOOT_STARTED instead of
REBOOTING.

REBOOT_PENDING and REBOOT_STARTED were not added to the
@check_instance_state decorator. As a result, an attempt to hard reboot
an instance which is stuck trying to do a soft reboot will now fail with
an InstanceInvalidState exception.

This provides a poor user experience since a reboot is often attempted
for instances that aren't responsive. A soft reboot is not guaranteed to
work even if the system is responsive. The soft reboot prevents a hard
reboot from being performed.
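
A reduced sketch of the pattern involved (Nova's real @check_instance_state
decorator takes more arguments; the names below mirror it only loosely): the
fix amounts to including the new reboot task states in the decorator's
allowed set so a hard reboot can preempt a stuck soft reboot.

```python
# Simplified sketch of the @check_instance_state pattern. Before the
# fix, only (None, REBOOTING) were allowed, so an instance stuck in
# REBOOT_STARTED raised InstanceInvalidState on a hard-reboot request.

REBOOTING = 'rebooting'
REBOOT_PENDING = 'reboot_pending'
REBOOT_STARTED = 'reboot_started'

class InstanceInvalidState(Exception):
    pass

def check_instance_state(task_state_allowed):
    def decorator(func):
        def wrapper(self, instance, *args, **kwargs):
            if instance['task_state'] not in task_state_allowed:
                raise InstanceInvalidState(instance['task_state'])
            return func(self, instance, *args, **kwargs)
        return wrapper
    return decorator

class ComputeAPI(object):
    @check_instance_state(task_state_allowed=(
            None, REBOOTING, REBOOT_PENDING, REBOOT_STARTED))
    def hard_reboot(self, instance):
        return 'HARD'
```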

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362799

Title:
  Hard reboot escalation regression

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova used to allow a hard reboot when an instance is already being
  soft rebooted. However, with commit
  cc0be157d005c5588fe5db779fc30fefbf22b44d, this is no longer allowed.

  This is because two new task states were introduced, REBOOT_PENDING
  and REBOOT_STARTED (and corresponding values for hard reboots). A soft
  reboot now spends most of its time in REBOOT_STARTED instead of
  REBOOTING.

  REBOOT_PENDING and REBOOT_STARTED were not added to the
  @check_instance_state decorator. As a result, an attempt to hard
  reboot an instance which is stuck trying to do a soft reboot will now
  fail with an InstanceInvalidState exception.

  This provides a poor user experience since a reboot is often attempted
  for instances that aren't responsive. A soft reboot is not guaranteed
  to work even if the system is responsive. The soft reboot prevents a
  hard reboot from being performed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362842] [NEW] Can't re-attach volumes to instances

2014-08-28 Thread Jason Khabra
Public bug reported:

When detaching a volume from an instance and then re-attaching that
same volume to the same instance, Nova fails to attach it.

Error:
2014-08-28 13:36:02.134 ERROR nova.virt.block_device 
[req-79e463d4-7e6d-4fce-8b04-e98de64d91a7 admin admin] 
[instance: b813c603-6dad-44bd-acdb-76f1fd84899f] Driver failed 
to attach volume a2439dd9-49ef-4ce0-8b91-155ff6ecd3b0 at /dev/vdb
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] Traceback (most recent call last):
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/opt/stack/nova/nova/virt/block_device.py, line 252, in attach
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] device_type=self['device_type'], 
encryption=encryption)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1315, in attach_volume
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] 
self._disconnect_volume(connection_info, disk_dev)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] six.reraise(self.type_, 
self.value, self.tb)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1306, in attach_volume
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] 
virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 179, in doit
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] result = 
proxy_call(self._autowrap, f, *args, **kwargs)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 139, in 
proxy_call
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] rv = execute(f,*args,**kwargs)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 77, in tworker
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] rv = meth(*args,**kwargs)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 420, in attachDeviceFlags
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] libvirtError: internal error unable 
to execute QEMU command 'device_add': Duplicate ID 'virtio-disk1' for device
2014-08-28 13:36:02.134 TRACE nova.virt.block_device [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] 
2014-08-28 13:36:02.136 DEBUG nova.volume.cinder 
[req-79e463d4-7e6d-4fce-8b04-e98de64d91a7 admin admin] 
Cinderclient connection created using URL: 
http://10.50.142.1:8776/v1/a7b96ab31ea340ff8a6c900c7b3449ba from 
(pid=36738) get_cinder_client_version 
/opt/stack/nova/nova/volume/cinder.py:238
2014-08-28 13:36:02.748 ERROR nova.compute.manager 
[req-79e463d4-7e6d-4fce-8b04-e98de64d91a7 admin admin] 
[instance: b813c603-6dad-44bd-acdb-76f1fd84899f] Failed to 
attach a2439dd9-49ef-4ce0-8b91-155ff6ecd3b0 at /dev/vdb
2014-08-28 13:36:02.748 TRACE nova.compute.manager [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f] Traceback (most recent call last):
2014-08-28 13:36:02.748 TRACE nova.compute.manager [instance: 
b813c603-6dad-44bd-acdb-76f1fd84899f]   File 

[Yahoo-eng-team] [Bug 1362847] [NEW] Spell Errors in Keystone core.py

2014-08-28 Thread Rishabh
Public bug reported:

There are a few spelling errors that I observed in the Keystone core.py.

** Affects: keystone
 Importance: Undecided
 Assignee: Rishabh (rishabja)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) = Rishabh (rishabja)

** Changed in: keystone
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362847

Title:
  Spell Errors in Keystone core.py

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  There are a few spelling errors that I observed in the Keystone core.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362846] [NEW] stack parameters marked as hidden should use a password field

2014-08-28 Thread Miguel Grinberg
Public bug reported:

In the launch stack form, any heat template parameters that are marked
as hidden are presented as a regular input field, exactly the same as a
normal string field.

Given that hidden fields are marked as such due to their sensitive
nature, it would be better to use a password entry field for them.

How to reproduce:

1. Use the following template:

heat_template_version: 2013-05-23
parameters:
  password:
type: string
hidden: true

2. Note how in the Launch Stack form for the above template the
password field at the bottom shows characters as you type them.
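
A likely one-line remedy on the Horizon side, shown here as a plain-Python
sketch of the decision rule only (the real code builds django.forms fields
inside Horizon's stack forms, and the widget names are illustrative): when
a parameter is marked hidden, pick a password-style input so typed
characters are masked.

```python
# Sketch of the widget-selection logic for heat template parameters.
# Widget names are illustrative stand-ins for e.g. django.forms widgets
# such as forms.PasswordInput(render_value=False).

def choose_widget(param):
    """Pick a form widget for a heat template parameter dict.

    Parameters declared with "hidden: true" get a password-style
    input; everything else keeps the plain text input.
    """
    if param.get('hidden'):
        return 'PasswordInput'
    return 'TextInput'

params = {
    'password': {'type': 'string', 'hidden': True},
    'flavor': {'type': 'string'},
}
widgets = {name: choose_widget(p) for name, p in params.items()}
```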

** Affects: horizon
 Importance: Undecided
 Assignee: Miguel Grinberg (miguelgrinberg)
 Status: New


** Tags: heat

** Changed in: horizon
 Assignee: (unassigned) = Miguel Grinberg (miguelgrinberg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362846

Title:
  stack parameters marked as hidden should use a password field

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the launch stack form, any heat template parameters that are marked
  as hidden are presented as a regular input field, exactly the same as
  a normal string field.

  Given that hidden fields are marked as such due to their sensitive
  nature, it would be better to use a password entry field for them.

  How to reproduce:

  1. Use the following template:

  heat_template_version: 2013-05-23
  parameters:
password:
  type: string
  hidden: true

  2. Note how in the Launch Stack form for the above template the
  password field at the bottom shows characters as you type them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362850] [NEW] Filter button on Host Aggregates does not work

2014-08-28 Thread Mohan Seri
Public bug reported:

Click the Filter button on the Host Aggregates page and notice the
error below. (Since the Filter text box filters automatically, the
Filter button may not be needed at all.)

TemplateSyntaxError at /admin/aggregates/
name
Request Method: POST
Request URL:http://localhost/admin/aggregates/
Django Version: 1.6.5
Exception Type: TemplateSyntaxError
Exception Value:
name
Exception Location: 
/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py in 
__getattr__, line 489
Python Executable:  /usr/bin/python
Python Version: 2.7.3
Python Path:
['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
 '/opt/stack/python-keystoneclient',
 '/opt/stack/python-glanceclient',
 '/opt/stack/python-cinderclient',
 '/opt/stack/python-novaclient',
 '/opt/stack/python-swiftclient',
 '/opt/stack/python-neutronclient',
 '/opt/stack/python-heatclient',
 '/opt/stack/python-openstackclient',
 '/opt/stack/keystone',
 '/opt/stack/swift',
 '/opt/stack/glance',
 '/opt/stack/cinder',
 '/opt/stack/neutron',
 '/opt/stack/nova',
 '/opt/stack/horizon',
 '/opt/stack/heat',
 '/opt/stack/python-troveclient',
 '/opt/stack/trove',
 '/opt/stack/tempest',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-linux2',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages/PIL',
 '/usr/lib/python2.7/dist-packages/gst-0.10',
 '/usr/lib/python2.7/dist-packages/gtk-2.0',
 '/usr/lib/pymodules/python2.7',
 '/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-client',
 '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel',
 '/usr/lib/python2.7/dist-packages/ubuntuone-couch',
 '/usr/lib/python2.7/dist-packages/ubuntuone-installer',
 '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol',
 '/opt/stack/horizon/openstack_dashboard']
Server time:Thu, 28 Aug 2014 23:00:23 +

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362850

Title:
  Filter button on Host Aggregates does not work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Click the Filter button on the Host Aggregates page and notice the
  error below. (Since the Filter text box filters automatically, the
  Filter button may not be needed at all.)

  TemplateSyntaxError at /admin/aggregates/
  name
  Request Method:   POST
  Request URL:  http://localhost/admin/aggregates/
  Django Version:   1.6.5
  Exception Type:   TemplateSyntaxError
  Exception Value:  
  name
  Exception Location:   
/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py in 
__getattr__, line 489
  Python Executable:/usr/bin/python
  Python Version:   2.7.3
  Python Path:  
  ['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
   '/opt/stack/python-keystoneclient',
   '/opt/stack/python-glanceclient',
   '/opt/stack/python-cinderclient',
   '/opt/stack/python-novaclient',
   '/opt/stack/python-swiftclient',
   '/opt/stack/python-neutronclient',
   '/opt/stack/python-heatclient',
   '/opt/stack/python-openstackclient',
   '/opt/stack/keystone',
   '/opt/stack/swift',
   '/opt/stack/glance',
   '/opt/stack/cinder',
   '/opt/stack/neutron',
   '/opt/stack/nova',
   '/opt/stack/horizon',
   '/opt/stack/heat',
   '/opt/stack/python-troveclient',
   '/opt/stack/trove',
   '/opt/stack/tempest',
   '/usr/lib/python2.7',
   '/usr/lib/python2.7/plat-linux2',
   '/usr/lib/python2.7/lib-tk',
   '/usr/lib/python2.7/lib-old',
   '/usr/lib/python2.7/lib-dynload',
   '/usr/local/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages/PIL',
   '/usr/lib/python2.7/dist-packages/gst-0.10',
   '/usr/lib/python2.7/dist-packages/gtk-2.0',
   '/usr/lib/pymodules/python2.7',
   '/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
   '/usr/lib/python2.7/dist-packages/ubuntuone-client',
   '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel',
   '/usr/lib/python2.7/dist-packages/ubuntuone-couch',
   '/usr/lib/python2.7/dist-packages/ubuntuone-installer',
   '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol',
   '/opt/stack/horizon/openstack_dashboard']
  Server time:  Thu, 28 Aug 2014 23:00:23 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362854] [NEW] Incorrect regex on rootwrap for encrypted volumes ln creation

2014-08-28 Thread John Griffith
Public bug reported:

While running Tempest tests against my device, the encryption tests
consistently fail to attach. It turns out the problem is an attempt to
create a symbolic link for the encryption process; however, the rootwrap
spec is restricted to targets with the default openstack.org IQN.

Error Message from n-cpu:

Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln --symbolic --force /dev/mapper/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-6b4269af9d4f.4710-lun-0 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.sol


Rootwrap entry currently implemented:

ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*, /dev/disk/by-path/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*
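
A sketch of a loosened filter that would accept any IQN rather than only
the default openstack.org one (the pattern below is illustrative, not a
vetted fix; broadening a rootwrap regex has security implications and the
exact pattern should be reviewed before use):

```
ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip-.*-iscsi-iqn\..*, /dev/disk/by-path/ip-.*-iscsi-iqn\..*
```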

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Missing rootwrap for encrypted volumes
+ Incorrect regex on rootwrap for encrypted volumes ln creation

** Description changed:

+ While running Tempest tests against my device, the encryption tests
+ consistently fail to attach.  Turns out the problem is an attempt to
+ create symbolic link for encryption process, however the rootwrap spec
+ is restricted to targets with the default openstack.org iqn.
  
- Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln --symbolic 
--force 
/dev/mapper/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-6b4269af9d4f.4710-lun-0
 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.sol
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 412, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher payload)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 71, in wrapped
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 296, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher pass
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 282, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 324, in decorated_function
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 82, in __exit__
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
- 2014-08-28 17:10:31.613 16021 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 312, in decorated_function
- 

[Yahoo-eng-team] [Bug 1362863] [NEW] reply queues fill up with unacked messages

2014-08-28 Thread Sam Morrison
Public bug reported:

Since upgrading to icehouse we consistently get reply_x queues
filling up with unacked messages. To fix this I have to restart the
service. This seems to happen when something is wrong for a short period
of time and it doesn't clean up after itself.

So far I've seen the issue with nova-api, nova-compute, nova-network,
nova-api-metadata, cinder-api but I'm sure there are others.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362908] [NEW] snat namespace remains on network node after the router is deleted

2014-08-28 Thread Stephen Ma
Public bug reported:

On a controller node, with L3 agent mode of 'dvr_snat', the snat
namespace remains on the node even after the router is deleted.

This problem is reproduced on a 3-node setup (2 compute nodes and one
controller node) built using devstack. The L3 agent mode on the compute
nodes is 'dvr'; the mode on the controller node is 'dvr_snat'.

1. Create a network and a subnetwork.
2. Boot a VM using the network.
3. Create a router
4. Run neutron router-interface-add router subnet
5. Run neutron router-gateway-set router public
6. Wait awhile, then do neutron router-gateway-clear router public
7. Run neutron router-interface-delete router subnet
8. delete the router.

The router namespace is deleted on the control node, but the snat
namespace of the router remains.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362908

Title:
  snat namespace remains on network node after the router is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On a controller node, with L3 agent mode of 'dvr_snat', the snat
  namespace remains on the node even after the router is deleted.

  This problem is reproduced on a 3-node setup (2 compute nodes and one
  controller node) built using devstack. The L3 agent mode on the
  compute nodes is 'dvr'; the mode on the controller node is 'dvr_snat'.

  1. Create a network and a subnetwork.
  2. Boot a VM using the network.
  3. Create a router
  4. Run neutron router-interface-add router subnet
  5. Run neutron router-gateway-set router public
  6. Wait awhile, then do neutron router-gateway-clear router public
  7. Run neutron router-interface-delete router subnet
  8. delete the router.

  The router namespace is deleted on the control node, but the snat
  namespace of the router remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362916] [NEW] _rescan_multipath constructs wrong parameter for “multipath -r”

2014-08-28 Thread Shen Wang
Public bug reported:

At
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L590,
the purpose of self._run_multipath('-r', check_exit_code=[0, 1, 21]) is to
set up a command that reconstructs multipath devices.
But the resulting command is "multipath - r", not the correct form
"multipath -r".

I think the brackets are missing around '-r'; it should be changed to
self._run_multipath(['-r'], check_exit_code=[0, 1, 21])
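
The underlying pitfall is easy to reproduce in isolation: a helper that
extends a command list with its argument will split a bare string into
individual characters. The sketch below is not Nova's _run_multipath, just
a stand-in showing why the list brackets matter.

```python
# Why the brackets matter: extending a list with a string iterates the
# string character by character, while extending with a list appends
# whole arguments.

def build_multipath_command(args):
    cmd = ['multipath']
    cmd.extend(args)   # iterates args -- char by char if args is a str
    return cmd

wrong = build_multipath_command('-r')     # -> ['multipath', '-', 'r']
right = build_multipath_command(['-r'])   # -> ['multipath', '-r']
```

The "multipath - r" seen in the bug is exactly the joined form of the
`['-', 'r']` split.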

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362916

Title:
  _rescan_multipath constructs wrong parameter for “multipath -r”

Status in OpenStack Compute (Nova):
  New

Bug description:
  At
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L590,
  the purpose of self._run_multipath('-r', check_exit_code=[0, 1, 21]) is
  to set up a command that reconstructs multipath devices.
  But the resulting command is "multipath - r", not the correct form
  "multipath -r".

  I think the brackets are missing around '-r'; it should be changed to
  self._run_multipath(['-r'], check_exit_code=[0, 1, 21])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362929] [NEW] libvirt: KVM live migration failed due to VIR_DOMAIN_XML_MIGRATABLE flag

2014-08-28 Thread Qin Zhao
Public bug reported:

OS version: RHEL 6.5
libvirt version:  libvirt-0.10.2-29.el6_5.9.x86_64

When I attempt to live migrate my KVM instance using the latest Juno
code on RHEL 6.5, I notice this nova-compute error on the source
compute node:

2014-08-27 09:24:41.836 26638 ERROR nova.virt.libvirt.driver [-]
[instance: 1b1618fa-ddbd-4fce-aa04-720a72ec7dfe] Live Migration failure:
unsupported configuration: Target CPU model SandyBridge does not match
source (null)

And this libvirt error on the source compute node:

2014-08-27 09:32:24.955+: 17721: error : virCPUDefIsEqual:753 :
unsupported configuration: Target CPU model SandyBridge does not match
source (null)

After looking into the code, I noticed that
https://review.openstack.org/#/c/73428/ adds the VIR_DOMAIN_XML_MIGRATABLE
flag when dumping the instance XML. With this flag, the KVM instance XML
will include full CPU information like this:
  <cpu mode='host-model' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <vendor>Intel</vendor>

Without this flag, the XML will not include that CPU information:
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>

The CPU models of my source and destination servers are exactly
identical, so I suspect this is a side effect of
https://review.openstack.org/#/c/73428/. When libvirtd runs
virDomainDefCheckABIStability(), the source domain XML does not include
the CPU model info, so the check fails.

After I removed the code change of
https://review.openstack.org/#/c/73428/ from my compute node, this
libvirt check error no longer occurs.
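
One common mitigation pattern for this class of problem (a sketch, not
Nova's actual fix: the VIR_DOMAIN_XML_MIGRATABLE constant and the
XMLDesc(flags) call are real libvirt API, but the wrapper and the fallback
policy here are hypothetical) is to request the migratable XML only when
the installed libvirt accepts the flag, falling back to an unflagged dump:

```python
# Sketch: guard use of VIR_DOMAIN_XML_MIGRATABLE so libvirt builds that
# reject the flag fall back to a plain XMLDesc() dump. The domain object
# stands in for a libvirt virDomain; only XMLDesc(flags) is assumed.

VIR_DOMAIN_XML_MIGRATABLE = 8  # libvirt's flag value

def get_domain_xml(domain, has_migratable=True):
    flags = VIR_DOMAIN_XML_MIGRATABLE if has_migratable else 0
    try:
        return domain.XMLDesc(flags)
    except Exception:
        # e.g. libvirt too old to accept the flag: retry without it
        return domain.XMLDesc(0)

class FakeOldLibvirtDomain(object):
    """Stands in for a virDomain whose libvirt rejects the flag."""
    def XMLDesc(self, flags):
        if flags:
            raise RuntimeError('unsupported flags')
        return "<domain type='kvm'/>"
```

This only sidesteps flag-rejection failures; the ABI-stability mismatch
described above would still need the libvirt-side comparison fixed.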

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

** Tags added: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362929

Title:
  libvirt: KVM live migration failed due to VIR_DOMAIN_XML_MIGRATABLE
  flag

Status in OpenStack Compute (Nova):
  New

Bug description:
  OS version: RHEL 6.5
  libvirt version:  libvirt-0.10.2-29.el6_5.9.x86_64

  When I attempt to live migrate my KVM instance using the latest Juno
  code on RHEL 6.5, I notice this nova-compute error on the source
  compute node:

  2014-08-27 09:24:41.836 26638 ERROR nova.virt.libvirt.driver [-]
  [instance: 1b1618fa-ddbd-4fce-aa04-720a72ec7dfe] Live Migration
  failure: unsupported configuration: Target CPU model SandyBridge does
  not match source (null)

  And this libvirt error on the source compute node:

  2014-08-27 09:32:24.955+: 17721: error : virCPUDefIsEqual:753 :
  unsupported configuration: Target CPU model SandyBridge does not match
  source (null)

  After looking into the code, I noticed that
  https://review.openstack.org/#/c/73428/ adds the
  VIR_DOMAIN_XML_MIGRATABLE flag when dumping the instance XML. With
  this flag, the KVM instance XML will include full CPU information
  like this:
    <cpu mode='host-model' match='exact'>
      <model fallback='allow'>SandyBridge</model>
      <vendor>Intel</vendor>

  Without this flag, the XML will not include that CPU information:
    <cpu mode='host-model'>
      <model fallback='allow'/>
      <topology sockets='1' cores='1' threads='1'/>
    </cpu>

  The CPU models of my source and destination servers are exactly
  identical, so I suspect this is a side effect of
  https://review.openstack.org/#/c/73428/. When libvirtd runs
  virDomainDefCheckABIStability(), the source domain XML does not
  include the CPU model info, so the check fails.

  After I removed the code change of
  https://review.openstack.org/#/c/73428/ from my compute node, this
  libvirt check error no longer occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp