[Yahoo-eng-team] [Bug 1327935] [NEW] clear text passwords shown in log file at DEBUG level

2014-06-09 Thread Giulio Fidente
Public bug reported:

Horizon seems to be printing passwords in clear text in the log file
at the DEBUG level
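
For illustration, a minimal sketch of the kind of scrubbing that keeps such
values out of DEBUG output (an assumed approach, not Horizon's actual code;
the key names are made up for the example):

    import logging
    import re

    # Field names to scrub are assumptions for illustration; real requests
    # may carry other sensitive keys as well.
    SENSITIVE_KEYS = ('password', 'adminPass', 'admin_pass')

    def mask_password(message, secret='***'):
        """Return the message with any password-like values replaced."""
        for key in SENSITIVE_KEYS:
            pattern = r"""(%s["']?\s*[:=]\s*["']?)[^"',\s]+""" % key
            message = re.sub(pattern, r'\g<1>%s' % secret, message)
        return message

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    LOG.debug(mask_password('request body: {"auth": {"password": "s3cret"}}'))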

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1327935

Title:
  clear text passwords shown in log file at DEBUG level

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon seems to be printing passwords in clear text in the log file
  at the DEBUG level

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1327935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327955] [NEW] fwaas:Error not thrown when setting protocol as icmp and destination /source port while creating firewall rule

2014-06-09 Thread Rajkumar
Public bug reported:

An error is not thrown when setting the protocol as icmp and a
destination/source port while creating a firewall rule.

Steps to Reproduce: 
Create a firewall rule with protocol icmp and destination port 20.

Actual Results: 
The CLI creates the firewall rule with protocol icmp and destination port 20.
However, since the ICMP protocol does not use source/destination ports, the
rule was taken only as ICMP in the output of iptables-save on the router.

Expected Results: 
The CLI should throw an error.
 
 
-A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-iv426dd1dbb
-A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-ov426dd1dbb
-A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-fwaas-defau
-A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT
-A neutron-l3-agent-fwaas-defau -j DROP
-A neutron-l3-agent-iv426dd1dbb -m state --state INVALID -j DROP
-A neutron-l3-agent-iv426dd1dbb -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-iv426dd1dbb -p icmp -j DROP    <-- taken as only icmp
-A neutron-l3-agent-ov426dd1dbb -m state --state INVALID -j DROP
-A neutron-l3-agent-ov426dd1dbb -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-l3-agent-ov426dd1dbb -p icmp -j DROP

 
 
root@IH-HL-OSC:~# fwrc --name r9 --protocol icmp --destination-port 20 --action deny
Created a new firewall_rule:
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| action                  | deny                                 |
| description             |                                      |
| destination_ip_address  |                                      |
| destination_port        | 20                                   |  <-- port 20 also taken
| enabled                 | True                                 |
| firewall_policy_id      |                                      |
| id                      | 29bca0ca-17c8-4fc8-a816-c14ce2824bed |
| ip_version              | 4                                    |
| name                    | r9                                   |
| position                |                                      |
| protocol                | icmp                                 |
| shared                  | False                                |
| source_ip_address       |                                      |
| source_port             |                                      |
| tenant_id               | 8aac6cceec774dec8821d76e0c1bdd8c     |
+-------------------------+--------------------------------------+
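
A minimal sketch of the kind of validation the reporter expects (an
illustration only, not the actual neutron FWaaS code; the function name is
made up for the example):

    ALLOWED_PORT_PROTOCOLS = ('tcp', 'udp')

    def validate_firewall_rule(protocol, source_port=None, destination_port=None):
        """Reject port parameters for protocols that have no port concept."""
        if protocol not in ALLOWED_PORT_PROTOCOLS and (source_port or
                                                       destination_port):
            raise ValueError(
                "source/destination port is not allowed when protocol is %r"
                % protocol)

    # This is the case from the bug report and should fail fast.
    try:
        validate_firewall_rule('icmp', destination_port=20)
    except ValueError as exc:
        print(exc)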

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327955

Title:
  fwaas:Error not thrown when setting protocol as icmp and destination
  /source port while creating firewall rule

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An error is not thrown when setting the protocol as icmp and a
  destination/source port while creating a firewall rule.

  Steps to Reproduce: 
  Create a firewall rule with protocol icmp and destination port 20.

  Actual Results: 
  The CLI creates the firewall rule with protocol icmp and destination port 20.
  However, since the ICMP protocol does not use source/destination ports, the
  rule was taken only as ICMP in the output of iptables-save on the router.

  Expected Results: 
  The CLI should throw an error.
   
   
  -A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-iv426dd1dbb
  -A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-ov426dd1dbb
  -A neutron-l3-agent-FORWARD -o qr-+ -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-FORWARD -i qr-+ -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT
  -A neutron-l3-agent-fwaas-defau -j DROP
  -A neutron-l3-agent-iv426dd1dbb -m state --state INVALID -j DROP
  -A neutron-l3-agent-iv426dd1dbb -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-iv426dd1dbb -p icmp -j DROP    <-- taken as only icmp
  -A neutron-l3-agent-ov426dd1dbb -m state --state INVALID -j DROP
  -A neutron-l3-agent-ov426dd1dbb -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-ov426dd1dbb -p icmp -j DROP

   
   
  root@IH-HL-OSC:~# fwrc --name r9 --protocol icmp --destination-port 20 --action deny
  Created a new firewall_rule:
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  

[Yahoo-eng-team] [Bug 1235112] Re: VMware driver not discovering iscsi targets while attaching cinder volumes

2014-06-09 Thread Thierry Carrez
** Changed in: nova
Milestone: next => None

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235112

Title:
  VMware driver not discovering iscsi targets while attaching cinder
  volumes

Status in OpenStack Compute (Nova):
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  In Progress

Bug description:
  The VMware driver cannot dynamically add iscsi targets presented to it
  while attaching a cinder volume. As a result, the instance cannot be
  attached to a cinder volume and fails with a message 'unable to find
  iscsi targets'.

  This is because the driver fails to scan the Host Bus Adapter with the
  iscsi target portal (or target host). We need to fix the driver to
  scan the HBA by specifying the target portal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1235112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327959] [NEW] fwaas:firewall rule doesn't throw error when setting dest. ip address as network and took it as /32

2014-06-09 Thread Rajkumar
Public bug reported:

When creating a firewall rule, if the destination/source IP address is given
as 10.10.10.0, it doesn't throw an error and takes it as 10.10.10.0/32.
Steps to Reproduce: 
 
 
Create a firewall rule with destination IP address 10.10.10.0.

Actual Results: 
root@IGA-OSC:~# fwru re --source-ip-address 10.10.1.0 --destination-ip-address 10.10.2.0
Updated firewall_rule: re
root@IGA-OSC:~# fwrs re
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| action                  | deny                                 |
| description             |                                      |
| destination_ip_address  | 10.10.2.0                            |
| destination_port        |                                      |
| enabled                 | True                                 |
| firewall_policy_id      | 924d41cd-fad1-4ed4-9114-6dd704382bd3 |
| id                      | ed8769fc-e4b7-4306-b8ca-95350c80ca22 |
| ip_version              | 4                                    |
| name                    | re                                   |
| position                | 1                                    |
| protocol                | icmp                                 |
| shared                  | False                                |
| source_ip_address       | 10.10.1.0                            |
| source_port             |                                      |
| tenant_id               | d9481c57a11c46eea62886938b5378a7     |
+-------------------------+--------------------------------------+
 
In the router's iptables-save output
 
-A neutron-vpn-agen-iv47a808890 -s 10.10.1.0/32 -d 10.10.2.0/32 -p icmp -j DROP    <-- it got /32 as the subnet for the network, which is invalid
-A neutron-vpn-agen-iv47a808890 -d 10.10.10.25/32 -p icmp -j DROP
-A neutron-vpn-agen-iv47a808890 -d 10.10.10.24/32 -p icmp -j DROP
-A neutron-vpn-agen-iv47a808890 -s 192.52.1.3/32 -d 192.52.1.45/32 -p tcp -m tcp --dport 22:23 -j DROP
-A neutron-vpn-agen-iv47a808890 -j ACCEPT
-A neutron-vpn-agen-ov47a808890 -m state --state INVALID -j DROP
-A neutron-vpn-agen-ov47a808890 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A neutron-vpn-agen-ov47a808890 -s 10.10.1.0/32 -d 10.10.2.0/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -d 10.10.10.25/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -d 10.10.10.24/32 -p icmp -j DROP
-A neutron-vpn-agen-ov47a808890 -s 192.52.1.3/32 -d 192.52.1.45/32 -p tcp -m tcp --dport 22:23 -j DROP
-A neutron-vpn-agen-ov47a808890 -j ACCEPT
 
 
Expected Results:
It should throw an error specifying that the given IP address is a network address.
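
A sketch of a stricter check (illustrative only, not the Neutron code;
treating a trailing .0 as "looks like a network" is just a heuristic for the
example):

    import ipaddress

    def normalize_fw_address(value):
        """Require an explicit prefix for networks instead of assuming /32."""
        if '/' in value:
            # strict=True rejects CIDRs with host bits set, e.g. 10.10.2.1/24.
            return str(ipaddress.ip_network(value, strict=True))
        addr = ipaddress.ip_address(value)
        if int(addr) & 0xff == 0:
            raise ValueError(
                "%s looks like a network address; pass an explicit prefix, "
                "e.g. %s/24" % (value, value))
        return "%s/32" % value

    print(normalize_fw_address('10.10.2.5'))     # 10.10.2.5/32
    print(normalize_fw_address('10.10.2.0/24'))  # 10.10.2.0/24
    print(normalize_fw_address('10.10.2.0'))     # raises ValueError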

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327959

Title:
  fwaas:firewall rule doesn't throw error when setting dest. ip address
  as network and took it as /32

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating a firewall rule, if the destination/source IP address is given
  as 10.10.10.0, it doesn't throw an error and takes it as 10.10.10.0/32.
  Steps to Reproduce: 
   
   
  Create a firewall rule with destination IP address 10.10.10.0.

  Actual Results: 
  root@IGA-OSC:~# fwru re --source-ip-address 10.10.1.0 --destination-ip-address 10.10.2.0
  Updated firewall_rule: re
  root@IGA-OSC:~# fwrs re
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | action                  | deny                                 |
  | description             |                                      |
  | destination_ip_address  | 10.10.2.0                            |
  | destination_port        |                                      |
  | enabled                 | True                                 |
  | firewall_policy_id      | 924d41cd-fad1-4ed4-9114-6dd704382bd3 |
  | id                      | ed8769fc-e4b7-4306-b8ca-95350c80ca22 |
  | ip_version              | 4                                    |
  | name                    | re                                   |
  | position                | 1                                    |
  | protocol                | icmp                                 |
  | shared                  | False                                |
  | source_ip_address       | 10.10.1.0                            |
  | source_port             |                                      |
  | tenant_id               | d9481c57a11c46eea62886938b5378a7     |
  +-------------------------+--------------------------------------+
   
  In the router's iptables-save output
   
  -A neutron-vpn-agen-iv47a808890 -s 10.10.1.0/32 -d 10.10.2.0/32 -p icmp -j DROP    <-- it got /32 as the subnet for the network, which is invalid

[Yahoo-eng-team] [Bug 1327975] [NEW] Use import from six.moves to import the queue module

2014-06-09 Thread Christian Berendt
Public bug reported:

The synchronized queue module is named queue instead of Queue in
Python 3.
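
The six.moves compatibility layer hides the rename, so the same import works
on both interpreters (a minimal usage sketch):

    # Python 2 ships the module as Queue, Python 3 renamed it to queue;
    # six.moves resolves to whichever one is available.
    from six.moves import queue

    q = queue.Queue()
    q.put('task')
    print(q.get())      # 'task'
    print(queue.Empty)  # the exception class is reachable the same way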

** Affects: neutron
 Importance: Undecided
 Assignee: Christian Berendt (berendt)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327975

Title:
  Use import from six.moves to import the queue module

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The synchronized queue module is named queue instead of Queue in
  Python 3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327974] [NEW] hyperv unit test agent failure

2014-06-09 Thread Kevin Benton
Public bug reported:

The hyperv unit tests appear to not properly mock all cases of report_state
calls, so an occasional exception will be thrown on an unrelated
patch. [1]

1. http://logs.openstack.org/01/96201/6/gate/gate-neutron-
python27/2b0de5e/console.html
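
A self-contained sketch of the kind of patching the fix implies (the class
and method names below are stand-ins, not the real HyperV agent code):

    import mock  # use unittest.mock on Python 3


    class FakeAgent(object):
        """Hypothetical stand-in for the agent under test."""
        def _report_state(self):
            raise RuntimeError('would hit the live RPC layer in a real run')


    # Patching the report-state call for the duration of the test keeps the
    # periodic state-report task from raising inside unrelated test cases.
    with mock.patch.object(FakeAgent, '_report_state') as report_state:
        FakeAgent()._report_state()        # no RPC call, no exception
        report_state.assert_called_once_with()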

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327974

Title:
  hyperv unit test agent failure

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The hyperv unit tests appear to not properly mock all cases of report_state
  calls, so an occasional exception will be thrown on an unrelated
  patch. [1]

  1. http://logs.openstack.org/01/96201/6/gate/gate-neutron-
  python27/2b0de5e/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327935] Re: clear text passwords shown in log file at DEBUG level

2014-06-09 Thread Julie Pichon
*** This bug is a duplicate of bug 1004114 ***
https://bugs.launchpad.net/bugs/1004114

Thanks for the reply, I will mark this as a duplicate of bug 1004114.

** This bug has been marked a duplicate of bug 1004114
   Password logging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1327935

Title:
  clear text passwords shown in log file at DEBUG level

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  Horizon seems to be printing passwords in clear text in the log file
  at the DEBUG level

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1327935/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328019] [NEW] Neutron db migration fails when adding a non-nullable column

2014-06-09 Thread Jun Xie
Public bug reported:

alembic's add_column() does not work for adding a non-nullable column with a
default value to an existing database table in DB2.
In this bug,
neutron/db/migration/alembic_migrations/versions/128e042a2b68_ext_gw_mode.py
adds a column 'enable_snat' to table 'routers'.
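
A commonly used, more portable pattern is to add the column as nullable with
a server default, backfill, and then tighten it. A general alembic sketch of
that pattern (not the actual 128e042a2b68 migration, and it assumes the
backend supports alter_column):

    import sqlalchemy as sa
    from alembic import op
    from sqlalchemy.sql import column, table


    def upgrade():
        # 1. Add the column as nullable, with a server default for new rows.
        op.add_column('routers',
                      sa.Column('enable_snat', sa.Boolean(), nullable=True,
                                server_default=sa.sql.true()))
        # 2. Backfill rows that existed before the column was added.
        routers = table('routers', column('enable_snat', sa.Boolean()))
        op.execute(routers.update()
                   .where(routers.c.enable_snat.is_(None))
                   .values(enable_snat=True))
        # 3. Every row now has a value, so the column can be made NOT NULL.
        op.alter_column('routers', 'enable_snat', nullable=False,
                        existing_type=sa.Boolean())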

** Affects: neutron
 Importance: Undecided
 Assignee: Jun Xie (junxiebj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jun Xie (junxiebj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328019

Title:
  Neutron db migration fails when adding a non-nullable column

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  alembic's add_column() does not work for adding a non-nullable column with a
  default value to an existing database table in DB2.
  In this bug,
  neutron/db/migration/alembic_migrations/versions/128e042a2b68_ext_gw_mode.py
  adds a column 'enable_snat' to table 'routers'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328052] [NEW] Using the v3cloudsample policy file, project admins can't administer users

2014-06-09 Thread Udi Kalifon
Public bug reported:

Project admins should be allowed to create, list, edit and delete users
in their domains. Here is the rule from the v3cloudsample policy file:

"admin_and_matching_target_user_domain_id": "rule:admin_required and domain_id:%(target.user.domain_id)s",
"admin_and_matching_user_domain_id": "rule:admin_required and domain_id:%(user.domain_id)s",
"identity:get_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
"identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id",
"identity:create_user": "rule:cloud_admin or rule:admin_and_matching_user_domain_id",
"identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
"identity:delete_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",

However when I try it I get a forbidden error, and I can only use
credentials of an admin on the domain to perform these actions. To
recreate:

1) Authenticate as the cloud admin
2) Create a domain
3) Create a user in the new domain and give it the admin role on the domain
4) Authenticate as the domain admin
5) Create a project in the domain
6) Create a user and give it the admin role on the project
7) Authenticate as the project admin
8) Try to create more users for your project, or edit/delete users in your 
project

=> forbidden

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328052

Title:
  Using the v3cloudsample policy file, project admins can't administer
  users

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Project admins should be allowed to create, list, edit and delete
  users in their domains. Here is the rule from the v3cloudsample policy
  file:

  "admin_and_matching_target_user_domain_id": "rule:admin_required and domain_id:%(target.user.domain_id)s",
  "admin_and_matching_user_domain_id": "rule:admin_required and domain_id:%(user.domain_id)s",
  "identity:get_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
  "identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id",
  "identity:create_user": "rule:cloud_admin or rule:admin_and_matching_user_domain_id",
  "identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
  "identity:delete_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",

  However when I try it I get a forbidden error, and I can only use
  credentials of an admin on the domain to perform these actions. To
  recreate:

  1) Authenticate as the cloud admin
  2) Create a domain
  3) Create a user in the new domain and give it the admin role on the domain
  4) Authenticate as the domain admin
  5) Create a project in the domain
  6) Create a user and give it the admin role on the project
  7) Authenticate as the project admin
  8) Try to create more users for your project, or edit/delete users in your 
project

  => forbidden

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328067] [NEW] Token with placeholder ID issued

2014-06-09 Thread Steven Hardy
Public bug reported:

We're seeing test failures, where it seems that an invalid token is
issued, with the ID of "placeholder"

http://logs.openstack.org/69/97569/2/check/check-tempest-dsvm-
full/565d328/logs/screen-h-eng.txt.gz

See context_auth_token_info which is being passed using the auth_token
keystone.token_info request environment variable (ref
https://review.openstack.org/#/c/97568/ which is the previous patch in
the chain from the log referenced above).

It seems like auth_token is getting a token, but there's some sort of
race in the backend which prevents an actual token being stored?  Trying
to use "placeholder" as a token ID doesn't work, so it seems like this
default assigned in the controller is passed back to auth_token, which
treats it as a valid token, even though it's not.

https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L121
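
A sketch of the kind of guard this implies (hypothetical helper names, not
Keystone's actual controller code):

    PLACEHOLDER_ID = 'placeholder'

    def issue_token(persist_token, token_data):
        """Only hand back an ID that the backend actually persisted."""
        token_id = persist_token(token_data)
        if not token_id or token_id == PLACEHOLDER_ID:
            # The backend never replaced the sentinel, so refuse to issue it
            # rather than letting auth_token treat it as a valid token.
            raise RuntimeError('token was not persisted; refusing to issue '
                               'the placeholder ID')
        return token_id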

I'm not sure how to debug this further, as I can't reproduce this
problem locally.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328067

Title:
  Token with placeholder ID issued

Status in OpenStack Identity (Keystone):
  New

Bug description:
  We're seeing test failures, where it seems that an invalid token is
  issued, with the ID of "placeholder"

  http://logs.openstack.org/69/97569/2/check/check-tempest-dsvm-
  full/565d328/logs/screen-h-eng.txt.gz

  See context_auth_token_info which is being passed using the auth_token
  keystone.token_info request environment variable (ref
  https://review.openstack.org/#/c/97568/ which is the previous patch in
  the chain from the log referenced above).

  It seems like auth_token is getting a token, but there's some sort of
  race in the backend which prevents an actual token being stored?
  Trying to use "placeholder" as a token ID doesn't work, so it seems
  like this default assigned in the controller is passed back to
  auth_token, which treats it as a valid token, even though it's not.

  
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L121

  I'm not sure how to debug this further, as I can't reproduce this
  problem locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251565] Re: flavor disk size do not take effect when using rbd image backend

2014-06-09 Thread Pádraig Brady
*** This bug is a duplicate of bug 1219658 ***
https://bugs.launchpad.net/bugs/1219658

** This bug is no longer a duplicate of bug 1247467
   resizing of rbd volumes is still broken
** This bug has been marked a duplicate of bug 1219658
   Wrong image size using rbd backend for libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251565

Title:
  flavor disk size do not take effect when using rbd image backend

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you are using the rbd image backend for nova instances, the flavor
  disk size will not take effect. For example, you boot a VM and specify
  10G as the root disk size, but the image is only 1G. The VM will be
  spawned and the root disk size expands to 10G, but the filesystem is
  still 1G. We need to resize the instance's filesystem when creating the
  image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1251565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328052] Re: Using the v3cloudsample policy file, project admins can't administer users

2014-06-09 Thread Ajaya Agrawal
I don't think you should report this as a bug. The v3cloudsample policy file
is just a reference. You could easily modify it to meet your needs.
For example, you could do:

"project_admin_required": "role:admin and project_id:%(target.user.default_project_id)s",
"identity:create_user": "rule:cloud_admin or rule:admin_and_matching_user_domain_id or rule:project_admin_required",
"identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id or rule:project_admin_required",
"identity:delete_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id or rule:project_admin_required"

Caution: The above rules work when you assign a default project while
creating the user.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328052

Title:
  Using the v3cloudsample policy file, project admins can't administer
  users

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Project admins should be allowed to create, list, edit and delete
  users in their domains. Here is the rule from the v3cloudsample policy
  file:

  "admin_and_matching_target_user_domain_id": "rule:admin_required and domain_id:%(target.user.domain_id)s",
  "admin_and_matching_user_domain_id": "rule:admin_required and domain_id:%(user.domain_id)s",
  "identity:get_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
  "identity:list_users": "rule:cloud_admin or rule:admin_and_matching_domain_id",
  "identity:create_user": "rule:cloud_admin or rule:admin_and_matching_user_domain_id",
  "identity:update_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",
  "identity:delete_user": "rule:cloud_admin or rule:admin_and_matching_target_user_domain_id",

  However when I try it I get a forbidden error, and I can only use
  credentials of an admin on the domain to perform these actions. To
  recreate:

  1) Authenticate as the cloud admin
  2) Create a domain
  3) Create a user in the new domain and give it the admin role on the domain
  4) Authenticate as the domain admin
  5) Create a project in the domain
  6) Create a user and give it the admin role on the project
  7) Authenticate as the project admin
  8) Try to create more users for your project, or edit/delete users in your 
project

  => forbidden

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327065] Re: typo in cloud-config-user-groups.txt

2014-06-09 Thread Scott Moser
Joern, Sorry for the doc bug.

This looks like it is documented correctly in trunk and in Ubuntu
releases other than 12.04. See doc/examples/cloud-config-user-groups.txt
at [1].

So I'm going to mark this as fix-released, and I wouldn't plan on going
through a StableReleaseUpdates [2] process for this on 12.04.

If you think I've made an error, please feel free to move the bug back
to 'New', with an explanation.

Scott

[1] 
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config-user-groups.txt
[2] https://wiki.ubuntu.com/StableReleaseUpdates


** Changed in: cloud-init
   Status: New => Fix Released

** Changed in: cloud-init
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1327065

Title:
  typo in cloud-config-user-groups.txt

Status in Init scripts for use on cloud images:
  Fix Released

Bug description:
  Hi,
  please fix doc/examples/cloud-config-user-groups.txt: change
  ssh-authorized-key to ssh-authorized-keys.
  Cheers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1327065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328128] [NEW] In ovs agent, when a port is added and then removed in a short time, this port may lose its vlan tag.

2014-06-09 Thread Chengli Xu
Public bug reported:

When a port is added, the ovs agent runs scan_ports to get all changed
ports, so the port will be in port_info['current'] and port_info['added'].
If this port is then removed (by nova or others) before
treat_devices_added_or_updated is called,
self.int_br.get_vif_port_by_id(device) returns None and the for loop just
continues: treat_vif_port is never called and the port is not in any
lvm.vif_ports. However, the port is still in the current ports and is
saved in reg_ports before the next scan. When we add this port back again,
it is not treated as added, since it is in reg_ports, and not treated as
updated, since it is not in any lvm.vif_ports, so the port loses its vlan
tag permanently until we remove it again.

I think the fix is simple: if self.int_br.get_vif_port_by_id(device)
cannot get the port, just resync next time.
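
A simplified sketch of that suggestion (parameterized for illustration; the
real agent method takes different arguments):

    def treat_devices_added_or_updated(int_br, devices, treat_vif_port):
        """Return True when the caller should resync on the next loop."""
        resync = False
        for device in devices:
            port = int_br.get_vif_port_by_id(device)
            if port is None:
                # The port vanished between scan_ports and now; ask for a
                # resync so it is re-evaluated next iteration instead of
                # staying in the registered set without ever getting a
                # VLAN tag.
                resync = True
                continue
            treat_vif_port(port, device)
        return resync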

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs

** Description changed:

  When a port is added, ovs agent runs scan_ports to get all changed
  ports, so it will be in port_info['current'] and port_info['added'].
  Then this port is removed (by nova or ohters) before
  treat_devices_added_or_updated called, means
  self.int_br.get_vif_port_by_id(device) returns None and for loop just
  continues, no treat_vif_port called and this port is not in
  lvm.vif_ports, however this port is still in current ports and saved
  in reg_ports before next scan. When we add this port back again, it
  would not be treated as added since it's in reg_ports and not treated as
  updated since it's not in any lvm.vif_ports, this port losts vlan tag
  permanently util we remove it again.
  
  I think the fix is simple, if self.int_br.get_vif_port_by_id(device)
- cannot get port, just resync.
+ cannot get port, just resync next time.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328128

Title:
  In ovs agent, when a port is added and then removed in a short time,
  this port may lose its vlan tag.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a port is added, ovs agent runs scan_ports to get all changed
  ports, so it will be in port_info['current'] and port_info['added'].
  Then this port is removed (by nova or ohters) before
  treat_devices_added_or_updated called, means
  self.int_br.get_vif_port_by_id(device) returns None and for loop just
  continues, no treat_vif_port called and this port is not in
  lvm.vif_ports, however this port is still in current ports and saved
  in reg_ports before next scan. When we add this port back again, it
  would not be treated as added since it's in reg_ports and not treated
  as updated since it's not in any lvm.vif_ports, this port losts vlan
  tag permanently util we remove it again.

  I think the fix is simple, if self.int_br.get_vif_port_by_id(device)
  cannot get port, just resync next time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328134] [NEW] [SRU] packaging for openstack icehouse 2014.1.1 release

2014-06-09 Thread Corey Bryant
Public bug reported:

OpenStack 2014.1.1 released today (09 June, 2014).

From the release email from Alan Pevec:

A total of 79 bugs have been fixed across all projects. These
updates to Icehouse are intended to be low risk with no
intentional regressions or API changes. The list of bugs, tarballs and
other milestone information for each project may be found on Launchpad:

https://launchpad.net/ceilometer/icehouse/2014.1.1
https://launchpad.net/cinder/icehouse/2014.1.1
https://launchpad.net/glance/icehouse/2014.1.1
https://launchpad.net/heat/icehouse/2014.1.1
https://launchpad.net/horizon/icehouse/2014.1.1
https://launchpad.net/keystone/icehouse/2014.1.1
https://launchpad.net/neutron/icehouse/2014.1.1
https://launchpad.net/nova/icehouse/2014.1.1

OpenStack Database Service (Trove) did not have stable/icehouse fixes
at this time and will skip 2014.1.1 release.

Release notes may be found on the wiki:

https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.1

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: nova (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328134

Title:
  [SRU] packaging for openstack icehouse 2014.1.1 release

Status in OpenStack Compute (Nova):
  New

Bug description:
  OpenStack 2014.1.1 released today (09 June, 2014).

  From the release email from Alan Pevec:

  A total of 79 bugs have been fixed across all projects. These
  updates to Icehouse are intended to be low risk with no
  intentional regressions or API changes. The list of bugs, tarballs and
  other milestone information for each project may be found on Launchpad:

  https://launchpad.net/ceilometer/icehouse/2014.1.1
  https://launchpad.net/cinder/icehouse/2014.1.1
  https://launchpad.net/glance/icehouse/2014.1.1
  https://launchpad.net/heat/icehouse/2014.1.1
  https://launchpad.net/horizon/icehouse/2014.1.1
  https://launchpad.net/keystone/icehouse/2014.1.1
  https://launchpad.net/neutron/icehouse/2014.1.1
  https://launchpad.net/nova/icehouse/2014.1.1

  OpenStack Database Service (Trove) did not have stable/icehouse fixes
  at this time and will skip 2014.1.1 release.

  Release notes may be found on the wiki:

  https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314674] Re: Unable to connect to VCenter 5.5 VimFaultException: Server raised fault: 'Element tag ns0:RetrieveServiceContent uses an undefined namespace prefix ns0

2014-06-09 Thread James Page
** Also affects: suds (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314674

Title:
  Unable to connect to VCenter 5.5 VimFaultException: Server raised
  fault: 'Element tag ns0:RetrieveServiceContent uses an undefined
  namespace prefix ns0

Status in OpenStack Compute (Nova):
  Invalid
Status in “suds” package in Ubuntu:
  New

Bug description:
  I'm currently trying to integrate an OpenStack testbed (based on
  Icehouse nova-2014.1 , Ubuntu 14.04 standard packages) with VCenter. I
  configured nova.conf http://docs.openstack.org/trunk/config-
  reference/content/vmware.html:

  compute_driver=vmwareapi.VMwareVCDriver

  reserved_host_memory_mb=0

  [vmware]
  host_ip=192.168.0.146
  host_username=root
  host_password=password_here
  cluster_name=VCOS
  datastore_regex=qnap*

  Using the password I'm able to login to VCenter using vSphere Web
  Client, Cluster VCOS was created using DRS, and I also defined a port
  group br-int on the ESXi hosts in the cluster. Although OpenStack Nova
  using KVM works like a breeze on two other compute nodes, I constantly
  get error messages on the node running VMwareVCDriver in note-
  compute.log

  2014-04-30 16:44:10.263 1383 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
     <ns1:Body>
        <ns0:RetrieveServiceContent>
           <_this type="ServiceInstance">ServiceInstance</_this>
        </ns0:RetrieveServiceContent>
     </ns1:Body>
  </SOAP-ENV:Envelope>
  2014-04-30 16:44:10.265 1383 CRITICAL nova.virt.vmwareapi.driver [-] Unable 
to connect to server at 192.168.78.103, sleeping for 60 seconds
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver Traceback (most 
recent call last):
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 795, in 
_create_session
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver self.vim = 
self._get_vim_object()
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 784, in 
_get_vim_object
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver return 
vim.Vim(protocol=self._scheme, host=self._host_ip)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 117, in 
__init__
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver 
self._service_content = self.retrieve_service_content()
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 120, in 
retrieve_service_content
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver return 
self.RetrieveServiceContent(ServiceInstance)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 196, in 
vim_request_handler
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver raise 
error_util.VimFaultException(fault_list, excep)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver 
VimFaultException: Server raised fault: 'Element tag ns0:RetrieveServiceContent 
uses an undefined namespace prefix ns0
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP body
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 224
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP envelope
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 38
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
HTTP request before method was determined
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 0'
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver

  ...to me it seems like there is some problem with the SOAP message...
  ns0 is not defined as a namespace under SOAP-ENV:Envelope?

  I also tried a fresh install of the VCenter Appliance 5.1 and 5.5
  without any luck. From the error message names above I cannot see any
  configuration etc. I might have missed?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1316271] Re: Network Security: VM hosts can SSH to compute node

2014-06-09 Thread Thierry Carrez
** Also affects: ossn
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: Incomplete => Won't Fix

** Information type changed from Public Security to Public

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316271

Title:
  Network Security: VM hosts can SSH to compute node

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  Hi guys,

  We're still using nova-network and we'll be using it for a while
  and we noticed that the VM guests can contact the compute nodes on all
  ports ... The one we're the most preoccupied with is SSH.   We've
  written the following patch in order to isolate the VM guests from the
  VM hosts.

  --- linux_net.py.orig   2014-05-05 17:25:10.171746968 +0000
  +++ linux_net.py        2014-05-05 18:42:54.569209220 +0000
  @@ -805,6 +805,24 @@
   
   
   @utils.synchronized('lock_gateway', external=True)
  +def isolate_compute_from_guest(network_ref):
  +    if not network_ref:
  +        return
  +
  +    iptables_manager.ipv4['filter'].add_rule('INPUT',
  +                                             '-p tcp -d %s --dport 8775 '
  +                                             '-j ACCEPT' % network_ref['dhcp_server'])
  +    iptables_manager.ipv4['filter'].add_rule('FORWARD',
  +                                             '-p tcp -d %s --dport 8775 '
  +                                             '-j ACCEPT' % network_ref['dhcp_server'])
  +    iptables_manager.ipv4['filter'].add_rule('INPUT',
  +                                             '-d %s '
  +                                             '-j DROP' % network_ref['dhcp_server'])
  +    iptables_manager.ipv4['filter'].add_rule('FORWARD',
  +                                             '-d %s '
  +                                             '-j DROP' % network_ref['dhcp_server'])
  +    iptables_manager.apply()
  +
   def initialize_gateway_device(dev, network_ref):
       if not network_ref:
           return
  @@ -1046,6 +1064,7 @@
               try:
                   _execute('kill', '-HUP', pid, run_as_root=True)
                   _add_dnsmasq_accept_rules(dev)
  +                isolate_compute_from_guest(network_ref)
                   return
               except Exception as exc:  # pylint: disable=W0703
                   LOG.error(_('Hupping dnsmasq threw %s'), exc)
  @@ -1098,6 +1117,7 @@
   
       _add_dnsmasq_accept_rules(dev)
   
  +    isolate_compute_from_guest(network_ref)
   
   @utils.synchronized('radvd_start')
   def update_ra(context, dev, network_ref):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328134] Re: [SRU] packaging for openstack icehouse 2014.1.1 release

2014-06-09 Thread Chuck Short
** Project changed: nova => ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328134

Title:
  [SRU] packaging for openstack icehouse 2014.1.1 release

Status in Ubuntu:
  New

Bug description:
  OpenStack 2014.1.1 released today (09 June, 2014).

  From the release email from Alan Pevec:

  A total of 79 bugs have been fixed across all projects. These
  updates to Icehouse are intended to be low risk with no
  intentional regressions or API changes. The list of bugs, tarballs and
  other milestone information for each project may be found on Launchpad:

  https://launchpad.net/ceilometer/icehouse/2014.1.1
  https://launchpad.net/cinder/icehouse/2014.1.1
  https://launchpad.net/glance/icehouse/2014.1.1
  https://launchpad.net/heat/icehouse/2014.1.1
  https://launchpad.net/horizon/icehouse/2014.1.1
  https://launchpad.net/keystone/icehouse/2014.1.1
  https://launchpad.net/neutron/icehouse/2014.1.1
  https://launchpad.net/nova/icehouse/2014.1.1

  OpenStack Database Service (Trove) did not have stable/icehouse fixes
  at this time and will skip 2014.1.1 release.

  Release notes may be found on the wiki:

  https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1328134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327425] Re: With default configuration Horizon is exposed to session-fixation attack

2014-06-09 Thread Thierry Carrez
Yes, I think it would make sense to issue a security note on that topic. The
article by Pablo is a good read.
It's a well-known issue so I'll make it public.
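
For reference, the mitigation such a note typically recommends for
Django-based dashboards is a server-side session backend, so that logging
out actually invalidates the session instead of leaving a still-valid signed
cookie behind. A sketch of a local_settings.py excerpt (the memcached
location is only an example):

    # Store sessions server-side (cache backend) instead of in signed
    # cookies, so a sessionid captured before logout stops working once the
    # user logs out.
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        },
    }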

** Information type changed from Private Security to Public

** Also affects: ossn
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Won't Fix

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1327425

Title:
  With default configuration Horizon is exposed to session-fixation
  attack

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  With the default configuration, if an attacker can obtain a sessionid
  value from a user, the attacker can view and perform actions as that
  user.  This ability does not go away after the user has logged out.

  To view a potential exploit:
  1)  Create an admin profile with access to the admin project and a non admin 
profile with no access to the admin project
  2)  Log in to Horizon as the admin, navigate to the project/instances page.  
Launch some vms.
  3)  Open up firebug and capture the sessionid value.
  4)  Log out of the admin user.
  5)  Log in as the non admin user
  6)  navigate to the project/instances page
  7)  Use firebug to paste in the admin user's sessionid value
  8)  click the project/instances link again to force a round trip.
  *!* It's possible for the non admin user to view all of the admin project vms
  9)  In the action column choose More->Terminate Instance
  *!* It's possible for the non admin user to delete an admin project vm.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1327425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328157] [NEW] ports not being deleted

2014-06-09 Thread Dan Radez
Public bug reported:

I have a /24 for my floating ips and a revolving set of users. I delete
unused floating ips and clear router gateways regularly, but my ports
don't clear and I start to get ip allocation errors for setting the
gateway:

Error: Failed to set gateway No more IP addresses available on network
b418a23d-39fb-4d09-82ee-4a2768ea508b.


Shouldn't these ports either be deleted when the associated resource is
cleaned up or maybe be reused to avoid this allocation error?

[root@host3 ~]# neutron floatingip-list | wc -l
33
[root@host3 ~]# neutron router-list| grep -v null | wc -l
17
[root@host3 ~]# neutron port-list | grep floatingip-subnet | wc -l
223


[root@host4 ~]# rpm -qa | grep neutron
python-neutronclient-2.3.4-1.el6.noarch
openstack-neutron-2014.1-18.el6.noarch
openstack-neutron-openvswitch-2014.1-18.el6.noarch
python-neutron-2014.1-18.el6.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328157

Title:
  ports not being deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I have a /24 for my floating ips and a revolving set of users. I
  delete unused floating ips and clear router gateways regularly, but my
  ports don't clear and I start to get ip allocation errors for setting
  the gateway:

  Error: Failed to set gateway No more IP addresses available on network
  b418a23d-39fb-4d09-82ee-4a2768ea508b.


  Shouldn't these ports either be deleted when the associated resource
  is cleaned up or maybe be reused to avoid this allocation error?

  [root@host3 ~]# neutron floatingip-list | wc -l
  33
  [root@host3 ~]# neutron router-list| grep -v null | wc -l
  17
  [root@host3 ~]# neutron port-list | grep floatingip-subnet | wc -l
  223

  
  [root@host4 ~]# rpm -qa | grep neutron
  python-neutronclient-2.3.4-1.el6.noarch
  openstack-neutron-2014.1-18.el6.noarch
  openstack-neutron-openvswitch-2014.1-18.el6.noarch
  python-neutron-2014.1-18.el6.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328162] [NEW] Tempest fails to delete firewall in 300 seconds

2014-06-09 Thread Eugene Nikanorov
Public bug reported:

Similar to bug #1314313 but this is another failure.

In some tempest runs a test fails to delete firewall within 300 seconds.

That happens because, by the time the firewall agent sends the deletion
confirmation to the neutron server, the firewall object has already been
updated to a state unexpected by the deleting method.

Example of the issue:

http://logs.openstack.org/18/97218/2/gate/gate-tempest-dsvm-
neutron/e03d166/console.html#_2014-06-07_10_33_34_506

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328162

Title:
  Tempest fails to delete firewall in 300 seconds

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Similar to bug #1314313 but this is another failure.

  In some tempest runs a test fails to delete firewall within 300
  seconds.

  That happens because, by the time the firewall agent sends the deletion
  confirmation to the neutron server, the firewall object has already been
  updated to a state unexpected by the deleting method.

  Example of the issue:

  http://logs.openstack.org/18/97218/2/gate/gate-tempest-dsvm-
  neutron/e03d166/console.html#_2014-06-07_10_33_34_506

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328067] Re: Token with placeholder ID issued

2014-06-09 Thread Dolph Mathews
** Changed in: keystone
   Importance: Undecided => Critical

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: python-keystoneclient
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328067

Title:
  Token with placeholder ID issued

Status in OpenStack Identity (Keystone):
  New
Status in Python client library for Keystone:
  New

Bug description:
  We're seeing test failures, where it seems that an invalid token is
  issued, with the ID of "placeholder"

  http://logs.openstack.org/69/97569/2/check/check-tempest-dsvm-
  full/565d328/logs/screen-h-eng.txt.gz

  See context_auth_token_info which is being passed using the auth_token
  keystone.token_info request environment variable (ref
  https://review.openstack.org/#/c/97568/ which is the previous patch in
  the chain from the log referenced above).

  It seems like auth_token is getting a token, but there's some sort of
  race in the backend which prevents an actual token being stored?
  Trying to use "placeholder" as a token ID doesn't work, so it seems
  like this default assigned in the controller is passed back to
  auth_token, which treats it as a valid token, even though it's not.

  
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L121

  I'm not sure how to debug this further, as I can't reproduce this
  problem locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328181] [NEW] NSX: remove_router_interface might fail because of NAT rule mismatch

2014-06-09 Thread Salvatore Orlando
Public bug reported:

The remove_router_interface for the VMware NSX plugin expects a precise number 
of SNAT rules for a subnet.
If the actual number of NAT rules differs from the expected one, an exception 
is raised.

The reasons for this might be:
- earlier failure in remove_router_interface
- NSX API client tampering with NSX objects
- etc.

In any case, the remove_router_interface operation should succeed
removing every match for the NAT rule to delete from the NSX logical
router.

sample traceback: http://paste.openstack.org/show/83427/
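
A sketch of that behaviour (an illustrative helper with an assumed rule
structure, not the plugin's actual code):

    def remove_snat_rules_for_subnet(nat_rules, subnet_cidr):
        """Delete every matching rule instead of asserting an exact count."""
        matches = [rule for rule in nat_rules
                   if rule.get('match', {}).get('source_ip_addresses')
                   == subnet_cidr]
        for rule in matches:
            nat_rules.remove(rule)
        # Report how many were removed so the caller can log a mismatch
        # without failing the whole remove_router_interface operation.
        return len(matches)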

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: havana-backport-potential icehouse-backport-potential vmware

** Summary changed:

- NSX: remote_router_interface might fail because of NAT rule mismatch
+ NSX: remove_router_interface might fail because of NAT rule mismatch

** Tags added: havana-backport-potential icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328181

Title:
  NSX: remove_router_interface might fail because of NAT rule mismatch

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The remove_router_interface for the VMware NSX plugin expects a precise 
number of SNAT rules for a subnet.
  If the actual number of NAT rules differs from the expected one, an exception 
is raised.

  The reasons for this might be:
  - earlier failure in remove_router_interface
  - NSX API client tampering with NSX objects
  - etc.

  In any case, the remove_router_interface operation should succeed
  removing every match for the NAT rule to delete from the NSX logical
  router.

  sample traceback: http://paste.openstack.org/show/83427/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314674] Re: Unable to connect to VCenter 5.5 VimFaultException: Server raised fault: 'Element tag ns0:RetrieveServiceContent uses an undefined namespace prefix ns0

2014-06-09 Thread James Page
Fixed version uploaded to trusty-proposed for SRU team review.

** Description changed:

- I'm currently trying to integrate an OpenStack testbed (based on
- Icehouse nova-2014.1 , Ubuntu 14.04 standard packages) with VCenter. I
- configured nova.conf http://docs.openstack.org/trunk/config-
- reference/content/vmware.html:
+ [Impact]
+ Users of the Nova VMWare integration can't use the distro provided package.
+ 
+ [Test Case]
+ sudo apt-get install nova-compute-vmware
+ (configure /etc/nova/nova.conf to point to a vsphere deployment)
+ error in original bug report
+ 
+ [Regression potential]
+ The fix is to drop a distro patch which has all ready been dropped in Debian 
and utopic.
+ 
+ [Original Bug Report]
+ I'm currently trying to integrate an OpenStack testbed (based on Icehouse 
nova-2014.1 , Ubuntu 14.04 standard packages) with VCenter. I configured 
nova.conf http://docs.openstack.org/trunk/config-reference/content/vmware.html:
  
  compute_driver=vmwareapi.VMwareVCDriver
  
  reserved_host_memory_mb=0
  
  [vmware]
  host_ip=192.168.0.146
  host_username=root
  host_password=password_here
  cluster_name=VCOS
  datastore_regex=qnap*
  
  Using the password I'm able to login to VCenter using vSphere Web
  Client, Cluster VCOS was created using DRS, and I also defined a port
  group br-int on the ESXi hosts in the cluster. Although OpenStack Nova
  using KVM works like a breeze on two other compute nodes, I constantly
  get error messages on the node running VMwareVCDriver in note-
  compute.log
  
  2014-04-30 16:44:10.263 1383 ERROR suds.client [-] ?xml version=1.0 
encoding=UTF-8?
  SOAP-ENV:Envelope xmlns:ns1=http://schemas.xmlsoap.org/soap/envelope/; 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance; 
xmlns:SOAP-ENV=http://schemas.xmlsoap.org/soap/envelope/;
-ns1:Body
-   ns0:RetrieveServiceContent
-  _this type=ServiceInstanceServiceInstance/_this
-   /ns0:RetrieveServiceContent
-/ns1:Body
+    ns1:Body
+   ns0:RetrieveServiceContent
+  _this type=ServiceInstanceServiceInstance/_this
+   /ns0:RetrieveServiceContent
+    /ns1:Body
  /SOAP-ENV:Envelope
  2014-04-30 16:44:10.265 1383 CRITICAL nova.virt.vmwareapi.driver [-] Unable 
to connect to server at 192.168.78.103, sleeping for 60 seconds
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver Traceback (most 
recent call last):
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 795, in 
_create_session
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver self.vim = 
self._get_vim_object()
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py, line 784, in 
_get_vim_object
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver return 
vim.Vim(protocol=self._scheme, host=self._host_ip)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 117, in 
__init__
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver 
self._service_content = self.retrieve_service_content()
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 120, in 
retrieve_service_content
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver return 
self.RetrieveServiceContent(ServiceInstance)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver File 
/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vim.py, line 196, in 
vim_request_handler
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver raise 
error_util.VimFaultException(fault_list, excep)
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver 
VimFaultException: Server raised fault: 'Element tag ns0:RetrieveServiceContent 
uses an undefined namespace prefix ns0
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP body
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 224
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
SOAP envelope
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 38
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver while parsing 
HTTP request before method was determined
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver at line 1, 
column 0'
  2014-04-30 16:44:10.265 1383 TRACE nova.virt.vmwareapi.driver
  
  ...to me it seems like there is some problem with the SOAP message...
  ns0 is not defined as a namespace under SOAP-ENV:Envelope?
  
  I also tried a fresh install of the VCenter Appliance 5.1 and 5.5
  without 

[Yahoo-eng-team] [Bug 1328201] [NEW] Cannot fetch Certs with Compressed token provider

2014-06-09 Thread Adam Young
Public bug reported:

The simple_cert extension has a check that prevents fetching
certificates if the Token provider is not the PKI provider.
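
For illustration, a minimal sketch of the kind of guard described above (the provider path, helper name, and certificate path are assumptions, not Keystone's actual code); a check of this shape rejects the compressed PKI provider even though it signs tokens with the same certificates:

PKI_PROVIDER = 'keystone.token.providers.pki.Provider'  # assumed name


def get_signing_cert(token_provider, cert_path):
    # Any provider other than the exact PKI one is refused, including the
    # compressed (PKIZ) provider.
    if token_provider != PKI_PROVIDER:
        raise RuntimeError('Certificates are only available with the PKI '
                           'token provider')
    with open(cert_path) as f:
        return f.read()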

** Affects: keystone
 Importance: Critical
 Assignee: Adam Young (ayoung)
 Status: In Progress

** Changed in: keystone
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328201

Title:
  Cannot fetch Certs with Compressed token provider

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The simple_cert extension has a check that prevents fetching
  certificates if the Token provider is not the PKI provider.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328222] [NEW] BigSwitch: Sync function is missing network information

2014-06-09 Thread Kevin Benton
Public bug reported:

The Big Switch full topology synchronization function isn't including
all of the information about a network. It's only including the ports
and floating IP addresses. The subnets are missing as well as the tenant
of the network.
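
A hedged sketch of the shape a complete sync payload might take (the field names are assumptions based on the description, not the plugin's actual wire format):

def full_sync_network_entry(network, ports, floating_ips, subnets):
    # Include the network's tenant and its subnets alongside the ports and
    # floating IPs, so the backend receives the whole topology.
    return {
        'id': network['id'],
        'tenant_id': network['tenant_id'],
        'subnets': subnets,
        'ports': ports,
        'floatingips': floating_ips,
    }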

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New


** Tags: icehouse-backport-potential

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Tags added: icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328222

Title:
  BigSwitch: Sync function is missing network information

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Big Switch full topology synchronization function isn't including
  all of the information about a network. It's only including the ports
  and floating IP addresses. The subnets are missing as well as the
  tenant of the network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328245] [NEW] libvirt does not store connection_info after BFV setup

2014-06-09 Thread Dan Smith
Public bug reported:

If booting from a volume, the virt driver does the setup of the volume
with cinder before starting the instance. This differs from the attach
volume case, which is managed by nova itself. Since the connect
operation could yield new details in the connection_info structure that
need to be persisted until teardown time, it is important that the
connection_info be written back after connect completes. Nova's
attach_volume() does this, but libvirt does not. Specifically in the
case of the fibre channel code, this means we don't persist information
about multipath devices which means we don't fully tear down everything
at disconnect time.
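
A minimal sketch of the behaviour being asked for (helper and field names are assumptions, not the actual libvirt driver code):

import json


def connect_and_persist(volume_api, context, volume_id, connector, bdm):
    # Ask Cinder to initialize the connection; the returned connection_info
    # may carry extra details (e.g. multipath device information added by
    # the fibre channel connector).
    connection_info = volume_api.initialize_connection(context, volume_id,
                                                       connector)
    # Write it back to the block device mapping so teardown at disconnect
    # time sees the same details and can clean up fully.
    bdm['connection_info'] = json.dumps(connection_info)
    bdm.save()
    return connection_info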

This is present in at least Havana, and I expect it is present in
Icehouse and master as well.

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: Confirmed


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328245

Title:
  libvirt does not store connection_info after BFV setup

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  If booting from a volume, the virt driver does the setup of the volume
  with cinder before starting the instance. This differs from the attach
  volume case, which is managed by nova itself. Since the connect
  operation could yield new details in the connection_info structure
  that need to be persisted until teardown time, it is important that
  the connection_info be written back after connect completes. Nova's
  attach_volume() does this, but libvirt does not. Specifically in the
  case of the fibre channel code, this means we don't persist
  information about multipath devices which means we don't fully tear
  down everything at disconnect time.

  This is present in at least Havana, and I expect it is present in
  Icehouse and master as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326811] Re: Client failing with six >=1.6 error

2014-06-09 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/98263
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=76ed427ca17fb271974b4882c0b5e3c18ed3d889
Submitter: Jenkins
Branch:master

commit 76ed427ca17fb271974b4882c0b5e3c18ed3d889
Author: Mathieu Gagné mga...@iweb.com
Date:   Thu Jun 5 16:50:40 2014 -0400

Update setuptools to latest for .dist-info support

Support for .dist-info directories was added in setuptools 0.6.28.

At this moment, Ubuntu Precise 12.04 provides setuptools 0.6.24
which is too old for our needs.

Six is installed from wheel which uses the .dist-info directory.
For six to be found, we need to install setuptools >= 0.6.28.

Updating setuptools to the latest version using pip will provide us
the needed version to make six discoverable.

Closes-bug: #1326811
Change-Id: I761d0aeb2b8b593cee38d512afc8fed6a2d1fe37


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326811

Title:
  Client failing with six >=1.6 error

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Compute (Nova):
  New
Status in OpenStack Command Line Client:
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:

  13:20:45 + screen -S stack -p key -X stuff 'cd /opt/stack/keystone  
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf 
--debug  echo $! /opt/stack/status/stack/key.pid; fg || echo key failed to 
start | tee /opt/stack/status/stack/key.failure
  '
  13:20:45 Waiting for keystone to start...
  13:20:45 + echo 'Waiting for keystone to start...'
  13:20:45 + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -k -s 
http://10.5.141.237:5000/v2.0/ /dev/null; do sleep 1; done'
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + SERVICE_ENDPOINT=http://10.5.141.237:35357/v2.0
  13:20:46 + is_service_enabled tls-proxy
  13:20:46 ++ set +o
  13:20:46 ++ grep xtrace
  13:20:46 + local 'xtrace=set -o xtrace'
  13:20:46 + set +o xtrace
  13:20:46 + return 1
  13:20:46 + export OS_TOKEN=be19c524ddc92109a224
  13:20:46 + OS_TOKEN=be19c524ddc92109a224
  13:20:46 + export OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + OS_URL=http://10.5.141.237:35357/v2.0
  13:20:46 + create_keystone_accounts
  13:20:46 ++ openstack project create admin
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:46 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:46 + ADMIN_TENANT=
  13:20:46 ++ openstack user create admin --project '' --email 
ad...@example.com --password 3de4922d8b6ac5a1aad9
  13:20:46 ++ grep ' id '
  13:20:46 ++ get_field 2
  13:20:46 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_USER=
  13:20:47 ++ openstack role create admin
  13:20:47 ++ grep ' id '
  13:20:47 ++ get_field 2
  13:20:47 ++ read data
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + ADMIN_ROLE=
  13:20:47 + openstack role add --project --user
  13:20:47 ERROR: openstackclient.shell Exception raised: six>=1.6.0
  13:20:47 + exit_trap
  13:20:47 + local r=1
  13:20:47 ++ jobs -p
  13:20:47 + jobs=
  13:20:47 + [[ -n '' ]]
  13:20:47 + kill_spinner
  13:20:47 + '[' '!' -z '' ']'
  13:20:47 + exit 1

  https://rdjenkins.dyndns.org/job/Trove-Gate/3974/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1326811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328267] [NEW] Admin Hypervisor table - change order of columns

2014-06-09 Thread Cindy Lu
Public bug reported:

very, very minor nitpick.

It may be a personal preference, but I would like to see the order of
the columns show 'Used | Total' instead of vice versa.

Right now in the Hypervisors table, the columns are:

Hostname | Type | VCPUs (total) | VCPUs (used) | RAM (total) | RAM
(used) | Storage (total) | Storage (used) |  Instances

Total column followed by Used column.  It would be nice to swap the
order to:

Hostname | Type | VCPUs (used) | VCPUs (total) | RAM (used) | RAM
(total) | Storage (used) | Storage (total) |  Instances

The graph also shows 'Used 1 of 2.'  That way, the graph and the table
are parallel with each other.

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Hypervisor table - change order of columns
+ Admin Hypervisor table - change order of columns

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1328267

Title:
  Admin Hypervisor table - change order of columns

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  very, very minor nitpick.

  It may be a personal preference, but I would like to see the order of
  the columns show 'Used | Total' instead of vice versa.

  Right now in the Hypervisors table, the columns are:

  Hostname | Type | VCPUs (total) | VCPUs (used) | RAM (total) | RAM
  (used) | Storage (total) | Storage (used) |  Instances

  Total column followed by Used column.  It would be nice to swap the
  order to:

  Hostname | Type | VCPUs (used) | VCPUs (total) | RAM (used) | RAM
  (total) | Storage (used) | Storage (total) |  Instances

  The graph also shows 'Used 1 of 2.'  That way, the graph and the table
  are parallel with each other.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1328267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328276] [NEW] test_list_image_filters.ListImageFiltersTest failed to create image

2014-06-09 Thread Attila Fazekas
Public bug reported:

This test boots two servers almost at the same time (starting the second
instance before the first is active), and waits until both servers are ACTIVE.

Then it creates the first snapshot:
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_17_836
Instance: 33e632b0-1162-482e-8d41-b31f0d333429
snapshot: 5b88b608-7fdf-4073-8beb-749dc32ad10f


When n-cpu fails to acquire the state change lock, the image gets deleted.
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_54_800

Instance: 85c8873a-4066-4596-a8b4-5a6b2c221774
snapshot: cdc2a7a1-f384-46a7-ab01-78fb7555af81

The lock acquire / image creation should be retried instead of deleting
the image.
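
A rough sketch of the suggested behaviour (the callables are hypothetical stand-ins, not nova's actual code; acquire_state_lock is assumed to return a context manager): retry the lock/upload a few times and only fall back to deleting the image once the retries are exhausted.

import time


def snapshot_with_retry(acquire_state_lock, upload_snapshot, delete_image,
                        attempts=3, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            with acquire_state_lock():
                return upload_snapshot()
        except Exception:
            if attempt == attempts:
                # Give up: remove the placeholder image only after retrying.
                delete_image()
                raise
            time.sleep(delay)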

Console exception (printing delayed by cleanup):
...
2014-06-06 20:17:05.764 | NotFound: Object not found
2014-06-06 20:17:05.764 | Details: {itemNotFound: {message: Image not 
found., code: 404}}

http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-
full/ae1f95a/console.html.gz#_2014-06-06_20_17_05_758


Actual GET request in the n-api log.
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-api.txt.gz#_2014-06-06_20_16_55_824

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328276

Title:
  test_list_image_filters.ListImageFiltersTest failed to create  image

Status in OpenStack Compute (Nova):
  New

Bug description:
  This test boots two servers almost at the same time (starting the second
  instance before the first is active), and waits until both servers are ACTIVE.

  Then it creates the first snapshot:
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_17_836
  Instance: 33e632b0-1162-482e-8d41-b31f0d333429
  snapshot: 5b88b608-7fdf-4073-8beb-749dc32ad10f

  
  When n-cpu fails to acquire the state change lock, the image gets deleted.
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-cpu.txt.gz#_2014-06-06_20_16_54_800

  Instance: 85c8873a-4066-4596-a8b4-5a6b2c221774
  snapshot: cdc2a7a1-f384-46a7-ab01-78fb7555af81

  The lock acquire / image creation should be retried instead of
  deleting the image.

  Console exception (printing delayed by cleanup):
  ...
  2014-06-06 20:17:05.764 | NotFound: Object not found
  2014-06-06 20:17:05.764 | Details: {itemNotFound: {message: Image 
not found., code: 404}}

  http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-
  full/ae1f95a/console.html.gz#_2014-06-06_20_17_05_758

  
  Actual GET request in the n-api log.
  
http://logs.openstack.org/32/97532/3/check/check-tempest-dsvm-full/ae1f95a/logs/screen-n-api.txt.gz#_2014-06-06_20_16_55_824

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303856] Re: please mark a volume as read-only

2014-06-09 Thread Cindy Lu
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1303856

Title:
  please mark a volume as read-only

Status in OpenStack Dashboard (Horizon):
  New
Status in Python client library for Cinder:
  In Progress

Bug description:
  currently, once I update a volume to read-only it's only shown in the metadata.
  however, I think a read-only change (which could be even worse when we change
  from read-only to read/write) is really important and should be very visible
  to the user.

  as you can see, we can only see that the volume is read-only in the
  metadata:

  [root@host ~(keystone_admin)]# cinder create --display-name dafna 10
  +-+--+
  |   Property  |Value |
  +-+--+
  | attachments |  []  |
  |  availability_zone  | nova |
  |   bootable  |false |
  |  created_at |  2014-04-03T13:16:03.137237  |
  | display_description | None |
  | display_name|dafna |
  |  encrypted  |False |
  |  id | 51c841ec-24a5-49fe-8441-01593b39b2f7 |
  |   metadata  |  {}  |
  | size|  10  |
  | snapshot_id | None |
  | source_volid| None |
  |status   |   creating   |
  | volume_type | None |
  +-+--+
  [root@host ~(keystone_admin)]# cinder list 
  
+--+---+--+--+-+--+-+
  |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to |
  
+--+---+--+--+-+--+-+
  | 51c841ec-24a5-49fe-8441-01593b39b2f7 | available |dafna |  10  |
 None|  false   | |
  | 54d28f61-2f0f-4be4-8e27-855a50a50c33 | available |   emptyVol   |  1   |
 None|  false   | |
  | 82a7a825-6106-4139-9bc8-4334ccc38e85 | available | volFromImage |  1   |
 None|   true   | |
  
+--+---+--+--+-+--+-+

  [root@host ~(keystone_admin)]# cinder readonly-mode-update 
51c841ec-24a5-49fe-8441-01593b39b2f7 true
  [root@host ~(keystone_admin)]# cinder list 
  
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  |                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  | 51c841ec-24a5-49fe-8441-01593b39b2f7 | available |    dafna     |  10  |     None    |  false   |             |
  | 54d28f61-2f0f-4be4-8e27-855a50a50c33 | available |   emptyVol   |  1   |     None    |  false   |             |
  | 82a7a825-6106-4139-9bc8-4334ccc38e85 | available | volFromImage |  1   |     None    |   true   |             |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  [root@host ~(keystone_admin)]# cinder show 
51c841ec-24a5-49fe-8441-01593b39b2f7
  ++--+
  |Property|Value |
  ++--+
  |  attachments   |  []  |
  |   availability_zone| nova |
  |bootable|false |
  |   created_at   |  2014-04-03T13:16:03.00  |
  |  display_description   | None |
  |  display_name  |dafna |
  |   encrypted|False |
  |   id   | 51c841ec-24a5-49fe-8441-01593b39b2f7 |
  |metadata|{u'readonly': u'True'}|
  | os-vol-host-attr:host  |  orange-vdsf.qa.lab.tlv.redhat.com   |
  | os-vol-mig-status-attr:migstat | None |
  | 

[Yahoo-eng-team] [Bug 1328288] [NEW] openvswitch agent fails with bridges longer than 11 chars

2014-06-09 Thread Kevin Benton
Public bug reported:

The openvswitch agent will try to construct veth pairs with names longer
than the maximum allowed (15) and fail. VMs will then have no external
connectivity.

This happens in cases where the bridge name is very long (e.g. int-br-
bonded).
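
For illustration (the constant and helper are assumptions, not the agent's actual code): Linux limits interface names to 15 characters, so once a bridge name is prefixed with 'int-' or 'phy-' for the veth pair, any bridge name longer than 11 characters produces an invalid device name.

MAX_DEV_NAME_LEN = 15  # usable interface name length on Linux (IFNAMSIZ - 1)


def veth_pair_names(bridge_name):
    names = ('int-' + bridge_name, 'phy-' + bridge_name)
    for name in names:
        if len(name) > MAX_DEV_NAME_LEN:
            raise ValueError('%s is longer than %d characters and cannot be '
                             'used as a device name'
                             % (name, MAX_DEV_NAME_LEN))
    return names

# e.g. veth_pair_names('br-bonded') -> ('int-br-bonded', 'phy-br-bonded'),
# while a 12+ character bridge name would fail here instead of when the
# veth pair is actually created.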

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress


** Tags: icehouse-backport-potential

** Tags added: icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328288

Title:
  openvswitch agent fails with bridges longer than 11 chars

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The openvswitch agent will try to construct veth pairs with names
  longer than the maximum allowed (15) and fail. VMs will then have no
  external connectivity.

  This happens in cases where the bridge name is very long (e.g. int-br-
  bonded).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328293] [NEW] tempest test_delete_server_while_in_attached_volume fails Invalid volume status available/error

2014-06-09 Thread Brant Knudson
Public bug reported:

This keystone change [1] failed with an error in the
tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_server_while_in_attached_volume
test in the gate-tempest-dsvm-full job.

Here's the log:

http://logs.openstack.org/45/84945/13/gate/gate-tempest-dsvm-
full/600e742/console.html.gz#_2014-06-06_18_53_45_253

The error tempest reports is

 Details: Volume None failed to reach in-use status within the required
time (196 s).

So it's waiting for the volume to reach a status which it doesn't get to
in 196 s. Looks like the volume is in attaching status.

So maybe tempest isn't waiting long enough, or nova / cinder is hung or
just takes too long?

There's also a problem in that tempest says the volume is None when it
should be the volume ID (49fbde74-6e6a-4781-a271-787aa2deb674)

[1] https://review.openstack.org/#/c/84945/
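
For context, a simplified sketch of the kind of wait loop involved (names and the get_volume helper are illustrative, not tempest's actual code); note how the failure message should carry the real volume id rather than None:

import time


def wait_for_volume_status(get_volume, volume_id, wanted_status,
                           timeout=196, interval=1):
    deadline = time.time() + timeout
    while time.time() < deadline:
        volume = get_volume(volume_id)
        if volume['status'] == wanted_status:
            return volume
        time.sleep(interval)
    raise AssertionError('Volume %s failed to reach %s status within the '
                         'required time (%s s).'
                         % (volume_id, wanted_status, timeout))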

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328293

Title:
  tempest test_delete_server_while_in_attached_volume fails Invalid
  volume status available/error

Status in OpenStack Compute (Nova):
  New

Bug description:
  This keystone change [1] failed with an error in the
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_server_while_in_attached_volume
  test in the gate-tempest-dsvm-full job.

  Here's the log:

  http://logs.openstack.org/45/84945/13/gate/gate-tempest-dsvm-
  full/600e742/console.html.gz#_2014-06-06_18_53_45_253

  The error tempest reports is

   Details: Volume None failed to reach in-use status within the
  required time (196 s).

  So it's waiting for the volume to reach a status which it doesn't get
  to in 196 s. Looks like the volume is in attaching status.

  So maybe tempest isn't waiting long enough, or nova / cinder is hung
  or just takes too long?

  There's also a problem in that tempest says the volume is None when it
  should be the volume ID (49fbde74-6e6a-4781-a271-787aa2deb674)

  [1] https://review.openstack.org/#/c/84945/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327065] Re: typo in cloud-config-user-groups.txt

2014-06-09 Thread Joern Heissler
Sorry to bother you :)
The problem is on line 72 of 
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config-user-groups.txt
I'm not too familiar with bzr, but I assume that this is the trunk version, so 
it's not fixed.
Please see also the attached diff file for clarification.

** Changed in: cloud-init
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1327065

Title:
  typo in cloud-config-user-groups.txt

Status in Init scripts for use on cloud images:
  Fix Committed

Bug description:
  Hi,
  please fix doc/examples/cloud-config-user-groups.txt: change
  ssh-authorized-key to ssh-authorized-keys.
  Cheers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1327065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328321] [NEW] Big Switch: Consistency watchdog calling wrong method

2014-06-09 Thread Kevin Benton
Public bug reported:

The consistency watchdog is calling the wrong method for a health check,
which raises an exception. However, since it runs in a greenthread, the
exception is silently discarded, so the watchdog dies without any
indication.
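
A small sketch of the usual remedy (names are illustrative, not the plugin's actual code): catch and log exceptions inside the greenthread loop so a failing health check is visible instead of silently killing the watchdog.

import logging

import eventlet

LOG = logging.getLogger(__name__)


def spawn_consistency_watchdog(health_check, interval=5):
    def _loop():
        while True:
            try:
                health_check()
            except Exception:
                # Without this, an exception ends the greenthread silently.
                LOG.exception('consistency watchdog health check failed')
            eventlet.sleep(interval)
    return eventlet.spawn(_loop)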

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328321

Title:
  Big Switch: Consistency watchdog calling wrong method

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The consistency watchdog is calling the wrong method for a health check,
  which raises an exception. However, since it runs in a greenthread, the
  exception is silently discarded, so the watchdog dies without any
  indication.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328331] [NEW] Big Switch: servermanager consistency hash doesn't work in HA deployments

2014-06-09 Thread Kevin Benton
Public bug reported:

The Big Switch servermanager records the consistency hash to the
database every time it gets updated but it does not retrieve the latest
value from the database whenever it includes it in an HTTP request. This
is fine in single neutron server deployments because the cached version
on the object is always the latest, but this isn't always the case in HA
deployments where another server updates the consistency DB.
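
A hedged sketch of the fix direction (the table and column names are assumptions, not the plugin's actual schema): read the hash from the shared database immediately before each request, rather than trusting an in-memory copy that another neutron-server may have superseded.

import sqlalchemy as sa


def read_consistency_hash(engine):
    # Fetch the latest hash every time, so all HA peers see each other's
    # updates instead of replaying a stale cached value.
    with engine.connect() as conn:
        row = conn.execute(
            sa.text("SELECT hash FROM consistencyhashes LIMIT 1")).fetchone()
    return row[0] if row else ''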

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328331

Title:
  Big Switch: servermanager consistency hash doesn't work in HA
  deployments

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Big Switch servermanager records the consistency hash to the
  database every time it gets updated but it does not retrieve the
  latest value from the database whenever it includes it in an HTTP
  request. This is fine in single neutron server deployments because the
  cached version on the object is always the latest, but this isn't
  always the case in HA deployments where another server updates the
  consistency DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319619] Re: Cannot delete unused 'default' security group for removed project

2014-06-09 Thread Thang Pham
According to
http://docs.openstack.org/trunk/openstack-ops/content/security_groups.html:
"All projects have a default security group, which is applied to instances
that have no other security group defined. Unless changed, this security
group denies all incoming traffic."

The fact that you cannot delete a default security group seems correct.
I do not believe this is a bug, at least based on the documentation.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319619

Title:
  Cannot delete unused 'default' security group for removed project

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When creating a new project, a 'default' security group is generated.
  However, after deleting the project, the 'default' security group cannot be
  deleted.
  I always get an error message when running 'nova secgroup-delete ${group_id}'.
  The message looks like:
  'ERROR: Unable to delete system group 'default' (HTTP 400) (Request-ID:
  req-2aa9a7d2-2c7d-4abc-a961-e98e06dc2fd5)'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328362] [NEW] ext.check_env is not used

2014-06-09 Thread YAMAMOTO Takashi
Public bug reported:

check_env method for extension descriptor is not documented or used.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328362

Title:
  ext.check_env is not used

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  check_env method for extension descriptor is not documented or used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278796] Re: Horizon Ceilometer hard-coded availability zone

2014-06-09 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1278796

Title:
  Horizon Ceilometer hard-coded availability zone

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  I spent the last couple of hours trying to figure out why nothing was
  showing up under the 'Compute' menu in the 'Resource Usage' panel in
  Horizon. I am using a custom availability zone name for my Nova
  compute nodes. I am using 'openstack-dashboard' version 2013.2.1-1 on
  a CentOS 6.5 server. If you look in:

  /usr/share/openstack-
  dashboard/openstack_dashboard/dashboards/admin/metering/tabs.py

  at around line 40, you will see a 'query' object that looks for
  instances by availability zone:

  query = [{field: metadata.OS-EXT-AZ:availability_zone,
op: eq,
value: nova}]

  The ceilometer panel in Horizon should account for the fact that users
  may have custom (and possibly multiple) availability zones. You could
  add an additional drop down menu in the panel to select from a list of
  the current availability zones in the database. Replace the hard-coded
  value of 'nova' with a variable that is populated from a drop down
  menu in Horizon.
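
Sketched out, the proposed change amounts to building the query from a selected zone instead of the literal 'nova' (the function and its parameter are hypothetical):

def metering_instance_query(availability_zone):
    # 'availability_zone' would come from a drop-down populated with the
    # zones known to the deployment, rather than being hard-coded.
    return [{"field": "metadata.OS-EXT-AZ:availability_zone",
             "op": "eq",
             "value": availability_zone}]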

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1278796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328367] [NEW] Do not set vm error state when raise MigrationError

2014-06-09 Thread Xiang BZ Zhou
Public bug reported:

Control Node: 101.0.0.20 (also has the compute service, but it is not used)
Compute Node:  101.0.0.30

nova version:
2014.1.b2-847-ga891e04

in control node nova.conf
allow_resize_to_same_host = True
and
in compute node nova.conf
allow_resize_to_same_host = False

detail:
1. boot an instance in compute node
nova boot --image 51c4a908-c028-4ce2-bbd1-8b0e15d8d829 --flavor 84 --nic 
net-id=308840da-6440-4599-923a-2edd290971d3 --availability-zone 
nova:compute.localdomain migrate_test

2. resize it to flavor type 1
nova resize   migrate_test 1

3. the instance is set to the error state when the resize fails.

#nova list
+--+--++-+-+---+
| a1424990-182a-4bc2-8c17-aa4808a49472 | migrate_test | ERROR  | resize_prep | 
Running | private=20.0.0.15 |
+--+--++-+-+---+

#nova show

| config_drive |

   |
| created  | 2014-06-09T09:31:35Z   

   |
| fault| {message: class 
'nova.exception.MigrationError', code: 500, details:   File 
\/opt/stack/nova/nova/compute/manager.py\, line 3104, in prep_resize |
|  | node)  

   |
|  |   File 
\/opt/stack/nova/nova/compute/manager.py\, line 3058, in _prep_resize 
   |
|  | raise 
exception.MigrationError(msg)   
|
|  | , created: 2014-06-10T03:54:39Z}
 |
| flavor   | m1.micro (84)  

   |
| hostId   | 
f73013b029032929598a4a54586e4469c2c7cd676c147f6601f73c58


error log in compute node:

2014-06-10 11:54:48.372 ERROR nova.compute.manager 
[req-6a4ac25a-7d24-40c6-9f8d-435b4adb6fff admin admin] [instance: a1424990-182a
-4bc2-8c17-aa4808a49472] Setting instance vm_state to ERROR
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] Traceback (most recent call la
st):
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File /opt/stack/nova/nova/c
ompute/manager.py, line 5231, in _error_out_instance_on_exception
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] yield
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File /opt/stack/nova/nova/c
ompute/manager.py, line 3111, in prep_resize
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] filter_properties)
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File 
/opt/stack/nova/nova/compute/manager.py, line 3104, in prep_resize
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] node)
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File 
/opt/stack/nova/nova/compute/manager.py, line 3058, in _prep_resize
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] raise exception.MigrationError(msg)
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] MigrationError: destination same as 
source!
2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]

bug reason:
1. nova-scheduler is allowed to schedule to the compute node (due to the
controller's nova.conf)

2. but nova-compute is not allowed to resize on the same host (due to the
compute node's nova.conf)

3.
a) the compute-side _prep_resize() function sets the instance into the error state:

self._set_instance_error_state(context, instance['uuid'])
...
then raises the exception

b)
the compute node reschedules the instance again, and it fails again

self._reschedule_resize_or_reraise(context, image, instance,
 exc_info, instance_type, reservations, request_spec,
   

[Yahoo-eng-team] [Bug 1328375] [NEW] The 'x-openstack-request-id' from cinder cannot be output to the log.

2014-06-09 Thread Takashi NATSUME
Public bug reported:

Cinder returns a response including 'x-openstack-request-id' in the HTTP 
response header when nova calls cinder.
But nova cannot output 'x-openstack-request-id' to the log (if the call is
successful).
If nova outputs 'x-openstack-request-id' to the log, it will enable us to 
perform the analysis more efficiently.

Before:

2014-06-10 10:34:13.636 DEBUG nova.volume.cinder 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Cinderclient connection 
created using URL: http://10.0.2.15:8776/v1/5b25b7114cd34d41a9415bbc47a07c81 
cinderclient /opt/stack/nova/nova/volume/cinder.py:94
2014-06-10 10:34:13.640 INFO urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Starting new HTTP 
connection (1): 10.0.2.15
2014-06-10 10:34:13.641 DEBUG urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Setting read timeout to 
None _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
2014-06-10 10:34:16.381 DEBUG urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] POST 
/v1/5b25b7114cd34d41a9415bbc47a07c81/volumes/e4fe2d26-fccb-475e-9992-c8e25a418118/action
 HTTP/1.1 200 447 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415


After:

2014-06-10 13:40:19.423 DEBUG nova.volume.cinder 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Cinderclient connection 
created using URL: http://10.0.2.15:8776/v1/d35af2c7a90581879aecbc448203 
cinderclient /opt/stack/nova/nova/volume/cinder.py:97
(snipped...)
2014-06-10 13:40:19.424 DEBUG cinderclient.client 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] 
REQ: curl -i 
http://10.0.2.15:8776/v1/d35af2c7a90581879aecbc448203/volumes/7a7d47c7-b31d-41bb-874f-f37dd175a4a4/action
 -X POST -H X-Auth-Project-Id: d35af2c7a90581879aecbc448203 -H 
User-Agent: python-cinderclient -H Content-Type: application/json -H 
Accept: application/json -H X-Auth-Token: (snipped...) -d '{os-attach: 
{instance_uuid: cad01ef1-2728-4a9a-b4d6-da1a783a627b, mountpoint: 
/dev/vdb, mode: rw}}'
 http_log_req /opt/stack/python-cinderclient/cinderclient/client.py:130
2014-06-10 13:40:19.427 INFO urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Starting new HTTP 
connection (1): 10.0.2.15
2014-06-10 13:40:19.428 DEBUG urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Setting read timeout to 
None _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
2014-06-10 13:40:19.909 DEBUG urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] POST 
/v1/d35af2c7a90581879aecbc448203/volumes/7a7d47c7-b31d-41bb-874f-f37dd175a4a4/action
 HTTP/1.1 202 0 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
(snipped...)
2014-06-10 13:40:19.910 DEBUG cinderclient.client 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] RESP: [202] 
CaseInsensitiveDict({'date': 'Tue, 10 Jun 2014 04:40:19 GMT', 'content-length': 
'0', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 
'req-b0e7bccf-cc70-4646-93c0-bd94090cc5f0'})
RESP BODY: 
 http_log_resp /opt/stack/python-cinderclient/cinderclient/client.py:139
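
A minimal sketch of what logging the header could look like (the helper is hypothetical, not nova's or cinderclient's actual code):

import logging

LOG = logging.getLogger(__name__)


def log_cinder_request_id(resp):
    # 'resp' is assumed to be an HTTP response object whose headers behave
    # like a dict, e.g. the requests/urllib3 response used by
    # python-cinderclient.
    request_id = resp.headers.get('x-openstack-request-id')
    if request_id:
        LOG.debug('Cinder returned x-openstack-request-id: %s', request_id)
    return request_id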


** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328375

Title:
  The 'x-openstack-request-id' from cinder cannot be output to the log.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Cinder returns a response including 'x-openstack-request-id' in the HTTP 
response header when nova calls cinder.
  But nova cannot output 'x-openstack-request-id' to the log (if the call is
  successful).
  If nova outputs 'x-openstack-request-id' to the log, it will enable us to 
perform the analysis more efficiently.

  Before:
  

[Yahoo-eng-team] [Bug 1328382] [NEW] wrong doc for disabled_reason in service list

2014-06-09 Thread jiang, yunhong
Public bug reported:

In http://developer.openstack.org/api-ref-compute-v2-ext.html, the
'disabled_reason' field will be None for the xml format and 'null' for the
json format. However, in the documentation it's stated as , which is not
correct.


Below is the output of nova client:

yjiang5@otccloud06:/opt/stack/nova$ nova --debug service-list

..

RESP BODY: {services: [{status: enabled, binary: nova-
conductor, zone: internal, state: up, updated_at:
2014-06-10T05:28:39.00, host: otccloud06, disabled_reason:
null, id: 1}, {status: enabled, binary: nova-compute, zone:
nova, state: up, updated_at: 2014-06-10T05:28:48.00,
host: otccloud06, disabled_reason: null, id: 2}, {status:
enabled, binary: nova-cert, zone: internal, state: up,
updated_at: 2014-06-10T05:28:46.00, host: otccloud06,
disabled_reason: null, id: 3}, {status: enabled, binary:
nova-network, zone: internal, state: up, updated_at:
2014-06-10T05:28:48.00, host: otccloud06, disabled_reason:
null, id: 4}, {status: enabled, binary: nova-scheduler,
zone: internal, state: up, updated_at:
2014-06-10T05:28:48.00, host: otccloud06, disabled_reason:
null, id: 5}, {status: enabled, binary: nova-consoleauth,
zone: internal, state: up, updated_at:
2014-06-10T05:28:44.00, host: otccloud06, disabled_reason:
null, id: 6}]}

** Affects: nova
 Importance: Low
 Assignee: jiang, yunhong (yunhong-jiang)
 Status: New


** Tags: api

** Changed in: nova
 Assignee: (unassigned) => jiang, yunhong (yunhong-jiang)

** Changed in: nova
Milestone: None => juno-1

** Tags added: api

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328382

Title:
  wrong doc for disabled_reason in service list

Status in OpenStack Compute (Nova):
  New

Bug description:
  In http://developer.openstack.org/api-ref-compute-v2-ext.html, the
  'disabled_reason' field will be None for the xml format and 'null' for the
  json format. However, in the documentation it's stated as , which is not
  correct.

  
  Below is the output of nova client:

  yjiang5@otccloud06:/opt/stack/nova$ nova --debug service-list

  ..

  RESP BODY: {services: [{status: enabled, binary: nova-
  conductor, zone: internal, state: up, updated_at:
  2014-06-10T05:28:39.00, host: otccloud06, disabled_reason:
  null, id: 1}, {status: enabled, binary: nova-compute,
  zone: nova, state: up, updated_at:
  2014-06-10T05:28:48.00, host: otccloud06, disabled_reason:
  null, id: 2}, {status: enabled, binary: nova-cert, zone:
  internal, state: up, updated_at: 2014-06-10T05:28:46.00,
  host: otccloud06, disabled_reason: null, id: 3}, {status:
  enabled, binary: nova-network, zone: internal, state:
  up, updated_at: 2014-06-10T05:28:48.00, host:
  otccloud06, disabled_reason: null, id: 4}, {status: enabled,
  binary: nova-scheduler, zone: internal, state: up,
  updated_at: 2014-06-10T05:28:48.00, host: otccloud06,
  disabled_reason: null, id: 5}, {status: enabled, binary:
  nova-consoleauth, zone: internal, state: up, updated_at:
  2014-06-10T05:28:44.00, host: otccloud06, disabled_reason:
  null, id: 6}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp