[Yahoo-eng-team] [Bug 1156456] Re: libvirt CPU info doesn't count NUMA cells

2015-09-15 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/223869

** Changed in: nova
   Status: Invalid => In Progress

** Changed in: nova
 Assignee: (unassigned) => Nicolas Simonds (nicolas.simonds)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1156456

Title:
  libvirt CPU info doesn't count NUMA cells

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The libvirt driver, when counting sockets/cores/etc., does not take
  NUMA architectures into account.  This can cause applications using
  data from the Nova API to under-report the total number of
  sockets/cores/etc. on compute nodes with more than one NUMA cell.

  Example, on a production system with 2 NUMA cells:

  $ grep ^proc /proc/cpuinfo | wc -l
32

  $ python simple_test_script_to_ask_nova_for_cpu_topology.py
  {u'cores': u'8', u'threads': u'2', u'sockets': u'1'}

  So, if one were relying solely on Nova to obtain information about
  this system's capabilities, the results would be inaccurate.
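
  For reference, the mismatch above is simply the NUMA cell count being dropped;
  a minimal sketch of the expected arithmetic, using the values from this report:

  # Illustrative only: the reported topology describes a single NUMA cell, so the
  # host total is the per-cell topology multiplied by the number of cells.
  cells = 2
  sockets, cores, threads = 1, 8, 2
  total_cpus = cells * sockets * cores * threads
  print(total_cpus)  # 32, matching `grep ^proc /proc/cpuinfo | wc -l`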

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1156456/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496204] [NEW] DVR: no need to reschedule_router if router gateway update

2015-09-15 Thread shihanzhang
Public bug reported:

With a non-DVR router, if the router gateway changes, it should be
rescheduled (reschedule_router) to the proper l3 agents; the reason is below:

"  When external_network_bridge is set, each L3 agent can be associated
with at most one external network. If router's new external gateway
is on other network then the router needs to be rescheduled to the
proper l3 agent."

But with a DVR router, there is no need to reschedule_router (there are
no other l3 agents), and a serious problem is that during
reschedule_router, the communication related to this router is broken.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496204

Title:
  DVR: no need to reschedule_router if router gateway update

Status in neutron:
  New

Bug description:
  With a non-DVR router, if the router gateway changes, it should be
  rescheduled (reschedule_router) to the proper l3 agents; the reason is below:

  "  When external_network_bridge is set, each L3 agent can be associated
  with at most one external network. If router's new external gateway
  is on other network then the router needs to be rescheduled to the
  proper l3 agent."

  But with a DVR router, there is no need to reschedule_router (there
  are no other l3 agents), and a serious problem is that during
  reschedule_router, the communication related to this router is broken.
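
  A minimal sketch of the guard being suggested (a hypothetical helper; the real
  change would live in neutron's L3 gateway-update/scheduling path):

  def maybe_reschedule_on_gw_change(l3_plugin, context, router):
      """Skip rescheduling for distributed routers on gateway updates."""
      if router.get('distributed'):
          # DVR routers are already hosted on every relevant node; rescheduling
          # only interrupts traffic for this router.
          return
      l3_plugin.reschedule_router(context, router['id'])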

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495876] Re: nova.conf - configuration options in OpenStack Configuration Reference  - kilo

2015-09-15 Thread Shuquan Huang
OpenStack Configuration Reference is not the right place to fix the problem :) 
This table is automatically generated from the code, so it should be fixed in 
nova.
We need to change the description in nova.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Shuquan Huang (shuquan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495876

Title:
  nova.conf - configuration options in OpenStack Configuration Reference
  - kilo

Status in OpenStack Compute (nova):
  New
Status in openstack-manuals:
  In Progress

Bug description:
  
  ---
  Built: 2015-08-27T08:45:20 00:00
  git SHA: f062eb42bbc512386ac572b5b830fb4e21c72a41
  URL: 
http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/compute/section_compute-options-reference.xml
  xml:id: list-of-compute-config-options

  
  iscsi_use_multipath = False   (BoolOpt) Use multipath connection of the iSCSI 
volume

  
  The above description is incorrect and very misleading.

  Actually, this option is applicable to both FC and iSCSI volumes.

  
  Thanks
  Peter
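
  As a sketch of what the corrected option definition in nova could look like (the
  help wording here is only a suggestion, not the merged change):

  from oslo_config import cfg

  iscsi_use_multipath_opt = cfg.BoolOpt(
      'iscsi_use_multipath',
      default=False,
      help='Use multipath connection of the iSCSI or FC volume')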

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471289] Re: Fernet tokens and Federated Identities result in token scope failures

2015-09-15 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1471289

Title:
  Fernet tokens and Federated Identities result in token scope failures

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  New

Bug description:
  When keystone is configured to use fernet tokens and also configured
  to be a SP for an external IDP then the token data received by nova
  and other services appear to not contain the right information,
  resulting in errors from nova-api-os-compute such as:

  Returning 400 to user: Malformed request URL: URL's project_id
  '69f5cff441e04554b285d7772630dec1' doesn't match Context's project_id
  'None'

  When keystone is switched to use uuid tokens, then everything works as
  expected.

  Further debugging of the request to the nova api shows:

  'HTTP_X_USER_DOMAIN_NAME': None,
  'HTTP_X_DOMAIN_ID': None,
  'HTTP_X_PROJECT_DOMAIN_ID': None,
  'HTTP_X_ROLES': '',
  'HTTP_X_TENANT_ID': None,
  'HTTP_X_PROJECT_DOMAIN_NAME': None,
  'HTTP_X_TENANT': None,
  'HTTP_X_USER': u'S-1-5-21-2917001131-1385516553-613696311-1108',
  'HTTP_X_USER_DOMAIN_ID': None,
  'HTTP_X_AUTH_PROJECT_ID': '69f5cff441e04554b285d7772630dec1',
  'HTTP_X_DOMAIN_NAME': None,
  'HTTP_X_PROJECT_NAME': None,
  'HTTP_X_PROJECT_ID': None,
  'HTTP_X_USER_NAME': u'S-1-5-21-2917001131-1385516553-613696311-1108'

  Comparing the interaction of nova-api-os-compute with keystone for the
  token validation between an internal user and a federated user, the
  following is seen:

  ### federated user ###
  2015-07-03 14:43:05.229 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i 
--insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H 
"X-Subject-Token: {SHA1}acff9b5962270fec270e693eacb4c987c335f5c5" -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
  2015-07-03 14:43:05.265 8103 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 402 x-subject-token: 
{SHA1}acff9b5962270fec270e693eacb4c987c335f5c5 vary: X-Auth-Token keep-alive: 
timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: 
Fri, 03 Jul 2015 14:43:05 GMT content-type: application/json 
x-openstack-request-id: req-df3dce71-3174-4753-b883-11eb31a67d7c
  RESP BODY: {"token": {"methods": ["token"], "expires_at": 
"2015-07-04T02:43:04.00Z", "extras": {}, "user": {"OS-FEDERATION": 
{"identity_provider": {"id": "adfs-idp"}, "protocol": {"id": "saml2"}, 
"groups": []}, "id": "S-1-5-21-2917001131-1385516553-613696311-1108", "name": 
"S-1-5-21-2917001131-1385516553-613696311-1108"}, "audit_ids": 
["_a6BbQ6mSoGAY2u9NN0tFA"], "issued_at": "2015-07-03T14:43:04.00Z"}}
   
  ### internal user ###
  2015-07-03 14:28:31.875 8103 DEBUG keystoneclient.session [-] REQ: curl -g -i 
--insecure -X GET https://sp.testenvironment.local:5000/v3/auth/tokens -H 
"X-Subject-Token: {SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d" -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}a6a8a70ae39c533379eccd51b6d253f264d59f14" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:193
  2015-07-03 14:28:31.949 8103 DEBUG keystoneclient.session [-] RESP: [200] 
content-length: 6691 x-subject-token: 
{SHA1}b9c6748d65a0492faa9862fabf0a56fd5fdd255d vary: X-Auth-Token keep-alive: 
timeout=5, max=100 server: Apache/2.4.7 (Ubuntu) connection: Keep-Alive date: 
Fri, 03 Jul 2015 14:28:31 GMT content-type: application/json 
x-openstack-request-id: req-6e0ed9f4-46c3-4c79-b444-f72963fc9503
  RESP BODY: {"token": {"methods": ["password"], "roles": [{"id": 
"9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "expires_at": 
"2015-07-04T02:28:31.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "0f491c8551c04cdc804a479af0bf13ec", "name": "demo"}, 
"catalog": "", "extras": {}, "user": {"domain": {"id": "default", 
"name": "Default"}, "id": "76c8c3017c954d88a6ad69ee4cb656d6", "name": "test"}, 
"audit_ids": ["aAN_V0c6SLSI0Rm1hoScCg"], "issued_at": 
"2015-07-03T14:28:31.00Z"}}

  The data structures that come back from keystone are clearly quite
  different.
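
  An illustrative sketch of the check that produces the 400 above (simplified;
  nova's actual middleware and routing code differs): the federated token carries
  no project, so the request context ends up with project_id=None and any
  project-scoped URL fails the comparison.

  import webob.exc

  def check_project(context_project_id, url_project_id):
      if context_project_id != url_project_id:
          raise webob.exc.HTTPBadRequest(
              explanation="Malformed request URL: URL's project_id %r doesn't "
                          "match Context's project_id %r"
                          % (url_project_id, context_project_id))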

  ### configuration environment ###

  Ubuntu 14.04 OS
  nova==12.0.0.0a1.dev51 # commit a4f4be370be06cfc9aa3ed30d2445277e832376f from 
master branch
  keystone==8.0.0.0a1.dev12 # commit a7ca13b687dd284f0980d768b11a3d1b52b4106e 
from master branch
  python-keystoneclient==1.6.1.dev19 # commit 
d238cc9af4927d1092de207db978536d712af129 from master branch
  python-openstackclient==1.5.1.dev11# commit 
2d6bc8f4c38dbf997e3e71119f13f0328b4a8669 from master branch
  

[Yahoo-eng-team] [Bug 1496222] [NEW] Requirements update breaks keystone install on 3'rd party CI systems

2015-09-15 Thread John Griffith
Public bug reported:

After this change: 
https://github.com/openstack/keystone/commit/db6c7d9779378a3a6a6c52c47fa0a303c9038508
 systems that run clean devstack installs are now failing during stack.sh for:
2015-09-16 02:30:22.901 | Ignoring dnspython3: markers "python_version=='3.4'" 
don't match your environment
2015-09-16 02:30:23.035 | Obtaining file:///opt/stack/keystone
2015-09-16 02:30:23.464 | Complete output from command python setup.py 
egg_info:
2015-09-16 02:30:23.464 | error in setup command: Invalid environment 
marker: (python_version=='2.7' # MPL)
2015-09-16 02:30:23.464 | 
2015-09-16 02:30:23.464 | 
2015-09-16 02:30:23.465 | Command "python setup.py egg_info" failed with error 
code 1 in /opt/stack/keystone

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496222

Title:
  Requirements update breaks keystone install on 3'rd party CI systems

Status in Keystone:
  New

Bug description:
  After this change: 
https://github.com/openstack/keystone/commit/db6c7d9779378a3a6a6c52c47fa0a303c9038508
 systems that run clean devstack installs are now failing during stack.sh for:
  2015-09-16 02:30:22.901 | Ignoring dnspython3: markers 
"python_version=='3.4'" don't match your environment
  2015-09-16 02:30:23.035 | Obtaining file:///opt/stack/keystone
  2015-09-16 02:30:23.464 | Complete output from command python setup.py 
egg_info:
  2015-09-16 02:30:23.464 | error in setup command: Invalid environment 
marker: (python_version=='2.7' # MPL)
  2015-09-16 02:30:23.464 | 
  2015-09-16 02:30:23.464 | 
  2015-09-16 02:30:23.465 | Command "python setup.py egg_info" failed with 
error code 1 in /opt/stack/keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496235] [NEW] Boot from volume fails with availability_zone option when Cinder does not have an availability_zone

2015-09-15 Thread Ikuo Kumagai
Public bug reported:

In my environment, Nova has 2 availability zones and Cinder has no availability zone.
When I run the two commands "cinder create" and "nova boot" separately, they
finish normally.
But when run at one time, an error occurs as below. (It is the same on the
dashboard.)


$ nova boot --flavor m1.small --block-device 
source=image,id=4014a3f7-507b-4692-86c8-8224bbcc7102,dest=volume,size=10,shutdown=delete,bootindex=0
 --nic net-id=4d9e9847-80b5-46ca-8439-344930a59825 --availability_zone az2 test1

$ nova list
+--------------------------------------+-------+--------+----------------------+-------------+----------+
| ID                                   | Name  | Status | Task State           | Power State | Networks |
+--------------------------------------+-------+--------+----------------------+-------------+----------+
| a0dc0f03-155c-422b-b0ac-996fc17e0989 | test1 | ERROR  | block_device_mapping | NOSTATE     |          |
+--------------------------------------+-------+--------+----------------------+-------------+----------+

$ nova show
+--------------------------------------+----------------------+
| Property                             | Value                |
+--------------------------------------+----------------------+
| OS-DCF:diskConfig                    | MANUAL               |
| OS-EXT-AZ:availability_zone          | az1                  |
| OS-EXT-SRV-ATTR:host                 | compute011-az1       |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute011-az1.maas  |
| OS-EXT-SRV-ATTR:instance_name        | instance-0514        |
| OS-EXT-STS:power_state               | 0                    |
| OS-EXT-STS:task_state                | block_device_mapping |
| OS-EXT-STS:vm_state                  | error                |
| OS-SRV-USG:launched_at               | -                    |
| OS-SRV-USG:terminated_at             | -                    |
| accessIPv4                           |                      |
(remaining output truncated in the original message)

[Yahoo-eng-team] [Bug 1496239] [NEW] neutron-fwaas check_migration fails

2015-09-15 Thread Akihiro Motoki
Public bug reported:

neutron-fwaas check_migration fails with the following error:

ubuntu@dev16:.../migration/alembic_migrations/versions (master)$ 
neutron-db-manage --subproject neutron-fwaas check_migration
  Running branches for neutron-fwaas ...
kilo (branchpoint)
 -> c40fbb377ad (expand)
 -> 67c8e8d61d5 (contract) (head)

  OK
  FAILED: HEADS file does not match migration timeline heads, expected: 
4b47ea298795, 67c8e8d61d5

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496239

Title:
  neutron-fwaas check_migration fails

Status in neutron:
  New

Bug description:
  neutron-fwaas check_migration fails with the following error:

  ubuntu@dev16:.../migration/alembic_migrations/versions (master)$ 
neutron-db-manage --subproject neutron-fwaas check_migration
Running branches for neutron-fwaas ...
  kilo (branchpoint)
   -> c40fbb377ad (expand)
   -> 67c8e8d61d5 (contract) (head)

OK
FAILED: HEADS file does not match migration timeline heads, expected: 
4b47ea298795, 67c8e8d61d5
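
  If the failure is just a stale HEADS file (as the "expected:" list suggests), one
  possible fix sketch is to rewrite the file with the heads reported above and
  re-run the check; verify against the actual migration tree before committing:

  $ cd .../migration/alembic_migrations/versions   # the directory shown in the prompt
  $ printf '4b47ea298795\n67c8e8d61d5\n' > HEADS
  $ neutron-db-manage --subproject neutron-fwaas check_migration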

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496244] [NEW] rule change via GUI/CLI puts FW in ERROR mode

2015-09-15 Thread Alex Syafeyev
Public bug reported:

We have FW rules attached to a policy which is assigned to a FW.
After editing a rule, the FW goes into an error state.

http://pastebin.com/eF5fCnEe

Reproducible 100%

LOGS:
http://pastebin.com/cHjMX2Q3

Kilo- openstack-neutron-fwaas-2015.1.1-1.el7ost

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- We have FW rules attached to policy which is assigned to a FW. 
- After editing the rule the FW goes into error state 
+ We have FW rules attached to policy which is assigned to a FW.
+ After editing the rule the FW goes into error state
  
  http://pastebin.com/eF5fCnEe
  
- 
  Repoducible 100%
  
- LOGS: 
+ LOGS:
  http://pastebin.com/cHjMX2Q3
  
- Kilo- https://bugzilla.redhat.com/show_bug.cgi?id=1258606
+ Kilo- openstack-neutron-fwaas-2015.1.1-1.el7ost

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496244

Title:
  rule change via GUI/CLI puts FW in ERROR mode

Status in neutron:
  New

Bug description:
  We have FW rules attached to a policy which is assigned to a FW.
  After editing a rule, the FW goes into an error state.

  http://pastebin.com/eF5fCnEe

  Reproducible 100%

  LOGS:
  http://pastebin.com/cHjMX2Q3

  Kilo- openstack-neutron-fwaas-2015.1.1-1.el7ost

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494330] Re: environment markers with comments error in pbr

2015-09-15 Thread Steve Martinelli
*** This bug is a duplicate of bug 1496222 ***
https://bugs.launchpad.net/bugs/1496222

** This bug is no longer a duplicate of bug 1487835
   Misparse of some comments in requirements
** This bug has been marked a duplicate of bug 1496222
   Requirements update breaks keystone install on 3'rd party CI systems

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1494330

Title:
  environment markers with comments error in pbr

Status in Keystone:
  Confirmed

Bug description:
  https://review.openstack.org/#/c/222000/ in keystone is a new
  requirements update that came after
  https://review.openstack.org/203336 requirements update.

  the keystone change is failing, when I run it locally the output
  includes:

Running setup.py install for keystone   


  Complete output from command /opt/stack/keystone/.tox/py27/bin/python2.7 
-c "import setuptools, 
tokenize;__file__='/tmp/pip-UCUFAy-build/setup.py';exec(compile(getattr(tokenize,
 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
install --record /tmp/pip-ZKQdft-record/install-record.txt 
--single-version-externally-managed --compile --install-headers 
/opt/stack/keystone/.tox/py27/include/site/python2.7/keystone:   
  error in setup command: Invalid environment marker: 
(python_version=='2.7' # MPL)   

  
  So it looks like something isn't handling comments in setup.cfg lines, or 
comments can't be put in setup.cfg lines.

  A couple of options:

  1) Remove the comment from the global requirements file.
  2) Have the requirements update tool strip comments when updating setup.cfg.
  3) Maybe it's pbr that needs to handle it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1494330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469073] Re: disk-bus=scsi in block-device-mapping is invalid

2015-09-15 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469073

Title:
  disk-bus=scsi in block-device-mapping is invalid

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Using the following body to create an instance, the instance becomes active,
  but the instance OS can't be loaded:
  {
    "server": {
      "name": "BFVS",
      "imageRef": "91959f5d-16f9-4dd0-823e-81fa11d2add3",
      "block_device_mapping_v2": [
        {
          "device_type": "disk",
          "uuid": "91959f5d-16f9-4dd0-823e-81fa11d2add3",
          "source_type": "image",
          "destination_type": "volume",
          "boot_index": "0",
          "delete_on_termination": false,
          "volume_size": "4",
          "disk_bus": "scsi"
        }
      ],
      "flavorRef": "2",
      "max_count": 1,
      "min_count": 1,
      "networks": [
        {
          "uuid": "984e18a5-3225-49b4-861a-779ad75d6e0a"
        }
      ],
      "user_data": "IyEvYmluL3NoCnBhc3N3ZCB1YnVudHU8PEVPRgp1YnVudHUKdWJ1bnR1CkVPRgpzZWQgLWkgJ3MvUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBuby9QYXNzd29yZEF1dGhlbnRpY2F0aW9uIHllcy9nJyAvZXRjL3NzaC9zc2hkX2NvbmZpZwpzZXJ2aWNlIHNzaCByZXN0YXJ0"
    }
  }

  The libvirt.xml generated (the XML markup was stripped from the archived
  message; only this UUID fragment remains): ed94715e-3e46-4463-9669-4f80393c48ef

  The nova.conf is:
  [DEFAULT]
  vif_plugging_timeout = 300
  vif_plugging_is_fatal = True
  linuxnet_interface_driver =
  security_group_api = neutron
  network_api_class = nova.network.neutronv2.api.API
  firewall_driver = nova.virt.firewall.NoopFirewallDriver
  compute_driver = libvirt.LibvirtDriver
  default_ephemeral_format = ext4
  metadata_workers = 12
  ec2_workers = 12
  osapi_compute_workers = 12
  rpc_backend = rabbit
  keystone_ec2_url = http://103.30.0.58:5000/v2.0/ec2tokens
  ec2_dmz_host = 103.30.0.58
  vncserver_proxyclient_address = 127.0.0.1
  vncserver_listen = 127.0.0.1
  vnc_enabled = true
  xvpvncproxy_base_url = http://103.30.0.58:6081/console
  novncproxy_base_url = http://103.30.0.58:6080/vnc_auto.html
  force_config_drive = True
  use_syslog = True
  send_arp_for_ha = True
  multi_host = True
  instances_path = /opt/stack/data/nova/instances
  state_path = /opt/stack/data/nova
  enabled_apis = ec2,osapi_compute,metadata
  instance_name_template = instance-%08x
  my_ip = 103.30.0.58
  s3_port = 
  s3_host = 103.30.0.58
  default_floating_pool = public
  force_dhcp_release = True
  dhcpbridge_flagfile = /etc/nova/nova.conf
  scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
  rootwrap_config = /etc/nova/rootwrap.conf
  api_paste_config = /etc/nova/api-paste.ini
  allow_migrate_to_same_host = True
  allow_resize_to_same_host = True
  debug = True
  verbose = True

  [database]
  connection = mysql://root:xxx@103.30.0.58/nova?charset=utf8

  [osapi_v3]
  enabled = True

  [keystone_authtoken]
  signing_dir = /var/cache/nova
  cafile = /opt/stack/data/ca-bundle.pem
  auth_uri = http://103.30.0.58:5000
  project_domain_id = default
  project_name = service
  user_domain_id = default
  password = xxx
  username = nova
  auth_url = http://103.30.0.58:35357
  auth_plugin = password

  [oslo_concurrency]
  lock_path = /opt/stack/data/nova

  [spice]
  enabled = false
  html5proxy_base_url = http://103.30.0.58:6082/spice_auto.html

  [oslo_messaging_rabbit]
  rabbit_userid = stackrabbit
  rabbit_password = xxx
  rabbit_hosts = 103.30.0.58

  [glance]
  api_servers = http://103.30.0.58:9292

  [cinder]
  os_region_name = RegionOne

  [libvirt]
  vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  inject_partition = -2
  live_migration_uri = qemu+ssh://stack@%s/system
  use_usb_tablet = False
  cpu_mode = none
  virt_type = kvm
  images_volume_group = novavg
  images_type = lvm

  [neutron]
  service_metadata_proxy = True
  url = http://103.30.0.58:9696
  region_name = RegionOne
  admin_tenant_name = service
  auth_strategy = keystone
  admin_auth_url = http://103.30.0.58:35357/v2.0
  admin_password = xxx
  admin_username = neutron

  [keymgr]
  fixed_key = ded30f541b44b53c3ed8304ba20bb9d710b123de7313398b02993e72c2fadc69
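
  For what it's worth, a hedged suggestion rather than a confirmed fix: pairing a
  SCSI disk bus with an explicit SCSI controller model is normally done through
  image properties, e.g.:

  glance image-update 91959f5d-16f9-4dd0-823e-81fa11d2add3 \
      --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi

  Whether a missing controller model is what leaves this guest unbootable was not
  confirmed in this report.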

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496164] [NEW] Building on Horizon tutorial startdash command broken

2015-09-15 Thread Cindy Lu
Public bug reported:

http://docs.openstack.org/developer/horizon/topics/tutorial.html

Run the following commands:



mkdir openstack_dashboard/dashboards/mydashboard

./run_tests.sh -m startdash mydashboard \
  --target openstack_dashboard/dashboards/mydashboard



startdash command is broken.  Gives "KeyError: extensions" on this line:

https://github.com/openstack/horizon/blob/master/horizon/management/commands/startdash.py#L50

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496164

Title:
  Building on Horizon tutorial startdash command  broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  http://docs.openstack.org/developer/horizon/topics/tutorial.html

  Run the following commands:

  

  mkdir openstack_dashboard/dashboards/mydashboard

  ./run_tests.sh -m startdash mydashboard \
--target openstack_dashboard/dashboards/mydashboard

  

  startdash command is broken.  Gives "KeyError: extensions" on this
  line:

  
https://github.com/openstack/horizon/blob/master/horizon/management/commands/startdash.py#L50
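
  For context, a generic sketch of the kind of defensive lookup that avoids a
  KeyError when an expected key is missing from the command's options dict (the
  names here are hypothetical; the actual Horizon fix may differ):

  # Fall back to an empty list when 'extensions' was not passed through.
  extensions = options.get('extensions') or []
  options['extensions'] = [ext.lstrip('.') for ext in extensions]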

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496197] [NEW] notify_decorator bad getattr default value

2015-09-15 Thread Andrei V. Ostapenko
Public bug reported:

branch: master
In nova/notifications.py:

91    method = getattr(notifier, CONF.default_notification_level.lower(),
92                     'info')
93    method(ctxt, name, body)

getattr tries to get the method from the notifier by the name given in the
config. If that lookup fails, the string 'info' is returned and then called,
which fails because a string is not callable.

** Affects: nova
 Importance: Undecided
 Assignee: Andrei V. Ostapenko (aostapenko)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496197

Title:
  notify_decorator bad getattr default value

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  branch: master
  In nova/notifications.py:

  91    method = getattr(notifier, CONF.default_notification_level.lower(),
  92                     'info')
  93    method(ctxt, name, body)

  getattr tries to get the method from the notifier by the name given in the
  config. If that lookup fails, the string 'info' is returned and then called,
  which fails because a string is not callable.
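
  A minimal sketch of the intended fallback, assuming the default should be a
  bound method such as notifier.info rather than the string 'info':

  method = getattr(notifier, CONF.default_notification_level.lower(),
                   notifier.info)
  method(ctxt, name, body)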

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447164] Re: require_admin_context() does not account for policy.json rulesets

2015-09-15 Thread Diana Clarke
Thanks for the additional context, Alex. I'll close this bug (mark it as
invalid).

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: Diana Clarke (diana-clarke) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447164

Title:
  require_admin_context() does not account for policy.json rulesets

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The API RBAC is done using a policy.json file which allows fine-grained 
control over each API endpoint by setting specific rules.
  Consequently, some defaulted admin-only endpoints can be opened by modifying 
their corresponding policy rules to be for anyone.

  Unfortunately, in many places (in the DB and at the API level
  following the blueprint api-policy-v3 ), there is a call to
  context.require_admin_context() which is just checking if the user is
  admin or no but doesn't match with the policy rules.

  As we all agreed with api-policy-v3 that RBAC should be done at the
  API level, there is no reason to keep that call to
  context.require_admin_context() and we should assume that policy.json
  is the single source of truth for knowing the access rights.
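
  A small sketch of the contrast being described (module paths follow nova's
  layout; the policy rule name is only an example, and this is not the actual
  patch):

  from nova import context as nova_context
  from nova import policy

  def host_stats_admin_only(ctxt):
      # Hard-coded check: succeeds only for admin, regardless of policy.json.
      nova_context.require_admin_context(ctxt)

  def host_stats_policy_based(ctxt):
      # Policy-driven check: honours whatever rule the operator configured.
      policy.enforce(ctxt, 'os_compute_api:os-hypervisors', target={})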

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496201] [NEW] DVR: router namespace can't be deleted if bulk delete VMs

2015-09-15 Thread shihanzhang
Public bug reported:

With a DVR router, if we bulk delete VMs from a compute node, the router
namespace will remain (this does not always happen, but it does most of the time).
Reproduce steps:
1. create a DVR router, add a subnet to this router
2. create two VMs on one compute node; note that these are the only two VMs
on this compute node
3. bulk delete these two VMs through the Nova API

The router namespace will remain most of the time.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496201

Title:
  DVR: router namespace can't be deleted if bulk delete VMs

Status in neutron:
  New

Bug description:
  With a DVR router, if we bulk delete VMs from a compute node, the router
  namespace will remain (this does not always happen, but it does most of the time).
  Reproduce steps:
  1. create a DVR router, add a subnet to this router
  2. create two VMs on one compute node; note that these are the only two VMs
  on this compute node
  3. bulk delete these two VMs through the Nova API

  The router namespace will remain most of the time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492759] Re: heat-engine refers to a non-existent novaclient's method

2015-09-15 Thread Steve Baker
This looks like a node specific dependency issue

** Changed in: heat
Milestone: liberty-rc1 => None

** Changed in: heat
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492759

Title:
  heat-engine refers to a non-existent novaclient's method

Status in heat:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Openstack Kilo on Centos 7

  I cannot create a stack. heat-engine fails regardless of which template is used.
   
  Error message: ERROR: Property error: : resources.pgpool.properties.flavor: : 
'OpenStackComputeShell' object has no attribute '_discover_extensions

  heat-engine log:
  
  2015-09-06 15:34:08.242 19788 DEBUG oslo_messaging._drivers.amqp [-] unpacked 
context: {u'username': None, u'user_id': u'665b2e5b102a413c90433933aade392b', 
u'region_name': None, u'roles': [u'user', u'heat_stack_owner'], 
u'user_identity': u'- daddy', u'tenant_id': 
u'b408e8f5cb56432a96767c83583ea051', u'auth_token': u'***', u'auth_token_info': 
{u'token': {u'methods': [u'password'], u'roles': [{u'id': 
u'0698f895b3544a20ac511c6e287691d4', u'name': u'user'}, {u'id': 
u'2061bd7e4e9d4da4a3dc2afff69a823e', u'name': u'heat_stack_owner'}], 
u'expires_at': u'2015-09-06T14:34:08.136737Z', u'project': {u'domain': {u'id': 
u'default', u'name': u'Default'}, u'id': u'b408e8f5cb56432a96767c83583ea051', 
u'name': u'daddy'}, u'catalog': [{u'endpoints': [{u'url': 
u'http://172.17.1.1:9292', u'interface': u'admin', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'5dce804bafb34b159ec1b4385460a481'}, 
{u'url': u'http://172.17.1.1:9292', u'interface': u'public', u'region': 
u'CEURegion', u'region_id
 ': u'CEURegion', u'id': u'a5728528ead84649bd561f9841011ff4'}, {u'url': 
u'http://172.17.1.1:9292', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'e205b5ba78e0479fb391d90f4958a8a0'}], 
u'type': u'image', u'id': u'0a0dd8432bd64f88b2c1ffd3d5d23b78', u'name': 
u'glance'}, {u'endpoints': [{u'url': u'http://172.17.1.1:9696', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'15831ae42aa143cb94f0d3adc1b353fb'}, {u'url': u'http://172.17.1.1:9696', 
u'interface': u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'74bf11a2b9334256bf9abdc618556e2b'}, {u'url': 
u'http://172.17.1.1:9696', u'interface': u'internal', u'region': u'CEURegion', 
u'region_id': u'CEURegion', u'id': u'd326b2c9fa614cad8586c79ab76a66a0'}], 
u'type': u'network', u'id': u'0e75266a6c284a289edb11b1c627c53f', u'name': 
u'neutron'}, {u'endpoints': [{u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'int
 ernal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'083e629299bb429ba6ad1bf03451e8db'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'3942023115194893bb6762d02e47524a'}, {u'url': 
u'http://172.17.1.1:8774/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'b6f4f8a8bc33444b862cd3d9360c67e2'}], u'type': u'compute', u'id': 
u'2a259406aeef4667873d06ef361a1c44', u'name': u'nova'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'919bab67f54b4973807dcefb37fc22aa'}, {u'url': 
u'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'internal', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'ce0963a3cfba44deb818f7d0551d8bdf'}, {u'url': u
 'http://172.17.1.1:8776/v2/b408e8f5cb56432a96767c83583ea051', u'interface': 
u'public', u'region': u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'e98842d6a18840f7a1d0595957eaa4d6'}], u'type': u'volume', u'id': 
u'5e3afcf192bb4ad8ad9bfd589b0641b9', u'name': u'cinder'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:8000/v1', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'4385c791314e4f8a926411b9f4707513'}, {u'url': u'http://172.17.1.1:8000/v1', 
u'interface': u'admin', u'region': u'CEURegion', u'region_id': u'CEURegion', 
u'id': u'a1ed10e71e3d4c81b4f3e175f4c29e3f'}, {u'url': 
u'http://172.17.1.1:8000/v1', u'interface': u'internal', u'region': 
u'CEURegion', u'region_id': u'CEURegion', u'id': 
u'd6d2e7dc54fc4abbb99d93f95d795340'}], u'type': u'cloudformation', u'id': 
u'7a80a5d594414d6fb07f5332bca1d0e1', u'name': u'heat-cfn'}, {u'endpoints': 
[{u'url': u'http://172.17.1.1:5000/v2.0', u'interface': u'public', u'region': 
u'CEURegion', u'region_id': u'CEUR
 egion', u'id': u'0fef9f451d9b42bcaeea6addda1c3870'}, {u'url': 
u'http://172.17.1.1:35357/v2.0', u'interface': u'admin', 

[Yahoo-eng-team] [Bug 1496219] [NEW] get image error when booting an instance

2015-09-15 Thread Li Min Liu
Public bug reported:

2015-09-16 11:26:10.018 DEBUG nova.quota 
[req-4d4a1b1e-3ea3-41bd-b0d4-8b3adf2a1fd0 admin admin] Getting all quota usages 
for project: 457fcc6f0fb049a89bef6271495788c6 from (pid=5839) 
get_project_quotas /opt/stack/nova/nova/quota.py:290
2015-09-16 11:26:10.030 INFO nova.osapi_compute.wsgi.server 
[req-4d4a1b1e-3ea3-41bd-b0d4-8b3adf2a1fd0 admin admin] 10.0.10.50 "GET 
/v2.1/457fcc6f0fb049a89bef6271495788c6/limits?reserved=1 HTTP/1.1" status: 200 
len: 779 time: 0.0331810
^C
[stack@devstack logs]$ less -R n-api.log.2015-09-15-171330
2015-09-16 11:26:09.816 ERROR nova.api.openstack.extensions 
[req-27b137d8-77da-4082-8553-05598efff073 admin admin] Unexpected exception in 
API method
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 597, in create
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions **create_kwargs)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/hooks.py", line 149, in inner
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions rv = f(*args, 
**kwargs)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1557, in create
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1139, in _create_instance
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions image_id, 
boot_meta = self._get_image(context, image_href)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 849, in _get_image
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions image = 
self.image_api.get(context, image_href)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/api.py", line 93, in get
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
show_deleted=show_deleted)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 320, in show
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 497, in _translate_from_glance
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions 
include_locations=include_locations)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/image/glance.py", line 559, in _extract_attributes
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions queued = 
getattr(image, 'status') == 'queued'
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 490, in __getattr__
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions self.get()
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 508, in get
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions new = 
self.manager.get(self.id)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 493, in __getattr__
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions raise 
AttributeError(k)
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions AttributeError: id
2015-09-16 11:26:09.816 TRACE nova.api.openstack.extensions
2015-09-16 11:26:09.817 INFO nova.api.openstack.wsgi 
[req-27b137d8-77da-4082-8553-05598efff073 admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496219

Title:
  get image error when booting an instance

Status in OpenStack Compute (nova):
  New

Bug description:
  2015-09-16 11:26:10.018 DEBUG nova.quota 

[Yahoo-eng-team] [Bug 1496220] [NEW] error in setup command: Invalid environment marker: (python_version=='2.7' # MPL)

2015-09-15 Thread Sam Wan
Public bug reported:

https://review.openstack.org/#/c/222000/ introduced a change in 
keystone/setup.cfg:
---
@@ -24,7 +24,7 @@ packages =
 [extras]
 ldap =
   python-ldap>=2.4:python_version=='2.7'
-  ldappool>=1.0 # MPL
+  ldappool>=1.0:python_version=='2.7' # MPL
 memcache =
   python-memcached>=1.56
 mongodb =
-

and CI failed with below error:
--
" error in setup command: Invalid environment marker: (python_version=='2.7' # 
MPL)"


Seems there's something wrong with comment handling.

This is similar to https://bugs.launchpad.net/pbr/+bug/1487835 but it's
not the same.

we should remove the '# ... ' comments

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496220

Title:
  error in setup command: Invalid environment marker:
  (python_version=='2.7' # MPL)

Status in Keystone:
  New

Bug description:
  https://review.openstack.org/#/c/222000/ introduced a change in 
keystone/setup.cfg:
  ---
  @@ -24,7 +24,7 @@ packages =
   [extras]
   ldap =
 python-ldap>=2.4:python_version=='2.7'
  -  ldappool>=1.0 # MPL
  +  ldappool>=1.0:python_version=='2.7' # MPL
   memcache =
 python-memcached>=1.56
   mongodb =
  -

  and CI failed with below error:
  --
  " error in setup command: Invalid environment marker: (python_version=='2.7' 
# MPL)"
  

  Seems there's something wrong with comment handling.

  This is similar to https://bugs.launchpad.net/pbr/+bug/1487835 but
  it's not the same.

  we should remove the '# ... ' comments
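
  A sketch of the extras section with the inline comment dropped (assuming the
  python_version marker itself should stay):

  [extras]
  ldap =
    python-ldap>=2.4:python_version=='2.7'
    ldappool>=1.0:python_version=='2.7'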

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490985] Re: Inconsistent between subnet-create & subnet-update command

2015-09-15 Thread Jakub Libosvar
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490985

Title:
  Inconsistent between subnet-create & subnet-update command

Status in python-neutronclient:
  New

Bug description:
  Description of problem:
  When creating a subnet with an allocation pool, we use a flag called
  "--allocation-pool"

  the full command : 
  $ neutron subnet-create external_network 10.35.166.0/24 --disable-dhcp 
--gateway 10.35.166.254  --allocation-pool start=10.35.166.1,end=10.35.166.100

  When we want to update the subnet we need to use these flags:
  --allocation-pools type=dict list=true start=10.35.166.1,end=10.35.166.55

  the full command is : 
  [root@cougar16 ~(keystone_admin)]# neutron  subnet-update 
c609b9ce-18d7-4b44-b8e0-5354a7b92caa --allocation-pools type=dict list=true 
start=10.35.166.1,end=10.35.166.55
  Updated subnet: c609b9ce-18d7-4b44-b8e0-5354a7b92caa


  Version-Release number of selected component (if applicable):

  [root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron 
  openstack-neutron-2014.1.5-2.el7ost.noarch
  python-neutronclient-2.3.4-3.el7ost.noarch
  python-neutron-2014.1.5-2.el7ost.noarch
  openstack-neutron-ml2-2014.1.5-2.el7ost.noarch
  openstack-neutron-openvswitch-2014.1.5-2.el7ost.noarch
  How reproducible:

  
  Steps to Reproduce:
  1.Install OSP 5 on  RHEL 7.1 
  2.run the command : neutron net-create external_network 
--provider:network_type=vlan  --provider:segmentation_id=181 
--provider:physical_network physnet --router:external

  3.run the command : neutron subnet-create external_network
  10.35.166.0/24 --disable-dhcp --gateway 10.35.166.254  --allocation-
  pool start=10.35.166.1,end=10.35.166.100

  4. neutron subnet-update c609b9ce-18d7-4b44-b8e0-5354a7b92caa 
--allocation-pool start=10.35.166.1,end=10.35.166.90
  Unrecognized attribute(s) 'allocation_pool'
  Actual results:

  5.neutron subnet-update c609b9ce-18d7-4b44-b8e0-5354a7b92caa 
--allocation-pools start=10.35.166.1,end=10.35.166.90
  Invalid input for allocation_pools. Reason: Invalid data format for IP pool: 
'start=10.35.166.1,end=10.35.166.90'.

  
  Expected results:
  the command in step 4 should work.

  Additional info:
  the command that work is : 
  neutron  subnet-update c609b9ce-18d7-4b44-b8e0-5354a7b92caa 
--allocation-pools type=dict list=true start=10.35.166.1,end=10.35.166.55
  Updated subnet: c609b9ce-18d7-4b44-b8e0-5354a7b92caa

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1490985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495815] [NEW] Hard to translate "Displaying %s of %s items" (cannot control the order of substitutions)

2015-09-15 Thread Akihiro Motoki
Public bug reported:

horizon/locale/djangojs.pot has the following string.

#: static/framework/util/filters/filters.js:177
#, python-format
msgid "Displaying %s of %s items"
msgstr ""

In some languages, there is a need to swap the order of the two %s.
%s should be replaced by %(keyword)s (keyword substitution).

The current horizon/static/framework/util/filters/filters.js is as
follows:

176 var total = ensureNonNegative(totalInput);
177 var format = gettext('Displaying %s of %s items');
178 return interpolate(format, [count, total]);

L.177 should be:

 var format = gettext('Displaying %(count)s of %(total)s items');

and L.178 should then pass an object and enable named interpolation:

 return interpolate(format, { count: count, total: total }, true);

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495815

Title:
  Hard to translate "Displaying %s of %s items" (cannot control the
  order of substitutions)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  horizon/locale/djangojs.pot has the following string.

  #: static/framework/util/filters/filters.js:177
  #, python-format
  msgid "Displaying %s of %s items"
  msgstr ""

  In some languages, there is a need to swap the order of the two %s.
  %s should be replaced by %(keyword)s (keyword substitution).

  The current horizon/static/framework/util/filters/filters.js is as
  follows:

  176 var total = ensureNonNegative(totalInput);
  177 var format = gettext('Displaying %s of %s items');
  178 return interpolate(format, [count, total]);

  L.177 should be:

   var format = gettext('Displaying %(count)s of %(total)s items');

  and L.178 should then pass an object and enable named interpolation:

   return interpolate(format, { count: count, total: total }, true);

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495834] [NEW] [VMware] Launching an instance with large image size crashes nova-compute

2015-09-15 Thread Giridhar Jayavelu
Public bug reported:

Created an image of size ~6.5GB.
Launching a new instance from this image crashes nova-compute.

I'm observing the nova-compute node running out of memory.

This could probably be due to reading the entire file stream into memory
without using a proper chunk size.

This is from git source and not any distribution.

git log -1
commit 1cf97bd096112b8d2e0eb95fd2a636a53cbf0bcc
Merge: a54c0d6 17fe88a
Author: Jenkins 
Date:   Mon Sep 14 02:34:18 2015 +

Merge "Fix typo in lib/keystone"

nova image-show fa376c74-4058-492b-9081-f31522f640f6
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 6997009920                           |
| created              | 2015-09-14T10:53:09Z                 |
| id                   | fa376c74-4058-492b-9081-f31522f640f6 |
| minDisk              | 0                                    |
| minRam               | 0                                    |
| name                 | win2k12-01                           |
| progress             | 100                                  |
| status               | ACTIVE                               |
| updated              | 2015-09-14T10:59:33Z                 |
+----------------------+--------------------------------------+

Attached n-cpu.log

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "n-cpu log"
   https://bugs.launchpad.net/bugs/1495834/+attachment/4464738/+files/n-cpu.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495834

Title:
  [VMware] Launching an instance with large image size crashes nova-
  compute

Status in OpenStack Compute (nova):
  New

Bug description:
  Created an image of size ~6.5GB.
  Launching a new instance from this image crashes nova-compute.

  I'm observing the nova-compute node running out of memory.

  This could probably be due to reading the entire file stream into memory
  without using a proper chunk size.

  This is from git source and not any distribution.

  git log -1
  commit 1cf97bd096112b8d2e0eb95fd2a636a53cbf0bcc
  Merge: a54c0d6 17fe88a
  Author: Jenkins 
  Date:   Mon Sep 14 02:34:18 2015 +

  Merge "Fix typo in lib/keystone"

  nova image-show fa376c74-4058-492b-9081-f31522f640f6
  +----------------------+--------------------------------------+
  | Property             | Value                                |
  +----------------------+--------------------------------------+
  | OS-EXT-IMG-SIZE:size | 6997009920                           |
  | created              | 2015-09-14T10:53:09Z                 |
  | id                   | fa376c74-4058-492b-9081-f31522f640f6 |
  | minDisk              | 0                                    |
  | minRam               | 0                                    |
  | name                 | win2k12-01                           |
  | progress             | 100                                  |
  | status               | ACTIVE                               |
  | updated              | 2015-09-14T10:59:33Z                 |
  +----------------------+--------------------------------------+

  Attached n-cpu.log
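
  A minimal sketch of bounded-memory streaming, for the chunk-size hypothesis
  above (illustrative only; not the VMware driver's actual transfer code):

  CHUNK_SIZE = 64 * 1024

  def copy_in_chunks(read_handle, write_handle, chunk_size=CHUNK_SIZE):
      """Copy an image stream without holding the whole file in memory."""
      while True:
          chunk = read_handle.read(chunk_size)
          if not chunk:
              break
          write_handle.write(chunk)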

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218372] Re: [libvirt] resize fails when using NFS shared storage

2015-09-15 Thread Matt Riedemann
https://review.openstack.org/#/c/28424/ landed in Havana.  Is this still
valid?  I know there were some fixes for Ceph shared storage and resize
made in Kilo which we also backported to stable/juno.  I'm not sure if
those would also resolve issues for NFS, but I'd think they are related,
so marking this invalid at this point.  Please re-open if this is still
an issue.

** Changed in: nova
   Status: Confirmed => Invalid

** Tags added: nfs resize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218372

Title:
  [libvirt] resize fails when using NFS shared storage

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  With two hosts installed using devstack with a multi-node
  configuration and the directory /opt/stack/data/nova/instances/ shared
  using NFS.

  When performing a resize I get the following error (Complete traceback
  in http://paste.openstack.org/show/45368/):

  "qemu-img: Could not open
  '/opt/stack/data/nova/instances/7dbeb7f2-39e2-4f1d-8228-0b7a84d27745/disk':
  Permission denied\n"

  This problem was introduced with patch
  https://review.openstack.org/28424 which modified the behaviour of
  migrate/resize when using shared storage. Before that, the disk was
  moved to the new host using ssh even if using shared storage (which
  could cause some data loss when an error happened) but now, if we're
  using shared storage it won't send the disk to the other host but only
  assume that it will be accessible from there. In the end both are
  using the same storage, why should this be a problem?

  After doing some research on how NFS handles its shares on the client
  side, I realized that NFS client keeps a file cache with the file name
  and the inodes which, if no process asks for it before, will be
  refreshed on intervals of from 3 to 60 seconds (See nfs options
  ac[dir|reg][min|max] in nfs' manpage). So, if a process tries to
  access a file which has been renamed on the remote server it will be
  accessing the old version because the name is still pointing to the
  old inode (cache won't be updated when accessing a file but only when
  asking for the file attributes, e.g. ls -lh)

  In the resize case, the origin compute node renamed the instance
  directory to "$INSTANCE_DIR/_resize" (owned by root
  after qemu stops) and created the new instance disk from it under the
  new directory "$INSTANCE_DIR/".

  From the destination host, even though we were trying to access the
  new disk file from "$INSTANCE_DIR//disk" we were still
  holding the old inode for that path which pointed to
  "$INSTANCE_DIR/_resize/disk" (owned by root,
  inaccessible, the wrong image, etc, etc).

  If the NFS share is mounted with the option "noac" which (from
  manpage) "forces application writes to become synchronous so that
  local changes to a file become visible on the server immediately".
  This prevents the files to be out of sync, but it comes with the
  drawback of issuing a network call for every file operation which may
  cause performance issues.
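
  For reference, the mount options referred to above (server and export paths are
  placeholders):

  mount -t nfs -o noac nfs-server:/export/nova /opt/stack/data/nova/instances
  # or, instead of disabling the attribute cache entirely, shorten its lifetime:
  mount -t nfs -o acregmin=1,acregmax=3,acdirmin=1,acdirmax=3 \
      nfs-server:/export/nova /opt/stack/data/nova/instances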

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496063] [NEW] Add an integration test for switching between 2 projects of one user

2015-09-15 Thread Timur Sufiev
Public bug reported:

The bug 1450963 would have been caught earlier, if we had an integration
test which switched 2 projects for one user and checked for absence of
error messages.

** Affects: horizon
 Importance: Wishlist
 Status: New


** Tags: integration-tests

** Tags added: integration-tests

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496063

Title:
  Add an integration test for switching between 2 projects of one user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The bug 1450963 would have been caught earlier, if we had an
  integration test which switched 2 projects for one user and checked
  for absence of error messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463372] Re: nova secgroup-list-rules shows empty table

2015-09-15 Thread melanie witt
** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463372

Title:
   nova secgroup-list-rules shows empty table

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We see no secgroup rules with the nova command.
  We should see the existing rules even with the nova command, especially since
  we see the rules in the GUI via the Compute tab.

  1. see security groups with

  neutron security-group-rule-list

  2. see security groups with nova command

  nova secgroup-list-rules GROUPID

  nova secgroup-list-rules 54db0a3c-fc5d-4faf-8b1a
  +-------------+-----------+---------+----------+--------------+
  | IP Protocol | From Port | To Port | IP Range | Source Group |
  +-------------+-----------+---------+----------+--------------+
  |             |           |         |          | default      |
  |             |           |         |          | default      |
  +-------------+-----------+---------+----------+--------------+

  neutron security-group-rule-list
  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
  | id                                   | security_group | direction | ethertype | protocol/port | remote          |
  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
  | 0e1cdfae-38d6-4d58-b624-011c2c05e165 | default        | ingress   | IPv6      | any           | default (group) |
  | 13c64385-ac4c-4321-bd3f-ec3e0ca939e1 | default        | ingress   | IPv4      | any           | default (group) |
  | 261ae2ec-686c-4e53-9578-1f55d92e280d | default        | egress    | IPv4      | any           | any             |
  | 41071f04-db2c-4e36-b5f0-8da2331e0382 | sec_group      | egress    | IPv4      | icmp          | any             |
  | 45639c5d-cf4d-4231-a462-b180b9e52eaf | default        | egress    | IPv6      | any           | any             |
  | 5bab336e-410f-4323-865a-eeafee3fc3eb | sec_group      | ingress   | IPv4      | icmp          | any             |
  | 5e0cb33f-0a3c-41f8-8562-a549163d655e | sec_group      | egress    | IPv6      | any           | any             |
  | 67409c83-3b62-4ba5-9e0d-93b23a81722a | default        | egress    | IPv4      | any           | any             |
  | 82676e25-f37c-4c57-9f7e-ffbe481501b5 | sec_group      | egress    | IPv4      | any           | any             |
  | 89c232f4-ec90-46ba-989f-87d7348a9ea9 | default        | ingress   | IPv4      | any           | default (group) |
  | ad50904e-3cd4-43e2-9ab4-c7cb5277cc4d | sec_group      | egress    | IPv4      | 1-65535/tcp   | any             |
  | c3386b79-06a8-4609-8db7-2924e092e5e9 | default        | egress    | IPv6      | any           | any             |
  | c37fe4d0-01b4-40f9-a069-15c8f3edffe4 | default        | egress    | IPv6      | any           | any             |
  | c51371f1-d3ae-4223-a044-f7b9b2eeb8a1 | sec_group      | ingress   | IPv4      | 1-65535/udp   | any             |
  | d3d6c1b3-bde5-45ce-a950-5bfd0fc7fc5c | default        | ingress   | IPv6      | any           | default (group) |
  | d4888c02-0b56-412e-bf02-dfd27ce84580 | sec_group      | egress    | IPv4      | 1-65535/udp   | any             |
  | d7e0aee8-eee4-4ca1-b67e-ec4864a71492 | default        | ingress   | IPv4      | any           | default (group) |
  | df6504e5-0adb-411a-9313-4bad7074c42e | default        | ingress   | IPv6      | any           | default (group) |
  | e0ef6e04-575b-43ed-8179-c221d1e4f962 | default        | egress    | IPv4      | any           | any             |
  | e828f2ef-518f-4c67-a328-6dafc16431b9 | sec_group      | ingress   | IPv4      | 1-65535/tcp   | any             |
  +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+

  Kilo+rhel7.1
  python-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-1.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-1.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-1.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-1.el7ost.noarch
  python-neutron-lbaas-2015.1.0-3.el7ost.noarch
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch

  openstack-nova-common-2015.1.0-4.el7ost.noarch
  openstack-nova-cert-2015.1.0-4.el7ost.noarch
  openstack-nova-compute-2015.1.0-4.el7ost.noarch
  openstack-nova-console-2015.1.0-4.el7ost.noarch
  python-nova-2015.1.0-4.el7ost.noarch
  openstack-nova-scheduler-2015.1.0-4.el7ost.noarch
  python-novaclient-2.23.0-1.el7ost.noarch
  openstack-nova-api-2015.1.0-4.el7ost.noarch
  openstack-nova-novncproxy-2015.1.0-4.el7ost.noarch
  openstack-nova-conductor-2015.1.0-4.el7ost.noarch

To manage 

[Yahoo-eng-team] [Bug 1494988] Re: Context selector is broken

2015-09-15 Thread Lin Hua Cheng
Timur, Justin: thanks for validating the bug

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1494988

Title:
  Context selector is broken

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The project picker seems broken in master.

  This can be reproduced if Horizon is set up with Keystone v3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1494988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496080] [NEW] Text in Usage.CSV not translated

2015-09-15 Thread Tony Dunbar
Public bug reported:

I prepared my environment using the pseudo translation tool to use
German.

When I navigate to the Admin->System->Overview and click on the
"Download CSV Summary" button, the generated csv file contains strings
which are not translated into the locale I'm using.

Doug Fish looked at the issue and found some resources aren't being
added to the django pot file, likely because they are coming from csv
versus html.
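
As a rough illustration of the kind of fix (a hedged sketch, not the actual
Horizon usage CSV code), the header strings would need to be wrapped in
Django's lazy translation helpers so they get extracted into the pot file
and resolved against the active locale at render time:

    # Hedged sketch only -- the column names below are assumptions, not the
    # real Horizon usage CSV fields.
    from django.utils.encoding import force_text
    from django.utils.translation import ugettext_lazy as _

    CSV_HEADERS = (
        _("Project Name"),
        _("VCPUs"),
        _("RAM (MB)"),
        _("Disk (GB)"),
    )

    def render_csv_header():
        # force_text() resolves the lazy translations for the active locale,
        # so the generated CSV follows the user's language setting.
        return ",".join(force_text(h) for h in CSV_HEADERS)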

** Affects: horizon
 Importance: Medium
 Assignee: Tony Dunbar (adunbar)
 Status: Confirmed


** Tags: i18n

** Attachment added: "translation.jpg"
   
https://bugs.launchpad.net/bugs/1496080/+attachment/4465378/+files/translation.jpg

** Changed in: horizon
 Assignee: (unassigned) => Tony Dunbar (adunbar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496080

Title:
  Text in Usage.CSV not translated

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  I prepared my environment using the pseudo translation tool to use
  German.

  When I navigate to the Admin->System->Overview and click on the
  "Download CSV Summary" button, the generated csv file contains strings
  which are not translated into the locale I'm using.

  Doug Fish looked at the issue and found some resources aren't being
  added to the django pot file, likely because they are coming from csv
  versus html.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496085] [NEW] network_data.json links can have None as MTU

2015-09-15 Thread Josh Gachnang
Public bug reported:

If a Neutron network does not have an MTU set, the MTU for the links in
network_data.json will be set to None. The spec seems to indicate all
links will have an MTU. Either we should drop the key from the link if
MTU isn't set, or set it to a safe default value like 1500.
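
A minimal sketch of both options, assuming a hypothetical helper that builds
each link dict (this is not the actual Nova metadata code):

    # Hedged sketch: omit the 'mtu' key when Neutron reports none, or fall
    # back to an assumed safe default of 1500.
    DEFAULT_MTU = 1500

    def build_link(link_id, mac, mtu=None, use_default=False):
        link = {
            'id': link_id,
            'ethernet_mac_address': mac,
            'type': 'phy',
        }
        if mtu is not None:
            link['mtu'] = mtu
        elif use_default:
            link['mtu'] = DEFAULT_MTU
        # otherwise the 'mtu' key is simply not emitted
        return link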

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: liberty-rc-potential

** Summary changed:

- network_data.json has no default for MTU
+ network_data.json has no default for MTU on links

** Summary changed:

- network_data.json has no default for MTU on links
+ network_data.json links can have None as MTU

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496085

Title:
  network_data.json links can have None as MTU

Status in OpenStack Compute (nova):
  New

Bug description:
  If a Neutron network does not have an MTU set, the MTU for the links
  in network_data.json will be set to None. The spec seems to indicate
  all links will have an MTU. Either we should drop the key from the
  link if MTU isn't set, or set it to a safe default value like 1500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368391] Re: sqlalchemy-migrate 0.9.2 is breaking nova unit tests

2015-09-15 Thread Ihar Hrachyshka
** Changed in: sqlalchemy-migrate
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368391

Title:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance havana series:
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in sqlalchemy-migrate:
  Fix Released

Bug description:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

  OperationalError: (OperationalError) cannot commit - no transaction is
  active u'COMMIT;' ()

  http://logs.openstack.org/39/117839/18/gate/gate-nova-
  python27/8a7aa8c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495862] [NEW] admin image table has no tenant_name when UpdateRow

2015-09-15 Thread zhu.rong
Public bug reported:

In the admin Images table, when an image row is refreshed via the UpdateRow
action, it gives the message:
The attribute tenant_name doesn't exist on 

and cannot display the tenant name.

** Affects: horizon
 Importance: Undecided
 Assignee: zhu.rong (zhu-rong)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => zhu.rong (zhu-rong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495862

Title:
  admin image table has no tenant_name when UpdateRow

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the admin Images table, when an image row is refreshed via the
  UpdateRow action, it gives the message:
  The attribute tenant_name doesn't exist on 

  and cannot display the tenant name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495874] [NEW] py34 gate failing upstream

2015-09-15 Thread Rob Cresswell
Public bug reported:

http://logs.openstack.org/04/220404/12/check/gate-horizon-
python34/468e4c4/console.html

The key part seems to be :

2015-09-15 06:29:49.717 |   File 
"/home/jenkins/workspace/gate-horizon-python34/horizon/tables/base.py", line 
1078, in __new__
2015-09-15 06:29:49.717 | columns = base.base_columns.items() + columns
2015-09-15 06:29:49.718 | TypeError: unsupported operand type(s) for +: 
'ItemsView' and 'list'

Introduced by:
https://github.com/openstack/horizon/commit/dee5c9d3f48b0a092ddf065f360518c2a6015861
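
The failure comes from dict.items() returning a view object on Python 3
rather than a list. A minimal sketch of the kind of fix (not necessarily
the patch that was merged) is an explicit conversion, which works on both
interpreters:

    # Python 2: base_columns.items() is a list, so "+" works.
    # Python 3: it is an ItemsView, which does not support "+".
    columns = list(base.base_columns.items()) + columns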

** Affects: horizon
 Importance: Critical
 Status: New

** Changed in: horizon
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495874

Title:
  py34 gate failing upstream

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  http://logs.openstack.org/04/220404/12/check/gate-horizon-
  python34/468e4c4/console.html

  The key part seems to be :

  2015-09-15 06:29:49.717 |   File 
"/home/jenkins/workspace/gate-horizon-python34/horizon/tables/base.py", line 
1078, in __new__
  2015-09-15 06:29:49.717 | columns = base.base_columns.items() + columns
  2015-09-15 06:29:49.718 | TypeError: unsupported operand type(s) for +: 
'ItemsView' and 'list'

  Introduced by:
  
https://github.com/openstack/horizon/commit/dee5c9d3f48b0a092ddf065f360518c2a6015861

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495887] [NEW] VMware driver:reconnection didn't work

2015-09-15 Thread xhzhf
Public bug reported:

Detailed Process:
1. nova-compute cannot connect to RabbitMQ and, for some reason, does not
execute any periodic task.
2. The vCenter session expires after a few hours.
3. The periodic task that collects node hardware info fails.
4. nova-compute tries to check whether the session is still active.
5. _is_current_session_active gets a fault message.
6. When executing str(msg), msg contains UTF-8 encoded text (my vCenter is
the Chinese version), so it throws UnicodeEncodeError.
7. Reconnection does not work, so we have to restart nova-compute.

Stack:
TRACE oslo.vmware.api Traceback (most recent call last):
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 94, in _func
TRACE oslo.vmware.api result = f(*args, **kwargs)
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 298, in _invoke_api
TRACE oslo.vmware.api if self._is_current_session_active():
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 354, in 
_is_current_session_active
TRACE oslo.vmware.api userName=self._session_username)
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/service.py", line 197, in 
request_handler
TRACE oslo.vmware.api excep, details)
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/exceptions.py", line 81, in 
__init__
TRACE oslo.vmware.api super(VimFaultException, self).__init__(message, 
cause)
TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/exceptions.py", line 52, in 
__init__
TRACE oslo.vmware.api self.msg = str(message)
TRACE oslo.vmware.api UnicodeEncodeError: 'ascii' codec can't encode characters 
in position 0-10: ordinal not in range(128)

Suggested Solution:
When logging in to vCenter, set the locale to 'en'; then we will not
encounter the encode/decode problem.
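
An alternative, hedged idea (illustrative only, not the oslo.vmware patch)
is to harden the exception path itself so that non-ASCII fault messages
never go through str():

    import six

    def safe_text(message):
        # Coerce a vCenter fault message to unicode without raising
        # UnicodeEncodeError/UnicodeDecodeError on non-ASCII text.
        if isinstance(message, six.text_type):
            return message
        if isinstance(message, bytes):
            return message.decode('utf-8', 'replace')
        try:
            return six.text_type(message)
        except (UnicodeDecodeError, UnicodeEncodeError):
            return repr(message)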

** Affects: nova
 Importance: Undecided
 Assignee: xhzhf (guoyongxhzhf)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => xhzhf (guoyongxhzhf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495887

Title:
  VMware driver:reconnection didn't work

Status in OpenStack Compute (nova):
  New

Bug description:
  Detailed Process:
  1. nova-compute cannot connect to RabbitMQ and, for some reason, does not
  execute any periodic task.
  2. The vCenter session expires after a few hours.
  3. The periodic task that collects node hardware info fails.
  4. nova-compute tries to check whether the session is still active.
  5. _is_current_session_active gets a fault message.
  6. When executing str(msg), msg contains UTF-8 encoded text (my vCenter is
  the Chinese version), so it throws UnicodeEncodeError.
  7. Reconnection does not work, so we have to restart nova-compute.

  Stack:
  TRACE oslo.vmware.api Traceback (most recent call last):
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 94, in _func
  TRACE oslo.vmware.api result = f(*args, **kwargs)
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 298, in _invoke_api
  TRACE oslo.vmware.api if self._is_current_session_active():
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/api.py", line 354, in 
_is_current_session_active
  TRACE oslo.vmware.api userName=self._session_username)
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/service.py", line 197, in 
request_handler
  TRACE oslo.vmware.api excep, details)
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/exceptions.py", line 81, in 
__init__
  TRACE oslo.vmware.api super(VimFaultException, self).__init__(message, 
cause)
  TRACE oslo.vmware.api   File 
"/usr/lib/python2.7/site-packages/oslo/vmware/exceptions.py", line 52, in 
__init__
  TRACE oslo.vmware.api self.msg = str(message)
  TRACE oslo.vmware.api UnicodeEncodeError: 'ascii' codec can't encode 
characters in position 0-10: ordinal not in range(128)

  Suggested Solution:
  When logging in to vCenter, set the locale to 'en'; then we will not
  encounter the encode/decode problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495884] [NEW] image's backend file was deleted while it was still being use.

2015-09-15 Thread wangxiyuan
Public bug reported:

Reproduce:

1. Create an image A and add backend 'X' to its location.

2. Create another image B and add the same backend 'X' to its location.

3. Show the two images; their status is 'active' for both.

4. Delete image A. After this step, the backend file X will be deleted as
well.

5. Show image B. Its status is still 'active'. Obviously, image B's
backend file 'X' has been deleted, so B can't be used anymore.


So IMHO, before we delete the backend file, we should check whether the
file is still in use. If it is, we should not delete it directly.
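
A hedged sketch of that check; count_images_with_location() is an assumed
helper here, not a real Glance API, so treat this as pseudocode made
runnable rather than the fix itself:

    def safe_delete_image_data(db_api, store_api, image):
        for location in image.locations:
            uri = location['url']
            # assumed helper: how many *other* images still reference this URI
            shared = db_api.count_images_with_location(uri,
                                                       exclude=image.image_id)
            if shared:
                # another image still uses this backend file; keep the data
                continue
            store_api.delete_from_backend(uri)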

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

** Summary changed:

- image's location was deleted while it was still being use.
+ image's backend file was deleted while it was still being use.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1495884

Title:
  image's backend file was deleted while it was still being use.

Status in Glance:
  New

Bug description:
  Reproduce:

  1. Create an image A and add backend 'X' to its location.

  2. Create another image B and add the same backend 'X' to its location.

  3. Show the two images; their status is 'active' for both.

  4. Delete image A. After this step, the backend file X will be deleted
  as well.

  5. Show image B. Its status is still 'active'. Obviously, image B's
  backend file 'X' has been deleted, so B can't be used anymore.


  So IMHO, before we delete the backend file, we should check whether the
  file is still in use. If it is, we should not delete it directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1495884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496012] [NEW] Glance example configs need refreshing

2015-09-15 Thread Erno Kuvaja
Public bug reported:

We need to generate a new set of example configs for the Glance Liberty
release.

** Affects: glance
 Importance: Critical
 Assignee: Erno Kuvaja (jokke)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1496012

Title:
  Glance example configs need refreshing

Status in Glance:
  In Progress

Bug description:
  We need to generate a new set of example configs for the Glance Liberty
  release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1496012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493026] [NEW] location-add returns error when adding new location to 'queued' image

2015-09-15 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Reproduce:

1. create a new image:
glance image-create --disk-format qcow2 --container-format bare --name test

suppose the image'id is 1

2.add location to the image:

glance location-add 1 --url 

Result: the client raises an error: 'The administrator has disabled API
access to image locations'.

But when using the REST API to reproduce step 2, it works fine and the
image's status is changed to 'active'.
According to the code:
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L735-L750
I think we should add a check in glance like the client does.
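
A hedged sketch of the kind of server-side check being suggested, mirroring
the client behaviour; the helper below is illustrative and not the actual
glance.api.v2.images code:

    from oslo_config import cfg

    from glance.common import exception

    CONF = cfg.CONF

    def check_location_access_allowed():
        # Reject location changes up front when the operator has not
        # enabled access to image locations, as the client already does.
        if not CONF.show_multiple_locations:
            raise exception.Forbidden(
                "The administrator has disabled API access to image "
                "locations")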

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: In Progress

-- 
location-add returns error when adding new location to 'queued' image
https://bugs.launchpad.net/bugs/1493026
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493026] Re: location-add returns error when adding new location to 'queued' image

2015-09-15 Thread wangxiyuan
** Description changed:

  Reproduce:
  
  1. create a new image:
  glance image-create --disk-format qcow2 --container-format bare --name test
  
  suppose the image'id is 1
  
  2.add location to the image:
  
  glance location-add 1 --url 
  
- 
- Result :  the client raise an error:'The administrator has disabled API 
access to image locations'.
- 
+ Result :  the client raise an error:'The administrator has disabled API
+ access to image locations'.
  
  But when use REST API to reproduce the step 2, it runs well and the image's 
status will be changed into 'active'.
+ According to the code: 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L735-L750
+ I think we should add check in glance like client does.

** Project changed: python-glanceclient => glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1493026

Title:
  location-add returns error when adding new location to 'queued' image

Status in Glance:
  In Progress

Bug description:
  Reproduce:

  1. create a new image:
  glance image-create --disk-format qcow2 --container-format bare --name test

  suppose the image'id is 1

  2.add location to the image:

  glance location-add 1 --url 

  Result: the client raises an error: 'The administrator has disabled
  API access to image locations'.

  But when using the REST API to reproduce step 2, it works fine and the
  image's status is changed to 'active'.
  According to the code:
  https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L735-L750
  I think we should add a check in glance like the client does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1493026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496041] [NEW] Document accept requests on base paths rather than separate ports

2015-09-15 Thread Brant Knudson
Public bug reported:


The identity service is expected to be on ports 5000 and 35357 for historical 
reasons. It's been a dream for some time to have the identity service, along 
with the rest of the OpenStack services, available on a path on the normal HTTP 
port so that we're not polluting the port space so much, and also port 35357 
has problems on Linux since it's in the default ephemeral port range.

With keystone switching to being served by Apache Httpd or some other
full-featured web server (as opposed to eventlet) this is actually
pretty easy to accomplish. Httpd (and other web servers) allows you to
route multiple paths / ports to the wsgi process, so you can have :5000
and :443/identity going to the same place (same with :35357 and
:443/identity_admin), all in the same server.

Keystone ships a sample config file in httpd/wsgi-keystone.conf so we'll
update that to support both the virtual hosts on different ports and
path handling.

If we agree on this we can get some tests going to ensure the rest of
the OpenStack ecosystem is ready by changing devstack to use the new
config.

Eventually we can "deprecate" running identity service on 5000 and 35357
and instead use :443/identity and /identity_admin.

** Affects: keystone
 Importance: Wishlist
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496041

Title:
  Document accept requests on base paths rather than separate ports

Status in Keystone:
  In Progress

Bug description:
  
  The identity service is expected to be on ports 5000 and 35357 for historical 
reasons. It's been a dream for some time to have the identity service, along 
with the rest of the OpenStack services, available on a path on the normal HTTP 
port so that we're not polluting the port space so much, and also port 35357 
has problems on Linux since it's in the default ephemeral port range.

  With keystone switching to being served by Apache Httpd or some other
  full-featured web server (as opposed to eventlet) this is actually
  pretty easy to accomplish. Httpd (and other web servers) allows you to
  route multiple paths / ports to the wsgi process, so you can have
  :5000 and :443/identity going to the same place (same with :35357 and
  :443/identity_admin), all in the same server.

  Keystone ships a sample config file in httpd/wsgi-keystone.conf so
  we'll update that to support both the virtual hosts on different ports
  and path handling.

  If we agree on this we can get some tests going to ensure the rest of
  the OpenStack ecosystem is ready by changing devstack to use the new
  config.

  Eventually we can "deprecate" running identity service on 5000 and
  35357 and instead use :443/identity and /identity_admin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483159] Re: Canonical naming for non-x86 architectures

2015-09-15 Thread James Page
** Also affects: simplestreams
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483159

Title:
  Canonical naming for non-x86 architectures

Status in OpenStack Compute (nova):
  In Progress
Status in simplestreams:
  New
Status in nova package in Ubuntu:
  Triaged

Bug description:
  Various non-x86 architectures (POWER and ARM) don't correctly
  canonicalize into things that libvirt natively understands.

  The attached patches normalizes some alternative architecture strings
  into standardized ones for Nova/libvirt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495937] [NEW] test_killed_monitor_respawns fails with MismatchError

2015-09-15 Thread Jakub Libosvar
Public bug reported:

t1.56: 
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns(vsctl)_StringException:
 Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/functional/agent/linux/test_ovsdb_monitor.py", line 91, 
in test_killed_monitor_respawns
self.assertEqual(output1, output2)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = 'row action controller datapath_id datapath_type external_ids 
fail_mode flood_vlans flow_tables ipfix mirrors name netflow other_config ports 
protocols sflow status stp_enable _version'
actual= ''

example of failure: http://logs.openstack.org/25/212425/7/check/gate-
neutron-dsvm-functional/e3e237a/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495937

Title:
  test_killed_monitor_respawns fails with MismatchError

Status in neutron:
  New

Bug description:
  t1.56: 
neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestOvsdbMonitor.test_killed_monitor_respawns(vsctl)_StringException:
 Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/functional/agent/linux/test_ovsdb_monitor.py", line 91, 
in test_killed_monitor_respawns
  self.assertEqual(output1, output2)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = 'row action controller datapath_id datapath_type external_ids 
fail_mode flood_vlans flow_tables ipfix mirrors name netflow other_config ports 
protocols sflow status stp_enable _version'
  actual= ''

  example of failure: http://logs.openstack.org/25/212425/7/check/gate-
  neutron-dsvm-functional/e3e237a/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496045] [NEW] Horizon cannot display >1.5K users from LDAP

2015-09-15 Thread Paul Karikh
Public bug reported:

If Keystone is set up with LDAP and there are a lot of users (1500 users
looks like the threshold value), Horizon can't fetch all users from the
domain and shows the error "Error: Unable to retrieve user list."
There are no issues if the number of users in LDAP is much smaller.
Also, fetching 1K users from LDAP takes too long (in comparison with MySQL).
Affected pages:
identity/users
identity/domains (cannot list domain members)

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496045

Title:
  Horizon cannot display >1.5K users from LDAP

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If Keystone is set up with LDAP and there are a lot of users (1500 users
  looks like the threshold value), Horizon can't fetch all users from the
  domain and shows the error "Error: Unable to retrieve user list."
  There are no issues if the number of users in LDAP is much smaller.
  Also, fetching 1K users from LDAP takes too long (in comparison with
  MySQL).
  Affected pages:
  identity/users
  identity/domains (cannot list domain members)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1496045/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition

2015-09-15 Thread Kyle Mestery
Per discussion with Zzelle in channel, this only affects Kilo and Juno.
Marking as such.

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => High

** Changed in: neutron/juno
   Status: New => Confirmed

** Changed in: neutron/juno
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  Invalid
Status in neutron juno series:
  Confirmed
Status in neutron kilo series:
  Confirmed

Bug description:
  Hello,

  We have been using ipsets in neutron since juno.  We have upgraded our
  install to kilo a month or so and we have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) Iptables attempts to apply rules for an ipset that was not added
  2.) iptables attempt to apply rules for an ipset that was removed, but still 
refrenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issue one and two I am unable to get the logs for these issues
  because neutron was dumping the full iptables-restore entries to log
  once every second for a few hours and eventually filled up the disk
  and we removed the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 

[Yahoo-eng-team] [Bug 1484148] Re: neutronclient gate broken following VPNaaS infra changes

2015-09-15 Thread Kyle Mestery
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Status: New => In Progress

** Changed in: python-neutronclient
   Importance: Undecided => High

** Changed in: python-neutronclient
 Assignee: (unassigned) => Paul Michali (pcm)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484148

Title:
  neutronclient gate broken following VPNaaS infra changes

Status in python-neutronclient:
  In Progress

Bug description:
  https://etherpad.openstack.org/p/vpn-test-changes

  stable/kilo breakage is tracked here:
  https://bugs.launchpad.net/neutron/+bug/1483266

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1484148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition

2015-09-15 Thread Kyle Mestery
** Changed in: neutron/kilo
   Status: New => Confirmed

** Changed in: neutron/kilo
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
Milestone: liberty-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  Invalid
Status in neutron juno series:
  Confirmed
Status in neutron kilo series:
  Confirmed

Bug description:
  Hello,

  We have been using ipsets in neutron since juno.  We have upgraded our
  install to kilo a month or so and we have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) Iptables attempts to apply rules for an ipset that was not added
  2.) iptables attempt to apply rules for an ipset that was removed, but still 
refrenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issue one and two I am unable to get the logs for these issues
  because neutron was dumping the full iptables-restore entries to log
  once every second for a few hours and eventually filled up the disk
  and we removed the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.856 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 

[Yahoo-eng-team] [Bug 1496055] [NEW] Status-line of HTTP response is wrong

2015-09-15 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Some negative tempest test cases fail when nova is run by Apache
(NOVA_USE_MOD_WSGI=True).
For example:
2015-09-08 09:30:29.696 | 
tempest.api.compute.admin.test_hosts_negative.HostsAdminNegativeTestJSON.test_update_host_with_invalid_maintenance_mode[id-ab1e230e-5e22-41a9-8699-82b9947915d4,negative]
2015-09-08 09:30:29.697 | 
-
2015-09-08 09:30:29.697 | 
2015-09-08 09:30:29.697 | Captured traceback:
2015-09-08 09:30:29.697 | ~~~
2015-09-08 09:30:29.697 | Traceback (most recent call last):
2015-09-08 09:30:29.697 |   File 
"tempest/api/compute/admin/test_hosts_negative.py", line 95, in 
test_update_host_with_invalid_maintenance_mode
2015-09-08 09:30:29.697 | maintenance_mode='invalid')
2015-09-08 09:30:29.697 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 422, in assertRaises
2015-09-08 09:30:29.697 | self.assertThat(our_callable, matcher)
2015-09-08 09:30:29.697 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
2015-09-08 09:30:29.698 | mismatch_error = self._matchHelper(matchee, 
matcher, message, verbose)
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 483, in _matchHelper
2015-09-08 09:30:29.698 | mismatch = matcher.match(matchee)
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
2015-09-08 09:30:29.698 | mismatch = 
self.exception_matcher.match(exc_info)
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
2015-09-08 09:30:29.698 | mismatch = matcher.match(matchee)
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 414, in match
2015-09-08 09:30:29.698 | reraise(*matchee)
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
2015-09-08 09:30:29.698 | result = matchee()
2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 969, in __call__
2015-09-08 09:30:29.699 | return self._callable_object(*self._args, 
**self._kwargs)
2015-09-08 09:30:29.699 |   File 
"tempest/services/compute/json/hosts_client.py", line 54, in update_host
2015-09-08 09:30:29.699 | resp, body = self.put("os-hosts/%s" % 
hostname, request_body)
2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 319, in put
2015-09-08 09:30:29.699 | return self.request('PUT', url, 
extra_headers, headers, body)
2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
2015-09-08 09:30:29.699 | resp, resp_body)
2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 680, in _error_checker
2015-09-08 09:30:29.699 | raise 
exceptions.UnexpectedContentType(str(resp.status))
2015-09-08 09:30:29.699 | tempest_lib.exceptions.UnexpectedContentType: 
Unexpected content type provided
2015-09-08 09:30:29.699 | Details: 500

The root cause of this is that nova doesn't attach the textual phrase after
the error code in the status line of the HTTP response.
But there is a strict rule about constructing the status line for HTTP:
'...Status-Line, consisting of the protocol version followed by a numeric
status code and its associated textual phrase, with each element separated by
SP characters' (http://www.faqs.org/rfcs/rfc2616.html)
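
A hedged sketch of building a compliant status value for start_response();
the reason phrase can come from the stdlib mapping instead of being
hard-coded (illustrative only, not the actual Nova/WSGI fix):

    try:
        from http.client import responses   # Python 3
    except ImportError:
        from httplib import responses        # Python 2

    def status_line(code):
        # e.g. status_line(500) -> "500 Internal Server Error"
        return '%d %s' % (code, responses.get(code, 'Unknown'))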

** Affects: nova
 Importance: Undecided
 Assignee: Marian Horban (mhorban)
 Status: New

-- 
Status-line of HTTP response is wrong
https://bugs.launchpad.net/bugs/1496055
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496055] Re: Status-line of HTTP response is wrong

2015-09-15 Thread Julien Danjou
** Project changed: nova-project => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496055

Title:
  Status-line of HTTP response is wrong

Status in OpenStack Compute (nova):
  New

Bug description:
  Some negative tempest test cases fail when nova is run by Apache
  (NOVA_USE_MOD_WSGI=True).
  For example:
  2015-09-08 09:30:29.696 | 
tempest.api.compute.admin.test_hosts_negative.HostsAdminNegativeTestJSON.test_update_host_with_invalid_maintenance_mode[id-ab1e230e-5e22-41a9-8699-82b9947915d4,negative]
  2015-09-08 09:30:29.697 | 
-
  2015-09-08 09:30:29.697 | 
  2015-09-08 09:30:29.697 | Captured traceback:
  2015-09-08 09:30:29.697 | ~~~
  2015-09-08 09:30:29.697 | Traceback (most recent call last):
  2015-09-08 09:30:29.697 |   File 
"tempest/api/compute/admin/test_hosts_negative.py", line 95, in 
test_update_host_with_invalid_maintenance_mode
  2015-09-08 09:30:29.697 | maintenance_mode='invalid')
  2015-09-08 09:30:29.697 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 422, in assertRaises
  2015-09-08 09:30:29.697 | self.assertThat(our_callable, matcher)
  2015-09-08 09:30:29.697 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 433, in assertThat
  2015-09-08 09:30:29.698 | mismatch_error = self._matchHelper(matchee, 
matcher, message, verbose)
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 483, in _matchHelper
  2015-09-08 09:30:29.698 | mismatch = matcher.match(matchee)
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  2015-09-08 09:30:29.698 | mismatch = 
self.exception_matcher.match(exc_info)
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  2015-09-08 09:30:29.698 | mismatch = matcher.match(matchee)
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 414, in match
  2015-09-08 09:30:29.698 | reraise(*matchee)
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  2015-09-08 09:30:29.698 | result = matchee()
  2015-09-08 09:30:29.698 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 969, in __call__
  2015-09-08 09:30:29.699 | return self._callable_object(*self._args, 
**self._kwargs)
  2015-09-08 09:30:29.699 |   File 
"tempest/services/compute/json/hosts_client.py", line 54, in update_host
  2015-09-08 09:30:29.699 | resp, body = self.put("os-hosts/%s" % 
hostname, request_body)
  2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 319, in put
  2015-09-08 09:30:29.699 | return self.request('PUT', url, 
extra_headers, headers, body)
  2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
  2015-09-08 09:30:29.699 | resp, resp_body)
  2015-09-08 09:30:29.699 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 680, in _error_checker
  2015-09-08 09:30:29.699 | raise 
exceptions.UnexpectedContentType(str(resp.status))
  2015-09-08 09:30:29.699 | tempest_lib.exceptions.UnexpectedContentType: 
Unexpected content type provided
  2015-09-08 09:30:29.699 | Details: 500

  The root cause of this is that nova doesn't attach the textual phrase
  after the error code in the status line of the HTTP response.
  But there is a strict rule about constructing the status line for HTTP:
  '...Status-Line, consisting of the protocol version followed by a numeric
  status code and its associated textual phrase, with each element separated
  by SP characters' (http://www.faqs.org/rfcs/rfc2616.html)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496122] [NEW] CRITICAL nova [-] ImportError: No module named middleware.request_id

2015-09-15 Thread Adam Wien
Public bug reported:

I'm attempting to deploy openstack using chef. I'm using cookbook
version 12.0 on the liberty deb repository and the os-compute-single-
controller-no-network role. I'm having an issue where nova-api-os-
compute is failing to start. Here is the stack trace.

=2015-09-15 19:27:54.203 98118 WARNING keystonemiddleware.auth_token [-] Use of 
the auth_admin_prefix, auth_host, auth_port, auth_protocol, identity_uri, 
admin_token, admin_user, admin_password, and admin_tenant_name configuration 
options is deprecated in favor of auth_plugin and related options and may be 
removed in a future release.
2015-09-15 19:27:54.204 98118 CRITICAL nova [-] ImportError: No module named 
middleware.request_id
2015-09-15 19:27:54.204 98118 ERROR nova Traceback (most recent call last):
2015-09-15 19:27:54.204 98118 ERROR nova   File "/usr/bin/nova-api-os-compute", 
line 10, in 
2015-09-15 19:27:54.204 98118 ERROR nova sys.exit(main())
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 45, in main
2015-09-15 19:27:54.204 98118 ERROR nova server = 
service.WSGIService('osapi_compute', use_ssl=should_use_ssl)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 322, in __init__
2015-09-15 19:27:54.204 98118 ERROR nova self.app = 
self.loader.load_app(name)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/wsgi.py", line 533, in load_app
2015-09-15 19:27:54.204 98118 ERROR nova return deploy.loadapp("config:%s" 
% self.config_path, name=name)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
2015-09-15 19:27:54.204 98118 ERROR nova return loadobj(APP, uri, 
name=name, **kw)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
2015-09-15 19:27:54.204 98118 ERROR nova return context.create()
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
2015-09-15 19:27:54.204 98118 ERROR nova return 
self.object_type.invoke(self)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2015-09-15 19:27:54.204 98118 ERROR nova **context.local_conf)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
2015-09-15 19:27:54.204 98118 ERROR nova val = callable(*args, **kw)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/urlmap.py", line 160, in 
urlmap_factory
2015-09-15 19:27:54.204 98118 ERROR nova app = loader.get_app(app_name, 
global_conf=global_conf)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
2015-09-15 19:27:54.204 98118 ERROR nova name=name, 
global_conf=global_conf).create()
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
2015-09-15 19:27:54.204 98118 ERROR nova return 
self.object_type.invoke(self)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2015-09-15 19:27:54.204 98118 ERROR nova **context.local_conf)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
2015-09-15 19:27:54.204 98118 ERROR nova val = callable(*args, **kw)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/api/auth.py", line 86, in 
pipeline_factory_v21
2015-09-15 19:27:54.204 98118 ERROR nova return _load_pipeline(loader, 
local_conf[CONF.auth_strategy].split())
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/nova/api/auth.py", line 60, in _load_pipeline
2015-09-15 19:27:54.204 98118 ERROR nova filters = [loader.get_filter(n) 
for n in pipeline[:-1]]
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 354, in 
get_filter
2015-09-15 19:27:54.204 98118 ERROR nova name=name, 
global_conf=global_conf).create()
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 366, in 
filter_context
2015-09-15 19:27:54.204 98118 ERROR nova FILTER, name=name, 
global_conf=global_conf)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 458, in 
get_context
2015-09-15 19:27:54.204 98118 ERROR nova section)
2015-09-15 19:27:54.204 98118 ERROR nova   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 517, in 

[Yahoo-eng-team] [Bug 1494642] Re: VM can't stay running if it was stopped outside of OpenStack

2015-09-15 Thread Mark Doffman
A VM probably shouldn't be stopped outside of OpenStack. Doing so is not
supported.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1494642

Title:
  VM can't stay running if it was stopped outside of OpenStack

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When a VM is stopped outside of OpenStack and then started again outside
  of OpenStack, it is forcibly stopped by OpenStack.
  I think whether OpenStack force-stops the VM in this case should be
  configurable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1494642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496138] [NEW] logging a warning when someone accesses / seems unnecessary and wasteful

2015-09-15 Thread Matt Fischer
Public bug reported:

Our load balancer health checks (and other folks too) just load the main
glance URL and look for an http status of 300 to determine if glance is
okay. Starting I think in Kilo, glance changed and now logs a warning.
This is highly unnecessary and ends up generating gigs of useless logs
which make diagnosing real issues more difficult.

At the least this should be an INFO; ideally it would not be logged at all.

2015-08-04 17:42:43.058 24075 WARNING glance.api.middleware.version_negotiation 
[-] Unknown version. Returning version choices.
2015-08-04 17:42:43.577 24071 WARNING glance.api.middleware.version_negotiation 
[-] Unknown version. Returning version choices.
2015-08-04 17:42:45.083 24076 WARNING glance.api.middleware.version_negotiation 
[-] Unknown version. Returning version choices.
2015-08-04 17:42:45.317 24064 WARNING glance.api.middleware.version_negotiation 
[-] Unknown version. Returning version choices.
2015-08-04 17:42:47.092 24074 WARNING glance.api.middleware.version_negotiation 
[-] Unknown version. Returning version choices.
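
A stopgap for operators, until the message is demoted in glance itself, is to
raise the threshold for just this logger. A minimal sketch, assuming a
deployment where the service's Python logging hierarchy can be adjusted (the
logger name is taken verbatim from the messages above):

import logging

# Silence the noisy version-negotiation logger so health-check probes of
# "/" no longer produce a WARNING per request; real errors still surface.
logging.getLogger(
    'glance.api.middleware.version_negotiation').setLevel(logging.ERROR)

A proper fix would demote the message to INFO or DEBUG in the middleware
itself, as the report suggests.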

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1496138

Title:
  logging a warning when someone accesses / seems unnecessary and
  wasteful

Status in Glance:
  New

Bug description:
  Our load balancer health checks (and other folks too) just load the
  main glance URL and look for an http status of 300 to determine if
  glance is okay. Starting I think in Kilo, glance changed and now logs
  a warning. This is highly unnecessary and ends up generating gigs of
  useless logs which make diagnosing real issues more difficult.

  At the least this should be an INFO; ideally it would not be logged at all.

  2015-08-04 17:42:43.058 24075 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.
  2015-08-04 17:42:43.577 24071 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.
  2015-08-04 17:42:45.083 24076 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.
  2015-08-04 17:42:45.317 24064 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.
  2015-08-04 17:42:47.092 24074 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1496138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494866] Re: L3 HA router ports 'host' field do not point to the active router replica

2015-09-15 Thread Assaf Muller
Since
https://review.openstack.org/#/q/I8475548947526d8ea736ed7aa754fd0ca475cae2,n,z
we do update the port binding's 'host' field when HA router states change.
That patch was backported to Kilo, so I am assuming the reporter observed
this behavior on an older Kilo version.
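
For anyone double-checking this on their own deployment, a quick way to
compare the port's binding:host_id with the agent actually holding the
active replica is to list the L3 agents hosting the router. A rough sketch
with python-neutronclient (credentials come from the usual environment
variables; the 'ha_state' key is an assumption and may not be present on
every release):

import os

from neutronclient.v2_0 import client

neutron = client.Client(username=os.environ['OS_USERNAME'],
                        password=os.environ['OS_PASSWORD'],
                        tenant_name=os.environ['OS_TENANT_NAME'],
                        auth_url=os.environ['OS_AUTH_URL'])

router_id = '934f0b90-2d98-4d54-b9ca-5222aac2199d'  # router from the report
for agent in neutron.list_l3_agent_hosting_routers(router_id)['agents']:
    # On fixed versions the router ports' binding:host_id should match the
    # host of the agent reporting the active state.
    print(agent['host'], agent.get('ha_state', 'unknown'))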

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494866

Title:
  L3 HA router ports 'host' field do not point to the active router
  replica

Status in neutron:
  Fix Released

Bug description:
  We are using Kilo. In our setup we have 3 neutron controllers, and L3
  agents are running on all 3 of them. We set l3_ha = true in all 3
  neutron.conf files.

  We notice that when we attach a network to a router, the gateway namespace
  is allocated to a controller node that doesn't match the record in the
  Neutron DB. The following is one example.

  Create a router, a network, attach the network to the router.

  1. Neutron reports that the gateway IP 1.1.1.1 is at controller-1
  [stack@c5220-01 ~]$ neutron port-show 3306c360-5a3d-4a08-aa92-017498758963
  
+---++
  | Field | Value   
   |
  
+---++
  | admin_state_up| True
   |
  | allowed_address_pairs | 
   |
  | binding:host_id   | overcloud-controller-1.localdomain  
   |
  | binding:profile   | {}  
   |
  | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}  
   |
  | binding:vif_type  | ovs 
   |
  | binding:vnic_type | normal  
   |
  | device_id | 934f0b90-2d98-4d54-b9ca-5222aac2199d
   |
  | device_owner  | network:router_interface
   |
  | extra_dhcp_opts   | 
   |
  | fixed_ips | {"subnet_id": 
"463c2f0c-5d56-4abb-8b30-8450d8306f46", "ip_address": "1.1.1.1"} |
  | id| 3306c360-5a3d-4a08-aa92-017498758963
   |
  | mac_address   | fa:16:3e:72:34:4c   
   |
  | name  | 
   |
  | network_id| 98f125b6-6d4d-4417-a0b3-e8d9ff530d6f
   |
  | security_groups   | 
   |
  | status| ACTIVE  
   |
  | tenant_id | 4ef11838925940eb9d177ae9345711ee
   |
  
+---++

  
  2. However, the gateway IP is actually at controller-2
  [heat-admin@overcloud-controller-2 ~]$ sudo ip netns exec 
qrouter-934f0b90-2d98-4d54-b9ca-5222aac2199d ifconfig
  ha-6d47f13a-b7: flags=4163  mtu 1500
  inet 169.254.192.6  netmask 255.255.192.0  broadcast 169.254.255.255
  inet6 fe80::f816:3eff:fe43:9b80  prefixlen 64  scopeid 0x20
  ether fa:16:3e:43:9b:80  txqueuelen 1000  (Ethernet)
  RX packets 20  bytes 1638 (1.5 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 309  bytes 16926 (16.5 KiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  lo: flags=73  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  inet6 ::1  prefixlen 128  scopeid 0x10
  loop  txqueuelen 0  (Local Loopback)
  RX packets 0  bytes 0 (0.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 0  bytes 0 (0.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  qg-22431202-eb: flags=4163  mtu 1500
  inet 10.8.87.25  netmask 255.255.255.0  broadcast 0.0.0.0
  inet6 fe80::f816:3eff:febd:56ad  prefixlen 64  scopeid 0x20
  ether fa:16:3e:bd:56:ad  txqueuelen 1000  (Ethernet)
  RX packets 36  

[Yahoo-eng-team] [Bug 1496135] [NEW] libvirt live-migration will not honor destination vcpu_pin_set config

2015-09-15 Thread Nikola Đipanov
Public bug reported:

Reporting this based on code inspection of the current master (commit:
9f61d1eb642785734f19b5b23365f80f033c3d9a)

When we attempt to live-migrate an instance onto a host that has a
different vcpu_pin_set than the one that was on the source host, we may
either break the policy set by the destination host or fail (as we will
not recalculate the vcpu cpuset attribute to match that of the
destination host, so we may end up with an invalid range).

The first solution that jumps out is to make sure the XML is updated in
https://github.com/openstack/nova/blob/6d68462c4f20a0b93a04828cb829e86b7680d8a4/nova/virt/libvirt/driver.py#L5422

However that would mean passing over the requested info from the
destination host.
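
To make the shape of the recalculation concrete, here is a stand-alone
sketch: expand the destination host's vcpu_pin_set and rewrite the domain
XML's <vcpu cpuset="..."> attribute accordingly. This is an illustration
only, not nova's parser and not the proposed fix.

import xml.etree.ElementTree as ET


def parse_cpu_spec(spec):
    """Expand a spec such as "0-7,^2,9" into a set of CPU ids."""
    cpus = set()
    for part in spec.split(','):
        part = part.strip()
        exclude = part.startswith('^')
        part = part.lstrip('^')
        if '-' in part:
            lo, hi = (int(x) for x in part.split('-'))
            ids = set(range(lo, hi + 1))
        else:
            ids = {int(part)}
        if exclude:
            cpus -= ids
        else:
            cpus |= ids
    return cpus


def rewrite_vcpu_cpuset(domain_xml, dest_vcpu_pin_set):
    """Clamp the guest's <vcpu cpuset="..."> to the destination's pin set."""
    root = ET.fromstring(domain_xml)
    allowed = parse_cpu_spec(dest_vcpu_pin_set)
    root.find('vcpu').set('cpuset',
                          ','.join(str(c) for c in sorted(allowed)))
    return ET.tostring(root, encoding='unicode')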

** Affects: nova
 Importance: Medium
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496135

Title:
  libvirt live-migration will not honor destination vcpu_pin_set config

Status in OpenStack Compute (nova):
  New

Bug description:
  Reporting this based on code inspection of the current master (commit:
  9f61d1eb642785734f19b5b23365f80f033c3d9a)

  When we attempt to live-migrate an instance onto a host that has a
  different vcpu_pin_set than the one that was on the source host, we
  may either break the policy set by the destination host or fail (as we
  will not recalculate the vcpu cpuset attribute to match that of the
  destination host, so we may end up with an invalid range).

  The first solution that jumps out is to make sure the XML is updated
  in
  
https://github.com/openstack/nova/blob/6d68462c4f20a0b93a04828cb829e86b7680d8a4/nova/virt/libvirt/driver.py#L5422

  However that would mean passing over the requested info from the
  destination host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496146] [NEW] instance_actions table should have a UniqueConstraint on instance_uuid and request_id

2015-09-15 Thread Matt Riedemann
Public bug reported:

An instance action should be unique per instance uuid and request ID.
That's pointed out in the data model:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L1241

"The intention is that there will only be one of these per user request.
A lookup by (instance_uuid, request_id) should always return a single
result."

It's also enforced in the API:

https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L5719

def action_get_by_request_id(context, instance_uuid, request_id):
"""Get the action by request_id and given instance."""
action = _action_get_by_request_id(context, instance_uuid, request_id)
return action

But that is not actually enforced in the schema using a
UniqueConstraint.

This is a low priority but it's technically something we should have in
the data model/schema.
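
A minimal sketch of what the schema-level guarantee could look like,
following the UniqueConstraint style used by other nova tables (the
constraint name and the reduced set of columns are illustrative, not an
actual nova migration):

from sqlalchemy import Column, Integer, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class InstanceAction(Base):
    __tablename__ = 'instance_actions'
    __table_args__ = (
        # At most one action row per (instance, user request).
        UniqueConstraint(
            'instance_uuid', 'request_id',
            name='uniq_instance_actions0instance_uuid0request_id'),
    )

    id = Column(Integer, primary_key=True)
    action = Column(String(255))
    instance_uuid = Column(String(36))
    request_id = Column(String(255))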

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496146

Title:
  instance_actions table should have a UniqueConstraint on instance_uuid
  and request_id

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  An instance action should be unique per instance uuid and request ID.
  That's pointed out in the data model:

  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L1241

  "The intention is that there will only be one of these per user
  request.  A lookup by (instance_uuid, request_id) should always return
  a single result."

  It's also enforced in the API:

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L5719

  def action_get_by_request_id(context, instance_uuid, request_id):
  """Get the action by request_id and given instance."""
  action = _action_get_by_request_id(context, instance_uuid, request_id)
  return action

  But that is not actually enforced in the schema using a
  UniqueConstraint.

  This is a low priority but it's technically something we should have
  in the data model/schema.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495960] [NEW] linuxbridge with vlan crashes when long device names used

2015-09-15 Thread Andreas Scheuring
Public bug reported:

The linuxbridge agent creates a Linux VLAN device for each OpenStack VLAN
network that has been defined. The code uses the naming scheme
<eth-dev-name>.<vlan-id>.
Example: eth-dev-name: eth0, vlan-id: 1000 --> eth0.1000

This works fine if eth-dev-name is a short name like "eth0". If there is a
long device name (e.g. long-device-name) this will cause trouble, as the
VLAN device name "long-device-name.1000" exceeds the maximum length of a
Linux network device name, which is 15 chars.

Today the linuxbridge agent fails with

Command: ['ip', 'link', 'add', 'link', 'too_long_name', 'name', 
'too_long_name.1007', 'type', 'vlan', 'id', 1007]
Exit code: 255
Stderr: Error: argument "too_long_name.1007" is wrong: "name" too long

The same problem needs to be solved for the new macvtap agent that is
currently under development [1] as well


[1] https://bugs.launchpad.net/neutron/+bug/1480979
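
One possible mitigation, sketched below, is to keep the ".<vlan-id>" suffix
intact and replace part of an over-long parent device name with a short hash,
so the result stays unique and within the kernel's 15-character limit
(IFNAMSIZ - 1). This only illustrates the constraint; it is not the fix the
agent ultimately adopts.

import hashlib

MAX_DEV_NAME_LEN = 15  # IFNAMSIZ - 1, the longest valid interface name


def vlan_device_name(parent, vlan_id):
    name = '%s.%d' % (parent, vlan_id)
    if len(name) <= MAX_DEV_NAME_LEN:
        return name
    # Too long: keep the VLAN suffix, shorten the parent device name and
    # disambiguate the truncation with a few characters of a hash.
    suffix = '.%d' % vlan_id
    digest = hashlib.sha1(parent.encode('utf-8')).hexdigest()[:6]
    keep = MAX_DEV_NAME_LEN - len(suffix) - len(digest)
    return parent[:keep] + digest + suffix


print(vlan_device_name('eth0', 1000))              # eth0.1000
print(vlan_device_name('long-device-name', 1007))  # 15 chars, still unique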

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495960

Title:
  linuxbridge with vlan crashes when long device names used

Status in neutron:
  New

Bug description:
  The linuxbridge agent creates a Linux VLAN device for each OpenStack VLAN
  network that has been defined. The code uses the naming scheme
  <eth-dev-name>.<vlan-id>.
  Example: eth-dev-name: eth0, vlan-id: 1000 --> eth0.1000

  This works fine if eth-dev-name is a short name like "eth0". If there is a
  long device name (e.g. long-device-name) this will cause trouble, as the
  VLAN device name "long-device-name.1000" exceeds the maximum length of a
  Linux network device name, which is 15 chars.

  Today the linuxbridge agent fails with

  Command: ['ip', 'link', 'add', 'link', 'too_long_name', 'name', 
'too_long_name.1007', 'type', 'vlan', 'id', 1007]
  Exit code: 255
  Stderr: Error: argument "too_long_name.1007" is wrong: "name" too long

  The same problem needs to be solved for the new macvtap agent that is
  currently under development [1] as well

  
  [1] https://bugs.launchpad.net/neutron/+bug/1480979

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495976] [NEW] Image location help text only references http

2015-09-15 Thread Zhenguo Niu
Public bug reported:

Glance supports both http and https URLs for upload, but the Image Location
field's help text only references http; https should be added as well to keep
it consistent with the backend.

** Affects: horizon
 Importance: Undecided
 Assignee: Zhenguo Niu (niu-zglinux)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1495976

Title:
  Image location help text only references http

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Glance supports both http and https URLs for upload, but the Image
  Location field's help text only references http; https should be added as
  well to keep it consistent with the backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1495976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495946] [NEW] Nova boot does not respect zone name if node is specified

2015-09-15 Thread Ilya Shakhat
Public bug reported:

Version DevStack / Liberty (commit-id 9f61d1eb)

Steps to reproduce:
1. Boot new instance specifying non-existing zone name but valid host name:
nova boot --image <image> --nic net-id=<net-id> --flavor m1.nano --availability-zone foo:devstack my_vm
2. nova show my_vm shows that the instance was started in availability zone 
"nova".

It's expected that Nova would reject booting the instance in a non-existent
zone, but it only uses the hostname part of the argument.
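
The behaviour the reporter expects amounts to validating the zone half of the
"zone:host" pair instead of silently ignoring it. A minimal sketch (the set
of known zones is hard-coded here for illustration; nova would derive it from
its host aggregates):

def validate_availability_zone(requested, known_zones):
    """Split "zone:host" and reject an unknown zone instead of ignoring it."""
    zone, _, host = requested.partition(':')
    if zone and zone not in known_zones:
        raise ValueError('Unknown availability zone: %s' % zone)
    return zone or None, host or None


print(validate_availability_zone('nova:devstack', {'nova'}))
# -> ('nova', 'devstack')
validate_availability_zone('foo:devstack', {'nova'})  # raises ValueError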

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495946

Title:
  Nova boot does not respect zone name if node is specified

Status in OpenStack Compute (nova):
  New

Bug description:
  Version DevStack / Liberty (commit-id 9f61d1eb)

  Steps to reproduce:
  1. Boot new instance specifying non-existing zone name but valid host name:
  nova boot --image <image> --nic net-id=<net-id> --flavor m1.nano --availability-zone foo:devstack my_vm
  2. nova show my_vm shows that the instance was started in availability zone 
"nova".

  It's expected that Nova would reject booting the instance in a
  non-existent zone, but it only uses the hostname part of the argument.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1495946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp