[Yahoo-eng-team] [Bug 1518296] [NEW] Non-SNAT'ed packets should be blocked

2015-11-20 Thread Takanori Miyagishi
Public bug reported:

In current neutron, when "neutron router-gateway-set" is run with the
specified router's "enable_snat" set to false, non-SNAT'ed packets can
reach other tenants via the external network.  The packets don't pass
through the other tenants' gateways, but they put extra load on the
external network.
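
For reference, the gateway configuration in question is set like this
(router and external network names are placeholders):

  neutron router-gateway-set --disable-snat <router-name> <external-network>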

Packets should be NAT'ed when they flow on the external network; non-
SNAT'ed packets have no reason to be on the external network.

Therefore, non-SNAT'ed packets should be dropped inside the originating
tenant's own network.

I will fix as follows:

  * The router is Legacy mode and enable_snat is True:
No change from current implementation.

  * The router is Legacy mode and enable_snat is False:
Add new rule for dropping outbound non-SNAT'ed packets.

  * The router is DVR mode and enable_snat is True:
No change from current implementation.

  * The router is DVR mode and enable_snat is False:
Don't create the SNAT namespace.

** Affects: neutron
 Importance: Undecided
 Assignee: Takanori Miyagishi (miyagishi-t)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Takanori Miyagishi (miyagishi-t)

** Description changed:

- In current neutron, when running "neutron router-gateway-set" with specified
- router's "enable_snat" is false, then non-SNAT'ed packets can arrive at other
- tenant via external-network.  The packets don't pass through other tenant's
- gateway, but take extra load to external network.
+ In current neutron, when running "neutron router-gateway-set" with
+ specified router's "enable_snat" is false, then non-SNAT'ed packets can
+ arrive at other tenant via external-network.  The packets don't pass
+ through other tenant's gateway, but take extra load to external network.
  
- The packet should be NAT'ed when flowing on external network.
- Non-SNAT'ed packets don't need to flow on external network.
+ The packet should be NAT'ed when flowing on external network.  Non-
+ SNAT'ed packets don't need to flow on external network.
  
  Therefore, non-SNAT'ed packets should be dropped at inside of own
  tenant.
  
  I will fix as follows:
  
-   * The router is Legacy mode and enable_snat is True:
- No change from current implementation.
-  
-   * The router is Legacy mode and enable_snat is False:
- Add new rule for dropping outbound non-SNAT'ed packets.
+   * The router is Legacy mode and enable_snat is True:
+ No change from current implementation.
  
-   * The router is DVR mode and enable_snat is True:
- No change from current implementation.
- 
-   * The router is Legacy mode and enable_snat is False:
- Don't create SNAT name space.
+   * The router is Legacy mode and enable_snat is False:
+ Add new rule for dropping outbound non-SNAT'ed packets.
+ 
+   * The router is DVR mode and enable_snat is True:
+ No change from current implementation.
+ 
+   * The router is DVR mode and enable_snat is False:
+ Don't create SNAT name space.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518296

Title:
  Non-SNAT'ed packets should be blocked

Status in neutron:
  New

Bug description:
  In current neutron, when "neutron router-gateway-set" is run with the
  specified router's "enable_snat" set to false, non-SNAT'ed packets can
  reach other tenants via the external network.  The packets don't pass
  through the other tenants' gateways, but they put extra load on the
  external network.

  Packets should be NAT'ed when they flow on the external network; non-
  SNAT'ed packets have no reason to be on the external network.

  Therefore, non-SNAT'ed packets should be dropped inside the originating
  tenant's own network.

  I will fix as follows:

    * The router is Legacy mode and enable_snat is True:
  No change from current implementation.

    * The router is Legacy mode and enable_snat is False:
  Add new rule for dropping outbound non-SNAT'ed packets.

    * The router is DVR mode and enable_snat is True:
  No change from current implementation.

    * The router is DVR mode and enable_snat is False:
  Don't create the SNAT namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515485] Re: Heat CFN signals do not pass authorization

2015-11-20 Thread Jesse Pretorius
** Also affects: openstack-ansible/trunk
   Importance: Medium
   Status: Triaged

** Also affects: openstack-ansible/kilo
   Importance: Undecided
   Status: New

** Also affects: openstack-ansible/liberty
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible/trunk
   Status: Triaged => Invalid

** Changed in: openstack-ansible/liberty
   Status: New => Invalid

** Changed in: openstack-ansible/kilo
   Status: New => In Progress

** Changed in: openstack-ansible/kilo
   Status: In Progress => Triaged

** Changed in: openstack-ansible/kilo
Milestone: None => 11.2.5

** Changed in: openstack-ansible/kilo
 Assignee: (unassigned) => Jesse Pretorius (jesse-pretorius)

** Changed in: openstack-ansible/trunk
Milestone: 11.2.5 => None

** Changed in: openstack-ansible/kilo
Milestone: 11.2.5 => 11.2.6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1515485

Title:
  Heat CFN signals do not pass authorization

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) kilo series:
  Incomplete
Status in openstack-ansible:
  Invalid
Status in openstack-ansible kilo series:
  Triaged
Status in openstack-ansible liberty series:
  Invalid
Status in openstack-ansible trunk series:
  Invalid

Bug description:
  Note that this bug applies to the Kilo release. Master does not appear
  to have this problem. I did not test liberty yet.

  Heat templates that rely on CFN signals time out because the API calls
  that execute these signals return 403 errors. Heat signals, on the
  other hand, do work.

  The problem was reported to me by Alex Cantu. I have verified it on
  his multinode lab and have also reproduced it on my own single-node
  system hosted on a public cloud server.  I suspect liberty/master
  avoided the problem because Jesse and I reworked the Heat configuration
  to use Keystone v3 on the last day before the L release.

  Example template, which can be executed in an AIO after running the
  tempest playbook:

  heat_template_version: 2013-05-23

  resources:
    wait_condition:
      type: AWS::CloudFormation::WaitCondition
      properties:
        Handle: { get_resource: wait_handle }
        Count: 1
        Timeout: 600

    wait_handle:
      type: AWS::CloudFormation::WaitConditionHandle

    my_instance:
      type: OS::Nova::Server
      properties:
        image: cirros
        flavor: m1.tiny
        networks:
          - network: "private"
        user_data_format: RAW
        user_data:
          str_replace:
            template: |
              #!/bin/sh
              echo "wc_notify"
              curl -H "Content-Type:" -X PUT wc_notify --data-binary '{"status": "SUCCESS"}'
            params:
              wc_notify: { get_resource: wait_handle }

  This template should end very quickly, as it starts a cirros instance
  that just sends a signal back to heat. But instead, it times out. The
  user data script dumps the signal URL to the console log; if you then
  try to send the signal manually you will get a 403. The original 403
  can also be seen in the heat-api-cfn.log file. Here is the log
  snippet:

  2015-11-12 05:13:34.491 1862 INFO heat.api.aws.ec2token [-] Checking AWS 
credentials..
  2015-11-12 05:13:34.492 1862 INFO heat.api.aws.ec2token [-] AWS credentials 
found, checking against keystone.
  2015-11-12 05:13:34.493 1862 INFO heat.api.aws.ec2token [-] Authenticating 
with http://172.29.236.100:5000/v3/ec2tokens
  2015-11-12 05:13:34.533 1862 INFO heat.api.aws.ec2token [-] AWS 
authentication failure.
  2015-11-12 05:13:34.534 1862 INFO eventlet.wsgi.server [-] 
10.0.3.181,172.29.236.100 - - [12/Nov/2015 05:13:34] "PUT 
/v1/waitcondition/arn%3Aopenstack%3Aheat%3A%3A683acadf4d04489f8e991b44014e6fc1%3Astacks%2Fwc1%2Faa4083b6-ce6c-411f-9df9-d059abacf40c%2Fresources%2Fwait_handle?Timestamp=2015-11-12T05%3A12%3A27Z=HmacSHA256=65657d1021e24e49ba4fb6f217ca4a22=2=aCG%2FO04MNLzSlf5gIBGw1hMcC7bQzB3pZXVKzXLLNSo%3D
 HTTP/1.1" 403 301 0.043961

  For reference, the curl command to trigger the signal is: curl -H
  "Content-Type:" -X PUT "

[Yahoo-eng-team] [Bug 1448602] Re: Policy related operations of Identity v3 API in API Complete Reference need modification.

2015-11-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/230627
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=19ba28ae497b120497f4fa661ca2376f57a1ce95
Submitter: Jenkins
Branch: master

commit 19ba28ae497b120497f4fa661ca2376f57a1ce95
Author: Diane Fleming 
Date:   Fri Oct 2 15:04:13 2015 -0500

Update Identity v3 to match spec

Match the spec here:

http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#openstack-identity-api-v3

Updated calls, parameters, faults, descriptions, and samples for

* API versions
* Credentials
* Domains
* Endpoints
* Groups
* Policies
* Projects
* Regions
* Roles
* Role assignments
* Services
* Tokens
* Users

Code samples: Removed unused code samples.

Change-Id: Ida11399456a4fa6ccff6336a02005192e1897a54
Closes-Bug: #1448602
Closes-Bug: #1513587
Closes-Bug: #1512305


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1448602

Title:
  Policy related operations of Identity v3 API in API Complete Reference
  need modification.

Status in OpenStack Identity (keystone):
  Triaged
Status in openstack-api-site:
  Fix Released

Bug description:
  Recently, when I was trying to construct policy-related HTTP requests
  according to the API Complete Reference's Identity v3 API page, the
  requests I constructed were not accepted by Keystone.

  So I used openstackclient to do what I wanted and captured the network
  packets using Wireshark. I found that the correct requests and
  responses are actually not the ones recorded in the API Complete
  Reference.

  For example, in order to create a policy, the API details shown in the
  API Complete Reference (http://developer.openstack.org/api-ref-
  identity-v3.html#createPolicy) indicate that "project_id" and
  "user_id" are needed, while actually they are not. The HTTP requests
  constructed by openstackclient are the same as those in Keystone's API
  guide; again taking policy creation as an example:
  http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
  api-v3.html#create-policy
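
  A spec-style create-policy request looks roughly like this (a sketch;
  the endpoint and token are placeholders and the blob content is only an
  illustration); note that no project_id or user_id is required:

    curl -s -X POST http://<keystone-host>:5000/v3/policies \
      -H "X-Auth-Token: <admin-token>" \
      -H "Content-Type: application/json" \
      -d '{"policy": {"type": "application/json", "blob": "{\"default\": \"role:admin\"}"}}'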

  Thus, I think the policy-related API reference in the API Complete
  Reference is outdated and should be updated to match Keystone's API
  specs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512305] Re: keystone api-site is out of date

2015-11-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/230627
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=19ba28ae497b120497f4fa661ca2376f57a1ce95
Submitter: Jenkins
Branch: master

commit 19ba28ae497b120497f4fa661ca2376f57a1ce95
Author: Diane Fleming 
Date:   Fri Oct 2 15:04:13 2015 -0500

Update Identity v3 to match spec

Match the spec here:

http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#openstack-identity-api-v3

Updated calls, parameters, faults, descriptions, and samples for

* API versions
* Credentials
* Domains
* Endpoints
* Groups
* Policies
* Projects
* Regions
* Roles
* Role assignments
* Services
* Tokens
* Users

Code samples: Removed unused code samples.

Change-Id: Ida11399456a4fa6ccff6336a02005192e1897a54
Closes-Bug: #1448602
Closes-Bug: #1513587
Closes-Bug: #1512305


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1512305

Title:
  keystone api-site is out of date

Status in OpenStack Identity (keystone):
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  http://docs.openstack.org/developer/keystone/api_curl_examples.html
  http://developer.openstack.org/api-ref-identity-v3.html
  Comparing the content of these links, you will find some differences;
  some attributes are missing in the docs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1512305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370177] Re: Lack of EC2 image attributes for volume backed snapshot.

2015-11-20 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370177

Title:
  Lack of EC2 image attributes for volume backed snapshot.

Status in ec2-api:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  For EBS images AWS returns device names, volume sizes, delete on
  termination flags in block device mapping structure.

  $ euca-describe-images ami-d13845e1
  IMAGE  ami-d13845e1  amazon/amzn-ami-hvm-2014.03.2.x86_64-ebs  amazon  available  public  x86_64  machine  ebs  hvm  xen
  BLOCKDEVICEMAPPING  /dev/xvda  snap-d15cde24  8  true

  The same in xml form:
  <blockDeviceMapping>
      <item>
          <deviceName>/dev/xvda</deviceName>
          <ebs>
              <snapshotId>snap-d15cde24</snapshotId>
              <volumeSize>8</volumeSize>
              <deleteOnTermination>true</deleteOnTermination>
              <volumeType>standard</volumeType>
          </ebs>
      </item>
  </blockDeviceMapping>

  But Nova doesn't do this now:

  $ euca-describe-images ami-000a
  IMAGE  ami-000a  None (sn-in)  ef3ddd7aa4b24cda974200baef02730b  available  private  machine  aki-0002  ari-0003  instance-store
  BLOCKDEVICEMAPPING  snap-0005

  The same in xml form:
  <blockDeviceMapping>
      <item>
          <ebs>
              <snapshotId>snap-0005</snapshotId>
          </ebs>
      </item>
  </blockDeviceMapping>

  NB. In Grizzly, device names and delete-on-termination flags were
  returned. This was changed by
  https://github.com/openstack/nova/commit/33e3d4c6b9e0b11500fe47d861110be1c1981572
  Now these attributes are not stored in instance snapshots, so there is
  no way to output them.

  The device name is the most critical attribute, because there is
  another compatibility issue (see
  https://bugs.launchpad.net/nova/+bug/1370250): Nova isn't able to
  adjust attributes of a volume being created at instance launch. For
  example, in AWS we can change a device's volume size and
  delete-on-termination flag by setting new values in the run-instances
  parameters. To identify the device in the image's block device mapping
  we use the device name. For example:
  euca-run-instances ... -b /dev/vda=:100
  runs an instance with the vda device increased to 100 GB.
  Thus, without device names in images, we have no chance to fix this
  compatibility problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370250] Re: Can not set volume attributes at instance launch by EC2 API

2015-11-20 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370250

Title:
  Can not set volume attributes at instance launch by EC2 API

Status in ec2-api:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  AWS allows changing block device attributes (such as volume size,
  delete-on-termination behavior, and whether the device exists at all)
  at instance launch.

  For example, image xxx has devices:
  vda, size 10, delete on termination
  vdb, size 100, delete on termination
  vdc, size 100, delete on termination
  We can run an instance by
  euca-run-instances ... xxx -b /dev/vda=:20 -b /dev/vdb=::false -b 
/dev/vdc=none
  to get the instance with devices:
  vda, size 20, delete on termination
  vdb, size 100, not delete on termination

  With Nova we currently get:
  $ euca-run-instances --instance-type m1.nano -b /dev/vda=::true ami-000a
  euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is 
Invalid: Unrecognized legacy format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273983] Re: Pagination not implemented for DescribeTags

2015-11-20 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273983

Title:
  Pagination not implemented for DescribeTags

Status in ec2-api:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Amazon's API documentation says DescribeTags supports MaxResults and
  NextToken parameters for pagination. There is no such thing in our API:
  http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeTags.html
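
  For reference, a paginated DescribeTags call in the AWS Query API looks
  roughly like this (a sketch; authentication parameters are elided and
  the token value comes from the previous response):

    https://ec2.amazonaws.com/?Action=DescribeTags&MaxResults=100
    https://ec2.amazonaws.com/?Action=DescribeTags&MaxResults=100&NextToken=<token-from-previous-response>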

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1273983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370901] Re: Nova EC2 doesn't create empty volume while launching an instance

2015-11-20 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370901

Title:
  Nova EC2 doesn't create empty volume while launching an instance

Status in ec2-api:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  AWS is able to create and attach a new empty volume while launching an 
instance. See 
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RunInstances.html:
  ---
  To create an empty Amazon EBS volume, omit the snapshot ID and specify a 
volume size instead. For example: "/dev/sdh=:20".
  ---
  This can be set via run_instances parameters, or via the image's block
  device mapping structure.

  But Nova EC2 isn't able to do this:

  $ euca-run-instances --instance-type m1.nano ami-0001 
--block-device-mapping /dev/vdd=:1
  euca-run-instances: error (InvalidBDMFormat): Block Device Mapping is 
Invalid: Unrecognized legacy format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/1370901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518321] [NEW] cannot set injected_files in build_instance pre hook

2015-11-20 Thread Richard Megginson
Public bug reported:

http://lists.openstack.org/pipermail/openstack-
dev/2015-November/079904.html

I have some code that uses the build_instance pre hook to set 
injected_files in the new instance.  With the kilo code, the argv[7] was 
passed as [] - so I could append/extend this value to add more 
injected_files.  With the latest code, this is passed as None, so I 
can't set it.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518321

Title:
  cannot set injected_files in build_instance pre hook

Status in OpenStack Compute (nova):
  New

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-November/079904.html

  I have some code that uses the build_instance pre hook to set 
  injected_files in the new instance.  With the kilo code, the argv[7] was 
  passed as [] - so I could append/extend this value to add more 
  injected_files.  With the latest code, this is passed as None, so I 
  can't set it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 829880] Re: object store doesn't like key with '/'

2015-11-20 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/829880

Title:
  object store doesn't like key with '/'

Status in ec2-api:
  Fix Released
Status in pyjuju:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in juju package in Ubuntu:
  Fix Released

Bug description:
  It looks like it should be correct given that it's taking a hash of the
  key for the filename in the bucket dir, but it seems to run afoul of
  something before it gets there. Sample script to reproduce (python+boto)
  against nova-objectstore (s3server.py):

  
  import boto
  import os
  from boto.s3.connection import S3Connection, OrdinaryCallingFormat

  s3 = S3Connection(
      aws_access_key_id=os.environ["EC2_ACCESS_KEY"],
      aws_secret_access_key=os.environ["EC2_SECRET_KEY"],
      # strip the "http://" scheme prefix to get host[:port]
      host=os.environ["S3_URL"][len("http://"):],
      is_secure=False,
      calling_format=OrdinaryCallingFormat())

  bucket = s3.create_bucket("es-testing-123")

  print "new key"
  key = bucket.new_key("abc.txt")
  key.set_contents_from_string("abcdef")

  print "new nested key"
  key = bucket.new_key("zoo/abc.txt")
  key.set_contents_from_string("abcdef")

  """
  Fails with
  S3ResponseError: 404 Not Found
  404 Not Found

  The resource could not be found.
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/ec2-api/+bug/829880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517818] Re: update rbac policy with any input when there is only 1 policy in system

2015-11-20 Thread Kevin Benton
This is a combination of bugs. Neutronclient is trying to look up
policies using the 'name' field, which doesn't exist on policies. But
even if it were using the correct 'id' field, filtering is broken for
UnionModels, which RBAC depends on.

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517818

Title:
  update rbac policy with any input when there is only 1 policy in
  system

Status in neutron:
  New
Status in python-neutronclient:
  In Progress

Bug description:
  I leave one policy in RBAC, created by the admin user. Still acting as
  the same user, I run "neutron rbac-update [any value]" and it returns
  an error.

  
  repro
  --
  neutron rbac-list
  +--------------------------------------+--------------------------------------+
  | id                                   | object_id                            |
  +--------------------------------------+--------------------------------------+
  | d14a977d-c19f-4bf5-abe1-d5820456385e | a80d09eb-9ef2-47a4-baac-90133894366a |
  +--------------------------------------+--------------------------------------+

  neutron rbac-update 222
  
-
  Conflict: RBAC policy on object a80d09eb-9ef2-47a4-baac-90133894366a cannot 
be removed because other objects depend on it.
  Details: Callback 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change failed 
with "Unable to reconfigure sharing settings for network 
a80d09eb-9ef2-47a4-baac-90133894366a. Multiple tenants are using it."
  log
  ---
  2015-11-19 10:05:43.024 ERROR neutron.callbacks.manager 
[req-99ef207b-7422-4bb7-a257-4c7ee00ee114 admin 
5d73438ed76a4399b8d2996a699146c5] Error during notification for 
neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change 
rbac-policy, before_update
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager Traceback (most 
recent call last):
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 142, in 
validate_network_rbac_policy_change
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager tenant_to_check = 
None
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 190, in 
ensure_no_tenant_ports_on_network
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager raise 
n_exc.InvalidSharedSetting(network=network_id)
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager InvalidSharedSetting: 
Unable to reconfigure sharing settings for network 
a80d09eb-9ef2-47a4-baac-90133894366a. Multiple tenants are using it.
  2015-11-19 10:05:43.024 TRACE neutron.callbacks.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518016] Re: Nova kilo requires concurrency 1.8.2 or better

2015-11-20 Thread James Page
** Changed in: python-oslo.concurrency (Ubuntu)
   Status: New => Triaged

** Changed in: python-oslo.concurrency (Ubuntu)
   Importance: Undecided => Critical

** Changed in: python-oslo.concurrency (Ubuntu)
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Also affects: python-oslo.concurrency (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Changed in: python-oslo.concurrency (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: python-oslo.concurrency (Ubuntu Vivid)
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: python-oslo.concurrency (Ubuntu Vivid)
   Importance: Undecided => Critical

** Changed in: python-oslo.concurrency (Ubuntu Vivid)
   Status: New => Triaged

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/kilo
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/kilo
   Status: New => Triaged

** Changed in: cloud-archive/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518016

Title:
  Nova kilo requires concurrency 1.8.2 or better

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in OpenStack Compute (nova):
  Incomplete
Status in python-oslo.concurrency package in Ubuntu:
  Fix Released
Status in python-oslo.concurrency source package in Vivid:
  Triaged

Bug description:
  The OpenStack Nova Kilo release requires oslo.concurrency 1.8.2 or
  higher; this is due to the addition of on_execute and on_completion to
  the execute(..) function.  The latest Ubuntu OpenStack Kilo packages
  currently contain code that depends on this newer release.  This
  results in a crash in some operations like resizes or migrations.

  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in 
_error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in 
resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, 
self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in 
copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, 
on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in 
execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, 
**kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 

[Yahoo-eng-team] [Bug 1505708] Re: [Sahara] Page with node group configs long parsed

2015-11-20 Thread Tatiana Ovchinnikova
This bug cannot be reproduced on devstack as of Nov 18, 2015. If it is
still present in your environment, please provide more information and
feel free to reopen it.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1505708

Title:
  [Sahara] Page with node group configs long parsed

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  ENVIRONMENT: devstack(13.10.2015)

  
  STEPS TO REPRODUCE:
  1. Navigate to "Node group templates"
  2. Click on "Create template"
  3. Select "Cloudera", "5.4.0"

  
  RESULT: The page takes a long time to load and parse (size ~10 MB)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1505708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518230] [NEW] AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter check flavor extra_specs which causes a conflict

2015-11-20 Thread SongRuixia
Public bug reported:

In nova.conf, when scheduler_default_filters includes both
ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter, I set
node_type=kvm on the flavor extra_specs and on aggregate1, then boot an
instance with that flavor, expecting the instance to be scheduled to the
hosts of aggregate1 (az1).  Instead the VM state is error, and the
scheduler log shows:
2015-11-20 09:55:53.577 12227 INFO nova.filters 
[req-b48f1a33-fc4d-4ea2-82a4-9a827e6a7a61 bf2c688edb5d41b0ba66e1a4fe510985 
db57c7f7bc7f420fb4f0c1663da05a42] Filter ComputeCapabilitiesFilter returned 0 
hosts
I think this is unreasonable: AggregateInstanceExtraSpecsFilter and
ComputeCapabilitiesFilter both check flavor extra_specs, which causes a
conflict.  I expect to be able to choose a matching host according to
flavor extra_specs and aggregate metadata.
ComputeCapabilitiesFilter always returns 0 hosts when the flavor
extra_specs add new properties.

test steps:
1. nova aggregate-list
+----+------------+-------------------+
| Id | Name       | Availability Zone |
+----+------------+-------------------+
| 1  | aggregate1 | az1               |
| 2  | aggregate2 | az2               |
+----+------------+-------------------+
2. nova aggregate-set-metadata aggregate1 node_type=kvm
Metadata has been successfully updated for aggregate 1.
+----+------------+-------------------+----------------------------------------+------------------------------------------+
| Id | Name       | Availability Zone | Hosts                                  | Metadata                                 |
+----+------------+-------------------+----------------------------------------+------------------------------------------+
| 1  | aggregate1 | az1               | '98f537e1af0d', 'SBCR-chenling-slot4'  | 'availability_zone=az1', 'node_type=kvm' |
+----+------------+-------------------+----------------------------------------+------------------------------------------+
3. nova flavor-key 2 set node_type=kvm

4. Add AggregateInstanceExtraSpecsFilter to scheduler_default_filters in nova.conf:
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter,AggregateInstanceExtraSpecsFilter
5. systemctl restart openstack-nova-scheduler.service
6. Boot an instance; ComputeCapabilitiesFilter returns 0 hosts:
nova boot --flavor 2 --boot-volume 8a2ba353-75a6-4eea-bf3c-35f8f42b94a2 --nic 
net-id=ab66d9e3-c6ee-49ec-b192-fdee4d41c088  test_vm1
scheduler log:
2015-11-20 09:55:53.559 12227 INFO nova.scheduler.filter_scheduler 
[req-b48f1a33-fc4d-4ea2-82a4-9a827e6a7a61 bf2c688edb5d41b0ba66e1a4fe510985 
db57c7f7bc7f420fb4f0c1663da05a42] Attempting to build 1 instance(s) uuids: 
[u'70eb860d-e0ce-4202-a250-6f15ccf2021a']
2015-11-20 09:55:53.577 12227 INFO nova.filters 
[req-b48f1a33-fc4d-4ea2-82a4-9a827e6a7a61 bf2c688edb5d41b0ba66e1a4fe510985 
db57c7f7bc7f420fb4f0c1663da05a42] Filter ComputeCapabilitiesFilter returned 0 
hosts
2015-11-20 09:55:53.578 12227 WARNING nova.scheduler.driver 
[req-b48f1a33-fc4d-4ea2-82a4-9a827e6a7a61 bf2c688edb5d41b0ba66e1a4fe510985 
db57c7f7bc7f420fb4f0c1663da05a42] [instance: 
70eb860d-e0ce-4202-a250-6f15ccf2021a] Setting instance to ERROR state.
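
A possible way to avoid the overlap (a sketch of a workaround, assuming
the scoped extra-spec syntax is honored in this release: the
aggregate_instance_extra_specs: prefix is consumed by
AggregateInstanceExtraSpecsFilter, while ComputeCapabilitiesFilter only
evaluates unscoped or capabilities-scoped keys) is to scope the flavor
key so that only the aggregate filter checks it:

nova flavor-key 2 unset node_type
nova flavor-key 2 set aggregate_instance_extra_specs:node_type=kvm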

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518230

Title:
  AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter check
  flavor extra_specs which causes a conflict

Status in OpenStack Compute (nova):
  New

Bug description:
  In nova.conf, when scheduler_default_filters includes both
  ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter, I set
  node_type=kvm on the flavor extra_specs and on aggregate1, then boot an
  instance with that flavor, expecting the instance to be scheduled to
  the hosts of aggregate1 (az1).  Instead the VM state is error, and the
  scheduler log shows:
  2015-11-20 09:55:53.577 12227 INFO nova.filters 
[req-b48f1a33-fc4d-4ea2-82a4-9a827e6a7a61 bf2c688edb5d41b0ba66e1a4fe510985 
db57c7f7bc7f420fb4f0c1663da05a42] Filter ComputeCapabilitiesFilter returned 0 
hosts
  I think this is unreasonable: AggregateInstanceExtraSpecsFilter and
  ComputeCapabilitiesFilter both check flavor extra_specs, which causes a
  conflict.  I expect to be able to choose a matching host according to
  flavor extra_specs and aggregate metadata.
  ComputeCapabilitiesFilter always returns 0 hosts when the flavor
  extra_specs add new properties.

  test steps:
  1. nova aggregate-list
  +----+------------+-------------------+
  | Id | Name       | Availability Zone |
  +----+------------+-------------------+
  | 1  | aggregate1 | az1               |
  | 2  | aggregate2 | az2               |
  +----+------------+-------------------+
  2. nova aggregate-set-metadata aggregate1 node_type=kvm
  Metadata has been successfully updated for aggregate 1.
  

[Yahoo-eng-team] [Bug 1518016] Re: Nova kilo requires concurrency 1.8.2 or better

2015-11-20 Thread James Page
oslo.concurrency 1.8.2 has been uploaded to vivid-proposed for SRU team
review; as soon as it's accepted into proposed we'll get it into the UCA
for kilo as well for testing.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: python-oslo.concurrency (Ubuntu)
   Status: Fix Released => Invalid

** Changed in: python-oslo.concurrency (Ubuntu)
   Importance: Critical => Undecided

** Changed in: python-oslo.concurrency (Ubuntu)
 Assignee: Corey Bryant (corey.bryant) => (unassigned)

** Changed in: python-oslo.concurrency (Ubuntu Vivid)
 Assignee: Corey Bryant (corey.bryant) => (unassigned)

** Changed in: cloud-archive/kilo
   Importance: High => Critical

** Summary changed:

- Nova kilo requires concurrency 1.8.2 or better
+ [SRU] Nova kilo requires concurrency 1.8.2 or better

** Description changed:

- OpenStack Nova Kilo release requires 1.8.2 or higher, this is due to the
- addition of on_execute and on_completion to the execute(..) function.
- The latest Ubuntu OpenStack Kilo packages currently have code that
- depend on this new updated release.  This results in a crash in some
- operations like resizes or migrations.
+ [Impact]
+ Some operations on instances will fail due to missing functions in 
oslo-concurrency 1.8.0 that the latest Nova stable release requires.
+ 
+ [Test Case]
+ Resize or migrate an instance on the latest stable kilo updates
+ 
+ [Regression Potential]
+ Minimal - this is recommended and tested upstream already.
+ 
+ [Original Bug Report]
+ OpenStack Nova Kilo release requires 1.8.2 or higher, this is due to the 
addition of on_execute and on_completion to the execute(..) function.  The 
latest Ubuntu OpenStack Kilo packages currently have code that depend on this 
new updated release.  This results in a crash in some operations like resizes 
or migrations.
  
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in 
_error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in 
resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, 
self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in 
copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, 
on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in 
execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, 
**kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 174, 
in 

[Yahoo-eng-team] [Bug 1241027] Re: Intermittent Selenium unit test timeout error

2015-11-20 Thread Thomas Goirand
I've reopened the issue, as there's no sign that it was fixed.

** Changed in: horizon
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241027

Title:
  Intermittent Selenium unit test timeout error

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  I have the following error *SOMETIMES* (e.g. sometimes it does work,
  sometimes it doesn't):

  This is surprising, because python-selenium, which is non-free, isn't
  installed in my environment, and we were supposed to have a patch to
  not use it when it is detected to be absent.

  Since there's a 2-second timeout, it probably happens when my server
  is busy. I would suggest first trying to increase this timeout to
  something like 5 seconds or so...

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 227, in run
  self.tearDown()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 350, in
  tearDown
  self.teardownContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 366, in
  teardownContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 179, in tearDownClass
  super(SeleniumTestCase, cls).tearDownClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1170, in tearDownClass
  cls.server_thread.join()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1094, in join
  self.httpd.shutdown()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  984, in shutdown
  "Failed to shutdown the live test server in 2 seconds. The "
  RuntimeError: Failed to shutdown the live test server in 2 seconds. The
  server might be stuck or generating a slow response.

  The same way, there's this one, which must be related (or shall I say,
  due to the previous error?):

  ERROR: test suite for 
  --
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
  self.setUp()
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
  self.setupContext(ancestor)
File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in
  setupContext
  try_run(context, names)
File "/usr/lib/python2.7/dist-packages/nose/util.py", line 469, in try_run
  return func()
File
  
"/home/zigo/sources/openstack/havana/horizon/build-area/horizon-2013.2~rc3/horizon/test/helpers.py",
  line 173, in setUpClass
  super(SeleniumTestCase, cls).setUpClass()
File "/usr/lib/python2.7/dist-packages/django/test/testcases.py", line
  1160, in setUpClass
  raise cls.server_thread.error
  WSGIServerException: [Errno 98] Address already in use

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514576] Re: Nova volume-attach should also allow user to provide volume name for the input.

2015-11-20 Thread Markus Zoeller (markus_z)
The REST API of Nova only accepts UUIDs [1]. The translation from object
name to its UUID (like volume name to its UUID) is usually done via the
novaclient. That's why I added the python-novaclient as affected project
and marked it as invalid for the Nova project.

[1] http://developer.openstack.org/api-ref-compute-v2.1.html#attach
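
Until the client grows that lookup for volumes, the UUID can be resolved
manually, for example (a sketch using the names from the report; the
cinder client already accepts a name for "show"):

  cinder show my-volume | grep ' id '      # note the volume's UUID
  nova volume-attach test_vm <volume-uuid>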

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1514576

Title:
  Nova volume-attach should also allow user to provide volume name for
  the input.

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  New

Bug description:
  Nova version 2.32.0

  When we want to attach an existing volume to an existing instance,
  only the volume UUID is accepted to specify the volume to attach.

  For example:

  name of the instance: test_vm
  name of the volume: my-volume

  I ran:
  nova volume-attach test_vm my-volume

  Expected result:
  it should allow instance to attach the volume.

  Actual result:
  ERROR (NotFound): Volume my-volume could not be found. (HTTP 404) 
(Request-ID: req-0ba9225c-cc39-44b6-a85b-758cc2c97e2c)

  
  It should also accept the volume name, if it is unique among all
  volume names, just as it allows an instance name instead of a UUID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1514576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518267] [NEW] let nova client specify version

2015-11-20 Thread jichenjc
Public bug reported:

curl -g -i -X DELETE
http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/os-
tenant-networks/e91c9951-ba75-4d83-bc43-ff3e86c2ffbb -H "User-Agent:
python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-
API-Version: 2.6" -H "X-Auth-Token:
{SHA1}d177bfb5ab231cfc1215f8acd26cbc3a5967ff14"

By default novaclient now uses 2.6, and there is no way to request a
version like 2.1 or 2.2, so we should add an option to accept it as a
parameter (and, of course, validate it against the current maximum,
2.6).
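
In the meantime, a specific microversion can be forced per request with
the same header novaclient already sends, for example (a sketch; the
tenant id, resource and token are placeholders):

curl -g -i -X GET http://192.168.122.239:8774/v2.1/<tenant-id>/servers \
  -H "Accept: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.1" \
  -H "X-Auth-Token: <token>"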

** Affects: python-novaclient
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

** Project changed: nova => python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518267

Title:
  let nova client specify version

Status in python-novaclient:
  New

Bug description:
  curl -g -i -X DELETE
  http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/os-
  tenant-networks/e91c9951-ba75-4d83-bc43-ff3e86c2ffbb -H "User-Agent:
  python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-
  API-Version: 2.6" -H "X-Auth-Token:
  {SHA1}d177bfb5ab231cfc1215f8acd26cbc3a5967ff14"

  By default novaclient now uses 2.6, and there is no way to request a
  version like 2.1 or 2.2, so we should add an option to accept it as a
  parameter (and, of course, validate it against the current maximum,
  2.6).

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1518267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518016] Re: [SRU] Nova kilo requires concurrency 1.8.2 or better

2015-11-20 Thread Corey Bryant
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu)
   Status: New => Invalid

** Changed in: nova (Ubuntu Vivid)
   Status: New => Fix Committed

** Changed in: nova (Ubuntu Vivid)
   Status: Fix Committed => In Progress

** Changed in: nova (Ubuntu Vivid)
 Assignee: (unassigned) => Corey Bryant (corey.bryant)

** Changed in: nova (Ubuntu Vivid)
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518016

Title:
  [SRU] Nova kilo requires concurrency 1.8.2 or better

Status in Ubuntu Cloud Archive:
  In Progress
Status in Ubuntu Cloud Archive kilo series:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in python-oslo.concurrency package in Ubuntu:
  Invalid
Status in nova source package in Vivid:
  In Progress
Status in python-oslo.concurrency source package in Vivid:
  In Progress

Bug description:
  [Impact]
  Some operations on instances will fail due to missing functions in 
oslo-concurrency 1.8.0 that the latest Nova stable release requires.

  [Test Case]
  Resize or migrate an instance on the latest stable kilo updates
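  For example (instance and flavor names are placeholders):
    nova migrate <instance>
    # or
    nova resize <instance> <new-flavor>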

  [Regression Potential]
  Minimal - this is recommended and tested upstream already.

  [Original Bug Report]
  OpenStack Nova Kilo release requires 1.8.2 or higher, this is due to the 
addition of on_execute and on_completion to the execute(..) function.  The 
latest Ubuntu OpenStack Kilo packages currently have code that depend on this 
new updated release.  This results in a crash in some operations like resizes 
or migrations.

  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] Traceback (most recent call last):
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6459, in 
_error_out_instance_on_exception
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] yield
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4054, in 
resize_instance
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] timeout, retry_interval)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6353, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] shared_storage)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] six.reraise(self.type_, self.value, 
self.tb)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6342, in 
migrate_disk_and_power_off
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 329, in 
copy_image
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] on_execute=on_execute, 
on_completion=on_completion)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 55, in 
execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return utils.execute(*args, **kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/nova/utils.py", line 207, in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24] return processutils.execute(*cmd, 
**kwargs)
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager [instance: 
c04c1cf3-fbd9-40fd-be2e-e7dc06eb9f24]   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py", line 174, 
in execute
  2015-11-19 16:26:24.103 7779 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1501914] Re: Liberty devstack failed to launch instance w/ NetApp eSeries.

2015-11-20 Thread Alex Meade
After working with Hong directly, we have determined this was an
environment issue.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501914

Title:
  Liberty devstack failed to launch instance w/ NetApp eSeries.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. Exact version of Nova/OpenStack you are running:

  Liberty Devstack

  commit f4485bae9c719ee6b0c243cf5a69a6461df0bf23
  Merge: ace1e8f e5a6f82
  Author: Jenkins 
  Date:   Thu Oct 1 07:14:41 2015 +

  Merge "Cleanup nova v2.1 API testing options"

  
  2. Relevant log files:  n-cpu.log file is in the attachment.

  3. Reproduce steps:
  - Setup is running with Liberty devstack version on Ubuntu 14.04.
  - Connected to a NetApp eSeries (iSCSI) for storage.  (Using multipath to 
manage the array)
  - Launch an instance from Horizon, by selecting "launch instance", input an 
Instance Name, m1.small, Instance count: 1, select "Boot from image (creates a 
new volume)", select "cirros..." image, default size(20G for small), then click 
on "Launch"

  - The launch instance failed with the following error:

  Error: Failed to perform requested operation on instance "testvm", the
  instance has an error status: Please try again later [Error: Build of
  instance 1304643b-f8f2-4894-89d8-26c1b8d3e438 aborted: Block Device
  Mapping is Invalid.].

  It seems the host failed to get the new disk from the eSeries storage.

  Did some more tests with the following observation:

  When I create a new (1st) volume with a certain image (cirros), the host
created a host entry on the array, started the iSCSI sessions, and mapped the
volume.  But then the iSCSI sessions disconnected and the host failed to
discover the volume; "sudo multipath -ll" did not list any volume.
   
  Then I tried to create a 2nd instance; the host restarted the iSCSI sessions,
created and mapped a new (2nd) volume.  This time the host discovered the first
volume, but not the newly created (2nd) volume.  Also, the iSCSI sessions
stayed up this time; they didn't get disconnected.
   
  It seems like there might be a problem with the order in which the newly
added volume is discovered; the discover/rescan command is being used too
early.

  Also, I tried the same with the Kilo Devstack version, and that version
  works fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242620] Re: "Unable to add token to revocation list" warning happened when revoking token in memcache

2015-11-20 Thread Adam Young
Moving to Fernet tokens.  Revocations will be handled by revocation
events, not revocation list.  Memcache as a storage mechanism for PKI
tokens was deeply flawed, as dropping tokens from Memcache effectively
unrevoked them.

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1242620

Title:
  "Unable to add token to revocation list" warning happened when
  revoking token in memcache

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  The memcache backend is used to store tokens. When revoking a token, the
following error is reported:
  "Unable to add token to revocation list"

  As a result, the revoked token could not be added to the revocation-list in
memcache, although the token was actually revoked.
  I found this warning always happens when the size of the value of the
revocation-list key in memcache is about 512K.

  Expected result:
  No warning or exception should be raised when revoking a token.
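
  For reference, this behaviour is consistent with memcached's per-item size
  limit (1 MB by default, and possibly lower in a given deployment): values
  that exceed the limit are silently dropped rather than stored, and the ~512K
  threshold observed here presumably depends on the deployment's memcached
  configuration and serialization overhead. A minimal sketch of that failure
  mode, assuming python-memcached and a local memcached server (illustrative
  only, not keystone code):

  import memcache

  mc = memcache.Client(['127.0.0.1:11211'])

  small = 'x' * (256 * 1024)       # comfortably under the item size limit
  huge = 'x' * (2 * 1024 * 1024)   # over the default 1 MB item size limit

  print(mc.set('revocation-list', small))  # truthy: value stored
  print(mc.set('revocation-list', huge))   # falsy: value silently not stored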

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1242620/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450940] Re: Refactor angular features enablement

2015-11-20 Thread Travis Tripp
** Changed in: horizon
   Importance: Medium => Wishlist

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Travis Tripp (travis-tripp) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450940

Title:
  Refactor angular features enablement

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  LAUNCH_INSTANCE_NG_ENABLED & LAUNCH_INSTANCE_LEGACY_ENABLED
  were added as new feature toggles in Kilo. We added multiple
  spots in the Python code that look them up in a rather non-declarative
  and non-abstract way. We also just created a generic settings
  angular service. This worked, but now we are thinking about
  ways to standardize on this concept, so that a common features
  API on the Python side and the angular side can be used to abstract
  looking up whether or not a feature is enabled. This provides
  better abstraction and isolation for code that needs to know
  whether a feature is enabled, and will allow more standardization
  or logic in the future to determine whether a feature is enabled,
  with less likelihood of having to rewrite all existing code.

  This is not intended to replace all existing settings in settings.py.

  For example, current feature lookup on python side looks like this:

  getattr(settings, 'LAUNCH_INSTANCE_LEGACY_ENABLED', True):

  It would be better if we can simply say:

  features.enabled('LAUNCH_INSTANCE_LEGACY', True)

  Similarly on the angular side, you inject a settings service which has
  a pretty direct binding to the underlying python settings.  Using and
  injecting a Feature service will allow us to use different
  methodologies in the future without having to change code.  It will
  provide an abstraction layer.

  To support standard settings service lookups, we will create a common
  features area under local_settings.py with the following structure:

  FEATURE = {
      'LAUNCH_INSTANCE_NG': {
          'enabled': True,
      },
      'IDENTITY_USERS_TABLE_NG': {
          'enabled': True,
      }
  }

  This will enable simple lookup using a python utility or the angular
  settings service via a helper function.

  This structure will enable much richer future fields to be added to
  describe the feature, its status, etc.  e.g.:

  FEATURES = {
      'LAUNCH_INSTANCE_NG': {
          'enabled': True,
          'description': 'super cool next gen launch instance',
          'status': 'beta'
      }
  }

  Initially, all that will determine whether a feature is enabled will be the
  'enabled' toggle. But in the future other fields or logic could be
  used without disturbing the code that uses the feature utils or
  featureService.
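
  A minimal sketch of what the Python-side helper could look like, assuming
  the FEATURES structure above lives in the Django settings (the module and
  helper names here are illustrative, not the actual Horizon implementation):

  # features.py -- illustrative sketch only
  from django.conf import settings

  def enabled(feature_name, default=False):
      """Return True if FEATURES[feature_name]['enabled'] is truthy."""
      features = getattr(settings, 'FEATURES', {})
      return bool(features.get(feature_name, {}).get('enabled', default))

  # Usage, mirroring the example above:
  #   if enabled('LAUNCH_INSTANCE_LEGACY', True):
  #       ...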

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450940/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518382] [NEW] nova list --deleted fails with a 500 response

2015-11-20 Thread Matt Riedemann
Public bug reported:

Nova version (mitaka):

stack@archive:/opt/stack/nova$ git log -1
commit f6a5a43e06c2af6325d7a3552c71e968565684fc
Merge: f268cf5 0df6fba
Author: Jenkins 
Date:   Mon Nov 16 18:01:40 2015 +

Merge "Remove duplicate server.kill on test shutdown"
stack@archive:/opt/stack/nova$


python-novaclient version:

stack@archive:/opt/stack/nova$ pip show python-novaclient
---
Metadata-Version: 2.0
Name: python-novaclient
Version: 2.35.0
Summary: Client library for OpenStack Compute API
Home-page: https://www.openstack.org
Author: OpenStack
Author-email: openstack-...@lists.openstack.org
License: Apache License, Version 2.0
Location: /usr/local/lib/python2.7/dist-packages
Requires: oslo.i18n, oslo.serialization, python-keystoneclient, argparse, 
Babel, oslo.utils, iso8601, requests, pbr, six, PrettyTable, simplejson
stack@archive:/opt/stack/nova$


As a non-admin user, I created some servers and deleted them:

mysql> select display_name,uuid,deleted from nova.instances;
+--------------+--------------------------------------+---------+
| display_name | uuid                                 | deleted |
+--------------+--------------------------------------+---------+
| test         | ca3e57c0-cf37-40ab-9322-4cde6fe242d9 |       1 |
| test2        | 8d555e3b-41db-495d-8d03-4918a011472c |       2 |
| test3        | ef4c5911-958a-47e7-b836-42da9d32c209 |       3 |
+--------------+--------------------------------------+---------+
3 rows in set (0.00 sec)

As the admin user, I should be able to list them using 'nova list
--deleted', but that fails with a 500 because an InstanceNotFound is not
handled in the API code:

2015-11-20 16:26:11.033 ERROR nova.api.openstack.extensions 
[req-aaea6f78-bf6d-4c35-ad46-decba714767b admin demo] Unexpected exception in 
API method
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 280, in detail
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=True)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 406, in 
_get_servers
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions response = 
self._view_builder.detail(req, instance_list)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 151, in 
detail
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return 
self._list_view(self.show, request, instances, coll_name)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 163, in 
_list_view
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions server_list = 
[func(request, server)["server"] for server in servers]
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 291, in show
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions "flavor": 
self._get_flavor(request, instance),
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 223, in 
_get_flavor
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions instance_type = 
instance.get_flavor()
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 863, in get_flavor
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return 
getattr(self, attr)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
66, in getter
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions 
self.obj_load_attr(name)
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 853, in obj_load_attr
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions 
self._load_flavor()
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 744, in _load_flavor
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions 
expected_attrs=['flavor', 'system_metadata'])
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
171, in wrapper
2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions result = 
fn(cls, context, 

[Yahoo-eng-team] [Bug 1518382] Re: nova list --deleted fails with a 500 response

2015-11-20 Thread Matt Riedemann
This is invalid, it's a side effect of bug 1183523 where
archive_deleted_rows deletes some things but not others (like instances)
because of foreign key constraints.

So the problem I was having was the instance_extra table was empty, and
that's where the flavor information is stored for the instance.

When I cleaned my DB and re-did the scenario:

1. boot instance
2. delete instance
3. nova list --deleted

That works (as admin for #3).  So marking this invalid.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518382

Title:
  nova list --deleted fails with a 500 response

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova version (mitaka):

  stack@archive:/opt/stack/nova$ git log -1
  commit f6a5a43e06c2af6325d7a3552c71e968565684fc
  Merge: f268cf5 0df6fba
  Author: Jenkins 
  Date:   Mon Nov 16 18:01:40 2015 +

  Merge "Remove duplicate server.kill on test shutdown"
  stack@archive:/opt/stack/nova$


  python-novaclient version:

  stack@archive:/opt/stack/nova$ pip show python-novaclient
  ---
  Metadata-Version: 2.0
  Name: python-novaclient
  Version: 2.35.0
  Summary: Client library for OpenStack Compute API
  Home-page: https://www.openstack.org
  Author: OpenStack
  Author-email: openstack-...@lists.openstack.org
  License: Apache License, Version 2.0
  Location: /usr/local/lib/python2.7/dist-packages
  Requires: oslo.i18n, oslo.serialization, python-keystoneclient, argparse, 
Babel, oslo.utils, iso8601, requests, pbr, six, PrettyTable, simplejson
  stack@archive:/opt/stack/nova$

  
  As a non-admin user, I created some servers and deleted them:

  mysql> select display_name,uuid,deleted from nova.instances;
  +--------------+--------------------------------------+---------+
  | display_name | uuid                                 | deleted |
  +--------------+--------------------------------------+---------+
  | test         | ca3e57c0-cf37-40ab-9322-4cde6fe242d9 |       1 |
  | test2        | 8d555e3b-41db-495d-8d03-4918a011472c |       2 |
  | test3        | ef4c5911-958a-47e7-b836-42da9d32c209 |       3 |
  +--------------+--------------------------------------+---------+
  3 rows in set (0.00 sec)

  As the admin user, I should be able to list them using 'nova list
  --deleted', but that fails with a 500 because an InstanceNotFound is
  not handled in the API code:

  2015-11-20 16:26:11.033 ERROR nova.api.openstack.extensions 
[req-aaea6f78-bf6d-4c35-ad46-decba714767b admin demo] Unexpected exception in 
API method
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 280, in detail
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=True)
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 406, in 
_get_servers
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions response = 
self._view_builder.detail(req, instance_list)
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 151, in 
detail
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return 
self._list_view(self.show, request, instances, coll_name)
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 163, in 
_list_view
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions server_list = 
[func(request, server)["server"] for server in servers]
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 291, in show
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions "flavor": 
self._get_flavor(request, instance),
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/views/servers.py", line 223, in 
_get_flavor
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions instance_type 
= instance.get_flavor()
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 863, in get_flavor
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions return 
getattr(self, attr)
  2015-11-20 16:26:11.033 TRACE nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1518436] [NEW] RFE: non-admins should be able to get their deleted instances

2015-11-20 Thread Matt Riedemann
Public bug reported:

Listing deleted instances is admin only, but it's not clear why non-
admins can't list deleted instances in their own project/tenant.  This
should be policy driven so that non-admins can list the deleted
instances in their project.

I'm not exactly sure where this is enforced in the code, however. It
doesn't fail, it just doesn't return anything:

stack@archive:~/devstack$ nova list --deleted
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+


This is slightly different but very explicit:

https://github.com/openstack/nova/blob/12.0.0/nova/api/openstack/compute/servers.py#L335-L340

Results in:

stack@archive:~/devstack$ nova list --deleted --status 'deleted'
ERROR (Forbidden): Only administrators may list deleted instances (HTTP 403) 
(Request-ID: req-fb8ed625-2f2d-45ff-87cd-b5571cdf1dac)
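
To illustrate what "policy driven" could mean here: the hard-coded admin check
could, in principle, be replaced by an oslo.policy rule that operators can
relax for project members. A rough sketch only -- the rule name, target and
credential keys below are illustrative, not actual nova code:

# illustrative sketch, not nova code
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)  # loads the configured policy file

def can_list_deleted(context):
    target = {'project_id': context.project_id}
    creds = {'roles': context.roles,
             'project_id': context.project_id,
             'is_admin': context.is_admin}
    # Hypothetical rule name; the default would stay admin-only unless the
    # operator relaxes it in policy.json.
    return enforcer.enforce('compute:servers:list_deleted', target, creds)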

** Affects: nova
 Importance: Wishlist
 Status: Invalid


** Tags: api rfe

** Description changed:

  Listing deleted instances is admin only, but it's not clear why non-
  admins can't list deleted instances in their own project/tenant.  This
  should be policy driven so that non-admins can list the deleted
  instances in their project.
+ 
+ I'm not exactly sure where this is enforced in the code, however. It
+ doesn't fail, it just doesn't return anything:
+ 
+ stack@archive:~/devstack$ nova list --deleted
+ ++--+++-+--+
+ | ID | Name | Status | Task State | Power State | Networks |
+ ++--+++-+--+
+ ++--+++-+--+
+ 
+ 
+ This is slightly different but very explicit:
+ 
+ 
https://github.com/openstack/nova/blob/12.0.0/nova/api/openstack/compute/servers.py#L335-L340
+ 
+ Results in:
+ 
+ stack@archive:~/devstack$ nova list --deleted --status 'deleted'
+ ERROR (Forbidden): Only administrators may list deleted instances (HTTP 403) 
(Request-ID: req-fb8ed625-2f2d-45ff-87cd-b5571cdf1dac)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518436

Title:
  RFE: non-admins should be able to get their deleted instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Listing deleted instances is admin only, but it's not clear why non-
  admins can't list deleted instances in their own project/tenant.  This
  should be policy driven so that non-admins can list the deleted
  instances in their project.

  I'm not exactly sure where this is enforced in the code, however. It
  doesn't fail, it just doesn't return anything:

  stack@archive:~/devstack$ nova list --deleted
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  
  This is slightly different but very explicit:

  
https://github.com/openstack/nova/blob/12.0.0/nova/api/openstack/compute/servers.py#L335-L340

  Results in:

  stack@archive:~/devstack$ nova list --deleted --status 'deleted'
  ERROR (Forbidden): Only administrators may list deleted instances (HTTP 403) 
(Request-ID: req-fb8ed625-2f2d-45ff-87cd-b5571cdf1dac)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518443] [NEW] full stack test_ha_router failing

2015-11-20 Thread Manjeet Singh Bhatia
Public bug reported:

For some reason test_ha_router is not able to schedule the router to both
nodes. neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router is
failing on the gate as well as locally.

I tried tox -e dsvm-fullstack
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router

reason:

logs link.

http://paste.openstack.org/show/479625/

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- full stack test failing 
+ full stack test_ha_router failing

** Description changed:

- For some reason test 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router  is failing 
on gate
- as well as locally 
+ For some reason test_ha_router is not able to schedule router to both nodes. 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router  is failing 
on gate
+ as well as locally
  
  I tried tox -e dsvm-fullstack
  neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router
  
  reason:
  
  logs link.
  
  http://paste.openstack.org/show/479625/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518443

Title:
  full stack test_ha_router failing

Status in neutron:
  New

Bug description:
  For some reason test_ha_router is not able to schedule the router to both
  nodes. neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router is
  failing on the gate as well as locally.

  I tried tox -e dsvm-fullstack
  neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router

  reason:

  logs link.

  http://paste.openstack.org/show/479625/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518444] [NEW] DVR: router namespace is not getting removed once all VMs from a compute node migrates to other node

2015-11-20 Thread Hardik Italia
Public bug reported:

Setup:
1)  Multi-node setup with 1 controller & 2 compute nodes running Linux+KVM.
2)  NFS for shared storage. (instances_path = 
/opt/stack/data/nova/instances is shared)
 
Steps:
1) Create 2 private networks.
2) Create a DVR router and add an interface to each of the above network.
3) Create 1st VM on private network 1 and on compute node1
4) Create 2nd VM on private network 2 and on compute node 2
5) Migrate VM2 from compute node 2 to compute node 1 (nova live-migrate VM2)
6) Notice that after VM2 migrates to compute node 1, the router namespace is
still there on compute node 2.


Example:

Before migration: VM11 & VM12 are hosted on the different compute nodes
(CN-1 & CN-2).

stack@CTL:~$ nova show vm11 | grep OS-EXT-SRV-ATTR:host
| OS-EXT-SRV-ATTR:host | CN-1   
  |
| OS-EXT-SRV-ATTR:hostname | vm11  
   |
stack@CTL:~$ nova show vm12 | grep OS-EXT-SRV-ATTR:host
| OS-EXT-SRV-ATTR:host | CN-2   
  |
| OS-EXT-SRV-ATTR:hostname | vm12   
  |


Router namespace is present on both the compute nodes:

stack@CN-1:~$ ip netns
qrouter-9d439e4e-c4c6-4901-ba32-0e793b4df3d8

stack@CN-2:~$ sudo ip netns
qrouter-9d439e4e-c4c6-4901-ba32-0e793b4df3d8


After migrating VM12 to CN-1:(Both VMs are now hosted on CN-1)

stack@CTL:~$ nova show vm11 | grep OS-EXT-SRV-ATTR:host
| OS-EXT-SRV-ATTR:host | CN-1   
  |
| OS-EXT-SRV-ATTR:hostname | vm11   
  |
stack@CTL:~$ nova show vm12 | grep OS-EXT-SRV-ATTR:host
| OS-EXT-SRV-ATTR:host | CN-1   
  |
| OS-EXT-SRV-ATTR:hostname | vm12   
  |


Router namespace is still present on the compute node2 which is not
hosting any VMs.

stack@CTL:~$ nova list
+--------------------------------------+------+--------+------------+-------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks   |
+--------------------------------------+------+--------+------------+-------------+------------+
| 0a2f82e0-3edd-47c5-aa24-a29d5b826a55 | vm11 | ACTIVE | -          | Running     | n1=1.1.1.4 |
| 1274d128-c39c-4598-a8f6-d4629a259bbc | vm12 | ACTIVE | -          | Running     | n2=2.2.2.3 |
+--------------------------------------+------+--------+------------+-------------+------------+

stack@CN-2:~/devstack$ sudo ip netns
qrouter-9d439e4e-c4c6-4901-ba32-0e793b4df3d8

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518444

Title:
  DVR: router namespace is not getting removed once all VMs from a
  compute node migrates to other node

Status in neutron:
  New

Bug description:
  Setup:
  1)  Multi-node setup with 1 controller & 2 compute nodes running Linux+KVM.
  2)  NFS for shared storage. (instances_path = 
/opt/stack/data/nova/instances is shared)
   
  Steps:
  1) Create 2 private networks.
  2) Create a DVR router and add an interface to each of the above network.
  3) Create 1st VM on private network 1 and on compute node1
  4) Create 2nd VM on private network 2 and on compute node 2
  5) Migrate VM2 from compute node 2 to compute node 1 (nova live-migrate VM2)
  6) Notice that after VM2 migrates to compute node1, router namespace is still 
there on the compute node 2.

  
  Example:

  Before migration: VM11 & VM12 are hosted on the different compute
  nodes (CN-1 & CN-2).

  stack@CTL:~$ nova show vm11 | grep OS-EXT-SRV-ATTR:host
  | OS-EXT-SRV-ATTR:host | CN-1 
|
  | OS-EXT-SRV-ATTR:hostname | vm11  
 |
  stack@CTL:~$ nova show vm12 | grep OS-EXT-SRV-ATTR:host
  | OS-EXT-SRV-ATTR:host | CN-2 
|
  | OS-EXT-SRV-ATTR:hostname | vm12 
|

  
  Router namespace is present on both the compute nodes:

  stack@CN-1:~$ ip netns
  qrouter-9d439e4e-c4c6-4901-ba32-0e793b4df3d8

  stack@CN-2:~$ sudo ip netns
  qrouter-9d439e4e-c4c6-4901-ba32-0e793b4df3d8

  
  After migrating VM12 to CN-1:(Both VMs are now hosted on CN-1)

  stack@CTL:~$ nova show vm11 | grep OS-EXT-SRV-ATTR:host
  | OS-EXT-SRV-ATTR:host | CN-1 
|
  | OS-EXT-SRV-ATTR:hostname | vm11 
|
  stack@CTL:~$ nova show vm12 | grep 

[Yahoo-eng-team] [Bug 1518453] [NEW] Could not find default role "_member_" in Keystone

2015-11-20 Thread Obed N Munoz
Public bug reported:


This exception is obtained  when one is trying to manage members of Projects.

Steps for replicating it in Horizon:

1. Login into Horizon as 'admin'
2. Go to the Identity tab
3. Click on 'Projects' link
4. From the Projects list, click on 'Manage Members' in the 'admin' project row.


Workaround:

1. Create the role _member_
2. Add 'admin' user to '_member_' role

Once I added the admin user to this role, it works. I'm not sure whether this
should be added to the documentation or whether there's an issue somewhere
else.
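
For reference, the workaround can also be scripted with python-keystoneclient;
a rough sketch, where the endpoint, credentials and domain IDs are placeholders
for your own deployment:

# sketch of the workaround using python-keystoneclient (v3)
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://127.0.0.1:35357/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default', project_domain_id='default')
ks = client.Client(session=session.Session(auth=auth))

role = ks.roles.create(name='_member_')           # 1. create the role
user = ks.users.find(name='admin')
project = ks.projects.find(name='admin')
ks.roles.grant(role, user=user, project=project)  # 2. add admin to the role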


Exception trace:

Nov 20 20:20:11 clr keystone-wsgi-admin[1754]: GET 
http://127.0.0.1:35357/v3/roles
Nov 20 20:20:11 clr uwsgi[1730]: 2015-11-20 20:20:11.252 1754 INFO 
keystone.common.wsgi [req-d35f0fa2-f5fe-463a-bbde-e7a90f1f72e0 - - - - -] GET 
http://127.0.0.1:35357/v3/roles
Nov 20 20:20:11 clr uwsgi[1730]: [pid: 1754|app: 0|req: 91/324] 127.0.0.1 () 
{42 vars in 580 bytes} [Fri Nov 20 20:20:11 2015] GET /v3/roles => generated 
392 bytes in 10 msecs (HTTP/1.1 200) 4 headers in 158 bytes (1 switches on core 
0)
Nov 20 20:20:11 clr nginx[2883]: clr nginx: 127.0.0.1 - - [20/Nov/2015:20:20:11 
+] "GET /v3/roles HTTP/1.1" 200 392 "-" "python-keystoneclient"
Nov 20 20:20:11 clr uwsgi[3970]: Recoverable error: Could not find default role 
"_member_" in Keystone
Nov 20 20:20:11 clr uwsgi[3970]: Problem instantiating action class.
Nov 20 20:20:11 clr uwsgi[3970]: Traceback (most recent call last):
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/horizon/workflows/base.py", line 370, in 
action
Nov 20 20:20:11 clr uwsgi[3970]: context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/share/httpd/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/identity/projects/workflows.py",
 line 208, in __init__
Nov 20 20:20:11 clr uwsgi[3970]: redirect=reverse(INDEX_URL))
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 368, in handle
Nov 20 20:20:11 clr uwsgi[3970]: log_method, log_entry, log_level)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 277, in 
handle_recoverable
Nov 20 20:20:11 clr uwsgi[3970]: raise Http302(redirect)
Nov 20 20:20:11 clr uwsgi[3970]: Http302
Nov 20 20:20:11 clr uwsgi[3970]: Internal Server Error: 
/identity/aaa6ed6cc2734ef29ee9b0373beedbdc/update/
Nov 20 20:20:11 clr uwsgi[3970]: Traceback (most recent call last):
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 164, in 
get_response
Nov 20 20:20:11 clr uwsgi[3970]: response = response.render()
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/response.py", line 158, in 
render
Nov 20 20:20:11 clr uwsgi[3970]: self.content = self.rendered_content
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/response.py", line 135, in 
rendered_content
Nov 20 20:20:11 clr uwsgi[3970]: content = template.render(context, 
self._request)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/backends/django.py", line 74, 
in render
Nov 20 20:20:11 clr uwsgi[3970]: return self.template.render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 210, in render
Nov 20 20:20:11 clr uwsgi[3970]: return self._render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 202, in _render
Nov 20 20:20:11 clr uwsgi[3970]: return self.nodelist.render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 905, in render
Nov 20 20:20:11 clr uwsgi[3970]: bit = self.render_node(node, context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 919, in 
render_node
Nov 20 20:20:11 clr uwsgi[3970]: return node.render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 576, in 
render
Nov 20 20:20:11 clr uwsgi[3970]: return self.nodelist.render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 905, in render
Nov 20 20:20:11 clr uwsgi[3970]: bit = self.render_node(node, context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/base.py", line 919, in 
render_node
Nov 20 20:20:11 clr uwsgi[3970]: return node.render(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 224, in 
render
Nov 20 20:20:11 clr uwsgi[3970]: nodelist.append(node.render(context))
Nov 20 20:20:11 clr uwsgi[3970]: File 
"/usr/lib/python2.7/site-packages/django/template/defaulttags.py", line 322, in 
render
Nov 20 20:20:11 clr uwsgi[3970]: match = condition.eval(context)
Nov 20 20:20:11 clr uwsgi[3970]: File 

[Yahoo-eng-team] [Bug 1518431] [NEW] Glance failed to upload image to swift storage via RadosGW

2015-11-20 Thread Andrey Shestakov
Public bug reported:

When glance is configured with the swift backend and the swift API is provided
via RadosGW, it is unable to upload an image.

Command:
glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
Logs:
http://paste.openstack.org/show/479621/

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1518431

Title:
  Glance failed to upload image to swift storage via RadosGW

Status in Glance:
  New

Bug description:
  When glance is configured with the swift backend and the swift API is
  provided via RadosGW, it is unable to upload an image.

  Command:
  glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
  Logs:
  http://paste.openstack.org/show/479621/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1518431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518436] Re: RFE: non-admins should be able to get their deleted instances

2015-11-20 Thread Matt Riedemann
Opened a blueprint instead:

https://blueprints.launchpad.net/nova/+spec/non-admin-list-deleted-
instances

And will create a backlog spec for this.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518436

Title:
  RFE: non-admins should be able to get their deleted instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Listing deleted instances is admin only, but it's not clear why non-
  admins can't list deleted instances in their own project/tenant.  This
  should be policy driven so that non-admins can list the deleted
  instances in their project.

  I'm not exactly sure where this is enforced in the code, however. It
  doesn't fail, it just doesn't return anything:

  stack@archive:~/devstack$ nova list --deleted
  +----+------+--------+------------+-------------+----------+
  | ID | Name | Status | Task State | Power State | Networks |
  +----+------+--------+------------+-------------+----------+
  +----+------+--------+------------+-------------+----------+

  
  This is slightly different but very explicit:

  
https://github.com/openstack/nova/blob/12.0.0/nova/api/openstack/compute/servers.py#L335-L340

  Results in:

  stack@archive:~/devstack$ nova list --deleted --status 'deleted'
  ERROR (Forbidden): Only administrators may list deleted instances (HTTP 403) 
(Request-ID: req-fb8ed625-2f2d-45ff-87cd-b5571cdf1dac)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518455] [NEW] Top Nav shouldn't include bottom margin

2015-11-20 Thread Diana Whitten
Public bug reported:

In the 'default' theme, we remove the default bootstrap margin on the
top navbar because it creates a gap between the sidenav and the topnav.

This should be a global style so that other themes can take advantage of
it.

** Affects: horizon
 Importance: Undecided
 Assignee: Diana Whitten (hurgleburgler)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518455

Title:
  Top Nav shouldn't include bottom margin

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the 'default' theme, we remove the default bootstrap margin on the
  top navbar because it creates a gap between the sidenav and the
  topnav.

  This should be a global style so that other themes can take advantage
  of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518459] [NEW] "Manage Attachments" option should be available under instance actions in Compute Menu

2015-11-20 Thread Pushkar Umaranikar
Public bug reported:

"Manage Attachments" option for attaching Cinder volume to an instance is not 
available under instance actions. 
User has to switch to Compute -> Volume tab to attach volume to an instance. 
Instead we can add option for "Manage Attachments" under instance actions.

Steps to reproduce:
1) Log in to Horizon and navigate to the Compute -> Instances tab.
2) Launch an instance and wait for the instance status to become "active".
3) We can see various options under the Actions column for performing
operations on the instance.
4) The "Manage Attachments" option is not available.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1518459

Title:
  "Manage Attachments" option should be available under instance actions
  in Compute Menu

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  "Manage Attachments" option for attaching Cinder volume to an instance is not 
available under instance actions. 
  User has to switch to Compute -> Volume tab to attach volume to an instance. 
Instead we can add option for "Manage Attachments" under instance actions.

  Steps to reproduce:
  1) Log in to Horizon and navigate to the Compute -> Instances tab.
  2) Launch an instance and wait for the instance status to become "active".
  3) We can see various options under the Actions column for performing
  operations on the instance.
  4) The "Manage Attachments" option is not available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1518459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518466] [NEW] Fullstack connectivity tests fail intermittently

2015-11-20 Thread Assaf Muller
Public bug reported:

Fullstack test_connectivity_* fail at the gate from time to time. This
happens locally as well.

The test sets up a couple of fake VMs and issues a ping from one to
another. This ping can fail from time to time. I set a breakpoint
after such a failure, issued a ping myself, and it worked.

I think that the ping should be changed to a block/wait_until_ping.
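
Something along these lines, as a generic sketch -- the helper name and the
use of the system ping binary are just for illustration, not the actual
fullstack utility:

# illustrative sketch of a "block until ping succeeds" helper
import subprocess
import time

def wait_until_ping(dst_ip, timeout=30, interval=1):
    deadline = time.time() + timeout
    while time.time() < deadline:
        rc = subprocess.call(['ping', '-c', '1', '-W', '1', dst_ip],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if rc == 0:
            return True
        time.sleep(interval)
    raise RuntimeError('No ping reply from %s within %s seconds'
                       % (dst_ip, timeout))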

** Affects: neutron
 Importance: Low
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: fullstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518466

Title:
  Fullstack connectivity tests fail intermittently

Status in neutron:
  In Progress

Bug description:
  Fullstack test_connectivity_* fail at the gate from time to time. This
  happens locally as well.

  The test sets up a couple of fake VMs and issues a ping from one to
  another. This ping can fail from time to time. I set a breakpoint
  after such a failure, issued a ping myself, and it worked.

  I think that the ping should be changed to a block/wait_until_ping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420125] Re: href variables for federation controller are inconsistent

2015-11-20 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/248312

** Changed in: keystone
   Status: Won't Fix => In Progress

** Changed in: keystone
 Assignee: Matthieu Huin (mhu-s) => Jamie Lennox (jamielennox)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1420125

Title:
  href variables for federation controller are inconsistent

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  For the most part, the href variables seen in JSON home requests for
  federation resources are consistent,
  
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/routers.py
  they are usually idp_id, sp_id, protocol_id and mapping_id.

  Except for the following block:

  path=self._construct_url('identity_providers/{identity_provider}/'
                           'protocols/{protocol}/auth'),

  Where 'identity_provider' and 'protocol' are used instead of 'idp_id'
  and 'protocol_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1420125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


Re: [Yahoo-eng-team] [Question #273752]: Questions of the way that multipip solve python package conflicts

2015-11-20 Thread Launchpad Janitor
Question #273752 on anvil changed:
https://answers.launchpad.net/anvil/+question/273752

Status: Open => Expired

Launchpad Janitor expired the question:
This question was expired because it remained in the 'Open' state
without activity for the last 15 days.

-- 
You received this question notification because your team Yahoo!
Engineering Team is an answer contact for anvil.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358379] Re: drop_resize_claim() can't release the resource in some small window

2015-11-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358379

Title:
  drop_resize_claim() can't release the resource in some small window

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Currently the resize resource claim is achieved through resize_claim()
  and drop_resize_claim() pair. In theory, the claim should be released
  after the drop_resize_claim() be called. However, there is a small
  window that this release will not happen.

  Currently the RT tracks resource usage in two categories: the instances
hosted on the node (_update_usage_from_instances()) and the migrations in/out
of the node (_update_usage_from_migrations()).
  An instance hosted on the node is sure to have a resource claim, and an
in/out migration whose instance is not hosted on the node will also have a
resource claim. If a resize happens to the same host, then one claim will be
tracked on the instance side and another on the migration side. This audit
happens in the update_available_resource() periodic task.

  
  The current drop_resize_claim() implementation always assumes the related
resource is in the tracked migrations; however, this is not true if
drop_resize_claim() happens before the audit periodic task. Consider an audit
that happens at time t1 and (t1 + 60s), assuming the audit period is 60s, and
that between these two audit times an instance on this node is resized to
another node and the user confirms the resize too (i.e. this node is the
source node).

  Because the resize happened between runs of the audit periodic task, the RT
  has no idea about it and no migration is tracked. Thus when
  drop_resize_claim(prefix='old_') happens, it has no resource claim to
  release. The release will not happen until the next audit cycle, which will
  find that the instance is no longer hosted on the node.

  I'm not sure if this is really an issue. I think a) the result purely
  depends on the periodic task interval. If the periodic task interval is
  very long, it will cause resource waste, or in the worst situation, a
  potential DoS issue. But it should be ok if the periodic task interval is
  short. b) From an implementation point of view,
  drop_resize_claim(prefix='old_') returning successfully without releasing
  the resource is bogus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461827] Re: Fail to attach volumes using FC multipath

2015-11-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461827

Title:
  Fail to attach volumes using FC multipath

Status in OpenStack Compute (nova):
  Expired

Bug description:
  * Description

  Under FC-SAN multipath configuration, VM instances sometimes fail to
  grab volumes using multipath as expected.

  Because of this issue:
   - A single hardware failure in the FC fabric can be exposed to VM instances
regardless of the physical multipath configuration
   - Performance can also be affected if an active-active balancing policy is
configured

  
  * Version

  I found this issue while working on a stable/juno based OpenStack
  distribution, but I think master still has the same problem.

  
  * How to reproduce

Anyway, setup a Nova/Cinder deployment using Linux/KVM  with a
  multipath FC fabric.

 As I describe below, this problem happens when:

 1) multipathd is down when nova-compute tries to find the multipath device
   or
 2) It takes a long time for multipathd to configure multipath devices.
For example, a couple of minutes.
I think this happens for various reasons.

  * Expected results

 On the compute node hosting the VM in question, by using 'virsh
  dumpxml DOMAIN_ID', you can get the source path device name(s) of the
  virtual disk(s) attached to the VM instance and check whether the disks are
  multipath devices or not.

 Under an FC-SAN multipath environment, they are expected to be
  '/dev/mapper/X'. For example:

  | root@overcloud-ce-novacompute0-novacompute0-ueruxqghm5vm:~# virsh dumpxml 2 
| grep dev
  |
  |  
  |
  |  
  |  
  |  

  
  * Actual results

 Among the results of 'virsh dumpxml DOMAIN_ID', you sometimes (in
  my case, often) see non-multipath device path name(s) like the
  following.

  |
  |  
  |  
  |  
  |  
  |  d4d64f3c-bd43-4bc6-8a58-230d677c188b
  |  
  |  
  |

  
  * Analysis

 I think this comes from Nova's Fibre Channel volume connection handling
  code, the 'connect_volume' method.

In case of master,
   
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L1301
in case of upstream stable/juno

https://github.com/openstack/nova/blob/stable/juno/nova/virt/libvirt/volume.py#L1012

  
   The 'connect_volume' method above is in charge of connecting a LUN on the
host Linux side,
   and here is the problem.

After an FC storage box exports LUNs to a compute node, it takes time
until:
  (1) SCSI devices are discovered by the host Linux of the compute node
 and then
  (2) 'multipathd' detects and configures multipath devices using device
mapper

   'connect_volume' retries and waits for (1) above, but there is no
  retry logic for (2) above.

Thus, the nova-compute service sometimes fails to recognize multipath FC
  devices and attaches single-path devices to VM instances when this takes
  time.

  
  * Resolution / Discussion

I think we need to add retry logic for detecting and waiting for
  multipath device files in the 'connect_volume' method of nova's
  LibvirtFibreChannelDriver class.
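
  A rough sketch of what such a retry could look like -- illustrative only,
  not the actual nova change, and the helper name is made up:

  import os
  import time

  def wait_for_multipath_device(mpath_name, timeout=60, interval=2):
      """Wait until /dev/mapper/<mpath_name> appears, or return None."""
      path = os.path.join('/dev/mapper', mpath_name)
      deadline = time.time() + timeout
      while time.time() < deadline:
          if os.path.exists(path):
              return path
          time.sleep(interval)
      return None  # caller decides: fail the attach or degrade to single path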

  
In case of failure (timeout while detecting multipath devices), there could be
several options, I think.

choice 1) Make the attach_volume request fail.
If so, which HTTP status code?

choice 2) Go forward with a single path.
But, from the viewpoint of a service provider's SLA, this is a
degradation.
I'm wondering whether it's better to return an HTTP status code other than
HTTP 202 or not.

Maybe it's better to allow administrators to choose the expected
  behavior via nova.conf options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp