[Yahoo-eng-team] [Bug 1316744] Re: VPNAAS :Sometimes VM across the vpn sites are not able to ping from one side

2014-07-11 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316744

Title:
  VPNAAS :Sometimes VM across the vpn sites are not able to ping from
  one side

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Steps to Reproduce:
  1. Create a site with a VM and perform all the other VPN operations.
  neutron vpn-ipsecsite-connection list
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | id                                   | name           | peer_address   | peer_cidrs     | route_mode | auth_mode | status |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | d66dc9f8-6ffe-48da-a839-ee8dc9525cd1 | vpnconnection1 | $Peer_Address2 | "11.11.1.0/24" | static     | psk       | ACTIVE |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  neutron vpn-service-list
  +--------------------------------------+--------+--------------------------------------+--------+
  | id                                   | name   | router_id                            | status |
  +--------------------------------------+--------+--------------------------------------+--------+
  | 3d27724b-8de9-44df-9ebc-342c8c7d501f | myvpn1 | 041afb22-b78c-4d79-aeeb-8d493c03ddf7 | ACTIVE |
  +--------------------------------------+--------+--------------------------------------+--------+

  Site 2:
  Create a site with a VM and perform all the other VPN operations.
  neutron vpn-service-list
  +--------------------------------------+--------+--------------------------------------+--------+
  | id                                   | name   | router_id                            | status |
  +--------------------------------------+--------+--------------------------------------+--------+
  | 55371fdd-f9ab-418b-a55c-52f082de3700 | myvpn1 | a05743dc-678b-4334-bd00-927f978b079f | ACTIVE |
  +--------------------------------------+--------+--------------------------------------+--------+
  neutron vpn-ipsecsite-connection list
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | id                                   | name           | peer_address   | peer_cidrs     | route_mode | auth_mode | status |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
  | ce2cf55e-99a5-4cb4-9ff9-3c649dcf31ff | vpnconnection1 | $Peer_Address1 | "10.10.1.0/24" | static     | psk       | ACTIVE |
  +--------------------------------------+----------------+----------------+----------------+------------+-----------+--------+
   
  Try to ping the VM across the site.
  Actual Results:
  VM1 on site 1 is not able to ping VM2 on site 2.
  VM2 on site 2 is able to ping VM1 on site 1.

  Expected Results: VMs across the sites should be able to ping each other.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341042] [NEW] using neutron CLI to set a bad gateway result in dnsmasq continuously restart

2014-07-11 Thread Pauline Yeung
Public bug reported:

I'm using devstack stable/icehouse, and my neutron version is
1409da70959496375f1ac45457663a918ec8

I created an internal network not connected to the router, using the default
gateway and with DHCP enabled.
I then created a VM, which activated dnsmasq, which offered an IP to this VM.

If I mis-update the gateway, by going to Horizon, "Networks", "test-net",
"subnet-2", "Edit Subnet", and setting
  Gateway IP (optional): 10.100.100.100
or if I mis-update the gateway using the neutron CLI,
it results in neutron-dhcp-agent continuously restarting the dnsmasq for the
"private" network.

Horizon will not allow the user to mis-update the gateway if I set
"force_gateway_on_subnet = True" in /etc/neutron/neutron.conf.
But this parameter does not affect the behavior of the neutron CLI.
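
Below is a minimal sketch (not neutron's actual validation code) of the kind
of gateway-on-subnet check that force_gateway_on_subnet implies; it assumes
the netaddr library:

    import netaddr

    def gateway_on_subnet(cidr, gateway_ip):
        """Return True when the gateway address falls inside the subnet."""
        return netaddr.IPAddress(gateway_ip) in netaddr.IPNetwork(cidr)

    # The mis-update from this report would be rejected:
    assert gateway_on_subnet('10.10.150.0/24', '10.10.150.1')
    assert not gateway_on_subnet('10.10.150.0/24', '10.100.100.100')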


> neutron net-create test-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
| name           | test-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 8092813be8fd4122a20ee3a6bfe91162     |
+----------------+--------------------------------------+

> neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| 0dd5722d-f535-42ec-9257-437c05e4de25 | private  | 81859ee5-4ea5-4e60-ab2a-ba74146d39ba 10.0.0.0/24 |
| 27c1649d-f6fc-4893-837d-dbc293fc4b80 | public   | 6c1836a1-eb7d-4acb-ad6f-6c394cedced5             |
| b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a | test-net |                                                  |
+--------------------------------------+----------+--------------------------------------------------+

> neutron subnet-create --name subnet-2 test-net --enable_dhcp=True 10.10.150.0/24
Created a new subnet:
+------------------+---------------------------------------------------+
| Field            | Value                                             |
+------------------+---------------------------------------------------+
| allocation_pools | {"start": "10.10.150.2", "end": "10.10.150.254"}  |
| cidr             | 10.10.150.0/24                                    |
| dns_nameservers  |                                                   |
| enable_dhcp      | True                                              |
| gateway_ip       | 10.10.150.1                                       |
| host_routes      |                                                   |
| id               | 4e462af6-7514-413c-bf55-4ae9a1643ce9              |
| ip_version       | 4                                                 |
| name             | subnet-2                                          |
| network_id       | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a              |
| tenant_id        | 8092813be8fd4122a20ee3a6bfe91162                  |
+------------------+---------------------------------------------------+

> neutron subnet-list
+--------------------------------------+----------------+----------------+--------------------------------------------------+
| id                                   | name           | cidr           | allocation_pools                                 |
+--------------------------------------+----------------+----------------+--------------------------------------------------+
| 4e462af6-7514-413c-bf55-4ae9a1643ce9 | subnet-2       | 10.10.150.0/24 | {"start": "10.10.150.2", "end": "10.10.150.254"} |
| 81859ee5-4ea5-4e60-ab2a-ba74146d39ba | private-subnet | 10.0.0.0/24    | {"start": "10.0.0.2", "end": "10.0.0.254"}       |
+--------------------------------------+----------------+----------------+--------------------------------------------------+

> ps -ef |grep dnsmasq | grep -v grep
nobody     995   635  0 16:09 ?  00:00:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127.0.1.1 --conf-file=/var/run/NetworkManager/dnsmasq.conf --cache-size=0 --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq --conf-dir=/etc/NetworkManager/dnsmasq.d
nobody   12991     1  0 18:21 ?  00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap75786806-5a --except-interface=lo --pid-file=/opt/stack/data/neutron/dhcp/0dd5722d-f535-42ec-9257-437c05e4de25/pid --dhcp-hostsfile=/opt/stack/data/neutron/dhcp/0dd5722d-f535-42ec-9257-437c05e4de25/host --addn-hosts=/opt/stack/da

[Yahoo-eng-team] [Bug 1341040] [NEW] neutron CLI should not allow user to create /32 subnet

2014-07-11 Thread Pauline Yeung
Public bug reported:

I'm using devstack stable/icehouse, and my neutron version is
1409da70959496375f1ac45457663a918ec8

I created an internal network not connected to the router.  If I misconfigure
the subnet, Horizon will catch the problem, but the neutron CLI will not.
Subsequently a VM cannot be created on this misconfigured subnet, as it has
run out of IPs to offer to the VM.
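
A hedged sketch (assuming the netaddr library) of the validation the CLI
could perform before accepting such a subnet:

    import netaddr

    def subnet_too_small(cidr):
        net = netaddr.IPNetwork(cidr)
        # An IPv4 /31 or /32 leaves no allocatable addresses once the
        # network, broadcast, and gateway addresses are excluded.
        return net.version == 4 and net.prefixlen > 30

    assert subnet_too_small('10.10.150.0/32')
    assert not subnet_too_small('10.10.150.0/24')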

> neutron net-create test-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
| name           | test-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 8092813be8fd4122a20ee3a6bfe91162     |
+----------------+--------------------------------------+

If I use Horizon, go to "Networks", "test-net", "Create Subnet", then use 
parameters,
  Subnet Name: subnet-1
  Network Address: 10.10.150.0/32
  IP Version: IPv4
Horizon returns the error message "The subnet in the Network Address is too 
small (/32)."

If I use neutron CLI,

> neutron subnet-create --name subnet-1 test-net 10.10.150.0/32
Created a new subnet:
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| allocation_pools |                                      |
| cidr             | 10.10.150.0/32                       |
| dns_nameservers  |                                      |
| enable_dhcp      | True                                 |
| gateway_ip       | 10.10.150.1                          |
| host_routes      |                                      |
| id               | 4142ff1d-28de-4e77-b82b-89ae604190ae |
| ip_version       | 4                                    |
| name             | subnet-1                             |
| network_id       | b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a |
| tenant_id        | 8092813be8fd4122a20ee3a6bfe91162     |
+------------------+--------------------------------------+

> neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 0dd5722d-f535-42ec-9257-437c05e4de25 | private  | 81859ee5-4ea5-4e60-ab2a-ba74146d39ba 10.0.0.0/24    |
| 27c1649d-f6fc-4893-837d-dbc293fc4b80 | public   | 6c1836a1-eb7d-4acb-ad6f-6c394cedced5                |
| b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a | test-net | 4142ff1d-28de-4e77-b82b-89ae604190ae 10.10.150.0/32 |
+--------------------------------------+----------+-----------------------------------------------------+

> nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny --nic net-id=b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a vm2
:
:

> nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| d98511f7-452c-4ab6-8af9-d73576714c87 | vm1  | ACTIVE | -          | Running     | private=10.0.0.2 |
| b12b6a6d-4ab9-43b2-825c-ae656a7aafc4 | vm2  | ERROR  | -          | NOSTATE     |                  |
+--------------------------------------+------+--------+------------+-------------+------------------+

I get this output from screen:

2014-07-11 18:37:32.327 DEBUG neutronclient.client [-] RESP:409
CaseInsensitiveDict({'date': 'Sat, 12 Jul 2014 01:37:32 GMT', 'content-
length': '164', 'content-type': 'application/json; charset=UTF-8', 'x
-openstack-request-id': 'req-35a49577-5a3d-4a98-a790-52694f09d59a'})
{"NeutronError": {"message": "No more IP addresses available on network
b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a.", "type":
"IpAddressGenerationFailure", "detail": ""}}

2014-07-11 18:37:32.327 DEBUG neutronclient.v2_0.client [-] Error
message: {"NeutronError": {"message": "No more IP addresses available on
network b7bb10bb-48e0-4c1a-a5fc-9590b6619f5a.", "type":
"IpAddressGenerationFailure", "detail": ""}} from (pid=31896)
_handle_fault_response /opt/stack/python-neutronclient/neutronclient/v2_0/client.py:1202

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341040

Title:
  neutron CLI should not allow user to create /32 subnet

Status in OpenStack Neutron (virtual network service):
  New

[Yahoo-eng-team] [Bug 1341038] [NEW] volume creation should provide some validation before 'Error' status

2014-07-11 Thread Cindy Lu
Public bug reported:

Since Cinder doesn't return any error messages for Create Volume,
Horizon just shows the blue "info" message (Info: Creating volume
<volume name>).  The volume may fall into the Error state.

Scenario:
1. Go to 'Create Volume' form
2. For 'Volume Source', select an image
3. You will notice 'Size' field is automatically populated
4. Change 'Size' field to something smaller
5. Press 'Create'
6. It will show blue tooltip, then fail without any explanation

To understand the issue, you need to check volume.log, where you will find
something like: 'ImageUnacceptable: CN-2B1EAC4 Image
725d646b-c030-4c1c-a843-66565a01d571 is unacceptable: CN-07AA282 Size is
20GB and doesn't fit in a volume of size 7GB.'

It would be better if Horizon could provide some preliminary checks
(e.g. like what we do elsewhere).
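
A minimal sketch (not Horizon's actual code) of the kind of form-level
check that could catch this before the request reaches Cinder; image.size
in bytes and requested_size_gb are assumptions about the data available to
the Create Volume form:

    from django.forms import ValidationError

    def validate_volume_size(image, requested_size_gb):
        image_size_gb = image.size / (1024.0 ** 3)
        if requested_size_gb < image_size_gb:
            raise ValidationError(
                "The volume size cannot be less than the image size "
                "(%.0fGB)" % image_size_gb)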

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1341038

Title:
  volume creation should provide some validation before 'Error' status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Since Cinder doesn't return any error messages for Create Volume,
  Horizon just shows the blue "info" message (Info: Creating volume
  <volume name>).  The volume may fall into the Error state.

  Scenario:
  1. Go to 'Create Volume' form
  2. For 'Volume Source', select an image
  3. You will notice 'Size' field is automatically populated
  4. Change 'Size' field to something smaller
  5. Press 'Create'
  6. It will show blue tooltip, then fail without any explanation

  To understand the issue, you need to check volume.log, where you will
  find something like: 'ImageUnacceptable: CN-2B1EAC4 Image
  725d646b-c030-4c1c-a843-66565a01d571 is unacceptable: CN-07AA282 Size
  is 20GB and doesn't fit in a volume of size 7GB.'

  It would be better if Horizon could provide some preliminary checks
  (e.g. like what we do elsewhere).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1341038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341020] [NEW] after add_port, get_port_ofport may be called before vswitchd has assigned the ofport

2014-07-11 Thread Terry Wilson
Public bug reported:

OVSBridge.add_port() runs ovs-vsctl to add the port, then runs ovs-vsctl
again to query the ofport of the newly created port. The ofport gets
assigned outside of any kind of transaction, and the OVS API defines an
empty-set response ([]) to mean that the ofport assignment is still
pending and a response of '-1' to mean that there is a failure.

The current get_port_ofport code treats both of these responses as
failures, even though '[]' will most likely later succeed. We need to
implement a retry mechanism to ensure that we don't incorrectly fail
port creation. Raising an exception on retry expiration also seems like
a good idea.
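
A hedged sketch of the retry behavior described above; the loop is
illustrative, not the committed fix, and it assumes the bridge exposes a
db_get_val() helper for reading the Interface table:

    import time

    class OfportRetryError(RuntimeError):
        pass

    def get_port_ofport_with_retry(bridge, port_name,
                                   retries=10, interval=0.5):
        for _ in range(retries):
            ofport = bridge.db_get_val('Interface', port_name, 'ofport')
            if ofport == []:      # assignment still pending: wait and retry
                time.sleep(interval)
                continue
            if ofport == '-1':    # vswitchd reported a real failure
                break
            return ofport
        raise OfportRetryError('ofport was never assigned for %s' % port_name)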

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341020

Title:
  after add_port, get_port_ofport may be called before vswitchd has
  assigned the ofport

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  OVSBridge.add_port() runs ovs-vsctl to add the port, then runs ovs-
  vsctl again to query the ofport of the newly created port. The ofport
  gets assigned outside of any kind of transaction, and the OVS API
  defines an empty-set response ([]) to mean that the ofport assignment
  is still pending and a response of '-1' to mean that there is a
  failure.

  The current get_port_ofport code treats both of these responses as
  failures, even though '[]' will most likely later succeed. We need to
  implement a retry mechanism to ensure that we don't incorrectly fail
  port creation. Raising an exception on retry expiration also seems
  like a good idea.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341014] [NEW] Update VSM credential effectively

2014-07-11 Thread AARON ZHANG
Public bug reported:

Today, if we modify the VSM credential in the cisco_plugins.ini, the
older VSM IP address remains in the DB, and all requests are sent to
the older VSM. This patch deletes all VSM credentials on neutron
start-up before adding the newer VSM credentials, ensuring that
there is only one VSM IP address and credential in the DB.

** Affects: neutron
 Importance: Undecided
 Assignee: AARON ZHANG (fenzhang)
 Status: New


** Tags: cisco n1kv

** Changed in: neutron
 Assignee: (unassigned) => AARON ZHANG (fenzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341014

Title:
  Update VSM credential effectively

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Today, if we modify the VSM credential in the cisco_plugins.ini, the
  older VSM IP address remains in the DB, and all requests are sent to
  the older VSM. This patch deletes all VSM credentials on neutron
  start-up before adding the newer VSM credentials, ensuring that
  there is only one VSM IP address and credential in the DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340993] [NEW] Glance + SSL - Image download errors

2014-07-11 Thread Kris Lindgren
Public bug reported:

Hello,

I have a latest stable havana (2013.2.3) OpenStack setup and I am
noticing issues occasionally when downloading new backing files for VMs
to compute nodes.  I will occasionally end up with VMs that are stuck
spawning; upon investigation I can see the backing file under
/var/nova/instances/_base/.part is created but is
only partially downloaded and hasn't been updated in some time (sometimes
days).  Side note - you are unable to delete a VM in this state
successfully - it will always be stuck in deleting until you restart
nova-compute on the compute node and perform the delete again.

I have managed to create some scripts that will replicate the issue in
multiple ways.  The image files that I have been testing with are 8.8GB,
8.6GB and a large 60GB image (however another large 8GB image would
also duplicate the issue).

The first script:
https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-
multiboot-sh

Will take the image files that you give it and will deploy a vm per
image file to the compute node that you have specified.  With SSL
enabled typically only 1 VM will ever boot successfully.  Errors here
will range from failed (md5sum mismatches) image downloads to backing
files that are only partially downloaded.  To narrow down the issue I
switched over to using the glance client to do image downloads.

The second script:
https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-multi-
img-download-sh

Will take the images specified on the command line and run the glance
image-download command in a parallel bash subshell.  This script removes
nova from the mix.  However, errors seen here are the same as what I
have seen with the first script.

The third script:
https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-multi-
img-download-newclient-sh

Uses: https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-
client-py instead of the glance CLI to download the image.  I believe it
also uses a different download library.  With this client I will
usually get 2 successful image downloads (sometimes 3), but the issue
still exists.

With all the scripts, and after a lot of testing, I have found that this
issue is 100% reproducible when trying to download 3 images at the same
time.  But I have also noticed in production that this issue happens
when only downloading a single image on a compute node.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1340993

Title:
  Glance + SSL - Image download errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hello,

  I have a latest stable havana (2013.2.3) OpenStack setup and I am
  noticing issues occasionally when downloading new backing files for
  VMs to compute nodes.  I will occasionally end up with VMs that are
  stuck spawning; upon investigation I can see the backing file under
  /var/nova/instances/_base/.part is created but is
  only partially downloaded and hasn't been updated in some time
  (sometimes days).  Side note - you are unable to delete a VM in this
  state successfully - it will always be stuck in deleting until you
  restart nova-compute on the compute node and perform the delete again.

  I have managed to create some scripts that will replicate the issue in
  multiple ways.  The image files that I have been testing with are
  8.8GB, 8.6GB and a large 60GB image (however another large 8GB image
  would also duplicate the issue).

  The first script:
  https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-
  multiboot-sh

  Will take the image files that you give it and will deploy a vm per
  image file to the compute node that you have specified.  With SSL
  enabled typically only 1 VM will ever boot successfully.  Errors here
  will range from failed (md5sum mismatches) image downloads to backing
  files that are only partially downloaded.  To narrow down the issue I
  switched over to using the glance client to do image downloads.

  The second script:
  https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-multi-
  img-download-sh

  Will take the images specified on the command line and run the glance
  image-download command in a parallel bash subshell.  This script
  removes nova from the mix.  However, errors seen here are the same as
  what I have seen with the first script.

  The third script:
  https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-multi-
  img-download-newclient-sh

  Uses: https://gist.github.com/krislindgren/fc519aa03d350f42e9e6#file-
  client-py instead of the glance CLI to download the image.  I believe
  it also uses a different download library.  With this client I will
  usually get 2 successful image downloads (sometimes 3), but the issue
  still exists.

  With all the scripts, and after a lot of testing, I have found that
  this issue is 100% reproducible when trying to download 3 images at
  the same time.  But I have also noticed in production that this issue
  happens when only downloading a single image on a compute node.

[Yahoo-eng-team] [Bug 1194639] Re: Authenticate VNC Proxy-to-Host Connection

2014-07-11 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1194639

Title:
  Authenticate VNC Proxy-to-Host Connection

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The VNC proxy-to-host link should be authenticated.  As it currently
  stands, if some malicious entity managed to get into an OpenStack
  cloud's internal network, they could simply connect using their VNC
  client of choice to any compute host node at ports 5900, 5901, etc.
  and get access to the VMs.  This is not desirable.  In situations
  where a Kerberos installation is available, the link between the proxy
  and the host should be protected by Kerberos.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1194639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223695] Re: nova server creation with inexistent security group raises 404 with neutron

2014-07-11 Thread Joe Gordon
Is this bug still valid? Or is this a neutron bug?

** Changed in: nova
   Status: In Progress => Incomplete

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223695

Title:
  nova server creation with inexistent security group raises 404 with
  neutron

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  Trying to create a new server with a security group that does not
  exist raises a 404 error with neutron, but not with nova-network.
  Since the security group not existing is not directly related to the
  server creation, I agree with the nova-network API that the raised
  error should be 400, not 404.

  The following test fails in tempest when openstack is configured with
  neutron:
  
tempest.tests.compute.servers.test_servers_negative:ServersNegativeTest.test_create_with_nonexistent_security_group

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1223695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340969] [NEW] NSX: Correct default timeout params

2014-07-11 Thread Aaron Rosen
Public bug reported:

Previously, req_timeout and http_timeout were set to the same value,
which is not correct. req_timeout is the total time limit for a cluster
request and http_timeout is the time allowed before aborting a request on
an unresponsive controller. Since the default configuration allows 2
retries, req_timeout should be double that of http_timeout.

This patch also bumps the timeout values to be higher. We've seen
more and more timeouts occur in our CI system, largely because our
cloud is overloaded, so increasing the default timeouts will help reduce
test failures.
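
A hedged illustration of the relationship described above; the names
follow the report, and the values are made up for the example:

    HTTP_TIMEOUT = 30                     # seconds per controller request
    RETRIES = 2                           # default retry count
    REQ_TIMEOUT = RETRIES * HTTP_TIMEOUT  # total cluster-request budget
    assert REQ_TIMEOUT == 2 * HTTP_TIMEOUT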

** Affects: neutron
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: In Progress


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided => High

** Tags added: nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340969

Title:
  NSX: Correct default timeout params

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Previously, req_timeout and http_timeout were set to the same value,
  which is not correct. req_timeout is the total time limit for a cluster
  request and http_timeout is the time allowed before aborting a request on
  an unresponsive controller. Since the default configuration allows 2
  retries, req_timeout should be double that of http_timeout.

  This patch also bumps the timeout values to be higher. We've seen
  more and more timeouts occur in our CI system, largely because our
  cloud is overloaded, so increasing the default timeouts will help reduce
  test failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340970] [NEW] Excessive logging due to defaults being unset in tests

2014-07-11 Thread Morgan Fainberg
Public bug reported:

Keystone logs from tests tend to be excessively large due to the default log 
levels not being set.
This can occasionally cause logs to exceed the 50MB limit (infra) on a 
gate/check job.
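
A minimal sketch (not the committed fix) of capping noisy default loggers
during test setup; the module names are illustrative:

    import logging

    for logger_name in ('sqlalchemy', 'routes.middleware', 'stevedore'):
        logging.getLogger(logger_name).setLevel(logging.WARNING)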

** Affects: keystone
 Importance: Medium
 Assignee: Morgan Fainberg (mdrnstm)
 Status: In Progress

** Changed in: keystone
   Status: New => Confirmed

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

** Changed in: keystone
Milestone: None => juno-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1340970

Title:
  Excessive logging due to defaults being unset in tests

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Keystone logs from tests tend to be excessively large due to the default log 
levels not being set.
  This can occasionally cause logs to exceed the 50MB limit (infra) on a 
gate/check job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1340970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334779] Re: db_sync breaks in non-utf8 databases on region table

2014-07-11 Thread Morgan Fainberg
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334779

Title:
  db_sync breaks in non-utf8 databases on region table

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  In Progress

Bug description:
  The migration that creates the region table does not explicitly set
  utf8 so if the database default is not set, then db_sync fails with
  the following error:

  2014-06-26 17:00:48.231 965 CRITICAL keystone [-] ValueError: Tables "region" have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-06-26 17:00:48.231 965 TRACE keystone Traceback (most recent call last):
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
  2014-06-26 17:00:48.231 965 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 191, in main
  2014-06-26 17:00:48.231 965 TRACE keystone     CONF.command.cmd_class.main()
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 67, in main
  2014-06-26 17:00:48.231 965 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
  2014-06-26 17:00:48.231 965 TRACE keystone     migration.db_sync(sql.get_engine(), abs_path, version=version)
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 195, in db_sync
  2014-06-26 17:00:48.231 965 TRACE keystone     _db_schema_sanity_check(engine)
  2014-06-26 17:00:48.231 965 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 228, in _db_schema_sanity_check
  2014-06-26 17:00:48.231 965 TRACE keystone     ) % ','.join(table_names))
  2014-06-26 17:00:48.231 965 TRACE keystone ValueError: Tables "region" have non utf8 collation, please make sure all tables are CHARSET=utf8
  2014-06-26 17:00:48.231 965 TRACE keystone
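
  A hedged sketch of the kind of fix implied above (not the committed
  patch): pinning the table to utf8 in the sqlalchemy-migrate migration so
  it no longer depends on the database default. The column definitions are
  assumptions for illustration:

      import sqlalchemy as sql

      def upgrade(migrate_engine):
          meta = sql.MetaData()
          meta.bind = migrate_engine
          region = sql.Table(
              'region', meta,
              sql.Column('id', sql.String(64), primary_key=True),
              sql.Column('description', sql.String(255), nullable=False),
              mysql_engine='InnoDB',
              mysql_charset='utf8',  # the piece this bug says is missing
          )
          region.create(migrate_engine, checkfirst=True)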

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340793] Re: DB2 deadlock error not supported

2014-07-11 Thread Matt Riedemann
** Changed in: nova
   Status: New => Confirmed

** Tags added: db oslo

** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Summary changed:

- DB2 deadlock error not supported
+ DB2 deadlock error not detected

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340793

Title:
  DB2 deadlock error not detected

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  Currently, only mysql and postgresql deadlock errors are properly handled.
  The error message for DB2 looks like:

  'SQL0911N  The current transaction has been rolled back because of a
  deadlock or timeout.  '

  Oslo.db needs to include a regex to detect this deadlock. Essentially the
  same as
  https://bugs.launchpad.net/nova/+bug/1270725
  but for DB2.
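
  A hedged sketch of the detection described above, mirroring the shape of
  the existing mysql/postgresql handling; the exact regex in the eventual
  fix may differ:

      import re

      _DB2_DEADLOCK_RE = re.compile(r'^SQL0911N')

      def is_db2_deadlock(error_message):
          return bool(_DB2_DEADLOCK_RE.match(error_message))

      assert is_db2_deadlock(
          'SQL0911N  The current transaction has been rolled back '
          'because of a deadlock or timeout.  ')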

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1340793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340913] [NEW] improve display of size information in trove database details

2014-07-11 Thread Andrew Bramley
Public bug reported:

Currently in the trove databases list we have a size column that contains
<flavor name> | <RAM>
e.g. eph.rd-smaller | 768MB RAM

When you drill down to the instance details page you only see the RAM
listed

If you compare this to a nova compute instance then in the table we display 
something like:
eph.rd-smaller | 768MB RAM | 1 VCPU | 3.0GB Disk

and then when you drill down you see:
Specs
Flavor
eph.rd-smaller
RAM
768MB
VCPUs
1 VCPU
Disk
3GB
Ephemeral Disk
2GB

The Trove database details page should have a Specs section that displays
the relevant information in a similarly formatted manner.

** Affects: horizon
 Importance: Undecided
 Assignee: Andrew Bramley (andrlw)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Andrew Bramley (andrlw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340913

Title:
  improve display of size information in trove database details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently in the trove databases list we have a size column that contains
  <flavor name> | <RAM>
  e.g. eph.rd-smaller | 768MB RAM

  When you drill down to the instance details page you only see the RAM
  listed

  If you compare this to a nova compute instance then in the table we display 
something like:
  eph.rd-smaller | 768MB RAM | 1 VCPU | 3.0GB Disk

  and then when you drill down you see:
  Specs
  Flavor
  eph.rd-smaller
  RAM
  768MB
  VCPUs
  1 VCPU
  Disk
  3GB
  Ephemeral Disk
  2GB

  The Trove database details page should have a Specs section that
  displays the relevant information in a similarly formatted manner.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339382] Re: horizon ignores region for identity

2014-07-11 Thread Matt Fischer
Even when setting the available regions, Horizon is always talking to
the first Identity endpoint in the list, ignoring what's in that region
list except for the initial login. This does not seem to be the correct
behavior. In our case the identity system is global but we have
separate VIPs per geographic region. When Horizon chooses to talk to
only one geographic region regardless, it adds a single point of failure.

In this example, I'm signing into a region "West" which has a defined
endpoint of http://d...

Sign-in:

2014-07-11 19:02:59,575 24827 DEBUG keystoneclient.session REQ: curl -i
-X POST http://d:5000/v2.0/tokens -H ...

After I get the catalog, Horizon says "well I'll just use the first one
I find"

And all subsequent calls do this, talking to "C" which is 1500 miles
away.

2014-07-11 19:03:01,334 24829 DEBUG keystoneclient.session REQ: curl -i
-X POST http://c:5000/v2.0/tokens

While this works, since our Identity is global, it is inefficient. It
also causes all generated OPENRC files to point to the same place,
thereby propagating this inefficiency and SPOF to our users.

** Changed in: horizon
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339382

Title:
  horizon ignores region for identity

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In our setup we have multiple regions with an identity endpoint in
  each. For some reason Horizon ignores regions for identity and just
  returns the first one in the list.

  in openstack_dashboard/api/base.py:

  def get_url_for_service(service, region, endpoint_type):
      identity_version = get_version_from_service(service)
      for endpoint in service['endpoints']:
          # ignore region for identity
          if service['type'] == 'identity' or region == endpoint['region']:
              try:
                  ...

  This causes the openrc file generation to include the first identity
  endpoint always and it always shows the first one in the endpoint
  list.
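
  A hedged sketch of the behavior the reporter expects (illustrative, not
  a committed patch): drop the identity special case so the region filter
  applies to every service type:

      def get_url_for_service(service, region, endpoint_type):
          for endpoint in service['endpoints']:
              if region == endpoint['region']:
                  try:
                      return endpoint[endpoint_type]
                  except KeyError:
                      return None
          return None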

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340900] [NEW] Display volume size for trove instances

2014-07-11 Thread Andrew Bramley
Public bug reported:

One important piece of information about a trove instance is the volume size 
the user specified at create time.
This information is available from trove but it isn't currently displayed in 
the horizon dashboard.

Add volume size to the database instances table and also to the details
page

** Affects: horizon
 Importance: Undecided
 Assignee: Andrew Bramley (andrlw)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Andrew Bramley (andrlw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340900

Title:
  Display volume size for trove instances

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  One important piece of information about a trove instance is the volume size 
the user specified at create time.
  This information is available from trove but it isn't currently displayed in 
the horizon dashboard.

  Add volume size to the database instances table and also to the
  details page

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340885] [NEW] Can't unset a flavor-key

2014-07-11 Thread Matthew Gilliard
Public bug reported:

I am able to set a flavor-key but not unset it.  devstack
sha1=fdf1cffbd5d2a7b47d5bdadbc0755fcb2ff6d52f


ubuntu@d8:~/devstack$ nova help flavor-key
usage: nova flavor-key <flavor> <action> <key=value> [<key=value> ...]

Set or unset extra_spec for a flavor.

Positional arguments:
  <flavor>      Name or ID of flavor
  <action>      Actions: 'set' or 'unset'
  <key=value>   Extra_specs to set/unset (only key is necessary on unset)
ubuntu@d8:~/devstack$ nova flavor-key m1.tiny set foo=bar
ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
+----------------------------+----------------+
| Property                   | Value          |
+----------------------------+----------------+
| OS-FLV-DISABLED:disabled   | False          |
| OS-FLV-EXT-DATA:ephemeral  | 0              |
| disk                       | 1              |
| extra_specs                | {"foo": "bar"} |
| id                         | 1              |
| name                       | m1.tiny        |
| os-flavor-access:is_public | True           |
| ram                        | 512            |
| rxtx_factor                | 1.0            |
| swap                       |                |
| vcpus                      | 1              |
+----------------------------+----------------+
ubuntu@d8:~/devstack$ nova flavor-key m1.tiny unset foo
ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
+----------------------------+----------------+
| Property                   | Value          |
+----------------------------+----------------+
| OS-FLV-DISABLED:disabled   | False          |
| OS-FLV-EXT-DATA:ephemeral  | 0              |
| disk                       | 1              |
| extra_specs                | {"foo": "bar"} |
| id                         | 1              |
| name                       | m1.tiny        |
| os-flavor-access:is_public | True           |
| ram                        | 512            |
| rxtx_factor                | 1.0            |
| swap                       |                |
| vcpus                      | 1              |
+----------------------------+----------------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340885

Title:
  Can't unset a flavor-key

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am able to set a flavor-key but not unset it.  devstack
  sha1=fdf1cffbd5d2a7b47d5bdadbc0755fcb2ff6d52f

  
  ubuntu@d8:~/devstack$ nova help flavor-key
  usage: nova flavor-key <flavor> <action> <key=value> [<key=value> ...]

  Set or unset extra_spec for a flavor.

  Positional arguments:
    <flavor>      Name or ID of flavor
    <action>      Actions: 'set' or 'unset'
    <key=value>   Extra_specs to set/unset (only key is necessary on unset)
  ubuntu@d8:~/devstack$ nova flavor-key m1.tiny set foo=bar
  ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
  +----------------------------+----------------+
  | Property                   | Value          |
  +----------------------------+----------------+
  | OS-FLV-DISABLED:disabled   | False          |
  | OS-FLV-EXT-DATA:ephemeral  | 0              |
  | disk                       | 1              |
  | extra_specs                | {"foo": "bar"} |
  | id                         | 1              |
  | name                       | m1.tiny        |
  | os-flavor-access:is_public | True           |
  | ram                        | 512            |
  | rxtx_factor                | 1.0            |
  | swap                       |                |
  | vcpus                      | 1              |
  +----------------------------+----------------+
  ubuntu@d8:~/devstack$ nova flavor-key m1.tiny unset foo
  ubuntu@d8:~/devstack$ nova flavor-show m1.tiny
  +----------------------------+----------------+
  | Property                   | Value          |
  +----------------------------+----------------+
  | OS-FLV-DISABLED:disabled   | False          |
  | OS-FLV-EXT-DATA:ephemeral  | 0              |
  | disk                       | 1              |
  | extra_specs                | {"foo": "bar"} |
  | id                         | 1              |
  | name                       | m1.tiny        |
  | os-flavor-access:is_public | True           |
  | ram                        | 512            |
  | rxtx_factor                | 1.0            |
  | swap                       |                |
  | vcpus                      | 1              |
  +----------------------------+----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340881] [NEW] importing neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py causes program to exit prematurely

2014-07-11 Thread Joe Gordon
Public bug reported:

https://review.openstack.org/#/c/103675/5/neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py

https://review.openstack.org/#/c/103675/5


neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py


try:
    import httmock
except (NameError, ImportError):
    exit()
import requests


Since httmock is not in requirements, importing this module causes whatever
imports it to suddenly exit with a return code of 0.

This has disabled flake8, and is possibly breaking unit tests as well.
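
A hedged sketch of a less destructive guard (illustrative, not the
committed fix): keep the module importable and skip the tests instead of
killing the importing process; the test-class name is hypothetical:

    import unittest

    try:
        import httmock
    except ImportError:
        httmock = None

    @unittest.skipIf(httmock is None, 'httmock is not installed')
    class CiscoCsrRestTestCase(unittest.TestCase):
        def test_placeholder(self):
            self.assertIsNotNone(httmock)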

** Affects: neutron
 Importance: Undecided
 Assignee: Joe Gordon (jogo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340881

Title:
  importing
  neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py
  causes program to exit prematurely

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  
https://review.openstack.org/#/c/103675/5/neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py

  https://review.openstack.org/#/c/103675/5


  neutron/tests/unit/services/vpn/device_drivers/_test_cisco_csr_rest.py

  
  try:
      import httmock
  except (NameError, ImportError):
      exit()
  import requests

  
  Since httmock is not in requirements, importing this module causes
  whatever imports it to suddenly exit with a return code of 0.

  This has disabled flake8, and is possibly breaking unit tests as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340873] [NEW] simplify process for associating newly created network to domain

2014-07-11 Thread Cindy Lu
Public bug reported:

Prereq: identity v3 support.

Scenario: Creating a network and associating it to a project within a
domain.

Issues:

1) In the Create Network panel, the user is unable to identify which
project to select based on a domain - because in the project listing
choice, he/she is given a list of projects across *all* domains ... but
with no indication regarding the domain to which the project belongs

2) The above issue makes it quite impossible for the admin to find the
appropriate project; the only evident workaround is to go back to all
the projects in the environment and rename them so that the domain can
be understood from the name

3) Even setting the domain context in the Horizon UI does not improve
the situation...still the entire list of projects shows up

4) At this point the other workaround is to logout and log back in to
the specific domain - and then do the above action from the
Projects->Network entry point...

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340873

Title:
  simplify process for associating newly created network to domain

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Prereq: identity v3 support.

  Scenario: Creating a network and associating it to a project within a
  domain.

  Issues:

  1) In the Create Network panel, the user is unable to identify which
  project to select based on a domain - because in the project listing
  choice, he/she is given a list of projects across *all* domains ...
  but with no indication regarding the domain to which the project
  belongs

  2) The above issue makes it quite impossible for the admin to find the
  appropriate project; the only evident workaround is to go back to all
  the projects in the environment and rename them so that the domain can
  be understood from the name

  3) Even setting the domain context in the Horizon UI does not improve
  the situation...still the entire list of projects shows up

  4) At this point the other workaround is to logout and log back in to
  the specific domain - and then do the above action from the
  Projects->Network entry point...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242820] Re: Nova docker virt driver triggers - AttributeError: 'Message' object has no attribute 'format'

2014-07-11 Thread Eric Windisch
** Also affects: nova-docker
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242820

Title:
  Nova docker virt driver triggers - AttributeError: 'Message' object
  has no attribute 'format'

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  Using the nova virt docker driver and when I try to deploy an instance
  I get:

  2013-10-21 12:18:39.229 20636 ERROR nova.compute.manager [req-270deff8-b0dc-4a05-9923-417dc5b662db c99d13095fbd4605b36a802fd9539a4a a03677565e97495fa798fe6cd2628180] [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] Error: 'Message' object has no attribute 'format'
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] Traceback (most recent call last):
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1045, in _build_instance
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     set_access_ip=set_access_ip)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1444, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     LOG.exception(_('Instance failed to spawn'), instance=instance)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1430, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     block_device_info)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/virt/docker/driver.py", line 297, in spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     LOG.info(msg.format(image_name))
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]   File "/usr/lib/python2.6/site-packages/nova/openstack/common/gettextutils.py", line 255, in __getattribute__
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd]     return UserString.UserString.__getattribute__(self, name)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 206bb110-4fa5-4999-be87-9b10951ad5dd] AttributeError: 'Message' object has no attribute 'format'

  On a boot VM flow.

  Based on a tiny bit of poking, it appears there is a common
  gettextutils method that wraps message/string access and does not
  account for the use of Message.format() as used in the docker virt
  driver.

  Honestly based on the code I'm running I'm not sure how spawn worked
  at all for docker as the use of Message.format() is in the boot
  codepath:

  msg = _('Image name "{0}" does not exist, fetching it...')
  LOG.info(msg.format(image_name))

  This always triggers the no attr message.

  I'm not up to speed on the gettextutils wrapper, but I hacked around
  this by adding 'format' to the list of ops in gettextutils.py:

  
  def __getattribute__(self, name):
      # NOTE(mrodden): handle lossy operations that we can't deal with yet
      # These override the UserString implementation, since UserString
      # uses our __class__ attribute to try and build a new message
      # after running the inner data string through the operation.
      # At that point, we have lost the gettext message id and can just
      # safely resolve to a string instead.
      ops = ['capitalize', 'center', 'decode', 'encode',
             'expandtabs', 'ljust', 'lstrip', 'replace', 'rjust', 'rstrip',
             'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill',
             'format']
      if name in ops:
          return getattr(self.data, name)
      else:
          return UserString.UserString.__getattribute__(self, name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1248387] Re: docker virt driver cannot use remote registry hosts/IPs

2014-07-11 Thread Eric Windisch
** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova-docker
   Status: New => Fix Committed

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1248387

Title:
  docker virt driver cannot use remote registry hosts/IPs

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  Fix Committed

Bug description:
  The Docker virt driver implements spawn only with CONF.my_ip, leaving
  no configurable choice for a remote Docker registry service.  Add a
  config parameter registry_default_ip, or better, accept either a
  hostname or an IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1248387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249104] Re: nova docker - can't deploy multiple instances

2014-07-11 Thread Eric Windisch
** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249104

Title:
  nova docker - can't deploy multiple instances

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New

Bug description:
  Using docker 0.6.6 + devstack

  The docker virt driver does not appear to support multiple instances
  on a single boot call. For example a call like this (notice the --num-
  instances param):

  nova boot --image f4034eef-cac6-4059-b3f1-c3b080bc70e4 --flavor 84
  --num-instances 5 cent

  Will produce errors like this in the n-cpu log

  .../dist-packages/amqp/channel.py:71 [-] Channel open from (pid=28464) _open_ok /usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
  ERROR nova.compute.manager [req-237556e8-b961-45dd-a4e2-1aab217067e3 admin demo] [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165] Error: Cannot setup network: Cannot find any PID under container "e9b3773625a0adf022b4800ad8f253c47a385f8954cb7573f1eaa7a87981df11"
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]   File "/opt/stack/nova/nova/compute/manager.py", line 1030, in _build_instance
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]     set_access_ip=set_access_ip)
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]   File "/opt/stack/nova/nova/compute/manager.py", line 1439, in _spawn
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]     LOG.exception(_('Instance failed to spawn'), instance=instance)
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]   File "/opt/stack/nova/nova/compute/manager.py", line 1436, in _spawn
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]     block_device_info)
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]   File "/opt/stack/nova/nova/virt/docker/driver.py", line 317, in spawn
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165]     instance_id=instance['name'])
  TRACE nova.compute.manager [instance: 68ab99a6-6b90-45a8-8e28-79827ae78165] InstanceDeployFailure: Cannot setup network: Cannot find any PID under container "e9b3773625a0adf022b4800ad8f253c47a385f8954cb7573f1eaa7a87981df11"

  To reproduce:
  - push a docker image into glance
  - use nova boot with --num-instances param with a num greater than one
  - watch the vm boot fail
  - check the n-cpu.log for errors

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1256256] Re: [Docker] The nova network device (pvnetXXX) is lost after a instance soft/hard reboot

2014-07-11 Thread Eric Windisch
** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1256256

Title:
  [Docker] The nova network device (pvnetXXX) is lost after a instance
  soft/hard reboot

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New

Bug description:
  
  The nova docker driver doesn't seem to handle a reboot correctly.
  A docker container which has been started via nova has a network device
  pvnetXXX which belongs to the nova network.
  This network device is lost after an instance reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1256256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312690] Re: nova-docker snapshot does not return proper image ID

2014-07-11 Thread Eric Windisch
*** This bug is a duplicate of bug 1314322 ***
https://bugs.launchpad.net/bugs/1314322

** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Incomplete => Invalid

** This bug has been marked a duplicate of bug 1314322
   snapshots return wrong UUID for image

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312690

Title:
  nova-docker snapshot does not return proper image ID

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  New

Bug description:
  With the current impl of the nova-docker virt driver and the docker-
  registry (https://github.com/dotcloud/docker-registry) snapshotting a
  docker container does not return the image ID of the final image
  created from the snapshot operation.

  For example consumer code should be able to do something like this:

  image_uuid = self.clients("nova").servers.create_image(server, server.name)
  image = self.clients("nova").images.get(image_uuid)
  image = bench_utils.wait_for(
      image,
      is_ready=bench_utils.resource_is("ACTIVE"),
      update_resource=bench_utils.get_from_manager(),
      timeout=CONF.benchmark.nova_server_image_create_timeout,
      check_interval=CONF.benchmark.nova_server_image_create_poll_interval
  )

  That is, the image returned from the create_image call should reflect the
  image UUID of the "final" image created during capture. However, with the
  docker driver the process actually creates a final image called
  '<image name>:latest'.

  Example:
  - Install devstack + nova-docker driver
  - Pull, tag and push a docker image into glance using docker-registry with 
glance store
  - Create a nova server for docker -- results in a docker container
  - Use the nova python api to snapshot the server (see code snippet above).
  - The image_uuid returned in the above snippet might point to an image named 
'myzirdsivgoftfqp'. However the actual final image created by the snapshot is 
named 'myzirdsivgoftfqp:latest' and is not the same image referred to in the 
return response from the create_image call

  Such behavior impacts consumers and is not consistent with the nova
  snapshot behavior.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1312690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266875] Re: First spawn of docker image starts with "sh" command instead of image CMD

2014-07-11 Thread Eric Windisch
@jogo - in the future, it would be helpful if you could send these over
to the nova-docker project. This (and the other recent) bug has already
had a fix committed. Thanks.

** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova-docker
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266875

Title:
  First spawn of docker image starts with "sh" command instead of image
  CMD

Status in OpenStack Compute (Nova):
  Invalid
Status in Nova Docker Driver:
  Fix Committed

Bug description:
  Hello

  On the first spawn of an instance after uploading an image to the
  registry, the docker container runs with the command "sh" instead of
  the command written in the image.

  Subsequent spawns work as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1266875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340867] [NEW] Consistent handling of choiceField in trove

2014-07-11 Thread Andrew Bramley
Public bug reported:

There appears to be a sort of standard way to populate a choiceField such
that:

If the choice field list has items, then the initial value will be
'Select <item type>'

and if the choice field list is empty, then the value will be 'No <item
type>s available'

In the Trove Launch dialog we don't do this for the Restore From Backup
choiceField - instead it just contains an initial '-'

Change the trove launch dialog restore from backup choice list to follow
the preferred pattern, using 'Select backup' and 'No backups available'

** Affects: horizon
 Importance: Undecided
 Assignee: Andrew Bramley (andrlw)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Andrew Bramley (andrlw)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340867

Title:
  Consistent handling of choiceField in trove

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There appears to be a sort of standard way to populate a choiceField
  such that:

  If the choice field list has items, then the initial value will be
  'Select <item type>'

  and if the choice field list is empty, then the value will be 'No
  <item type>s available'

  In the Trove Launch dialog we don't do this for the Restore From
  Backup choiceField - instead it just contains an initial '-'

  Change the trove launch dialog restore from backup choice list to
  follow the preferred pattern, using 'Select backup' and 'No backups
  available'. An illustrative sketch follows.
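
  Assuming a plain Django ChoiceField, the helper and form names below are
  illustrative, not Horizon's actual Trove code:

      from django import forms

      def backup_choices(backups):
          # Preferred pattern: a 'Select ...' prompt when items exist and
          # a 'No ... available' placeholder when the list is empty.
          if backups:
              return [("", "Select backup")] + [(b.id, b.name)
                                                for b in backups]
          return [("", "No backups available")]

      class LaunchForm(forms.Form):
          backup = forms.ChoiceField(label="Restore From Backup",
                                     required=False)

          def __init__(self, *args, **kwargs):
              backups = kwargs.pop("backups", [])
              super(LaunchForm, self).__init__(*args, **kwargs)
              self.fields["backup"].choices = backup_choices(backups)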

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317189] Re: create of flavor should be in 512MB multiplication for memory

2014-07-11 Thread Joe Gordon
I think the best way to fix this is in documentation, not in code.

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1317189

Title:
  create of flavor should be in 512MB multiplication for memory

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When we create a flavor, the only limitation on memory is that it must be
  greater than 0 MB. However, we are simulating a computer, and memory comes
  in multiples of 512MB, so we should not allow creation of a flavor whose
  memory is not a multiple of 512MB. A sketch of such a check follows the
  output below.

  [root@puma32 ~(keystone_admin)]# nova flavor-create Dafna_cli 91 300 6 1
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 91 | Dafna_cli | 300       | 6    | 0         |      | 1     | 1.0         | True      |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

  [root@puma32 ~(keystone_admin)]# nova flavor-list
  +--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                   | Name       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1                                    | m1.tiny    | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 2                                    | m1.small   | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3                                    | m1.medium  | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4                                    | m1.large   | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 5                                    | m1.xlarge  | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  | 9                                    | TEST1      | 512       | 6    | 0         |      | 1     | 1.0         | True      |
  | 91                                   | Dafna_cli  | 300       | 6    | 0         |      | 1     | 1.0         | True      |
  | 92                                   | Dafna_cli1 | 1         | 6    | 0         |      | 1     | 1.0         | True      |
  | e58fd866-46a5-4f43-8edd-769b8f31b5a1 | TEST       | 512       | 6    | 0         |      | 1     | 1.0         | True      |
  +--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+
  [root@puma32 ~(keystone_admin)]# nova flavor-show Dafna_cli
  ++---+
  | Property   | Value |
  ++---+
  | name   | Dafna_cli |
  | ram| 300   |
  | OS-FLV-DISABLED:disabled   | False |
  | vcpus  | 1 |
  | extra_specs| {}|
  | swap   |   |
  | os-flavor-access:is_public | True  |
  | rxtx_factor| 1.0   |
  | OS-FLV-EXT-DATA:ephemeral  | 0 |
  | disk   | 6 |
  | id | 91|
  ++---+
  [root@puma32 ~(keystone_admin)]#
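
  A hypothetical sketch of that check (the maintainers suggest addressing
  this in documentation rather than code; the function name is
  illustrative):

      def validate_flavor_memory(memory_mb, granularity=512):
          # Reject RAM sizes that are not a positive multiple of 512MB.
          if memory_mb <= 0 or memory_mb % granularity != 0:
              raise ValueError("memory_mb must be a positive multiple of "
                               "%dMB, got %d" % (granularity, memory_mb))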

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1317189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242820] Re: Nova docker virt driver triggers - AttributeError: 'Message' object has no attribute 'format'

2014-07-11 Thread Joe Gordon
nova doesn't have a docker driver currently.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242820

Title:
  Nova docker virt driver triggers - AttributeError: 'Message' object
  has no attribute 'format'

Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  Using the nova virt docker driver, when I try to deploy an instance
  I get:

  2013-10-21 12:18:39.229 20636 ERROR nova.compute.manager 
[req-270deff8-b0dc-4a05-9923-417dc5b662db c99d13095fbd4605b36a802fd9539a4a 
a03677565e97495fa798fe6cd2628180] [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] Error: 'Message' object has no attribute 
'format'
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] Traceback (most recent call last):
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1045, in 
_build_instance
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] set_access_ip=set_access_ip)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1444, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1430, in _spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] block_device_info)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
"/usr/lib/python2.6/site-packages/nova/virt/docker/driver.py", line 297, in 
spawn
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] LOG.info(msg.format(image_name))
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd]   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/gettextutils.py", line 
255, in __getattribute__
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] return 
UserString.UserString.__getattribute__(self, name)
  2013-10-21 12:18:39.229 20636 TRACE nova.compute.manager [instance: 
206bb110-4fa5-4999-be87-9b10951ad5dd] AttributeError: 'Message' object has no 
attribute 'format'

  On a boot VM flow.

  Based on a tiny bit of poking, it appears there is a common
  gettextutils method that wraps message/string access and does not
  account for the use of Message.format() as used in the docker virt
  driver.

  Honestly based on the code I'm running I'm not sure how spawn worked
  at all for docker as the use of Message.format() is in the boot
  codepath:

  msg = _('Image name "{0}" does not exist, fetching it...')
  LOG.info(msg.format(image_name))

  This always triggers the no attr message.

  I'm not up to speed on the gettextutils wrapper, but I hacked around
  this by adding 'format' to the list of ops in gettextutils.py:

  
  def __getattribute__(self, name):
      # NOTE(mrodden): handle lossy operations that we can't deal with yet
      # These override the UserString implementation, since UserString
      # uses our __class__ attribute to try and build a new message
      # after running the inner data string through the operation.
      # At that point, we have lost the gettext message id and can just
      # safely resolve to a string instead.
      ops = ['capitalize', 'center', 'decode', 'encode',
             'expandtabs', 'ljust', 'lstrip', 'replace', 'rjust', 'rstrip',
             'strip', 'swapcase', 'title', 'translate', 'upper',
             'zfill', 'format']
      if name in ops:
          return getattr(self.data, name)
      else:
          return UserString.UserString.__getattribute__(self, name)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319661] Re: bare metal: tftp prefix can cause the cloud not find kernel image

2014-07-11 Thread Joe Gordon
We are in the process of deprecating nova baremetal in favor of ironic.

** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319661

Title:
  bare metal: tftp prefix can cause the cloud not find kernel image

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  hi,
     I boot a bare metal instance, but PXE fails with the error "could not
  find a kernel image". I check the file "/tftpboot/UUID/config":

  label deploy
  kernel /tftpboot/8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

  so I modify it to:

  label deploy
  kernel 8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

  and then PXE works OK.

  My version is Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1319661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340340] Re: 3 tempest.thirdparty.boto.* tests fail

2014-07-11 Thread Joe Gordon
*** This bug is a duplicate of bug 1338841 ***
https://bugs.launchpad.net/bugs/1338841

** This bug has been marked a duplicate of bug 1338841
   asynchronous connection failed in postgresql jobs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340340

Title:
  3 tempest.thirdparty.boto.* tests fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.thirdparty.boto.test_ec2_security_groups.EC2SecurityGroupTest.test_create_authorize_security_group
  tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_get_delete
  
tempest.thirdparty.boto.test_ec2_volumes.EC2VolumesTest.test_create_volume_from_snapshot

  with the following error:
  BotoServerError: BotoServerError: 500 Internal Server Error
  
  OperationalError: Unknown error occurred.
  (request id: req-af61466b-dc4b-4d33-8ca7-8a25543dc246)

  full log: http://logs.openstack.org/89/89989/9/gate/gate-tempest-dsvm-
  postgres-full/bc0e2b8/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339382] Re: horizon ignores region for identity

2014-07-11 Thread David Lyle
Identity is intended to be global for a region.  Other keystone
endpoints would require re-authentication against that endpoint.
Specifying identity that way is done in local_settings.py.

See:
# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
# ('http://127.0.0.1:5000/v2.0', 'Region 1'),
# ('http://10.0.2.15:5000/v2.0', 'Region 2'),
# ]

I think you have a configuration problem rather than a bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339382

Title:
  horizon ignores region for identity

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In our setup we have multiple regions with an identity endpoint in
  each. For some reason Horizon ignores regions for identity and just
  returns the first one in the list.

  in openstack_dashboard/api/base.py
  def get_url_for_service(service, region, endpoint_type):
  identity_version = get_version_from_service(service)
  for endpoint in service['endpoints']:
  # ignore region for identity
  if service['type'] == 'identity' or region == endpoint['region']:
  try:
  ...

  This causes the openrc file generation to include the first identity
  endpoint always and it always shows the first one in the endpoint
  list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340834] [NEW] Support configdrive in LXC

2014-07-11 Thread Rick Harris
Public bug reported:

We'd like to support configdrive in Libvirt+LXC so that we can use
cloud-init to configure guest networking, inject SSH keys, etc.

Currently configdrive uses block devices which are attached to the VM and
then mounted by the guest.

For LXC our requirements are:

* We'd like to avoid using blockdevices (CAP_SYS_MOUNT may be dropped
within a guest...not stock Libvirt, but it's possible we'd like to
support that use case eventually)

* We'd like to avoid bind-mounts. Recent security concerns around bind-
mounts have surfaced where a user could traverse to the top of a bind-
mounted FS. (User namespaces mitigate this, but we'd like to be extra-
safe)


The proposed implementation:

* Adds a `fs` configdrive type, that just drops the config-drive
information into a directory on the host, avoiding the creation of a
blockdevice

* Moves that config-drive directory into the root filesystem of the
guest at spawn time.

** Affects: nova
 Importance: Undecided
 Assignee: Rick Harris (rconradharris)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Rick Harris (rconradharris)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340834

Title:
  Support configdrive in LXC

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  We'd like to support configdrive in Libvirt+LXC so that we can use
  cloud-init to configure guest networking, inject SSH keys, etc.

  Currently configdrive uses block devices which are attached to the VM
  and then mounted by the guest.

  For LXC our requirements are:

  * We'd like to avoid using blockdevices (CAP_SYS_MOUNT may be dropped
  within a guest...not stock Libvirt, but it's possible we'd like to
  support that use case eventually)

  * We'd like to avoid bind-mounts. Recent security concerns around bind-
  mounts have surfaced where a user could traverse to the top of a bind-
  mounted FS. (User namespaces mitigate this, but we'd like to be
  extra-safe)

  
  The proposed implementation:

  * Adds a `fs` configdrive type, that just drops the config-drive
  information into a directory on the host, avoiding the creation of a
  blockdevice

  * Moves that config-drive directory into the root filesystem of the
  guest at spawn time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340815] [NEW] Multi-backend domain code/tests could use a bit of tidy up

2014-07-11 Thread Henry Nash
Public bug reported:

The multi-domain backend code has a number of tidy-up items that were
deferred from the review:

- Re-factoring _set_domain_id_and_mapping() in identity/core.py
- Potential relaxation of the constraint that user/group membership cannot 
cross a backend boundary
- Corner-case testing for exceptions
- Potentially add a multi-backend test that is in between the simple and 
complex tests that have already been defined (e.g. add one that has SQL as the 
default identity backend, with one LDAP-specific domain)
- Potentially add a test that puts the SQL driver as a specific backend driver 
for a domain (since we only support one SQL driver, this is an odd 
configuration, but still probably worthwhile)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1340815

Title:
  Multi-backend domain code/tests could use a bit of tidy up

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The multi-domain backend code has a number of tidy-up items that were
  deferred from the review:

  - Re-factoring _set_domain_id_and_mapping() in identity/core.py
  - Potential relaxation of the constraint that user/group membership cannot 
cross a backend boundary
  - Corner-case testing for exceptions
  - Potentially add a multi-backend test that is in between the simple 
and complex tests that have already been defined (e.g. add one that has SQL as 
the default identity backend, with one LDAP-specific domain)
  - Potentially add a test that puts the SQL driver as a specific backend 
driver for a domain (since we only support one SQL driver, this is an odd 
configuration, but still probably worthwhile)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1340815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340802] [NEW] test_cache_layer_domain_crud is currently skipped in test_backend_ldap

2014-07-11 Thread Henry Nash
Public bug reported:

We should eventually be able to unskip this by either rewriting the test
or providing proper support in LDAP

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1340802

Title:
  test_cache_layer_domain_crud is currently skipped in test_backend_ldap

Status in OpenStack Identity (Keystone):
  New

Bug description:
  We should eventually be able to unskip this by either rewriting the
  test or providing proper support in LDAP

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1340802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334496] Re: Stacks panel shows multiple copies of a stack

2014-07-11 Thread Thomas Herve
*** This bug is a duplicate of bug 1332611 ***
https://bugs.launchpad.net/bugs/1332611

** This bug is no longer a duplicate of bug 1322097
   Stack list pagination is broken
** This bug has been marked a duplicate of bug 1332611
   --marker behavior broken for stack-list

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1334496

Title:
  Stacks panel shows multiple copies of a stack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The stacks panel shows multiple copies of a stack. In this case the
  heat stack was a failed stack. Fedora 30. The stack is an auto-
  scaling-group. The openstack is a devstack (master git) installed
  2014/06/25

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1334496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340793] [NEW] DB2 deadlock error not supported

2014-07-11 Thread Bryan Jones
Public bug reported:

Currently, only mysql and postgresql deadlock errors are properly handled. 
The error message for DB2 looks like:

'SQL0911N  The current transaction has been rolled back because of a
deadlock or timeout.  '

Oslo.db needs to include a regex to detect this deadlock. Essentially the same 
as 
https://bugs.launchpad.net/nova/+bug/1270725
but for DB2

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340793

Title:
  DB2 deadlock error not supported

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Currently, only mysql and postgresql deadlock errors are properly handled. 
  The error message for DB2 looks like:

  'SQL0911N  The current transaction has been rolled back because of a
  deadlock or timeout.  '

  Oslo.db needs to include a regex to detect this deadlock. Essentially the 
same as 
  https://bugs.launchpad.net/nova/+bug/1270725
  but for DB2
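
  A minimal sketch of such a filter, keyed off the SQL0911N code quoted
  above; the regex and helper name are illustrative, not the actual
  oslo.db implementation:

      import re

      # DB2 reports both deadlocks and lock timeouts via SQLCODE SQL0911N.
      DB2_DEADLOCK_RE = re.compile(r"SQL0911N.*deadlock or timeout")

      def is_db2_deadlock(error_message):
          return DB2_DEADLOCK_RE.search(error_message) is not None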

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340791] [NEW] Support IPv6 in Libvirt LXC

2014-07-11 Thread Rick Harris
Public bug reported:

Libvirt's LXC implementation exposes a read-only `/proc/sys/net` to
guests.

This can cause guest's default network configuration scripts to fail
when trying to bring up `IPv6` interfaces.

The short-term fix is to use `postup-hooks` to configure the interface.

Longer-term, we may want to consider a writable `/proc/sys/net` (we just
need to verify that this can be done securely).

** Affects: nova
 Importance: Undecided
 Assignee: Rick Harris (rconradharris)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Rick Harris (rconradharris)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340791

Title:
  Support IPv6 in Libvirt LXC

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Libvirt's LXC implementation exposes a read-only `/proc/sys/net` to
  guests.

  This can cause guest's default network configuration scripts to fail
  when trying to bring up `IPv6` interfaces.

  The short-term fix is to use `postup-hooks` to configure the
  interface.

  Longer-term, we may want to consider a writable `/proc/sys/net` (we
  just need to verify that this can be done securely).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340787] [NEW] nova unit test virtual environment creation issue

2014-07-11 Thread naggappan
Public bug reported:

I tried to run the nova unit tests in a Red Hat virtual machine:
Linux rhel6-madhu 2.6.32-431.20.3.el6.x86_64 #1 SMP Fri Jun 6 18:30:54 EDT 2014 
x86_64 x86_64 x86_64 GNU/Linux

cd /etc/nova/
./run_tests.sh -V nova.tests.scheduler

I am getting an error when the test run creates .venv and tries to
install the requirements; it fails at:

pip install cryptography.

I even tried to run "pip install cryptography" manually on Red Hat and got 
the same error.
Error pasted here: http://pastebin.com/tAsSRFuA

I made sure the following were done:

yum upgrade
yum install gcc libffi-devel python-devel openssl-devel

This issue happens only on Red Hat. I tried CentOS 6.5 and it works
fine. Any help fixing this issue would be appreciated.

Note: this issue is reproducible on any Red Hat system.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340787

Title:
  nova unit test virtual environment creation issue

Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried to run the nova unit tests in a Red Hat virtual machine:
  Linux rhel6-madhu 2.6.32-431.20.3.el6.x86_64 #1 SMP Fri Jun 6 18:30:54 EDT 
2014 x86_64 x86_64 x86_64 GNU/Linux

  cd /etc/nova/
  ./run_tests.sh -V nova.tests.scheduler

  I am getting an error when the test run creates .venv and tries to
  install the requirements; it fails at:

  pip install cryptography.

  I even tried to run "pip install cryptography" manually on Red Hat and got 
the same error.
  Error pasted here: http://pastebin.com/tAsSRFuA

  I made sure the following were done:

  yum upgrade
  yum install gcc libffi-devel python-devel openssl-devel

  This issue happens only on Red Hat. I tried CentOS 6.5 and it works
  fine. Any help fixing this issue would be appreciated.

  Note: this issue is reproducible on any Red Hat system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340778] [NEW] common/log.py creates its own logger

2014-07-11 Thread Jakub Libosvar
Public bug reported:

Using the log decorator from neutron.common.log creates a new logger
and does not respect the logging level of the decorated method's module.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340778

Title:
  common/log.py creates its own logger

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Using the log decorator from neutron.common.log creates a new logger
  and does not respect the logging level of the decorated method's
  module. An illustrative sketch follows.
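
  A minimal sketch (not the actual neutron.common.log code) of a
  call-logging decorator that reuses the decorated function's module
  logger instead of creating its own, so that module's configured level
  is honored:

      import functools
      import logging

      def log_call(func):
          # Look up the logger of the module that defines func, rather
          # than creating a fresh logger named after the helper module.
          logger = logging.getLogger(func.__module__)

          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              logger.debug("%s called with args=%s kwargs=%s",
                           func.__name__, args, kwargs)
              return func(*args, **kwargs)
          return wrapper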

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284677] Re: Python 3: do not use 'unicode()'

2014-07-11 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284677

Title:
  Python 3: do not use 'unicode()'

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Manila:
  New
Status in Python client library for Glance:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  The unicode() function is Python 2-specific; we should use
  six.text_type() instead.
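
  For example (six.text_type is unicode on Python 2 and str on Python 3):

      import six

      raw_name = "glance"
      # Instead of: display_name = unicode(raw_name)
      display_name = six.text_type(raw_name)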

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340596] Re: Tests fail due to novaclient 2.18 update

2014-07-11 Thread John Garbutt
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  Tests fail due to novaclient 2.18 update

Status in Orchestration API (Heat):
  Invalid
Status in heat havana series:
  Confirmed
Status in heat icehouse series:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Python client library for Nova:
  New

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | 
==
  2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | 
--
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
  2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
  2014-07-11 07:14:28.775 | data_func(load_onto)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
  2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 | return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
  2014-07-11 07:14:28.776 | self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1340596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340735] [NEW] FWaaS: Not able to delete Firewall policy immediately after deleting Firewall

2014-07-11 Thread Rajkumar
Public bug reported:

DESCRIPTION:

   I am not able to delete a firewall policy immediately after
deleting the firewall; I am able to delete it only after 3 seconds.

Steps to Reproduce: 
 1. create a router and attach a new network as an interface to the router
 2. create a firewall rule and attach it to the firewall policy
 3. create a firewall using the above policy
 4. now delete the firewall
 5. delete the firewall policy immediately (within 3 seconds) using a script.

Actual Results: 
An error is thrown saying that the firewall policy is in use.

Expected Results:

   The firewall policy should get deleted, and it should not take 3
seconds.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340735

Title:
  FWaaS: Not able to delete Firewall policy immediately after deleting
  Firewall

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  DESCRIPTION:

     I am not able to delete a firewall policy immediately
  after deleting the firewall; I am able to delete it only after 3 seconds.

  Steps to Reproduce: 
   1. create a router and attach a new network as an interface to the 
router
   2. create a firewall rule and attach it to the firewall policy
   3. create a firewall using the above policy
   4. now delete the firewall
   5. delete the firewall policy immediately (within 3 seconds) using a 
script.

  Actual Results: 
  An error is thrown saying that the firewall policy is in use.

  Expected Results:

     The firewall policy should get deleted, and it should not take 3
  seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340709] [NEW] Fix attach volume failure

2014-07-11 Thread warewang
Public bug reported:

I attach a volume to a VM, and an exception occurs when calling Cinder's
attach volume. Cinder shows that the volume is not attached, but when I
log in to the VM I find the volume is attached to it.

I think that Nova needs to detach the volume when the exception occurs!

** Affects: nova
 Importance: Undecided
 Assignee: warewang (wangguangcai)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => warewang (wangguangcai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340709

Title:
  Fix attach volume failure

Status in OpenStack Compute (Nova):
  New

Bug description:
  I attach a volume to a VM, and an exception occurs when calling Cinder's
  attach volume. Cinder shows that the volume is not attached, but when I
  log in to the VM I find the volume is attached to it.

  I think that Nova needs to detach the volume when the exception occurs!
  A sketch of the suggested cleanup follows.
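
  This hypothetical, self-contained sketch uses illustrative stubs, not
  the actual Nova/Cinder API:

      def connect_volume(instance, volume):
          print("host: connected %s to %s" % (volume, instance))

      def disconnect_volume(instance, volume):
          print("host: disconnected %s from %s" % (volume, instance))

      def record_attach_in_cinder(instance, volume):
          raise RuntimeError("cinder attach failed")

      def attach_volume(instance, volume):
          connect_volume(instance, volume)
          try:
              record_attach_in_cinder(instance, volume)
          except Exception:
              # Roll back the host-side connection so the VM and Cinder
              # agree that the volume is not attached.
              disconnect_volume(instance, volume)
              raise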

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340709/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340708] [NEW] improve exception handling inside spawn

2014-07-11 Thread Zang MingJie
Public bug reported:

The eventlet pool discards all exceptions raised inside spawn(_n), so it
is hard to discover problems inside spawned green threads; it would be
better to add a function wrapper that logs the exceptions raised inside
the spawned function.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340708

Title:
  improve exception handling inside spawn

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The eventlet pool discards all exceptions raised inside spawn(_n), so
  it is hard to discover problems inside spawned green threads; it would
  be better to add a function wrapper that logs the exceptions raised
  inside the spawned function. A sketch of such a wrapper follows.
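
  The sketch below assumes a plain logging setup; it logs, rather than
  silently drops, any exception raised in the green thread:

      import functools
      import logging

      LOG = logging.getLogger(__name__)

      def logged(func):
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              try:
                  return func(*args, **kwargs)
              except Exception:
                  LOG.exception("Unhandled exception in green thread %s",
                                func.__name__)
                  raise
          return wrapper

      # Wrap the target before handing it to the pool, e.g.:
      #     pool.spawn_n(logged(worker), payload)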

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340702] [NEW] Missing of instance action record for start of live migration

2014-07-11 Thread ChangBo Guo(gcb)
Public bug reported:

Nova has capability of recording instance actions like start/stop ,migrate, etc.
 We also need record live migrating action.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340702

Title:
  Missing  of instance action record for start of live migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova has capability of recording instance actions like start/stop ,migrate, 
etc.
   We also need record live migrating action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340696] [NEW] rpc communication is not logged in debug

2014-07-11 Thread Jakub Libosvar
Public bug reported:

Currently, even though the service is running with debug=True, RPC
messages are not logged. Previously, in Icehouse, we could see those logs.
It helps with debugging when we can see the communication between agents
and the server.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340696

Title:
  rpc communication is not logged in debug

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, even though the service is running with debug=True, RPC
  messages are not logged. Previously, in Icehouse, we could see those
  logs. It helps with debugging when we can see the communication
  between agents and the server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340664] [NEW] openstack+ceph glance image create stuck in datastore rbd mode

2014-07-11 Thread Mh Raies
Public bug reported:

I have a 3-node ceph cluster + 1 OpenStack node (legacy networking and a
remote cinder volume node)

ceph health == OK

On the OpenStack node I have configured glance-api.conf as -

default_store = file
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8

When I run the glance image-create command, it hangs until it times
out.

I increased the timeout to 15 minutes; still the same error.

After the timeout, when I run glance image-list it also hangs for a long time.

Then I restart the glance-api service.

Now I can run glance image-list, and it shows that the image which I was
trying to create is in the "saving" state.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: ceph

** Tags added: glance

** Tags removed: glance
** Tags added: ceph

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1340664

Title:
  openstack+ceph glance image create stuck in datastore rbd mode

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I have a 3-node ceph cluster + 1 OpenStack node (legacy networking and a
  remote cinder volume node)

  ceph health == OK

  On the OpenStack node I have configured glance-api.conf as -

  default_store = file
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_user = glance
  rbd_store_pool = images
  rbd_store_chunk_size = 8

  When I run the glance image-create command, it hangs until it times
  out.

  I increased the timeout to 15 minutes; still the same error.

  After the timeout, when I run glance image-list it also hangs for a
  long time.

  Then I restart the glance-api service.

  Now I can run glance image-list, and it shows that the image which I
  was trying to create is in the "saving" state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1340664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340661] [NEW] de-stub test_http and use mock

2014-07-11 Thread Amala Basha
Public bug reported:

Modify test_http.py to use mock instead of the FakeResponse stub.

** Affects: glance
 Importance: Undecided
 Assignee: Amala Basha (amalabasha)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Amala Basha (amalabasha)

** Summary changed:

- destub test_http and use mock
+ de-stub test_http and use mock

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1340661

Title:
  de-stub test_http and use mock

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Modify test_http.py to use mock instead of the FakeResponse stub.
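
  A hypothetical sketch of that direction; the patched target and the
  test below are illustrative only, not Glance's actual test code:

      import httplib
      import unittest

      import mock

      class TestHttpStore(unittest.TestCase):
          @mock.patch("httplib.HTTPConnection.getresponse")
          def test_get_returns_body(self, mock_getresponse):
              # A mock that behaves like an httplib response object,
              # replacing the FakeResponse-style stub.
              response = mock.Mock(status=200)
              response.read.side_effect = ["image data", ""]
              mock_getresponse.return_value = response

              conn = httplib.HTTPConnection("example.com")
              self.assertEqual("image data", conn.getresponse().read())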

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1340661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340641] [NEW] nova-api crashes when using ipv6-address for metadata API

2014-07-11 Thread Ville Salmela
Public bug reported:

I'm doing an OpenStack Icehouse controller installation inside VirtualBox
with IPv6 configuration while installing nova.

When I use an IPv6 address for the metadata API (metadata_listen =
2001:db8:0::1, metadata_host = 2001:db8:0::1), nova-api crashes soon
after launching, while with IPv4 everything seems to run like a charm
(metadata_listen = 198.168.0.1, metadata_host = 198.168.0.1).

e.g. when I restart my nova processes and run the 'nova list' command twice
as root, the following occurs:

# nova list
++--+++-+--+
| ID | Name | Status | Task State | Power State | Networks |
++--+++-+--+
++--+++-+--+

# nova list
ERROR: HTTPConnectionPool(host='ctrl', port=8774): Max retries exceeded with 
url: /v2/d117e271b78248de8a26e572197fd149/servers/detail (Caused by : [Errno 111] Connection refused)

Here is the trace from nova-api.log:

2014-05-16 20:41:28.602 22728 DEBUG nova.openstack.common.processutils [-] 
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf 
iptables-restore -c execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] 
Result was 2 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] 
['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'iptables-restore', '-c'] 
failed. Retrying. execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:199
2014-05-16 20:41:30.278 22728 DEBUG nova.openstack.common.processutils [-] 
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf 
iptables-restore -c execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.processutils [-] 
Result was 2 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.lockutils [-] 
Released file lock "iptables" at /run/lock/nova/nova-iptables lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:210
2014-05-16 20:41:30.349 22728 DEBUG nova.openstack.common.lockutils [-] 
Semaphore / lock released "_apply" inner 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
2014-05-16 20:41:30.349 22728 CRITICAL nova [-] ProcessExecutionError: 
Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
Exit code: 2
Stdout: ''
Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred 
at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more 
information.\n"
2014-05-16 20:41:30.349 22728 TRACE nova Traceback (most recent call last):
2014-05-16 20:41:30.349 22728 TRACE nova   File "/usr/bin/nova-api", line 10, 
in 
2014-05-16 20:41:30.349 22728 TRACE nova sys.exit(main())
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api.py", line 53, in main
2014-05-16 20:41:30.349 22728 TRACE nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 329, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova self.manager = self._get_manager()
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 373, in _get_manager
2014-05-16 20:41:30.349 22728 TRACE nova return manager_class()
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/api/manager.py", line 30, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova 
self.network_driver.metadata_accept()
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 660, in 
metadata_accept
2014-05-16 20:41:30.349 22728 TRACE nova iptables_manager.apply()
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 428, in apply
2014-05-16 20:41:30.349 22728 TRACE nova self._apply()
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 
249, in inner
2014-05-16 20:41:30.349 22728 TRACE nova return f(*args, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 457, in 
_apply
2014-05-16 20:41:30.349 22728 TRACE nova attempts=5)
2014-05-16 20:41:30.349 22728 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1205, in 
_execute
2014-05-16 20:41:30.349 22728 TRACE nova return utils.execute(*cmd, 
**kwargs)
2014-05-16 20:41:30.349 22728 TRACE n

[Yahoo-eng-team] [Bug 1340596] Re: Tests fail due to novaclient 2.18 update

2014-07-11 Thread Thierry Carrez
Heat master was preventively fixed, using the following commit:
https://git.openstack.org/cgit/openstack/heat/commit/?id=9850832da563c1113e121b99d38feb21af9daa8d

** Summary changed:

- tests fail due updated novalient
+ Tests fail due to novaclient 2.18 update

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: heat/icehouse
   Importance: Undecided
   Status: New

** Also affects: heat/havana
   Importance: Undecided
   Status: New

** Changed in: heat
   Status: New => Invalid

** Changed in: heat/havana
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  Tests fail due to novaclient 2.18 update

Status in Orchestration API (Heat):
  Invalid
Status in heat havana series:
  Confirmed
Status in heat icehouse series:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | 
==
  2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | 
--
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
  2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
  2014-07-11 07:14:28.775 | data_func(load_onto)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
  2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 | return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
  2014-07-11 07:14:28.776 | self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1340596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207953] Re: OverflowError: timeout is too large in gate-neutron-python27

2014-07-11 Thread Eugene Nikanorov
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1207953

Title:
  OverflowError: timeout is too large in gate-neutron-python27

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I submitted a patch: https://review.openstack.org/#/c/39904/

  Quite a few tests failed with the following reason, which does not
  appear to be related to my change:

  2013-08-03 02:04:44.828 | FAIL: 
neutron.tests.unit.test_wsgi.TestWSGIServerWithSSL.test_app_using_ipv6_and_ssl
  2013-08-03 02:04:44.828 | tags: worker-2
  2013-08-03 02:04:44.828 | 
--
  2013-08-03 02:04:44.828 | Empty attachments:
  2013-08-03 02:04:44.828 |   stderr
  2013-08-03 02:04:44.828 |   stdout
  2013-08-03 02:04:44.828 | 
  2013-08-03 02:04:44.828 | pythonlogging:'': {{{2013-08-03 02:04:44,808 
INFO [eventlet.wsgi.server] (18483) wsgi starting up on https://::1:58005/}}}
  2013-08-03 02:04:44.828 | 
  2013-08-03 02:04:44.828 | Traceback (most recent call last):
  2013-08-03 02:04:44.828 |   File 
"/home/jenkins/workspace/gate-neutron-python27/neutron/tests/unit/test_wsgi.py",
 line , in test_app_using_ipv6_and_ssl
  2013-08-03 02:04:44.828 | response = urllib2.urlopen('https://[::1]:%d/' 
% server.port)
  2013-08-03 02:04:44.828 |   File "/usr/lib/python2.7/urllib2.py", line 126, 
in urlopen
  2013-08-03 02:04:44.829 | return _opener.open(url, data, timeout)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/urllib2.py", line 400, 
in open
  2013-08-03 02:04:44.829 | response = self._open(req, data)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/urllib2.py", line 418, 
in _open
  2013-08-03 02:04:44.829 | '_open', req)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/urllib2.py", line 378, 
in _call_chain
  2013-08-03 02:04:44.829 | result = func(*args)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/urllib2.py", line 1215, 
in https_open
  2013-08-03 02:04:44.829 | return self.do_open(httplib.HTTPSConnection, 
req)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/urllib2.py", line 1174, 
in do_open
  2013-08-03 02:04:44.829 | h.request(req.get_method(), req.get_selector(), 
req.data, headers)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/httplib.py", line 958, 
in request
  2013-08-03 02:04:44.829 | self._send_request(method, url, body, headers)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/httplib.py", line 992, 
in _send_request
  2013-08-03 02:04:44.829 | self.endheaders(body)
  2013-08-03 02:04:44.829 |   File "/usr/lib/python2.7/httplib.py", line 954, 
in endheaders
  2013-08-03 02:04:44.830 | self._send_output(message_body)
  2013-08-03 02:04:44.830 |   File "/usr/lib/python2.7/httplib.py", line 814, 
in _send_output
  2013-08-03 02:04:44.830 | self.send(msg)
  2013-08-03 02:04:44.830 |   File "/usr/lib/python2.7/httplib.py", line 776, 
in send
  2013-08-03 02:04:44.830 | self.connect()
  2013-08-03 02:04:44.830 |   File "/usr/lib/python2.7/httplib.py", line 1161, 
in connect
  2013-08-03 02:04:44.830 | self.sock = ssl.wrap_socket(sock, 
self.key_file, self.cert_file)
  2013-08-03 02:04:44.830 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/ssl.py",
 line 288, in wrap_socket
  2013-08-03 02:04:44.830 | return GreenSSLSocket(sock, *a, **kw)
  2013-08-03 02:04:44.830 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/ssl.py",
 line 46, in __init__
  2013-08-03 02:04:44.830 | super(GreenSSLSocket, self).__init__(sock.fd, 
*args, **kw)
  2013-08-03 02:04:44.830 |   File "/usr/lib/python2.7/ssl.py", line 143, in 
__init__
  2013-08-03 02:04:44.830 | self.do_handshake()
  2013-08-03 02:04:44.830 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/ssl.py",
 line 196, in do_handshake
  2013-08-03 02:04:44.830 | super(GreenSSLSocket, self).do_handshake)
  2013-08-03 02:04:44.830 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/green/ssl.py",
 line 83, in _call_trampolining
  2013-08-03 02:04:44.831 | timeout_exc=timeout_exc('timed out'))
  2013-08-03 02:04:44.831 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/__init__.py",
 line 155, in trampoline
  2013-08-03 02:04:44.831 | return hub.switch()
  2013-08-03 02:04:44.831 |   File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 lin
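
  For reference, a minimal standalone sketch of this failure class (my
  assumption, not taken from the gate logs: the C-level poll() timeout is a
  plain C integer of milliseconds, so a large enough timeout overflows
  before the wait even starts):

  import select

  # Hedged sketch: a timeout far beyond the C integer range makes poll()
  # raise OverflowError immediately, the error class reported in this run.
  poller = select.poll()
  try:
      poller.poll(2 ** 128)  # timeout in milliseconds
  except OverflowError as exc:
      print('poll rejected the timeout: %s' % exc)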

[Yahoo-eng-team] [Bug 1340596] [NEW] tests in stable branches fail due to updated novaclient

2014-07-11 Thread Matthias Runge
Public bug reported:

tests currently fail on stable branches:
2014-07-11 07:14:28.737 | 
==
2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
2014-07-11 07:14:28.774 | 
--
2014-07-11 07:14:28.775 | Traceback (most recent call last):
2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
2014-07-11 07:14:28.775 | data_func(load_onto)
2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
2014-07-11 07:14:28.776 | return cls(status_code, msg)
2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
2014-07-11 07:14:28.776 | self.code = code
2014-07-11 07:14:28.776 | AttributeError: can't set attribute
2014-07-11 07:14:28.777 |

** Affects: horizon
 Importance: High
 Assignee: Matthias Runge (mrunge)
 Status: In Progress

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  tests in stable branches fail due to updated novaclient

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | 
==
  2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | 
--
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
  2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
  2014-07-11 07:14:28.775 | data_func(load_onto)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
  2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 | return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
  2014-07-11 07:14:28.776 | self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |
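
  The failure can be reproduced standalone. A hedged sketch, assuming a
  newer novaclient turned the exception's 'code' attribute into a
  read-only property so the test stub's assignment breaks (the class name
  below is hypothetical):

  class FakeClientException(Exception):
      # Stand-in for the novaclient exception class (hypothetical name).
      def __init__(self, http_status):
          self._http_status = http_status

      @property
      def code(self):
          # Read-only property: no setter is defined, so assignment fails.
          return self._http_status

  exc = FakeClientException(401)
  try:
      exc.code = 401  # mirrors 'self.code = code' in the stubbed __init__
  except AttributeError as err:
      print(err)  # can't set attribute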

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1340596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340570] [NEW] ovs agent test is blocking to sleep

2014-07-11 Thread Kevin Benton
Public bug reported:

The tests for the daemon loop in the ovs tunnel are not mocking out the
polling call, so they take 30+ seconds each.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340570

Title:
  ovs agent test is blocking to sleep

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The tests for the daemon loop in the ovs tunnel are not mocking out
  the polling call, so they take 30+ seconds each.
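
  A hedged sketch of the fix shape, assuming the tests can patch the
  blocking sleep (the names below are illustrative, not the agent's actual
  code):

  import time
  import mock  # the mock library, as used by unit tests of this era

  def daemon_loop(iterations):
      # Stand-in for the agent's polling loop; each cycle normally blocks.
      for _ in range(iterations):
          time.sleep(30)

  with mock.patch('time.sleep') as fake_sleep:
      daemon_loop(3)                     # returns instantly under the mock
      assert fake_sleep.call_count == 3  # the loop still ran three cycles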

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340564] [NEW] Very poor performance when concurrently spawning VMs to VCenter

2014-07-11 Thread Qin Zhao
Public bug reported:

When 10 users start to provision VMs to a VCenter, OpenStack chooses the
same datastore for all of them.
After the first clone task completes, OpenStack recognizes that the
datastore's space usage has increased and will choose another datastore.
However, the remaining 9 provisioning tasks are still performed on the
original datastore. Until a provisioning task on a datastore completes,
OpenStack keeps choosing that datastore for the next VMs.

This bug has a significant performance impact, because it greatly slows
down all the provisioning tasks. The VCenter driver should choose a less
busy datastore for provisioning tasks.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340564

Title:
  Very poor performance when concurrently spawning VMs to VCenter

Status in OpenStack Compute (Nova):
  New

Bug description:
  When 10 users start to provision VMs to a VCenter, OpenStack chooses the
  same datastore for all of them.
  After the first clone task completes, OpenStack recognizes that the
  datastore's space usage has increased and will choose another datastore.
  However, the remaining 9 provisioning tasks are still performed on the
  original datastore. Until a provisioning task on a datastore completes,
  OpenStack keeps choosing that datastore for the next VMs.

  This bug has a significant performance impact, because it greatly slows
  down all the provisioning tasks. The VCenter driver should choose a less
  busy datastore for provisioning tasks.
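
  A hedged sketch of the suggested behaviour (a hypothetical helper, not
  the actual VMware driver code): weight the datastore choice by in-flight
  clone tasks as well as free space, so concurrent spawns spread out.

  def pick_datastore(datastores, pending_clones):
      # Prefer datastores with fewer clone tasks in flight; break ties
      # by the largest free space.
      def score(ds):
          return (pending_clones.get(ds['name'], 0), -ds['free_gb'])
      return min(datastores, key=score)

  datastores = [{'name': 'ds1', 'free_gb': 500},
                {'name': 'ds2', 'free_gb': 400}]
  print(pick_datastore(datastores, {'ds1': 9})['name'])  # -> ds2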

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340552] [NEW] Volume detach error when using NFS as the cinder backend

2014-07-11 Thread Sampath Priyankara
Public bug reported:

Tested Environment
--
OS: Ubuntu 14.04 LTS
Cinder NFS driver: 
volume_driver=cinder.volume.drivers.nfs.NfsDriver

Error description
--
I used NFS as the cinder storage backend and successfully attached multiple
volumes to nova instances.
However, when I tried to detach one of them, I found the following error in
nova-compute.log.

2014-07-07 17:48:46.175 3195 ERROR nova.virt.libvirt.volume 
[req-a07d077f-2ad1-4558-91fa-ab1895ca4914 c8ac60023a794aed8cec8552110d5f12 
fdd538eb5dbf48a98d08e6d64def73d7] Couldn't unmount the NFS share 
172.23.58.245:/NFSThinLun2
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Traceback (most 
recent call last):
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 675, 
in disconnect_volume
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume 
utils.execute('umount', mount_path, run_as_root=True)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume return 
processutils.execute(*cmd, **kwargs)
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume   File 
"/usr/local/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", 
line 193, in execute
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume cmd=' 
'.join(cmd))
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume 
ProcessExecutionError: Unexpected error while running command.
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Command: sudo 
nova-rootwrap /etc/nova/rootwrap.conf umount 
/var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Exit code: 16
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stdout: ''
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume Stderr: 
'umount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is 
busy\numount.nfs: /var/lib/nova/mnt/16a381ac60f3e130cf26e7d6eb832cb6: device is 
busy\n'
2014-07-07 17:48:46.175 3195 TRACE nova.virt.libvirt.volume

For NFS volumes, every time you detach a volume, nova tries to umount the
device path. In /nova/virt/libvirt/volume.py:
Line 632: class LibvirtNFSVolumeDriver(LibvirtBaseVolumeDriver):
Line 653:     def disconnect_volume(self, connection_info, disk_dev):
Line 661:         utils.execute('umount', mount_path, run_as_root=True)
 
This works when the device path is not busy.
If the device path is busy (or in use), it should log a message and
continue.
The problem is that instead of logging a message, it raises an exception,
which causes the above error.

I think the reason is that the 'if' statement at Line 663 fails to catch
the device-busy message in exc.message. It looks for 'target is busy' in
exc.message, but the umount error output contains 'device is busy'.
Therefore, the current code skips the 'if' branch, falls through to the
'else' branch, and raises the exception.

How to reproduce
--
(1) Prepare an NFS share and set it as the storage backend of your cinder
(refer to
http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/NFS-driver.html)
In cinder.conf:
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=
(2) Create 2 empty volumes from cinder
(3) Create a nova instance and attach the above 2 volumes
(4) Then try to detach one of them.
You will get the error in nova-compute.log: “Couldn't unmount the NFS share
”

Proposed Fix
--
I'm not sure whether any other OSs output 'target is busy' in the umount
error message.
Therefore, the first fix that comes to mind is to change the 'if'
statement:
Before fix:
if 'target is busy' in exc.message:
After fix:
if 'device is busy' in exc.message:
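
A self-contained sketch of that fix's shape (hedged: matching either
wording, since the string apparently differs between umount
implementations, may be safer than swapping one string for the other):

BUSY_MARKERS = ('target is busy', 'device is busy')

def is_umount_busy(stderr):
    # True when umount failed only because the mount is still in use.
    return any(marker in stderr for marker in BUSY_MARKERS)

print(is_umount_busy(
    'umount.nfs: /var/lib/nova/mnt/16a381ac...: device is busy'))  # True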

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340552

Title:
  Volume detach error when using NFS as the cinder backend

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tested Environment
  --
  OS: Ubuntu 14.04 LTS
  Cinder NFS driver: 
  volume_driver=cinder.volume.drivers.nfs.NfsDriver

  Error description
  --
  I used NFS as the cinder storage backend and successfully attached
  multiple volumes to nova instances.
  However, when I tried to detach one of them, I found the following error
  in nova-compute.log.

  2014-07-07 17:48:46.175 3195 ERROR nova.virt.libvirt.volume 
[req-a07d077f-2ad1-4558-91fa-ab1895ca4914 c8ac60023a794aed8cec8552110d5f12 
fdd538eb5dbf48a98d08e6d64def73d7] Couldn't unmount the NFS share 
172.23.58.245:/NFSThinLun2
  2014-07

[Yahoo-eng-team] [Bug 1340553] [NEW] glance image-create without an image file leaves the image queued

2014-07-11 Thread novelkumar
Public bug reported:

When the glance image-create command is given without the actual image
file, it is accepted without any error or warning message. The image goes
to the queued state and stays there indefinitely.

I can see bug 1171206, which says it is a property of glance image-create
to create a placeholder for the image. But without a proper
warning/message from glance, how can one know that the request is being
accepted as a placeholder for an image?

A warning or a confirmation flag when glance image-create is run without
an image file would be better than silently putting the image into the
'queued' state.

-Novel

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1340553

Title:
  glance image-create without an image file leaves the image queued

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When the glance image-create command is given without the actual image
  file, it is accepted without any error or warning message. The image
  goes to the queued state and stays there indefinitely.

  I can see bug 1171206, which says it is a property of glance
  image-create to create a placeholder for the image. But without a proper
  warning/message from glance, how can one know that the request is being
  accepted as a placeholder for an image?

  A warning or a confirmation flag when glance image-create is run without
  an image file would be better than silently putting the image into the
  'queued' state.

  -Novel
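
  A hedged sketch of the requested behaviour (a hypothetical wrapper, not
  glance code): warn up front when no image data is supplied, so the
  'queued' placeholder is no surprise.

  def create_image(name, data=None):
      # Placeholder semantics: with no data, the image sits in 'queued'.
      if data is None:
          print("Warning: no image data supplied; '%s' will remain in "
                "the 'queued' state until data is uploaded." % name)
      return {'name': name,
              'status': 'queued' if data is None else 'active'}

  print(create_image('cirros-placeholder'))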

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1340553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp