[Yahoo-eng-team] [Bug 1435155] [NEW] [Launch Instance Fix] Conditionally handle DiskConfig

2015-03-22 Thread Shaoquan Chen
Public bug reported:

In the Launch Instance workflow's Configuration step, `disk_config` should be
handled only when the relevant nova extension (DiskConfig) is enabled.
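
A minimal sketch of the intended guard, assuming Horizon's
api.nova.extension_supported helper (the surrounding helper function and the
context keys are illustrative, not the actual Launch Instance code):

    from openstack_dashboard import api

    def _build_disk_config_kwargs(request, context):
        kwargs = {}
        # Only pass disk_config to nova when the DiskConfig extension is
        # enabled; otherwise omit the key entirely.
        if api.nova.extension_supported('DiskConfig', request):
            kwargs['disk_config'] = context.get('disk_config', 'AUTO')
        return kwargs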

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1435155

Title:
  [Launch Instance Fix] Conditionally handle DiskConfig

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the Launch Instance workflow's Configuration step, `disk_config`
  should be handled only when the relevant nova extension (DiskConfig) is
  enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1435155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435136] [NEW] [IPv6] [VPNaaS] ipsec-site-connection-create failing for IPv6

2015-03-22 Thread venkata anil
Public bug reported:

ipsec-site-connection-create is failing for IPv6 with the following errors:

2015-03-23 04:27:58.667 ERROR neutron.agent.linux.utils [req-fe39cbe2-9349-43bc-be0b-6c70c72fe874 admin 8f8b8fabb981498a81863266ffabf34f]
Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-22af8b67-1902-453d-9b0f-117df0bb6d68', 'iptables-restore', '-c']
Exit code: 2
..
.
Stderr: iptables-restore v1.4.21: invalid mask `64' specified
Error occurred at line: 23
Try `iptables-restore -h' or 'iptables-restore --help' for more information.

2015-03-23 04:27:58.671 ERROR neutron.agent.linux.iptables_manager [req-fe39cbe2-9349-43bc-be0b-6c70c72fe874 admin 8f8b8fabb981498a81863266ffabf34f] IPTablesManager.apply failed to apply the following set of iptables rules:
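
The invalid mask `64' error suggests an IPv6 peer CIDR is being rendered into
an IPv4 iptables rule. A minimal sketch of the kind of address-family check
that would send the rule to the right table, assuming neutron's
IptablesManager with its ipv4/ipv6 table dicts (the helper name, chain and
rule arguments are illustrative, not the actual VPNaaS agent code):

    import netaddr

    def add_vpn_rule(iptables_manager, peer_cidr, chain, rule):
        # Route rules for IPv6 peer CIDRs (e.g. a /64) to the ip6tables
        # tables instead of rendering them into IPv4 rules, which
        # iptables-restore rejects with "invalid mask".
        if netaddr.IPNetwork(peer_cidr).version == 6:
            iptables_manager.ipv6['filter'].add_rule(chain, rule)
        else:
            iptables_manager.ipv4['filter'].add_rule(chain, rule)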

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435136

Title:
  [IPv6] [VPNaaS] ipsec-site-connection-create failing for IPv6

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ipsec-site-connection-create is failing for IPv6 with the following
  errors:

  2015-03-23 04:27:58.667 ERROR neutron.agent.linux.utils [req-fe39cbe2-9349-43bc-be0b-6c70c72fe874 admin 8f8b8fabb981498a81863266ffabf34f]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-22af8b67-1902-453d-9b0f-117df0bb6d68', 'iptables-restore', '-c']
  Exit code: 2
  ..
  .
  Stderr: iptables-restore v1.4.21: invalid mask `64' specified
  Error occurred at line: 23
  Try `iptables-restore -h' or 'iptables-restore --help' for more information.

  2015-03-23 04:27:58.671 ERROR neutron.agent.linux.iptables_manager [req-fe39cbe2-9349-43bc-be0b-6c70c72fe874 admin 8f8b8fabb981498a81863266ffabf34f] IPTablesManager.apply failed to apply the following set of iptables rules:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377161] Re: If volume-attach API fails, Block Device Mapping record will remain

2015-03-22 Thread haruka tanizawa
** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377161

Title:
  If volume-attach API fails, Block Device Mapping record will
  remain

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Python client library for Cinder:
  New

Bug description:
  I executed the volume-attach API (nova v2 API) while RabbitMQ was down.
  The volume-attach call failed and the volume's status stayed 'available',
  but a block device mapping record was left behind in the nova DB.
  This is an inconsistent state.

  The leftover block device mapping record may also cause other problems.
  (I'm still investigating.)

  I used openstack juno-3.
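
  A minimal sketch of the kind of rollback that would avoid the orphaned
  record, assuming nova creates the block_device_mapping row before the RPC
  call (the method and helper names below are illustrative, not the actual
  nova code):

      from oslo_utils import excutils

      def attach_volume(self, context, instance, volume_id, device):
          bdm = self._create_volume_bdm(context, instance, device, volume_id)
          try:
              # This RPC call is what fails when RabbitMQ is down.
              self.compute_rpcapi.attach_volume(context, instance, bdm)
          except Exception:
              with excutils.save_and_reraise_exception():
                  # Remove the just-created row so a failed attach does not
                  # leave an orphaned block_device_mapping record behind.
                  bdm.destroy()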

  
--
  * Before executing volume-attach API:

  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -          | Running     | private=10.0.0.104 |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  $ cinder list
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  |                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  | e93478bf-ee37-430f-93df-b3cf26540212 | available |     None     |  1   |     None    |  false   |             |
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  devstack@ubuntu-14-04-01-64-juno3-01:~$

  mysql> select * from block_device_mapping where instance_uuid = '0b529526-4c8d-4650-8295-b7155a977ba7'\G
  *************************** 1. row ***************************
             created_at: 2014-10-02 18:36:08
             updated_at: 2014-10-02 18:36:10
             deleted_at: NULL
                     id: 145
            device_name: /dev/vda
  delete_on_termination: 1
            snapshot_id: NULL
              volume_id: NULL
            volume_size: NULL
              no_device: NULL
        connection_info: NULL
          instance_uuid: 0b529526-4c8d-4650-8295-b7155a977ba7
                deleted: 0
            source_type: image
       destination_type: local
           guest_format: NULL
            device_type: disk
               disk_bus: NULL
             boot_index: 0
               image_id: c1d264fd-c559-446e-9b94-934ba8249ae1
  1 row in set (0.00 sec)

  * After executing volume-attach API:
  $ nova list --all-t
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  | 0b529526-4c8d-4650-8295-b7155a977ba7 | testVM | ACTIVE | -          | Running     | private=10.0.0.104 |
  +--------------------------------------+--------+--------+------------+-------------+--------------------+
  $ cinder list
  +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
  |                  ID                  |   Stat

[Yahoo-eng-team] [Bug 1429492] Re: When host_manager syncs the aggregates into the host_state obj by iterating the aggregates set in host_aggregates_map, it may trigger a RuntimeError

2015-03-22 Thread Alex Xu
After rethinking:

host_state.aggregates = [self.aggs_by_id[agg_id] for agg_id in
                         self.host_aggregates_map[host_state.host]]

is a single statement; with eventlet's cooperative scheduling it will not be
interleaved with a concurrent update, so the set cannot change size during
the iteration.
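
For reference, a minimal sketch of the defensive copy that would be needed if
the comprehension could be interleaved with a concurrent update (illustrative
only; as noted above it cannot, which is why this bug is Invalid):

    # Snapshot the ids first so a concurrent add()/discard() from another
    # green thread could not change the set's size mid-iteration.
    agg_ids = tuple(self.host_aggregates_map[host_state.host])
    host_state.aggregates = [self.aggs_by_id[agg_id] for agg_id in agg_ids]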

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1429492

Title:
  When host_manager syncs the aggregates into the host_state obj by
  iterating the aggregates set in host_aggregates_map, it may trigger a
  RuntimeError

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This was found by code review.

  The host_manager syncs aggregates to the host_state obj as below:

      host_state.aggregates = [self.aggs_by_id[agg_id] for agg_id in
                               self.host_aggregates_map[host_state.host]]

  It iterates the set directly, but at the same time the aggregates set
  can be updated concurrently.

  Changing the size of a set during iteration triggers a RuntimeError:

  In [2]: s = set([1,2,3,4])

  In [3]: s
  Out[3]: {1, 2, 3, 4}

  In [4]: for i in s:
     ...:     print i
     ...:     if i == 3:
     ...:         s.add(5)
     ...:
  1
  2
  3
  ---------------------------------------------------------------------------
  RuntimeError                              Traceback (most recent call last)
  <ipython-input-4> in <module>()
  ----> 1 for i in s:
        2     print i
        3     if i == 3:
        4         s.add(5)
        5

  RuntimeError: Set changed size during iteration

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1429492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435012] [NEW] Exception when removing last network port from external router

2015-03-22 Thread Gal Sagie
Public bug reported:

Steps to reproduce:

Single node devstack setup.

Start devstack with ML2 and the ovs and l2population mechanism drivers.
Enable DVR in the L3/L2 agents and the service plugin (with DVR_SNAT).

1. Create Network with an instance on it
2. Create Router attached to external network
3. Add the network to the router

(Two new interfaces should be added to the router: the network interface and
an SNAT interface.)

4. Delete the two added interfaces from the router (or just the network
one)

The following exception is shown in the L3 Agent:

2015-03-22 16:07:54.613 ERROR neutron.agent.linux.utils [-]
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-81176deb-cc2b-4d29-903e-0a1a2fe207d6', 'ip', '-4', 'route', 'del', 'default', 'via', '10.1.0.8', 'dev', 'qr-a3c4575d-5f', 'table', '167837697']
Exit code: 1
Stdin:
Stdout:
Stderr: Cannot find device "qr-a3c4575d-5f"

2015-03-22 16:07:54.614 ERROR neutron.agent.l3.dvr_router [-] DVR: removed snat failed
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Traceback (most recent call last):
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/l3/dvr_router.py", line 282, in _snat_redirect_remove
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     ns_ipd.route.delete_gateway(gateway, table=snat_idx)
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 409, in delete_gateway
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     self._as_root([ip_version], tuple(args))
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 222, in _as_root
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     use_root_namespace=use_root_namespace)
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 69, in _as_root
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     log_fail_as_error=self.log_fail_as_error)
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 78, in _execute
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     log_fail_as_error=log_fail_as_error)
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router   File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 135, in execute
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router     raise RuntimeError(m)
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router RuntimeError:
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-81176deb-cc2b-4d29-903e-0a1a2fe207d6', 'ip', '-4', 'route', 'del', 'default', 'via', '10.1.0.8', 'dev', 'qr-a3c4575d-5f', 'table', '167837697']
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Exit code: 1
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Stdin:
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Stdout:
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Stderr: Cannot find device "qr-a3c4575d-5f"
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router
2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router

Assumption:
The code tries to delete the default route (for the SNAT) inside the router
namespace; however, the namespace is already deleted (since this was the only
connected network).
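
A minimal sketch of a defensive guard for this case, assuming neutron's
ip_lib.device_exists helper (the method body is abbreviated and illustrative,
not the actual patch):

    from neutron.agent.linux import ip_lib

    def _snat_redirect_remove(self, gateway, sn_port, sn_int):
        # Bail out early if the qr- device (or its namespace) is already
        # gone, instead of letting 'ip route del' raise RuntimeError.
        if not ip_lib.device_exists(sn_int, namespace=self.ns_name):
            return
        snat_idx = self._get_snat_idx(sn_port)
        ns_ipd = ip_lib.IPDevice(sn_int, namespace=self.ns_name)
        ns_ipd.route.delete_gateway(gateway, table=snat_idx)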

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

** Description changed:

  Steps to reproduce:
  
  Single node devstack setup.
  
  Start devstack with ML2 and ovs,l2 population drivers.
  Enable DVR in L3/L2 and Service plugin (with DVR_SNAT)
  
- 
  1. Create Network with an instance on it
- 2. Create Router attached to external router
+ 2. Create Router attached to external network
  3. Add the network to the router
  
  (Two new interfaces should be added to the router, the network and an
  SNAT interface
  
  4. Delete the two added interfaces from the router (or just the network
  one)
  
  The following exception is shown in the L3 Agent:
  
- 2015-03-22 16:07:54.613 ERROR neutron.agent.linux.utils [-] 
+ 2015-03-22 16:07:54.613 ERROR neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-81176deb-cc2b-4d29-903e-0a1a2fe207d6', 'ip', 
'-4', 'route', 'del', 'default', 'via', '10.1.0.8', 'dev', 'qr-a3c4575d-5f', 
'table', '167837697']
  Exit code: 1
- Stdin: 
- Stdout: 
+ Stdin:
+ Stdout:
  Stderr: Cannot find device "qr-a3c4575d-5f"
  
  2015-03-22 16:07:54.614 ERROR neutron.agent.l3.dvr_router [-] DVR: removed 
snat failed
  2015-03-22 16:07:54.614 TRACE neutron.agent.l3.dvr_router Traceback (most 
re