[Yahoo-eng-team] [Bug 1861529] [NEW] [RFE] A port's network should be changeable

2020-01-31 Thread Stephen Ma
Public bug reported:

We want a way to change a VM port’s network when the VM’s operating
system does not support online removal and addition of devices.

These are the steps to accomplish this:
Given a running VM whose port <port> we want to move to a new network. No
floating IP is associated with the VM.

# openstack port set --no-fixed-ip <port>
# openstack port set --network-id <new-network> <port>
# openstack port set --fixed-ip subnet=<new-subnet> <port>

Neutron-server succeeds in changing the port's network-id only if:

1. The new network belongs to the same project as the original network.
2. The port has no fixed-ip addresses on it.
3. The port’s mac address is unused on the new network.
4. The new network’s MTU is greater than or equal to the old network’s.

Some operating systems may be unable to handle online addition and
deletion of devices.  We want a way of changing a VM’s port’s network
that does not rely on the guest operating system supporting these
capabilities.

The change is made by allowing PUT requests on the port's network-id
attribute. The Neutron API server checks whether the four criteria above
are met and then proceeds with updating the port. When the new fixed-ip is
set, the user will have to log in to the VM's console and manually change
the guest OS's configuration to make use of the new IP address.
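
At the REST level, the proposal amounts to accepting network_id in a port
update.  A hypothetical request (the endpoint, token handling, and IDs are
placeholders; this is not part of the current API):

# curl -X PUT http://<neutron-server>:9696/v2.0/ports/<port-id> \
      -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"port": {"network_id": "<new-network-id>"}}'

The server would reject the update when any of the four criteria above is
not met.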

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1861529

Title:
  [RFE] A port's network should be changeable

Status in neutron:
  New

Bug description:
  We want a way to change a VM port’s network when the VM’s operating
  system does not support online removal and addition of devices.

  These are the steps to accomplish this:
  Given a running VM whose port <port> we want to move to a new network.
  No floating IP is associated with the VM.

  # openstack port set --no-fixed-ip <port>
  # openstack port set --network-id <new-network> <port>
  # openstack port set --fixed-ip subnet=<new-subnet> <port>

  Neutron-server succeeds in changing the port's network-id only if:

  1. The new network belongs to the same project as the original network.
  2. The port has no fixed-ip addresses on it.
  3. The port’s mac address is unused on the new network.
  4. The new network’s MTU is greater than or equal to the old network’s.

  Some operating systems may be unable to handle online addition and
  deletion of devices.  We want a way of changing a VM’s port’s network
  that does not rely on the guest operating system supporting these
  capabilities.

  The change is made by allowing PUT requests on the port's network-id
  attribute.  The Neutron API server checks whether the four criteria
  above are met and then proceeds with updating the port.  When the new
  fixed-ip is set, the user will have to log in to the VM's console and
  manually change the guest OS's configuration to make use of the new IP
  address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1861529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1857047] [NEW] openstack port set --no-fixed-ips removed only 1 fixed IP

2019-12-19 Thread Stephen Ma
Public bug reported:

In the devstack environment (master branch), stack.sh created the
private network.  It has two subnets (one IPv4 and one IPv6).

A port is created on the "private" network.

Then the port is updated using the "openstack port set --no-fixed-ip"
command.  The IPv4 address is removed, but the IPv6 address is not.
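
The sequence that presumably reproduces this (standard OpenStack client
commands; the port name matches the output below):

$ openstack port create --network private test-private-port
$ openstack port set --no-fixed-ip test-private-port
$ openstack port show test-private-port    # the IPv6 address is still there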


stack@dvstku1804:/etc/neutron$ openstack port show test-private-port
+-----------------------+-----------------------------------------------------+
| Field                 | Value                                               |
+-----------------------+-----------------------------------------------------+
| admin_state_up        | UP                                                  |
| allowed_address_pairs |                                                     |
| binding_host_id       | None                                                |
| binding_profile       | None                                                |
| binding_vif_details   | None                                                |
| binding_vif_type      | None                                                |
| binding_vnic_type     | normal                                              |
| created_at            | 2019-12-19T19:11:39Z                                |
| data_plane_status     | None                                                |
| description           |                                                     |
| device_id             |                                                     |
| device_owner          |                                                     |
| dns_assignment        | None                                                |
| dns_domain            | None                                                |
| dns_name              | None                                                |
| extra_dhcp_opts       |                                                     |
| fixed_ips             | ip_address='10.0.0.62',                             |
|                       | subnet_id='9283e75b-f414-4a55-af38-bb884330c322'    |
|                       | ip_address='fd2f:1450:5e39:0:f816:3eff:fe0c:5d1e',  |
|                       | subnet_id='79c9c3f8-b93b-487e-a287-7f583b7ee992'    |
| id                    | 7d366c7b-8557-4ce4-ae9b-336aa00403b3                |
| location              | cloud='', project.domain_id='default',              |
|                       | project.domain_name=,                               |
|                       | project.id='e38b18b8ad344dcd96cf563aaeb81193',      |
|                       | project.name='demo', region_name='RegionOne', zone= |
| mac_address           | fa:16:3e:0c:5d:1e                                   |


[Yahoo-eng-team] [Bug 1822199] [NEW] neutron-vpn-netns-wrapper not invoked with --rootwrap_config parameter

2019-03-28 Thread Stephen Ma
Public bug reported:

The neutron-vpn-netns-wrapper always assumes the rootwrap.conf lives in
the default location of /etc/neutron/ because it is not executed with
the --rootwrap_config parameter.  If rootwrap.conf is not in the default
location, then execution will fail with a message like:

2019-03-27 18:06:49.176 13642 INFO neutron.common.config [-] /opt/stack/service/neutron/venv/bin/neutron-vpn-netns-wrapper version 13.0.3.dev77
2019-03-27 18:06:49.177 13642 ERROR neutron_vpnaas.services.vpn.common.netns_wrapper [-] Incorrect configuration file: /etc/neutron/rootwrap.conf: NoOptionError: No option 'filters_path' in section: 'DEFAULT'
; Stderr: 


In this case, rootwrap.conf is actually in the non-default directory 
/opt/stack/service/neutron/etc/.

So every neutron-vpn-netns-wrapper invocation should include the
--rootwrap_config= argument.
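
For example, a corrected invocation would pass the location explicitly
(paths taken from this report; the remaining wrapper arguments are elided):

/opt/stack/service/neutron/venv/bin/neutron-vpn-netns-wrapper \
    --rootwrap_config=/opt/stack/service/neutron/etc/rootwrap.conf \
    ...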

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822199

Title:
  neutron-vpn-netns-wrapper not invoked with --rootwrap_config parameter

Status in neutron:
  New

Bug description:
  The neutron-vpn-netns-wrapper always assumes the rootwrap.conf lives
  in the default location of /etc/neutron/ because it is not executed
  with the --rootwrap_config parameter.  If rootwrap.conf is not in the
  default location, then execution will fail with a message like:

  2019-03-27 18:06:49.176 13642 INFO neutron.common.config [-] /opt/stack/service/neutron/venv/bin/neutron-vpn-netns-wrapper version 13.0.3.dev77
  2019-03-27 18:06:49.177 13642 ERROR neutron_vpnaas.services.vpn.common.netns_wrapper [-] Incorrect configuration file: /etc/neutron/rootwrap.conf: NoOptionError: No option 'filters_path' in section: 'DEFAULT'
  ; Stderr: 

  
  In this case, rootwrap.conf is actually in the non-default directory 
/opt/stack/service/neutron/etc/.

  So every neutron-vpn-netns-wrapper invocation should include the
  --rootwrap_config= argument.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543756] [NEW] RBAC: Port creation on a shared network failed if --fixed-ip is specified in 'neutron port-create' command

2016-02-09 Thread Stephen Ma
Public bug reported:

The network demo-net, owned by user demo, is shared with tenant demo-2.
The sharing is created by demo using the command

neutron rbac-create --type network --action access_as_shared
--target-tenant <demo-2-tenant-id> demo-net


A user on the demo-2 tenant can see the network demo-net:

stack@Ubuntu-38:~/DEVSTACK/demo$ neutron net-list
+--------------------------------------+----------+--------------------------------------------------+
| id                                   | name     | subnets                                          |
+--------------------------------------+----------+--------------------------------------------------+
| 85bb7612-e5fa-440c-bacf-86c5929298f3 | demo-net | e66487b6-430b-4fb1-8a87-ed28dd378c43 10.1.2.0/24 |
|                                      |          | ff01f7ca-d838-42dc-8d86-1b2830bc4824 10.1.3.0/24 |
| 5beb4080-4cf0-4921-9bbf-a7f65df6367f | public   | 57485a80-815c-45ef-a0d1-ce11939d7fab             |
|                                      |          | 38d1ddad-8084-4d32-b142-240e16fcd5df             |
+--------------------------------------+----------+--------------------------------------------------+


The owner of network demo-net is able to create a port using the command
'neutron port-create demo-net --fixed-ip ...':

stack@Ubuntu-38:~/DEVSTACK/devstack$ neutron port-create demo-net --fixed-ip subnet_id=ff01f7ca-d838-42dc-8d86-1b2830bc4824
Created a new port:
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:vnic_type     | normal                                                                           |
| device_id             |                                                                                  |
| device_owner          |                                                                                  |
| dns_name              |                                                                                  |
| fixed_ips             | {"subnet_id": "ff01f7ca-d838-42dc-8d86-1b2830bc4824", "ip_address": "10.1.3.6"} |
| id                    | 37402f22-fcd5-4b01-8b01-c6734573d7a8                                             |
| mac_address           | fa:16:3e:44:71:ad                                                                |
| name                  |                                                                                  |
| network_id            | 85bb7612-e5fa-440c-bacf-86c5929298f3                                             |
| security_groups       | 7db11aa0-3d0d-40d1-ae25-e4c02b8886ce                                             |
| status                | DOWN                                                                             |
| tenant_id             | 54913ee1ca89458ba792d685c799484d                                                 |
+-----------------------+----------------------------------------------------------------------------------+


The user demo-2 of tenant demo-2 is able to create a port using the
network demo-net:

stack@Ubuntu-38:~/DEVSTACK/demo$ neutron port-create demo-net
Created a new port:
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:vnic_type     | normal                                                                           |
| device_id             |                                                                                  |
| device_owner          |                                                                                  |
| dns_name              |                                                                                  |
| fixed_ips             | {"subnet_id": "ff01f7ca-d838-42dc-8d86-1b2830bc4824", "ip_address": "10.1.3.5"} |
| id                    | bab87cc9-2c83-489d-a973-1a42872a3dd4                                             |
| mac_address           | fa:16:3e:c6:93:e5                                                                |
| name                  |                                                                                  |

[Yahoo-eng-team] [Bug 1489091] [NEW] neutron l3-agent-router-remove is not unscheduling dvr routers from L3-agents

2015-08-26 Thread Stephen Ma
Public bug reported:

My environment has a compute node and a controller node.
On the compute node the L3-agent mode is 'dvr'.
On the controller node the L3-agent mode is 'dvr-snat'.
Nova-compute runs only on the compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create my-net
   neutron subnet-create --name sb-my-net my-net 10.1.2.0/24
   neutron router-create my-router
   neutron router-interface-add my-router sb-my-net
   neutron router-gateway-set my-router public

my-net's UUID is 1162f283-6efc-424a-af37-0fbeeaf5d02a
my-router's UUID is 4f357733-9320-4c67-a0f6-81054d40fdaa

2. Boot a VM
   nova boot --flavor 1 --image IMAGE --nic 
net-id=1162f283-6efc-424a-af37-0fbeeaf5d02a myvm
   - The VM is hosted on the compute node.

3. Assign a floating IP to the VM
neutron port-list --device-id <vm-uuid>
neutron floatingip-create --port-id <vm-port-uuid> public

The fip namespace and the qrouter-4f357733-9320-4c67-a0f6-81054d40fdaa
namespace are found on the compute node.

4. Delete the VM. On the compute node, the fip namespace went away as
expected, but the qrouter namespace is left behind when it should have been
deleted. neutron l3-agent-list-hosting-router shows the router is still
scheduled on the compute node's L3-agent.
stack@Dvr-Ctrl2:~/DEVSTACK/manage$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
+--------------------------------------+-------------+----------------+-------+----------+
| id                                   | host        | admin_state_up | alive | ha_state |
+--------------------------------------+-------------+----------------+-------+----------+
| 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True           | :-)   |          |
| 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True           | :-)   |          |
+--------------------------------------+-------------+----------------+-------+----------+

5. Attempting to use neutron l3-agent-router-remove to remove the router
from the compute node's L3-agent also did not work.  The router is still
scheduled on the agent.

stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-router-remove 733e31eb-b49e-488b-aaf1-0dbcda802f66 4f357733-9320-4c67-a0f6-81054d40fdaa
Removed router 4f357733-9320-4c67-a0f6-81054d40fdaa from L3 agent

stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
+--------------------------------------+-------------+----------------+-------+----------+
| id                                   | host        | admin_state_up | alive | ha_state |
+--------------------------------------+-------------+----------------+-------+----------+
| 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True           | :-)   |          |
| 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True           | :-)   |          |
+--------------------------------------+-------------+----------------+-------+----------+

The errors in (4) and (5) did not happen on the stable/kilo or the
stable/juno code:
   i.) In (4) the router should no longer be scheduled on the compute
node's L3 agent.
   ii.) In (5) neutron l3-agent-router-remove should have removed the
router from the compute node's L3 agent.

Both (4) and (5) indicate that no notification to remove the router is
sent to the L3-agent on the compute node.  They represent regressions in
the latest neutron code.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489091

Title:
  neutron l3-agent-router-remove is not unscheduling dvr routers from
  L3-agents

Status in neutron:
  New

Bug description:
  My environment has a compute node and a controller node.
  On the compute node the L3-agent mode is 'dvr'.
  On the controller node the L3-agent mode is 'dvr-snat'.
  Nova-compute runs only on the compute node.

  Start: the compute node has no VMs running, there are no namespaces on
  the compute node.

  1. Created a network and a router
     neutron net-create my-net
     neutron subnet-create --name sb-my-net my-net 10.1.2.0/24
     neutron router-create my-router
     neutron router-interface-add my-router sb-my-net
     neutron router-gateway-set my-router public

  my-net's UUID is 1162f283-6efc-424a-af37-0fbeeaf5d02a
  my-router's UUID is 4f357733-9320-4c67-a0f6-81054d40fdaa

  2. Boot a VM
     nova boot --flavor 1 --image IMAGE --nic 

[Yahoo-eng-team] [Bug 1489183] [NEW] Port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2015-08-26 Thread Stephen Ma
Public bug reported:

My environment has a compute node and a controller node.
On the compute node the L3-agent mode is 'dvr'; on the controller node
the L3-agent mode is 'dvr-snat'.  Nova-compute runs only on the
compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create demo-net
   neutron subnet-create --name sb-demo-net demo-net 10.1.2.0/24
   neutron router-create demo-router
   neutron router-interface-add demo-router sb-demo-net
   neutron router-gateway-set demo-router public

demo-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
demo-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

2. Created a port: 
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

Note: the port is not associated with a floating IP.

3. Boot up a VM using the port:
nova boot --flavor 1 --image IMAGE_UUID --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

Wait for the VM to come up on the compute node.

4. Deleted the VM.

5. The port still exists and is now unbound from the compute node
(device_owner and binding:host_id are now empty):

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 278743d7-b057-4797-8b2b-faaf5fe13a4a
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | True                                                                     |
| allowed_address_pairs |                                                                          |
| binding:host_id       |                                                                          |
| binding:profile       | {}                                                                       |
| binding:vif_details   | {}                                                                       |
| binding:vif_type      | unbound                                                                  |
| binding:vnic_type     | normal                                                                   |
| device_id             |                                                                          |
| device_owner          |                                                                          |
| extra_dhcp_opts       |                                                                          |
| fixed_ips             | {subnet_id: b45d41ca-134f-4274-bb05-50fab100315e, ip_address: 10.1.2.4} |
| id                    | 278743d7-b057-4797-8b2b-faaf5fe13a4a                                     |
| mac_address           | fa:16:3e:a6:f7:d1                                                        |
| name                  |                                                                          |
| network_id            | 0d3f0103-43e9-45a2-8ca2-b29700039297                                     |
| port_security_enabled | True                                                                     |
| security_groups       | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246                                     |
| status                | DOWN                                                                     |
| tenant_id             | a7950bd5a61548ee8b03145cacf90a53                                         |
+-----------------------+--------------------------------------------------------------------------+

The router is still scheduled on the compute node:

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
+--------------------------------------+-------------+----------------+-------+----------+
| id                                   | host        | admin_state_up | alive | ha_state |
+--------------------------------------+-------------+----------------+-------+----------+
| 2fc1f65b-4c05-4cec-95eb-93dda39a6eec | Dvr-Ctrl2   | True           | :-)   |          |
| dae065fb-b140-4ece-8824-779cf6426337 | DVR-Compute | True           | :-)   |          |
+--------------------------------------+-------------+----------------+-------+----------+


When the port is unbound, the router should no longer be scheduled on the 
compute node as it is no longer needed on the compute node.  The reason is that 
when the port is no longer bound to the compute node, the DVR scheduler didn't 
check whether the router can be removed from an L3-agent.
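
In rough pseudocode, the missing step on port unbind would look like the
following (helper names are hypothetical, not Neutron's actual internals):

def on_port_unbound(context, router_id, host):
    # Hypothetical sketch: when a host loses its last port that needs the
    # router, unschedule the router and notify that host's L3-agent.
    if not router_has_ports_on_host(context, router_id, host):
        unschedule_router_from_host(context, router_id, host)
        notify_l3_agent_router_removed(context, host, router_id)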

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of 

[Yahoo-eng-team] [Bug 1489184] [NEW] Port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2015-08-26 Thread Stephen Ma
Public bug reported:

My environment has a compute node and a controller node.
On the compute node the L3-agent mode is 'dvr'; on the controller node
the L3-agent mode is 'dvr-snat'.  Nova-compute runs only on the
compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create demo-net
   neutron subnet-create --name sb-demo-net demo-net 10.1.2.0/24
   neutron router-create demo-router
   neutron router-interface-add demo-router sb-demo-net
   neutron router-gateway-set demo-router public

demo-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
demo-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

2. Created a port: 
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

Note: the port is not associated with a floating IP.

3. Boot up a VM using the port:
nova boot --flavor 1 --image IMAGE_UUID --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

Wait for the VM to come up on the compute node.

4. Deleted the VM.

5. The port still exists and is now unbound from the compute node
(device_owner and binding:host_id are now empty):

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 278743d7-b057-4797-8b2b-faaf5fe13a4a
+-----------------------+--------------------------------------------------------------------------+
| Field                 | Value                                                                    |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up        | True                                                                     |
| allowed_address_pairs |                                                                          |
| binding:host_id       |                                                                          |
| binding:profile       | {}                                                                       |
| binding:vif_details   | {}                                                                       |
| binding:vif_type      | unbound                                                                  |
| binding:vnic_type     | normal                                                                   |
| device_id             |                                                                          |
| device_owner          |                                                                          |
| extra_dhcp_opts       |                                                                          |
| fixed_ips             | {subnet_id: b45d41ca-134f-4274-bb05-50fab100315e, ip_address: 10.1.2.4} |
| id                    | 278743d7-b057-4797-8b2b-faaf5fe13a4a                                     |
| mac_address           | fa:16:3e:a6:f7:d1                                                        |
| name                  |                                                                          |
| network_id            | 0d3f0103-43e9-45a2-8ca2-b29700039297                                     |
| port_security_enabled | True                                                                     |
| security_groups       | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246                                     |
| status                | DOWN                                                                     |
| tenant_id             | a7950bd5a61548ee8b03145cacf90a53                                         |
+-----------------------+--------------------------------------------------------------------------+

The router is still scheduled on the compute node:

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
+--------------------------------------+-------------+----------------+-------+----------+
| id                                   | host        | admin_state_up | alive | ha_state |
+--------------------------------------+-------------+----------------+-------+----------+
| 2fc1f65b-4c05-4cec-95eb-93dda39a6eec | Dvr-Ctrl2   | True           | :-)   |          |
| dae065fb-b140-4ece-8824-779cf6426337 | DVR-Compute | True           | :-)   |          |
+--------------------------------------+-------------+----------------+-------+----------+


When the port is unbound, the router should no longer be scheduled on the
compute node, as it is no longer needed there.  The cause is that when the
port is unbound from the compute node, the DVR scheduler does not check
whether the router can be removed from that node's L3-agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New


** Tags: l3-dvr-backlog

[Yahoo-eng-team] [Bug 1468783] [NEW] File not Found error in check-neutron-lbaasv1-dsvm-api

2015-06-25 Thread Stephen Ma
Public bug reported:

In https://review.openstack.org/#/c/195223/ (a backport to stable/juno),
the Jenkins check job failed at check-neutron-lbaasv1-dsvm-api.
Looking at the console.html log, the failure occurred during the test
setup phase.  The console log is:

http://logs.openstack.org/23/195223/1/check/check-neutron-lbaasv1-dsvm-api/3394a7e/console.html

The failure is:
   ...
2015-06-25 02:22:54.814 | Running gate_hook
2015-06-25 02:22:54.814 | + gate_hook
2015-06-25 02:22:54.814 | + /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/gate_hook.sh lbaasv1
2015-06-25 02:22:54.815 | ./safe-devstack-vm-gate-wrap.sh: /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/gate_hook.sh: No such file or directory
2015-06-25 02:22:54.815 | + GATE_RETVAL=127
2015-06-25 02:22:54.815 | + RETVAL=127
2015-06-25 02:22:54.815 | + '[' 127 -ne 0 ']'
2015-06-25 02:22:54.815 | + echo 'ERROR: the main setup script run by this job failed - exit code: 127'
2015-06-25 02:22:54.815 | ERROR: the main setup script run by this job failed - exit code: 127
2015-06-25 02:22:54.816 | + echo 'please look at the relevant log files to determine the root cause'
2015-06-25 02:22:54.816 | please look at the relevant log files to determine the root cause
2015-06-25 02:22:54.816 | + echo 'Running devstack worlddump.py'
2015-06-25 02:22:54.816 | Running devstack worlddump.py
 ...

Looking at neutron-lbaas on the stable/juno branch, there is no
contrib directory, hence the 'No such file or directory' error.
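
A defensive guard in the gate wrapper would avoid the hard failure on
branches that predate the contrib directory (a sketch; the real job
scripts may differ):

GATE_HOOK=/opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/gate_hook.sh
if [ -x "$GATE_HOOK" ]; then
    "$GATE_HOOK" lbaasv1
else
    echo "gate_hook.sh not present on this branch, skipping"
fi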

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- check-neutron-lbaasv1-dsvm-api failing due to file not found
+ File not Found error in check-neutron-lbaasv1-dsvm-api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468783

Title:
  File not Found error in check-neutron-lbaasv1-dsvm-api

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In https://review.openstack.org/#/c/195223/ (a backport to
  stable/juno) the jenkins check job failed at the  check-neutron-
  lbaasv1-dsvm-api.  Looking at the console.html log, the failure
  occurred during the test setup phase.  The console.log is:

  http://logs.openstack.org/23/195223/1/check/check-neutron-lbaasv1-dsvm-api/3394a7e/console.html

  The failure is:
 ...
  2015-06-25 02:22:54.814 | Running gate_hook
  2015-06-25 02:22:54.814 | + gate_hook
  2015-06-25 02:22:54.814 | + /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/gate_hook.sh lbaasv1
  2015-06-25 02:22:54.815 | ./safe-devstack-vm-gate-wrap.sh: /opt/stack/new/neutron-lbaas/neutron_lbaas/tests/contrib/gate_hook.sh: No such file or directory
  2015-06-25 02:22:54.815 | + GATE_RETVAL=127
  2015-06-25 02:22:54.815 | + RETVAL=127
  2015-06-25 02:22:54.815 | + '[' 127 -ne 0 ']'
  2015-06-25 02:22:54.815 | + echo 'ERROR: the main setup script run by this job failed - exit code: 127'
  2015-06-25 02:22:54.815 | ERROR: the main setup script run by this job failed - exit code: 127
  2015-06-25 02:22:54.816 | + echo 'please look at the relevant log files to determine the root cause'
  2015-06-25 02:22:54.816 | please look at the relevant log files to determine the root cause
  2015-06-25 02:22:54.816 | + echo 'Running devstack worlddump.py'
  2015-06-25 02:22:54.816 | Running devstack worlddump.py
   ...

  Looking at neutron-lbaas on the stable/juno branch, there is no
  contrib directory, hence the 'No such file or directory' error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462154] [NEW] With DVR Pings to floating IPs replied with fixed-ips

2015-06-04 Thread Stephen Ma
Public bug reported:

On my single-node devstack setup, there are 2 VMs hosted.  VM1 has no
floating IP assigned.  VM2 has a floating IP assigned.  From VM1, ping VM2
using the floating IP.  The ping output reports that the replies come from
VM2's fixed IP address.
The replies should come from VM2's floating IP address.

This is a DVR problem as it doesn't happen when the L3 agent's mode is
'legacy'.

This may be a problem with the NAT rules defined by the DVR L3-agent.
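
The rules in question can be inspected in the router namespace, e.g.
(the router UUID is a placeholder):

$ sudo ip netns exec qrouter-<router-id> iptables -t nat -S

A missing or mis-ordered SNAT entry mapping 10.11.12.5 to 10.127.10.226
there would explain replies arriving from the fixed IP, as shown below.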

I used the latest neutron code on the master branch to reproduce this.
The agent_mode is set to 'dvr_snat'.


Here is how the problem is reproduced:

VM1 and VM2 run on the same host.

VM1 has fixed IP of 10.11.12.4, no floating-ip associated.
VM2 has fixed IP of 10.11.12.5  floating-ip=10.127.10.226

Logged into VM1 from the qrouter namespace.

From VM1, ping 10.127.10.226.  The ping output at VM1 reports that the
replies come from VM2's fixed IP address:

# ssh cirros@10.11.12.4
cirros@10.11.12.4's password: 
$ ping 10.127.10.226
PING 10.127.10.226 (10.127.10.226): 56 data bytes
64 bytes from 10.11.12.5: seq=0 ttl=64 time=4.189 ms
64 bytes from 10.11.12.5: seq=1 ttl=64 time=1.254 ms
64 bytes from 10.11.12.5: seq=2 ttl=64 time=2.386 ms
64 bytes from 10.11.12.5: seq=3 ttl=64 time=2.064 ms
^C
--- 10.127.10.226 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.254/2.473/4.189 ms
$ 


If I associate a floating IP with VM1 and repeat the same test, ping
reports that the replies come from VM2's floating IP:

# ssh cirros@10.11.12.4
cirros@10.11.12.4's password: 
$ ping 10.127.10.226
PING 10.127.10.226 (10.127.10.226): 56 data bytes
64 bytes from 10.127.10.226: seq=0 ttl=63 time=16.750 ms
64 bytes from 10.127.10.226: seq=1 ttl=63 time=2.417 ms
64 bytes from 10.127.10.226: seq=2 ttl=63 time=1.558 ms
64 bytes from 10.127.10.226: seq=3 ttl=63 time=1.042 ms
64 bytes from 10.127.10.226: seq=4 ttl=63 time=2.770 ms
^C
--- 10.127.10.226 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 1.042/4.907/16.750 ms
$

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462154

Title:
  With DVR Pings to floating IPs replied with fixed-ips

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On my single-node devstack setup, there are 2 VMs hosted.  VM1 has no
  floating IP assigned.  VM2 has a floating IP assigned.  From VM1, ping
  VM2 using the floating IP.  The ping output reports that the replies
  come from VM2's fixed IP address.
  The replies should come from VM2's floating IP address.

  This is a DVR problem as it doesn't happen when the L3 agent's mode is
  'legacy'.

  This may be a problem with the NAT rules defined by the DVR L3-agent.

  I used the latest neutron code on the master branch to reproduce this.
  The agent_mode is set to 'dvr_snat'.

  
  Here is how the problem is reproduced:

  VM1 and VM2 run on the same host.

  VM1 has fixed IP of 10.11.12.4, no floating-ip associated.
  VM2 has fixed IP of 10.11.12.5  floating-ip=10.127.10.226

  Logged into VM1 from the qrouter namespace.

  From VM1, ping 10.127.10.226.  The ping output at VM1 reports that the
  replies come from VM2's fixed IP address:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.11.12.5: seq=0 ttl=64 time=4.189 ms
  64 bytes from 10.11.12.5: seq=1 ttl=64 time=1.254 ms
  64 bytes from 10.11.12.5: seq=2 ttl=64 time=2.386 ms
  64 bytes from 10.11.12.5: seq=3 ttl=64 time=2.064 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 1.254/2.473/4.189 ms
  $ 

  
  If I associate a floating IP with VM1 and repeat the same test, ping
  reports that the replies come from VM2's floating IP:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.127.10.226: seq=0 ttl=63 time=16.750 ms
  64 bytes from 10.127.10.226: seq=1 ttl=63 time=2.417 ms
  64 bytes from 10.127.10.226: seq=2 ttl=63 time=1.558 ms
  64 bytes from 10.127.10.226: seq=3 ttl=63 time=1.042 ms
  64 bytes from 10.127.10.226: seq=4 ttl=63 time=2.770 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  5 packets transmitted, 5 packets received, 0% packet loss
  round-trip min/avg/max = 1.042/4.907/16.750 ms
  $

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456822] [NEW] AgentNotFoundByTypeHost exception logged when L3-agent starts up

2015-05-19 Thread Stephen Ma
Public bug reported:

On my single-node devstack setup running the latest neutron code, one
AgentNotFoundByTypeHost exception is logged for the L3-agent.  However,
no AgentNotFoundByTypeHost exception is logged for the DHCP, OVS, or
metadata agents.  This points to a problem with how the L3-agent starts
up.
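
A quick check is to compare when the agent registered itself against the
timestamp of the failed sync, e.g.:

$ neutron agent-list | grep 'L3 agent'

If the L3-agent row only appears after the RPC error below, the agent
requested its routers before its own registration (report_state) had been
processed by the server.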

Exception found in the L3-agent log:

2015-05-19 11:27:57.490 23948 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 1d0f3e0a8a6744c9a9fc43eb3fdc5153 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
2015-05-19 11:27:57.550 23948 ERROR neutron.agent.l3.agent [-] Failed synchronizing routers due to RPC error
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent Traceback (most recent call last):
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /opt/stack/neutron/neutron/agent/l3/agent.py, line 517, in fetch_and_sync_all_routers
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     routers = self.plugin_rpc.get_routers(context)
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /opt/stack/neutron/neutron/agent/l3/agent.py, line 91, in get_routers
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     router_ids=router_ids)
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py, line 156, in call
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     retry=self.retry)
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py, line 90, in _send
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     timeout=timeout, retry=retry)
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, line 350, in send
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     retry=retry)
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, line 341, in _send
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     raise result
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent RemoteError: Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and host=DVR-Ctrl2 could not be found
2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent [u'Traceback (most recent call last):\n', u'  File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 142, in _dispatch_and_reply\n    executor_callback))\n', u'  File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 186, in _dispatch\n    executor_callback)\n', u'  File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 130, in _do_dispatch\n    result = func(ctxt, **new_args)\n', u'  File /opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py, line 81, in sync_routers\n    context, host, router_ids))\n', u'  File /opt/stack/neutron/neutron/db/l3_agentschedulers_db.py, line 290, in list_active_sync_routers_on_active_l3_agent\n    context, constants.AGENT_TYPE_L3, host)\n', u'  File /opt/stack/neutron/neutron/db/agents_db.py, line 197, in _get_agent_by_type_and_host\n    host=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=DVR-Ctrl2 could not be found\n'].

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456822

Title:
  AgentNotFoundByTypeHost exception logged when L3-agent starts up

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On my single-node devstack setup running the latest neutron code, one
  AgentNotFoundByTypeHost exception is logged for the L3-agent.  However,
  no AgentNotFoundByTypeHost exception is logged for the DHCP, OVS, or
  metadata agents.  This points to a problem with how the L3-agent starts
  up.

  Exception found in the L3-agent log:

  2015-05-19 11:27:57.490 23948 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 1d0f3e0a8a6744c9a9fc43eb3fdc5153 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
  2015-05-19 11:27:57.550 23948 ERROR neutron.agent.l3.agent [-] Failed synchronizing routers due to RPC error
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent Traceback (most recent call last):
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File /opt/stack/neutron/neutron/agent/l3/agent.py, line 517, in fetch_and_sync_all_routers
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent     routers = self.plugin_rpc.get_routers(context)
  2015-05-19 11:27:57.550 23948 TRACE

[Yahoo-eng-team] [Bug 1456809] [NEW] L3-agent not recreating missing fg- device

2015-05-19 Thread Stephen Ma
Public bug reported:

When using DVR, the fg device on a compute node is needed to access the
VMs on that compute node.  If for any reason the fg- device is deleted,
users will not be able to access the VMs on the compute node.

On a single-node system where the L3-agent is running in 'dvr-snat' mode,
a VM is booted up and assigned a floating IP.  The VM is pingable using
the floating IP.  Now I go into the fip namespace and delete the fg device
using the command ovs-vsctl del-port br-ex fg-ccbd7bcb-75.  The VM can no
longer be pinged.

Then another VM is booted up and also assigned a floating IP.  The new VM
is not pingable either.

The L3-agent log shows that it could not find fg-ccbd7bcb-75 when setting
up the qrouter and fip namespaces for the new floating IP, but it did not
re-create the fg- device.

Even though the deletion was a deliberate act to make the cloud fail, the
L3-agent could have re-created the fg device to make the setup more fault
tolerant.
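
Until then, restarting the L3-agent (which resyncs its routers) is the
likely manual recovery; re-adding the device by hand would look roughly
like the following (device name from this report; the agent also sets
external-ids and addresses that are not reproduced here):

$ sudo ovs-vsctl -- --may-exist add-port br-ex fg-ccbd7bcb-75 \
      -- set Interface fg-ccbd7bcb-75 type=internal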

The problem can be reproduced with the latest neutron code.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456809

Title:
  L3-agent not recreating missing fg- device

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When using DVR, the fg device on a compute node is needed to access the
  VMs on that compute node.  If for any reason the fg- device is deleted,
  users will not be able to access the VMs on the compute node.

  On a single-node system where the L3-agent is running in 'dvr-snat'
  mode, a VM is booted up and assigned a floating IP.  The VM is pingable
  using the floating IP.  Now I go into the fip namespace and delete the
  fg device using the command ovs-vsctl del-port br-ex fg-ccbd7bcb-75.
  The VM can no longer be pinged.

  Then another VM is booted up and also assigned a floating IP.  The new
  VM is not pingable either.

  The L3-agent log shows that it could not find fg-ccbd7bcb-75 when
  setting up the qrouter and fip namespaces for the new floating IP, but
  it did not re-create the fg- device.

  Even though the deletion was a deliberate act to make the cloud fail,
  the L3-agent could have re-created the fg device to make the setup more
  fault tolerant.

  The problem can be reproduced with the latest neutron code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445202] Re: Bug #1414218 is not fixed on the stable/juno branch

2015-04-30 Thread Stephen Ma
** Changed in: neutron
   Status: New = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445202

Title:
  Bug #1414218 is not fixed on the stable/juno branch

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  On the stable/juno branch, https://review.openstack.org/#/c/164329
  (ChangeId: I3ad7864eeb2f959549ed356a1e34fa18804395cc, fixed bug
  https://bugs.launchpad.net/neutron/+bug/1414218) was merged on April
  1st.  Less than 1 hour before this merge,
  https://review.openstack.org/#/c/153181 was merged.  Both patches
  modified the same function _output_hosts_file() in the same file
  (neutron/agent/linux/dhcp.py).
  https://review.openstack.org/#/c/164329 removed LOG.debug statements
  from the _output_hosts_file while
  https://review.openstack.org/#/c/153181 added LOG.debug statements.
  The end result is that the bad performance problem fixed by
  https://review.openstack.org/#/c/164329 is reverted by
  https://review.openstack.org/#/c/153181 unintentionally.

  Patch https://review.openstack.org/#/c/164329 fixes bug
  https://bugs.launchpad.net/neutron/+bug/1414218.  The root cause is the
  performance overhead of the LOG.debug statements in the for-loop of the
  _output_hosts_file() function.

  This problem is only found on the stable/juno branch of neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445202] [NEW] Bug #1414218 is not fixed on the stable/juno branch

2015-04-16 Thread Stephen Ma
Public bug reported:

On the stable/juno branch, https://review.openstack.org/#/c/164329
(ChangeId: I3ad7864eeb2f959549ed356a1e34fa18804395cc, fixed bug
https://bugs.launchpad.net/neutron/+bug/1414218) was merged on April
1st.  Less than 1 hour before this merge,
https://review.openstack.org/#/c/153181 was merged.  Both patches
modified the same function _output_hosts_file() in the same file
(neutron/agent/linux/dhcp.py).  https://review.openstack.org/#/c/164329
removed LOG.debug statements from the _output_hosts_file while
https://review.openstack.org/#/c/153181 added LOG.debug statements. The
end result is that the bad performance problem fixed by
https://review.openstack.org/#/c/164329 is reverted by
https://review.openstack.org/#/c/153181 unintentionally.

Patch https://review.openstack.org/#/c/164329 fixes bug
https://bugs.launchpad.net/neutron/+bug/1414218.  The root cause is the
performance overhead of the LOG.debug statements in the for-loop of
the _output_hosts_file() function.
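
The pattern at issue, in miniature (illustrative only, not the actual
dhcp.py code):

    import logging

    LOG = logging.getLogger(__name__)

    def output_hosts_file(ports, buf):
        for port in ports:  # hot loop: one entry per port on the network
            buf.write('%s,%s\n' % (port.mac, port.ip))
            # The costly form that was unintentionally restored:
            # LOG.debug('Adding %s to the hosts file', port.ip)
        # The cheap form: one summary message after the loop.
        LOG.debug('Built hosts file with %d entries', len(ports))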

This problem is only found on the stable/juno branch of neutron.

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445202

Title:
  Bug #1414218 is not fixed on the stable/juno branch

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On the stable/juno branch, https://review.openstack.org/#/c/164329
  (ChangeId: I3ad7864eeb2f959549ed356a1e34fa18804395cc, fixed bug
  https://bugs.launchpad.net/neutron/+bug/1414218) was merged on April
  1st.  Less than 1 hour before this merge,
  https://review.openstack.org/#/c/153181 was merged.  Both patches
  modified the same function _output_hosts_file() in the same file
  (neutron/agent/linux/dhcp.py).
  https://review.openstack.org/#/c/164329 removed LOG.debug statements
  from the _output_hosts_file while
  https://review.openstack.org/#/c/153181 added LOG.debug statements.
  The end result is that the bad performance problem fixed by
  https://review.openstack.org/#/c/164329 is reverted by
  https://review.openstack.org/#/c/153181 unintentionally.

  Patch https://review.openstack.org/#/c/164329 fixes bug
  https://bugs.launchpad.net/neutron/+bug/1414218.  The root cause is the
  performance overhead of the LOG.debug statements in the for-loop of the
  _output_hosts_file() function.

  This problem is only found on the stable/juno branch of neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394043] Re: KeyError: 'gw_port_host' seen for DVR router removal

2015-02-24 Thread Stephen Ma
** Changed in: neutron
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394043

Title:
  KeyError: 'gw_port_host' seen for DVR router removal

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron juno series:
  Fix Released

Bug description:
  In some multi-node setups, a qrouter namespace might be hosted on a
  node where only a dhcp port is hosted (no VMs, no SNAT).

  When the router is removed from the db, the host with only the qrouter
  and dhcp namespace will have the qrouter namespace remain.  Other
  hosts with the same qrouter will remove the namespace.  The following
  KeyError is seen on the host with the remaining namespace -

  2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent call last):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File /opt/stack/neutron/neutron/common/utils.py, line 341, in call
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent     return func(*args, **kwargs)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent     self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in external_gateway_removed
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent     ri.router['gw_port_host'] == self.host):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
  Traceback (most recent call last):
    File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 82, in _spawn_n_impl
      func(*args, **kwargs)
    File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in _process_router_update
      self._process_router_if_compatible(router)
    File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in _process_router_if_compatible
      self.process_router(ri)
    File /opt/stack/neutron/neutron/common/utils.py, line 344, in call
      self.logger(e)
    File /opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File /opt/stack/neutron/neutron/common/utils.py, line 341, in call
      return func(*args, **kwargs)
    File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router
      self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
    File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in external_gateway_removed
      ri.router['gw_port_host'] == self.host):
  KeyError: 'gw_port_host'

  For the issue to be seen, the router in question needs to have the
  router-gateway-set previously.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424096] [NEW] DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net

2015-02-20 Thread Stephen Ma
Public bug reported:

As the administrator, a DVR router is created and attached to a shared
network; the administrator also created the shared network.

As a non-admin tenant, a VM is created with a port on the shared network.
The only VM using the shared network is scheduled to a compute node.  When
the VM is deleted, the qrouter namespace of the DVR router is expected to
be removed, but it is not.  This doesn't happen with routers attached to
networks that are not shared.

The environment consists of 1 controller node and 1 compute node.

Routers having the problem are created by the administrator attached to
shared networks that are also owned by the admin:

As the administrator, do the following commands on a setup having 1
compute node and 1 controller node:

1. neutron net-create shared-net -- --shared True
   Shared net's uuid is f9ccf1f9-aea9-4f72-accc-8a03170fa242.

2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

3. neutron router-create shared-router
Router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.

4. neutron router-interface-add shared-router shared-subnet
5. neutron router-gateway-set  shared-router public


As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13), boot a VM 
using the shared-net network:

1. neutron net-show shared-net
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
| name            | shared-net                           |
| router:external | False                                |
| shared          | True                                 |
| status          | ACTIVE                               |
| subnets         | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
| tenant_id       | 2a54d6758fab47f4a2508b06284b5104     |
+-----------------+--------------------------------------+

At this point, there are no VMs using the shared-net network running in
the environment.

2. Boot a VM that uses the shared-net network: nova boot ... --nic net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
3. Assign a floating IP to the VM vm_sharednet
4. Delete vm_sharednet. On the compute node, the qrouter namespace of the shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is left behind:

stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
 ...


This is consistent with the output of the neutron
l3-agent-list-hosting-router command, which shows the router is still
hosted on the compute node:


$ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
+--------------------------------------+----------------+----------------+-------+
| id                                   | host           | admin_state_up | alive |
+--------------------------------------+----------------+----------------+-------+
| 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
| ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
+--------------------------------------+----------------+----------------+-------+

Running the neutron l3-agent-router-remove command removes the qrouter
namespace from the compute node:

$ neutron l3-agent-router-remove ff869dc5-d39c-464d-86f3-112b55ec1c08 ab78428a-9653-4a7b-98ec-22e1f956f44f
Removed router ab78428a-9653-4a7b-98ec-22e1f956f44f from L3 agent

stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
stack@DVR-CN2:~/DEVSTACK/manage$

This is a workaround to get the qrouter namespace deleted from the
compute node. The L3-agent scheduler should have removed the router from
the compute node when the VM is deleted.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424096

Title:
  DVR routers attached to shared networks aren't being unscheduled from
  a compute node after deleting the VMs using the shared net

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As the administrator, a DVR router is created and attached to a shared
  network. The administrator also created the shared network.

  As a non-admin tenant, a VM is created with a port on the shared
  network.  The only VM using the shared network is scheduled to a
  compute node.  When the VM is deleted, the qrouter namespace of the
  DVR router is expected to be removed, but it is not.  This doesn't
  happen with routers attached to networks that are not shared.

  The environment consists of 1 controller node and 1 compute node.

  Routers having the problem are created by the administrator attached
  to shared networks that are also owned by the admin:

  As the administrator, do the following commands on a 

[Yahoo-eng-team] [Bug 1377156] [NEW] fg- device is not deleted after the deletion of the last VM on the compute node

2014-10-03 Thread Stephen Ma
|                                      |      | fa:16:3e:64:76:26 | {subnet_id: 3cd21767-38b5-4cef-9a17-60a03e1400ef, ip_address: 10.1.2.3} |
| c1dc81fb-3331-440c-92b2-26f051454689 |      | fa:16:3e:4b:29:b4 | {subnet_id: e7412ed4-e037-42d7-bbca-d2cefef68bec, ip_address: 10.0.0.2} |
| c3cd18c2-280a-45f9-b3b1-efdfed10d7e1 |      | fa:16:3e:df:1d:da | {subnet_id: e7412ed4-e037-42d7-bbca-d2cefef68bec, ip_address: 10.0.0.1} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+


3. Delete the VM.  The VM is deleted and the floating IP is disassociated.

stack@DVR-Controller:~/DEVSTACK/manage$ nova delete user-1-vm1
Request to delete server user-1-vm1 has been accepted.


stack@DVR-Controller:~/DEVSTACK/manage$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 0173a17f-0413-45ef-9ea5-0ecd84aa107a |                  | 10.127.10.226       |         |
+--------------------------------------+------------------+---------------------+---------+
stack@DVR-Controller:~/DEVSTACK/manage$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+


4. HOWEVER, on the compute node that hosted the only VM user-1-vm2, the
fip namespace shows that the device for port
5cb96efd-eb83-43b0-9fbc-c0003edb1302 still exists, even though the port
was deleted from the database when the VM was deleted.

The fip- namespace on the compute node after the VM is deleted:

stack@DVR-CN2:~/DEVSTACK/manage$ sudo ip netns exec fip-59c7a096-b07c-4d09-be90-31867aed00c6 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
110: fg-5cb96efd-eb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:57:e7:76 brd ff:ff:ff:ff:ff:ff
    inet 10.127.10.227/24 brd 10.127.10.255 scope global fg-5cb96efd-eb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe57:e776/64 scope link 
       valid_lft forever preferred_lft forever


The list of ports after the VM deletion:

stack@DVR-Controller:~/DEVSTACK/manage$ ./os_admin neutron port-list
/home/stack/DEVSTACK/manage
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+
| 0b51a8fc-6737-4b6f-b640-4338e6fe4652 |      | fa:16:3e:e2:3f:f7 | {subnet_id: 3cd21767-38b5-4cef-9a17-60a03e1400ef, ip_address: 10.1.2.2}      |
| 1b649bc2-7d3c-4e8f-b3fb-80ca12bbe65b |      | fa:16:3e:c3:6c:3d | {subnet_id: 6502fec5-ae38-4b7e-a84d-d0779e85ecf7, ip_address: 10.127.10.226} |
| 24426b5f-cafa-4688-b5e5-0be38c14408f |      | fa:16:3e:c2:c4:83 | {subnet_id: 3cd21767-38b5-4cef-9a17-60a03e1400ef, ip_address: 10.1.2.1}      |
| 3813c08c-8c40-46d3-a99a-4bba9a91e2ba |      | fa:16:3e:d1:27:e0 | {subnet_id: 6502fec5-ae38-4b7e-a84d-d0779e85ecf7, ip_address: 10.127.10.225} |
| 3b4119fd-4b69-4310-817e-9b0b8d499d06 |      | fa:16:3e:ed:a7:65 | {subnet_id: 6502fec5-ae38-4b7e-a84d-d0779e85ecf7, ip_address: 10.127.10.224} |
| 686b5139-dc0b-46d8-bc05-b474f973194f |      | fa:16:3e:31:bc:3b | {subnet_id: 3cd21767-38b5-4cef-9a17-60a03e1400ef, ip_address: 10.1.2.4}      |
| c1dc81fb-3331-440c-92b2-26f051454689 |      | fa:16:3e:4b:29:b4 | {subnet_id: e7412ed4-e037-42d7-bbca-d2cefef68bec, ip_address: 10.0.0.2}      |
| c3cd18c2-280a-45f9-b3b1-efdfed10d7e1 |      | fa:16:3e:df:1d:da | {subnet_id: e7412ed4-e037-42d7-bbca-d2cefef68bec, ip_address: 10.0.0.1}      |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377156

Title:
  fg- device is not deleted

[Yahoo-eng-team] [Bug 1373660] [NEW] An in-use dhcp port can be deleted

2014-09-24 Thread Stephen Ma
Public bug reported:

An in-use dhcp-port can be deleted by a tenant:

For example:


stack@Controller:~/DEVSTACK/user-1$ neutron net-list
+--------------------------------------+-----------+--------------------------------------------------+
| id                                   | name      | subnets                                          |
+--------------------------------------+-----------+--------------------------------------------------+
| 90b34c78-9204-4a2f-8c23-a0f8d5676b6d | public    | a90927be-72a4-47d1-8285-ba5bc403d99a             |
| cdb1392e-b9a2-4d85-b736-a729235b4b82 | user-3net | 13a4c458-2c00-4da7-9f68-97b0d6a0a74b 10.1.2.0/24 |
+--------------------------------------+-----------+--------------------------------------------------+
stack@DVR-Controller:~/DEVSTACK/user-1$ neutron port-list --network-id cdb1392e-b9a2-4d85-b736-a729235b4b82 --device_owner 'network:dhcp'
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+
| ed9bf7f4-e5df-4543-ab9c-b9a7885fed68 |      | fa:16:3e:36:aa:1a | {subnet_id: 13a4c458-2c00-4da7-9f68-97b0d6a0a74b, ip_address: 10.1.2.3} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------+
stack@Controller:~/DEVSTACK/user-1$
stack@Controller:~/DEVSTACK/user-1$ cd ../manage
stack@Controller:~/DEVSTACK/manage$ ./os_admin neutron dhcp-agent-list-hosting-net cdb1392e-b9a2-4d85-b736-a729235b4b82
+--------------------------------------+------------+----------------+-------+
| id                                   | host       | admin_state_up | alive |
+--------------------------------------+------------+----------------+-------+
| 674ffd44-f4a6-4695-b24c-59ef02d9cbd8 | Controller | True           | :-)   |
+--------------------------------------+------------+----------------+-------+

As a user of the tenant:

stack@Controller:~/DEVSTACK/manage$ neutron port-delete ed9bf7f4-e5df-4543-ab9c-b9a7885fed68
Deleted port: ed9bf7f4-e5df-4543-ab9c-b9a7885fed68

stack@Controller:~/DEVSTACK/manage$ neutron port-list --network-id cdb1392e-b9a2-4d85-b736-a729235b4b82 --device_owner 'network:dhcp'

stack@Controller:~/DEVSTACK/manage$


The network is still scheduled to the same dhcp agent:

stack@Controller:~/DEVSTACK/manage$ ./os_admin neutron dhcp-agent-list-hosting-net cdb1392e-b9a2-4d85-b736-a729235b4b82
/home/stack/DEVSTACK/manage
+--------------------------------------+------------+----------------+-------+
| id                                   | host       | admin_state_up | alive |
+--------------------------------------+------------+----------------+-------+
| 674ffd44-f4a6-4695-b24c-59ef02d9cbd8 | Controller | True           | :-)   |
+--------------------------------------+------------+----------------+-------+


The port deletion should not be allowed.  It makes the configuration of
the qdhcp namespace on the controller node inconsistent with the database
information: the tap device taped9bf7f4-e5 is still in the namespace, but
the port is no longer found in the database.
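
A minimal sketch of the kind of guard the server could apply on port
delete (helper and exception names are hypothetical, not Neutron's actual
internals):

    DHCP_OWNER = 'network:dhcp'

    def delete_port(context, port_id):
        port = get_port(context, port_id)
        # Refuse to delete a port that still backs a scheduled DHCP agent.
        if (port['device_owner'] == DHCP_OWNER and
                network_has_dhcp_agents(context, port['network_id'])):
            raise PortInUse(port_id=port_id)
        do_delete(context, port_id)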

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373660

Title:
  An in-use dhcp port can be deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An in-use dhcp-port can be deleted by a tenant:

  For example:

  
  stack@Controller:~/DEVSTACK/user-1$ neutron net-list
  +--------------------------------------+-----------+--------------------------------------------------+
  | id                                   | name      | subnets                                          |
  +--------------------------------------+-----------+--------------------------------------------------+
  | 90b34c78-9204-4a2f-8c23-a0f8d5676b6d | public    | a90927be-72a4-47d1-8285-ba5bc403d99a             |
  | cdb1392e-b9a2-4d85-b736-a729235b4b82 | user-3net | 13a4c458-2c00-4da7-9f68-97b0d6a0a74b 10.1.2.0/24 |
  +--------------------------------------+-----------+--------------------------------------------------+
  stack@DVR-Controller:~/DEVSTACK/user-1$ neutron port-list --network-id cdb1392e-b9a2-4d85-b736-a729235b4b82 --device_owner 'network:dhcp'
  +--------------------------------------+------+-------------------+
  | id                                   | name | mac_address       | fixed_ips

[Yahoo-eng-team] [Bug 1362908] [NEW] snat namespace remains on network node after the router is deleted

2014-08-28 Thread Stephen Ma
Public bug reported:

On a controller node, with L3 agent mode of 'dvr_snat', the snat
namespace remains on the node even after the router is deleted.

This problem was reproduced on a 3-node devstack setup with 2 compute nodes
and one controller node. The L3 agent mode on the compute nodes is 'dvr';
the mode on the controller node is 'dvr_snat'.

1. Create a network and a subnetwork.
2. Boot a VM using the network.
3. Create a router
4. Run neutron router-interface-add router subnet
5. Run neutron router-gateway-set router public
6. Wait awhile, then do neutron router-gateway-clear router public
7. Run neutron router-interface-delete router subnet
8. delete the router.

The router namespace is deleted on the control node, but the snat
namespace of the router remains.
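
To confirm the leak on the network node after the router is deleted, a
small check like the one below can be run there (the namespace prefixes
follow the DVR naming convention; the rest is an illustrative sketch):

    import subprocess

    def stale_router_namespaces(router_id):
        """Return any qrouter-/snat- namespaces left behind for a
        router that has already been deleted."""
        out = subprocess.check_output(['ip', 'netns', 'list']).decode()
        prefixes = ('qrouter-%s' % router_id, 'snat-%s' % router_id)
        return [line.split()[0] for line in out.splitlines()
                if line.startswith(prefixes)]

An empty result is the expected behaviour; in this bug the snat- entry
is still returned.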

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362908

Title:
  snat namespace remains on network node after the router is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On a controller node, with L3 agent mode of 'dvr_snat', the snat
  namespace remains on the node even after the router is deleted.

  This problem was reproduced on a 3-node devstack setup with 2 compute
  nodes and one controller node. The L3 agent mode on the compute nodes
  is 'dvr'; the mode on the controller node is 'dvr_snat'.

  1. Create a network and a subnetwork.
  2. Boot a VM using the network.
  3. Create a router
  4. Run neutron router-interface-add router subnet
  5. Run neutron router-gateway-set router public
  6. Wait awhile, then do neutron router-gateway-clear router public
  7. Run neutron router-interface-delete router subnet
  8. delete the router.

  The router namespace is deleted on the control node, but the snat
  namespace of the router remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357001] [NEW] Snat-namespace created by L3-agent whose agent_mode='dvr'

2014-08-14 Thread Stephen Ma
Public bug reported:

Snat namespace is being created by the L3-agent with agent_mode='dvr'.

How the problem is reproduced.

I have a configuration with 2 nodes:

   -- Controller node with L3-agent agent_mode='dvr_snat'
   -- Compute Node with L3-agent agent_mode='dvr'


1. Started up devstack, created a tenant and a user

2. Set up a network, subnetwork, and router; add the subnetwork to the
router and set an external gateway on the router.

3. Now boot up a VM using the network.

4. Now create a floating-ip for the VM's port.

After creating the floating-ip, the snat-<router-id> namespace is
created on both the Controller and Compute nodes.  The snat namespace
should exist only on the Controller node.
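
The expected gating is that only an agent in 'dvr_snat' mode creates the
snat namespace.  A minimal sketch of that check, using illustrative
helper names rather than the agent's actual code:

    def maybe_create_snat_namespace(agent_mode, router_id, create_ns):
        """'create_ns' stands in for the agent's namespace helper."""
        if agent_mode != 'dvr_snat':
            # Agents in plain 'dvr' mode host no centralized SNAT.
            return None
        name = 'snat-%s' % router_id
        create_ns(name)
        return name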

router-id is 19a08298-eb3f-42f2-9c90-645ee36e2698

On the Compute node:

sudo ip netns exec snat-19a08298-eb3f-42f2-9c90-645ee36e2698 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
default 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
125: qg-78955d8e-47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UNKNOWN group default 
link/ether fa:16:3e:a6:dd:15 brd ff:ff:ff:ff:ff:ff
inet 10.127.10.229/24 brd 10.127.10.255 scope global qg-78955d8e-47
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fea6:dd15/64 scope link 
   valid_lft forever preferred_lft forever


On the Controller Node:
stack@DVR-Controller:~/DEVSTACK/user-1$ sudo ip netns exec 
snat-19a08298-eb3f-42f2-9c90-645ee36e2698 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
435: sg-b5a5802f-e9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UNKNOWN group default
link/ether fa:16:3e:d1:db:59 brd ff:ff:ff:ff:ff:ff
inet 10.1.2.2/24 brd 10.1.2.255 scope global sg-b5a5802f-e9
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fed1:db59/64 scope link
   valid_lft forever preferred_lft forever
436: qg-78955d8e-47: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UNKNOWN group default
link/ether fa:16:3e:a6:dd:15 brd ff:ff:ff:ff:ff:ff
inet 10.127.10.229/24 brd 10.127.10.255 scope global qg-78955d8e-47
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fea6:dd15/64 scope link tentative dadfailed
   valid_lft forever preferred_lft forever


From mysql, looking at the port binding for port 78955d8e-47, the port should 
be on the DVR-Controller:
mysql> select port_id, host, vif_type, vif_details from ml2_port_bindings where 
 port_id like '78955d8e-47%';
+--++--++
| port_id  | host   | vif_type | 
vif_details|
+--++--++
| 78955d8e-4739-4465-9aa7-864d67aa5814 | DVR-Controller | ovs  | 
{"port_filter": true, "ovs_hybrid_plug": true} |
+--++--++

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357001

Title:
  Snat-namespace created by L3-agent whose agent_mode='dvr'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Snat namespace is being created by the L3-agent with agent_mode='dvr'.

  How the problem is reproduced.

  I have a configuration with 2 nodes:

 -- Controller node with L3-agent agent_mode='dvr_snat'
 -- Compute Node with L3-agent agent_mode='dvr'

  
  1. Started up devstack, created a tenant and a user

  2. Set up a network, subnetwork, and router; add the subnetwork to
  the router and set an external gateway on the router.

  3. Now boot up a VM using the network.

  4. Now create a floating-ip for the VM's port.

  After creating the floating-ip, the snat-<router-id> namespace is
  created on both the Controller and Compute nodes.  The snat namespace
  should exist only on the Controller node.

  router-id is 19a08298-eb3f-42f2-9c90-645ee36e2698

  On the Compute node:

  sudo ip netns exec snat-19a08298-eb3f-42f2-9c90-645ee36e2698 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state

[Yahoo-eng-team] [Bug 1351416] Re: neutron agent-list reports incorrect binary

2014-08-06 Thread Stephen Ma
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351416

Title:
  neutron agent-list reports incorrect binary

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In an environment set up with devstack where the neutron-vpn-agent is
  used, 'neutron agent-list' reports neutron-l3-agent as the binary for
  the L3 agent type.  Neutron-vpn-agent is running, not
  neutron-l3-agent.  The binary column should list neutron-vpn-agent as
  the binary, not neutron-l3-agent.

  # . openrc admin admin

  # ps -ef | grep neutron-l3-agent
 (no output)

  # ps -ef | grep neutron-vpn-agent
  stack 5701  5699  0 10:57 pts/29   00:00:00 /usr/bin/python 
/usr/local/bin/neutron-vpn-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/vpn_agent.ini

  # neutron agent-list
  
+--++-+---++---+
  | id   | agent_type | host| 
alive | admin_state_up | binary|
  
+--++-+---++---+
  | 264256b0-6228-4b7d-a169-c2e5cc5206d2 | Metadata agent | Ubuntu-37 | :-) 
  | True   | neutron-metadata-agent|
  | 5c340696-cbfd-4f7c-980b-6712f919841f | Open vSwitch agent | Ubuntu-37 | :-) 
  | True   | neutron-openvswitch-agent |
  | d3de59fe-1f51-4d57-8079-776109448a91 | L3 agent   | Ubuntu-37 | :-) 
  | True   | neutron-l3-agent  |
  | fd071b5c-1bef-41e0-a737-466fce69b5c3 | DHCP agent | Ubuntu-37 | :-) 
  | True   | neutron-dhcp-agent|
  
+--+++---++---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353165] [NEW] Namespaces not removed when the last VM using a DVR is deleted

2014-08-05 Thread Stephen Ma
Public bug reported:

With DVR, the qrouter- and snat- namespaces are deleted after the last
VM using the router is deleted. But the namespaces remain on the node
afterwards.

How the problem is reproduced:

1. Create a network, subnet, and router:

   - neutron net-create mynet
   - neutron subnet-create --name sb-mynet mynet 10.1.2.0/24
   - neutron router-create myrouter
   - neutron router-interface-add myrouter sb-mynet
   - neutron router-gateway-set myrouter public

2. Create a VM:

   - nova boot --flavor 1 --key-name key --image cirros-image --nic
net-id=mynet-uuid vm1

   After the VM boots up, check that the VM is pingable

3. Delete the VM.

The router's namespaces remain on the node.  They should have been
deleted.
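
Roughly, the cleanup condition the agent should apply is: once no VM
port on the host still relies on the router, tear its namespaces down.
A hedged sketch of that check (the port shape and helper name are
illustrative, not neutron's actual code):

    def dvr_namespaces_removable(router_id, host_vm_ports):
        """'host_vm_ports' stands in for the VM ports bound to this
        host, each carrying the id of the router it relies on."""
        return not any(p.get('router_id') == router_id
                       for p in host_vm_ports)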

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353165

Title:
  Namespaces not removed when the last VM using a DVR is deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With DVR, the qrouter- and snat- namespaces are deleted after the last
  VM using the router is deleted. But the namespaces remain on the node
  afterwards.

  How the problem is reproduced:

  1. Create a network, subnet, and router:

 - neutron net-create mynet
 - neutron subnet-create --name sb-mynet mynet 10.1.2.0/24
 - neutron router-create myrouter
 - neutron router-interface-add myrouter sb-mynet
 - neutron router-gateway-set myrouter public

  2. Create a VM:

 - nova boot --flavor 1 --key-name key --image cirros-image
  --nic net-id=mynet-uuid vm1

  After the VM boots up, check that the VM is pingable

  3. Delete the VM.

  The router's namespaces remain on the node.  They should have been
  deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1351416] [NEW] neutron agent-list reports incorrect binary

2014-08-01 Thread Stephen Ma
Public bug reported:

In an environment set up with devstack where the neutron-vpn-agent is
used, 'neutron agent-list' reports neutron-l3-agent as the binary for
the L3 agent type.  Neutron-vpn-agent is running, not neutron-l3-agent.
The binary column should list neutron-vpn-agent as the binary, not
neutron-l3-agent.
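
The binary shown by 'neutron agent-list' comes from the state report the
agent periodically sends over RPC.  The sketch below shows the relevant
field; the dict keys are inferred from the agent-list columns and are
not verbatim neutron code:

    def build_agent_state(host):
        # The report should name the process actually running, i.e.
        # neutron-vpn-agent when the VPN agent provides the L3 service.
        return {
            'binary': 'neutron-vpn-agent',
            'host': host,
            'agent_type': 'L3 agent',
            'topic': 'l3_agent',
        }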

# . openrc admin admin

# ps -ef | grep neutron-l3-agent
   (no output)

# ps -ef | grep neutron-vpn-agent
stack 5701  5699  0 10:57 pts/29   00:00:00 /usr/bin/python 
/usr/local/bin/neutron-vpn-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/vpn_agent.ini

# neutron agent-list
+--++-+---++---+
| id   | agent_type | host| 
alive | admin_state_up | binary|
+--++-+---++---+
| 264256b0-6228-4b7d-a169-c2e5cc5206d2 | Metadata agent | Ubuntu-37 | :-)   
| True   | neutron-metadata-agent|
| 5c340696-cbfd-4f7c-980b-6712f919841f | Open vSwitch agent | Ubuntu-37 | :-)   
| True   | neutron-openvswitch-agent |
| d3de59fe-1f51-4d57-8079-776109448a91 | L3 agent   | Ubuntu-37 | :-)   
| True   | neutron-l3-agent  |
| fd071b5c-1bef-41e0-a737-466fce69b5c3 | DHCP agent | Ubuntu-37 | :-)   
| True   | neutron-dhcp-agent|
+--+++---++---+

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351416

Title:
  neutron agent-list reports incorrect binary

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In an environment set up with devstack where the neutron-vpn-agent is
  used, 'neutron agent-list' reports neutron-l3-agent as the binary for
  the L3 agent type.  Neutron-vpn-agent is running, not
  neutron-l3-agent.  The binary column should list neutron-vpn-agent as
  the binary, not neutron-l3-agent.

  # . openrc admin admin

  # ps -ef | grep neutron-l3-agent
 (no output)

  # ps -ef | grep neutron-vpn-agent
  stack 5701  5699  0 10:57 pts/29   00:00:00 /usr/bin/python 
/usr/local/bin/neutron-vpn-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/vpn_agent.ini

  # neutron agent-list
  
+--++-+---++---+
  | id   | agent_type | host| 
alive | admin_state_up | binary|
  
+--++-+---++---+
  | 264256b0-6228-4b7d-a169-c2e5cc5206d2 | Metadata agent | Ubuntu-37 | :-) 
  | True   | neutron-metadata-agent|
  | 5c340696-cbfd-4f7c-980b-6712f919841f | Open vSwitch agent | Ubuntu-37 | :-) 
  | True   | neutron-openvswitch-agent |
  | d3de59fe-1f51-4d57-8079-776109448a91 | L3 agent   | Ubuntu-37 | :-) 
  | True   | neutron-l3-agent  |
  | fd071b5c-1bef-41e0-a737-466fce69b5c3 | DHCP agent | Ubuntu-37 | :-) 
  | True   | neutron-dhcp-agent|
  
+--+++---++---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1331877] [NEW] Neutron api server stopped working due to Errno 24

2014-06-18 Thread Stephen Ma
Public bug reported:

The traceback below was found in the neutron API server log.  It
appeared about one minute after the rabbitmq server was restarted
following an outage of the message-queue service.  Afterwards, the API
server no longer responded to requests from python-neutronclient.
However, it continued to respond to RPC calls from the L3 and DHCP
agents even after the traceback was reported.

To get the API server fully functional again, I had to stop and restart
it.

There are multiple workers for the api server.
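
Errno 24 means the process hit its open-file limit (RLIMIT_NOFILE),
which suggests descriptors were leaked while the message queue was down.
A small self-contained helper for watching descriptor usage on Linux,
offered purely as a diagnostic aid (not part of neutron):

    import os
    import resource

    def fd_usage():
        """Return (open_fds, soft_limit) for the current process."""
        soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        open_fds = len(os.listdir('/proc/%d/fd' % os.getpid()))
        return open_fds, soft

Sampling this around the rabbitmq restart would show whether the fd
count climbs toward the limit instead of returning to normal.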

The traceback:

2014-05-28 08:46:16.977 16091 CRITICAL quantum [-] [Errno 24] Too many open 
files
2014-05-28 08:46:16.977 16091 TRACE quantum Traceback (most recent call last):
2014-05-28 08:46:16.977 16091 TRACE quantum   File /usr/bin/quantum-server, 
line 27, in <module>
2014-05-28 08:46:16.977 16091 TRACE quantum server()
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/server/__init__.py, line 38, in main
2014-05-28 08:46:16.977 16091 TRACE quantum quantum_service = 
service.serve_wsgi(service.QuantumApiService)
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/service.py, line 102, in serve_wsgi
2014-05-28 08:46:16.977 16091 TRACE quantum service.start()
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/service.py, line 69, in start
2014-05-28 08:46:16.977 16091 TRACE quantum self.wsgi_app = 
_run_wsgi(self.app_name)
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/service.py, line 113, in _run_wsgi
2014-05-28 08:46:16.977 16091 TRACE quantum server.start(app, 
cfg.CONF.bind_port, cfg.CONF.bind_host, workers=cfg.CONF.workers)
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/wsgi.py, line 206, in start
2014-05-28 08:46:16.977 16091 TRACE quantum self.run_child()
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/wsgi.py, line 265, in run_child
2014-05-28 08:46:16.977 16091 TRACE quantum self._run(self._application, 
self._socket)
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/wsgi.py, line 277, in _run
2014-05-28 08:46:16.977 16091 TRACE quantum 
log=logging.WritableLogger(logger))
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 655, in server
2014-05-28 08:46:16.977 16091 TRACE quantum client_socket = sock.accept()
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/eventlet/greenio.py, line 154, in accept
2014-05-28 08:46:16.977 16091 TRACE quantum res = socket_accept(fd)
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/eventlet/greenio.py, line 52, in 
socket_accept
2014-05-28 08:46:16.977 16091 TRACE quantum return descriptor.accept()
2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/socket.py, line 202, in accept
2014-05-28 08:46:16.977 16091 TRACE quantum sock, addr = self._sock.accept()
2014-05-28 08:46:16.977 16091 TRACE quantum error: [Errno 24] Too many open 
files
2014-05-28 08:46:16.977 16091 TRACE quantum

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: api files many neutron open too

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1331877

Title:
  Neutron api server stopped working due to Errno 24

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The traceback below was found in the neutron API server log.  It
  appeared about one minute after the rabbitmq server was restarted
  following an outage of the message-queue service.  Afterwards, the API
  server no longer responded to requests from python-neutronclient.
  However, it continued to respond to RPC calls from the L3 and DHCP
  agents even after the traceback was reported.

  To get the API server fully functional again, I had to stop and
  restart it.

  There are multiple workers for the api server.

  The traceback:

  2014-05-28 08:46:16.977 16091 CRITICAL quantum [-] [Errno 24] Too many open 
files
  2014-05-28 08:46:16.977 16091 TRACE quantum Traceback (most recent call last):
  2014-05-28 08:46:16.977 16091 TRACE quantum   File /usr/bin/quantum-server, 
line 27, in <module>
  2014-05-28 08:46:16.977 16091 TRACE quantum server()
  2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/server/__init__.py, line 38, in main
  2014-05-28 08:46:16.977 16091 TRACE quantum quantum_service = 
service.serve_wsgi(service.QuantumApiService)
  2014-05-28 08:46:16.977 16091 TRACE quantum   File 
/usr/lib/python2.7/dist-packages/quantum/service.py, 

[Yahoo-eng-team] [Bug 1328991] [NEW] External network should not have provider:network_type as vxlan

2014-06-11 Thread Stephen Ma
Public bug reported:

Given this ml2_conf.ini file on a controller node that also runs the neutron 
API server and the DHCP and L3 agents:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,l2population
tenant_network_types = vxlan

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1001:65535

[database]
connection = mysql://...

[ovs]
local_ip = 192.0.2.24
enable_tunneling = True

[agent]
tunnel_types = vxlan
l2_population = True
polling_interval = 2
minimize_polling = True

[securitygroup]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

When the tripleo-incubator's setup-neutron.sh script creates the
external network on this node (devstack runs the same command):

neutron net-create ext-net --router:external=True


neutron net-show ext-net displays:

# neutron net-show ext-net
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 94384f38-0cb0-4336-a7c3-293858dec2ba |
| name  | ext-net  |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1002 |
| router:external   | True |
| shared| False|
| status| ACTIVE   |
| subnets   | b0585eeb-ea5a-4741-a759-4497f6e6c21a |
| tenant_id | 8b0a6343ca3e471bbcf0a78e084a98c0 |
+---+--+


The provider:network_type of vxlan is not appropriate for an external network 
because the only ports possible on it are floating IPs and router gateways. 
There are no VM ports, so vxlan tunnels for this network won't be created at 
all.
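
One way to avoid inheriting the vxlan tenant default is to pass explicit
provider attributes when creating the external network.  A hedged sketch
with python-neutronclient (the credentials and the physical-network
label are placeholders, and the deployment must also have a matching
flat type driver and bridge mapping configured):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://192.0.2.24:5000/v2.0')
    neutron.create_network({'network': {
        'name': 'ext-net',
        'router:external': True,
        'provider:network_type': 'flat',
        'provider:physical_network': 'physext',
    }})

The equivalent CLI call passes --provider:network_type flat and
--provider:physical_network physext to net-create.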

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328991

Title:
  External network should not have provider:network_type as vxlan

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Given this ml2_conf.ini file on a controller node that also runs the 
neutron API server and the DHCP and L3 agents:
  [ml2]
  type_drivers = local,flat,vlan,gre,vxlan
  mechanism_drivers = openvswitch,l2population
  tenant_network_types = vxlan

  [ml2_type_flat]

  [ml2_type_vlan]

  [ml2_type_gre]
  tunnel_id_ranges = 1:1000

  [ml2_type_vxlan]
  vni_ranges = 1001:65535

  [database]
  connection = mysql://...

  [ovs]
  local_ip = 192.0.2.24
  enable_tunneling = True

  [agent]
  tunnel_types = vxlan
  l2_population = True
  polling_interval = 2
  minimize_polling = True

  [securitygroup]
  firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

  When the tripleo-incubator's setup-neutron.sh script creates the
  external network on this node (devstack runs the same command):

  neutron net-create ext-net --router:external=True

  
  neutron net-show ext-net displays:

  # neutron net-show ext-net
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 94384f38-0cb0-4336-a7c3-293858dec2ba |
  | name  | ext-net  |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1002 |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   | b0585eeb-ea5a-4741-a759-4497f6e6c21a |
  | tenant_id | 8b0a6343ca3e471bbcf0a78e084a98c0 |
  +---+--+

  
  The provider:network_type of vxlan is not appropriate for an external 
network because the only ports possible on it are floating IPs and router 
gateways.  There are no VM ports, so vxlan tunnels for this network won't be 
created at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328991/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1325800] [NEW] Potential Race Condition between L3NATAgent.routers_updated and L3NATAgent._rpc_loop.

2014-06-02 Thread Stephen Ma
Public bug reported:

The _rpc_loop routine takes a snapshot of the L3NATAgent’s
routers_updated set and then clears the set.  Meanwhile,
L3NATAgent.routers_updated can run and add new routers to the same set.
Because both routines can run concurrently, _rpc_loop may clear the set
right after routers_updated has added a router, so the new router is
never included in the snapshot.  The problem manifests itself as a
newly associated floating IP address not being configured in iptables
and on the qg- device.
** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325800

Title:
  Potential Race Condition between L3NATAgent.routers_updated and
  L3NATAgent._rpc_loop.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The _rpc_loop routine takes a snapshot of the L3NATAgent’s
  routers_updated set and then clears the set.  Meanwhile,
  L3NATAgent.routers_updated can run and add new routers to the same
  set.  Because both routines can run concurrently, _rpc_loop may clear
  the set right after routers_updated has added a router, so the new
  router is never included in the snapshot.  The problem manifests
  itself as a newly associated floating IP address not being configured
  in iptables and on the qg- device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1325800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285962] [NEW] Configurable values is not printed in the ovs-neutron-agent log file

2014-02-27 Thread Stephen Ma
Public bug reported:

When the neutron-server starts up, it prints the configured option
values in the q-svc log.  These values are useful when debugging
problems.  No such output appears in the ovs-neutron-agent's log file.
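
The server gets this output from oslo.config's log_opt_values helper;
calling the same helper when the agent starts should produce the same
dump in the agent's log.  A minimal sketch:

    import logging

    from oslo.config import cfg

    LOG = logging.getLogger(__name__)

    def dump_config():
        # Logs every registered option at DEBUG, the same output seen
        # in screen-q-svc.log below.
        cfg.CONF.log_opt_values(LOG, logging.DEBUG)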

Configuration value output in screen-q-svc.log:

$ cd /opt/stack/neutron && python /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini & 
echo $! >/opt/stack/status/stack/q-svc.pid; fg || echo "q-svc failed to start" 
| tee /opt/stack/status/stack/q-svc.failure
[1] 29287
cd /opt/stack/neutron && python /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
2014-02-27 13:19:20.245 29288 INFO neutron.common.config [-] Logging enabled!
2014-02-27 13:19:20.245 29288 ERROR neutron.common.legacy [-] Skipping unknown 
group key: firewall_driver
2014-02-27 13:19:20.245 29288 DEBUG neutron.service [-] 
******************************************************************************** 
log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1900
2014-02-27 13:19:20.245 29288 DEBUG neutron.service [-] Configuration options 
gathered from: log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1901
2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] command line args: 
['--config-file', '/etc/neutron/neutron.conf', '--config-file', 
'/etc/neutron/plugins/ml2/ml2_conf.ini'] log_opt_values 
/opt/stack/oslo.config/oslo/config/cfg.py:1902
2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] config files: 
['/etc/neutron/neutron.conf', '/etc/neutron/plugins/ml2/ml2_conf.ini'] 
log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1903
2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] 
================================================================================ 
log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1904
2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] allow_bulk  
   = True log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1913
2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] allow_overlapping_ips   
   = True log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1913

 . . .
2014-02-27 13:19:20.256 29288 DEBUG neutron.service [-] database.min_pool_size  
   = 1 log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1921
2014-02-27 13:19:20.256 29288 DEBUG neutron.service [-] database.pool_timeout   
   = 10 log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1921
2014-02-27 13:19:20.256 29288 DEBUG neutron.service [-] database.retry_interval 
   = 10 log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1921
2014-02-27 13:19:20.256 29288 DEBUG neutron.service [-] 
database.slave_connection  =  log_opt_values 
/opt/stack/oslo.config/oslo/config/cfg.py:1921
2014-02-27 13:19:20.257 29288 DEBUG neutron.service [-] 
******************************************************************************** 
log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1923
2014-02-27 13:19:20.257 29288 INFO neutron.common.config [-] Config paste file: 
/etc/neutron/api-paste.ini

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New

** Description changed:

  When the neutron-server starts up, it prints out the configurable values
  in the q-svc log. This values is useful in debugging problems.  Such
  output is not in the ovs-neutron-agent's log file.
  
- a$ cd /opt/stack/neutron  python /usr/local/bin/ ^Mneutron-server 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutro 
^Mn/plugins/ml2/ml2_conf.ini  echo $! /opt/stack/status/stack/q-svc.pid; fg 
|| e ^Mcho q-svc failed to start | tee /opt/stack/status/stack/q-svc.failure
+ Configurable value output in screen-q-svc.log:
+ 
+ $ cd /opt/stack/neutron  python /usr/local/bin/ ^Mneutron-server 
--config-file /etc/neutron/neutron.conf --config-file /etc/neutro 
^Mn/plugins/ml2/ml2_conf.ini  echo $! /opt/stack/status/stack/q-svc.pid; fg 
|| e ^Mcho q-svc failed to start | tee /opt/stack/status/stack/q-svc.failure
  [1] 29287
  cd /opt/stack/neutron  python /usr/local/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  2014-02-27 13:19:20.245 29288 INFO neutron.common.config [-] Logging enabled!
  2014-02-27 13:19:20.245 29288 ERROR neutron.common.legacy [-] Skipping 
unknown group key: firewall_driver
  2014-02-27 13:19:20.245 29288 DEBUG neutron.service [-] 

 log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1900
  2014-02-27 13:19:20.245 29288 DEBUG neutron.service [-] Configuration options 
gathered from: log_opt_values /opt/stack/oslo.config/oslo/config/cfg.py:1901
  2014-02-27 13:19:20.246 29288 DEBUG neutron.service [-] command line args: 
['--config-file', '/etc/neutron/neutron.conf', '--config-file', 
'/etc/neutron

[Yahoo-eng-team] [Bug 1268823] [NEW] Non-admin owned networks can be updated to shared

2014-01-13 Thread Stephen Ma
Public bug reported:

As a non-admin user, I am unable to create a shared network:

stack@sma-vm-dvstk:~/DEVSTACK/devstack$

stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-create mysharednet --shared
{"NeutronError": {"message": "Policy doesn't allow create_network to be 
performed.", "type": "PolicyNotAuthorized", "detail": ""}}

This is expected since the behavior is defined in policy.json.

However, I am able to update a network to be shared.  If a network
cannot be created with shared=True, then it should not be possible to
modify the network to shared=True.
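
A plausible fix is to guard the attribute-level update in policy.json
the same way the create is guarded.  A sketch of the relevant entries,
assuming the stock policy.json layout of the time:

    "create_network:shared": "rule:admin_only",
    "update_network": "rule:admin_or_owner",
    "update_network:shared": "rule:admin_only"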


stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-create mysharednet
Created a new network:
++--+
| Field  | Value|
++--+
| admin_state_up | True |
| id | 3e2ccb52-79a5-404b-9838-3a0926b35947 |
| name   | mysharednet  |
| shared | False|
| status | ACTIVE   |
| subnets|  |
| tenant_id  | c3d21dbd077144fe9d8f919488f72c2d |
++--+

stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-update mysharednet --shared 
True
Updated network: mysharednet

stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-show mysharednet
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| id  | 3e2ccb52-79a5-404b-9838-3a0926b35947 |
| name| mysharednet  |
| router:external | False|
| shared  | True |
| status  | ACTIVE   |
| subnets |  |
| tenant_id   | c3d21dbd077144fe9d8f919488f72c2d |
+-+--+

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Stephen Ma (stephen-ma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268823

Title:
  Non-admin owned networks can be updated to shared

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As a non-admin user, I am unable to create a shared network:

  stack@sma-vm-dvstk:~/DEVSTACK/devstack$

  stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-create mysharednet 
--shared
  {"NeutronError": {"message": "Policy doesn't allow create_network to be 
performed.", "type": "PolicyNotAuthorized", "detail": ""}}

  This is expected since the behavior is defined in policy.json.

  However, I am able to update a network to be shared.  If a network
  cannot be created with shared=True, then it should not be possible to
  modify the network to shared=True.

  
  stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-create mysharednet
  Created a new network:
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | id | 3e2ccb52-79a5-404b-9838-3a0926b35947 |
  | name   | mysharednet  |
  | shared | False|
  | status | ACTIVE   |
  | subnets|  |
  | tenant_id  | c3d21dbd077144fe9d8f919488f72c2d |
  ++--+

  stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-update mysharednet 
--shared True
  Updated network: mysharednet

  stack@sma-vm-dvstk:~/DEVSTACK/devstack$ neutron net-show mysharednet
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | True |
  | id  | 3e2ccb52-79a5-404b-9838-3a0926b35947 |
  | name| mysharednet  |
  | router:external | False|
  | shared  | True |
  | status  | ACTIVE   |
  | subnets |  |
  | tenant_id   | c3d21dbd077144fe9d8f919488f72c2d |
  +-+--+

To manage notifications about this bug go to:
https