[Yahoo-eng-team] [Bug 1652748] Re: Sometimes more than one L3-agent/DHCP-agent/Metadata-agent may exist on the controller.

2016-12-28 Thread Eugene Nikanorov
The bug has nothing to do with the upstream neutron project, hence removing
it.

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1652748

Title:
  Sometimes more than one L3-agent/DHCP-agent/Metadata-agent may exist on
  the controller.

Status in Mirantis OpenStack:
  Confirmed

Bug description:
  We run large-scale OpenStack clusters, and sometimes more than one
  L3-agent/DHCP-agent/Metadata-agent ends up running on a controller node
  after the whole environment has been working correctly for several days.

  Our environment is based mainly on Mirantis Fuel 7.0, where many services
  are monitored and managed by Pacemaker, a powerful automation tool. Four of
  these services are controlled by Pacemaker: the L3 agent, the DHCP agent,
  the OVS agent and the neutron metadata agent. Administrators and other
  users therefore should not need to manage or operate them directly. So what
  happened? The key point is that an administrator or user who, for whatever
  reason, cannot find one of these processes (for instance after checking
  with "ps -ef | grep L3-agent") may restart it with other tools, e.g.
  "service *** start" or "systemctl *** start".

  As a result, everything looks fine and the crashed service runs again, but
  it is now managed only from the shell. Pacemaker does not know what
  happened and in due course starts the crashed service itself, so TWO
  instances of the same service end up running independently.

  Other, non-human factors can lead to the same situation, so an
  already-running instance should be checked for whenever the system wants to
  start a new *-agent.
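
  As an illustration of such a check, a minimal guard could look like the
  following. This is only a sketch (it assumes the psutil library) and is not
  part of Neutron, Fuel or Pacemaker:

  import os
  import sys

  import psutil


  def agent_running(binary_name):
      """Return True if another process command line mentions binary_name."""
      for proc in psutil.process_iter():
          if proc.pid == os.getpid():
              continue  # skip this checker script itself
          try:
              if any(binary_name in arg for arg in proc.cmdline()):
                  return True
          except (psutil.NoSuchProcess, psutil.AccessDenied):
              continue  # process exited or is not readable; ignore it
      return False


  if __name__ == '__main__':
      name = sys.argv[1] if len(sys.argv) > 1 else 'neutron-l3-agent'
      if agent_running(name):
          print('%s is already running; leave it to Pacemaker' % name)
          sys.exit(1)
      print('%s not found; let Pacemaker (re)start it' % name)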

  * Pre-condition:
  A large-scale environment, or a small test one, that has been running for
  several days.

  * Step-by-step:
  On the controller, list the agents, e.g. "neutron agent-list", and check
  the *-agents.

  * Expected result:
  Only one L3-agent/DHCP-agent/Metadata-agent exists.

  * Actual result:
  Two L3-agents/DHCP-agents/Metadata-agents exist.

  * Version:
  OpenStack Newton, deployed with Fuel 10.0
  Ubuntu 16.04.1 LTS, running kernel 4.4.0-57-generic
  Neutron version 5.1.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1652748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581918] [NEW] Sometimes DHCP agent spawns dnsmasq incorrectly

2016-05-15 Thread Eugene Nikanorov
Public bug reported:

When a network contains several subnets (especially IPv4 + IPv6), the DHCP
agent may spawn dnsmasq incorrectly, so that the tag in the command line
(--dhcp-range) does not match the tag in the opts file.

This leads to a state where dnsmasq sends its own IP address as the default
gateway.

As a side effect, the VM's floating IP SNAT traffic begins to flow through
the DHCP namespace of the server that handed out an IP address to that VM.
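
As an illustration, a quick consistency check between a running dnsmasq
command line and its opts file could look like the sketch below. The
set:/tag: prefixes and the /var/lib/neutron/dhcp/<network-id>/opts layout are
assumptions based on how the agent usually drives dnsmasq, not details taken
from this report:

import re
import sys


def cmdline_tags(args):
    """Tags referenced as set:<tag> inside --dhcp-range options."""
    tags = set()
    for arg in args:
        if arg.startswith('--dhcp-range='):
            tags.update(re.findall(r'set:([^,]+)', arg))
    return tags


def opts_file_tags(path):
    """Tags referenced as tag:<tag> inside the opts file."""
    tags = set()
    with open(path) as opts:
        for line in opts:
            tags.update(re.findall(r'tag:([^,]+)', line))
    return tags


if __name__ == '__main__':
    # usage: check_tags.py /proc/<dnsmasq pid>/cmdline <path to opts file>
    with open(sys.argv[1], 'rb') as f:
        dnsmasq_args = f.read().decode().split('\0')
    missing = opts_file_tags(sys.argv[2]) - cmdline_tags(dnsmasq_args)
    if missing:
        print('tags in opts file but not in --dhcp-range: %s'
              % ', '.join(sorted(missing)))
    else:
        print('command line and opts file tags are consistent')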

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-ipam-dhcp

** Description changed:

  When a network contains several subnets (especially ipv4 + ipv6) DHCP
- agent may spawn dnsmasq incorrectly, so tag in command line (--dhcp-
+ agent may spawn dnsmasq incorrectly, so tag in the command line (--dhcp-
  range) will not match the tag in opts file.
  
  This lead to a state when dnsmasq sends it's IP address as a default
  gateway.
  
  As a side effect, VM's floating ip snat traffic begin to flow through
  dhcp namespace of the server that has given an ip address to that VM.

** Description changed:

  When a network contains several subnets (especially ipv4 + ipv6) DHCP
  agent may spawn dnsmasq incorrectly, so tag in the command line (--dhcp-
  range) will not match the tag in opts file.
  
- This lead to a state when dnsmasq sends it's IP address as a default
+ This leads to a state when dnsmasq sends it's IP address as a default
  gateway.
  
  As a side effect, VM's floating ip snat traffic begin to flow through
  dhcp namespace of the server that has given an ip address to that VM.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581918

Title:
  Sometimes DHCP agent spawns dnsmasq incorrectly

Status in neutron:
  New

Bug description:
  When a network contains several subnets (especially IPv4 + IPv6), the DHCP
  agent may spawn dnsmasq incorrectly, so that the tag in the command line
  (--dhcp-range) does not match the tag in the opts file.

  This leads to a state where dnsmasq sends its own IP address as the default
  gateway.

  As a side effect, the VM's floating IP SNAT traffic begins to flow through
  the DHCP namespace of the server that handed out an IP address to that VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579631] Re: Node time changed, resulting in abnormal state of neutron agent.

2016-05-09 Thread Eugene Nikanorov
This is expected behavior.
Keeping node time in sync with other nodes, and not adjusting it severely, is
critical for agent status reporting to work properly.


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579631

Title:
  Node time changed, resulting in abnormal state of neutron agent.

Status in neutron:
  Invalid

Bug description:
  Bug symptom
  1) The compute node is not running an L3 agent or a DHCP agent, but from the
  control node the compute node's L3 agent and DHCP agent appear to be working
  normally.

  
  [root@controller ~(keystone_admin)]# neutron agent-list | grep Slot12
  | 2e15ea73-7abf-445c-a033-2bb94983c09a | PCI NIC Switch agent | SBCRRack3Shelf1Slot12 | :-) | True | neutron-pci-sriov-nic-agent |
  | 3ec4e476-bf1c-4a1c-baf7-2b6429389410 | Metadata agent       | SBCRRack3Shelf1Slot12 | :-) | True | neutron-metadata-agent      |
  | b769ecdf-4319-430e-b3af-bfb4e2d20f53 | L3 agent             | SBCRRack3Shelf1Slot12 | :-) | True | neutron-l3-agent            |
  | d2254fa9-db33-4277-9853-3dfe7ef9c643 | Open vSwitch agent   | SBCRRack3Shelf1Slot12 | :-) | True | neutron-openvswitch-agent   |
  | d639783e-0ff3-4777-9ce2-2c85512637f0 | DHCP agent           | SBCRRack3Shelf1Slot12 | :-) | True | neutron-dhcp-agent          |
  [root@SBCRRack3Shelf1Slot11 ~(keystone_admin)]#

  
  [root@compute ~(keystone_admin)]# neutron agent-show b769ecdf-4319-430e-b3af-bfb4e2d20f53
  +---------------------+---------------------------------------------------------------------------+
  | Field               | Value                                                                     |
  +---------------------+---------------------------------------------------------------------------+
  | admin_state_up      | True                                                                      |
  | agent_type          | L3 agent                                                                  |
  | alive               | True                                                                      |
  | binary              | neutron-l3-agent                                                          |
  | configurations      | {                                                                         |
  |                     |  "router_id": "",                                                         |
  |                     |  "agent_mode": "legacy",                                                  |
  |                     |  "gateway_external_network_id": "",                                       |
  |                     |  "handle_internal_only_routers": true,                                    |
  |                     |  "use_namespaces": true,                                                  |
  |                     |  "routers": 0,                                                            |
  |                     |  "interfaces": 0,                                                         |
  |                     |  "floating_ips": 0,                                                       |
  |                     |  "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
  |                     |  "external_network_bridge": "br-ex",                                      |
  |                     |  "ex_gw_ports": 0                                                         |
  |                     | }                                                                         |
  | created_at          | 2016-04-08 05:55:31                                                       |
  | description         |                                                                           |
  | heartbeat_timestamp | 2016-04-14 02:29:38                                                       |
  | host                | SBCRRack3Shelf1Slot12                                                     |
  | id                  | b769ecdf-4319-430e-b3af-bfb4e2d20f53                                      |
  | started_at          | 2016-04-13 09:42:49                                                       |
  | topic               | l3_agent                                                                  |
  +---------------------+---------------------------------------------------------------------------+
  [root@SBCRRack3Shelf1Slot11 ~(keystone_admin)]#

  Bug reason
  2) On checking, the agent is actually down, so heartbeat_timestamp is not
  updated. However, for some reason the controller node's current time changed
  (to earlier than heartbeat_timestamp). The controller node's current time
  minus the agent's heartbeat_timestamp is therefore negative, so when the
  controller node executes the "neutron agent-list" command, the agent is
  still reported as alive.

[Yahoo-eng-team] [Bug 1579259] [NEW] Trace seen in L3 agent logs when removing DVR snat port

2016-05-06 Thread Eugene Nikanorov
Public bug reported:

The following trace is observed when removing snat port:

2016-05-01 18:09:57.906 733 ERROR neutron.agent.l3.dvr_router [-] DVR: no map 
match_port found!
2016-05-01 18:09:57.907 733 ERROR neutron.agent.l3.dvr_router [-] DVR: removed 
snat failed
2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router Traceback (most 
recent call last):
2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 261, in 
_snat_redirect_modify
2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router for 
gw_fixed_ip in gateway['fixed_ips']:
2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router TypeError: 
'NoneType' object has no attribute '__getitem__'

It doesn't seem to make any functional impact.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579259

Title:
  Trace seen in L3 agent logs when removing DVR snat port

Status in neutron:
  New

Bug description:
  The following trace is observed when removing snat port:

  2016-05-01 18:09:57.906 733 ERROR neutron.agent.l3.dvr_router [-] DVR: no map 
match_port found!
  2016-05-01 18:09:57.907 733 ERROR neutron.agent.l3.dvr_router [-] DVR: 
removed snat failed
  2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router Traceback (most 
recent call last):
  2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 261, in 
_snat_redirect_modify
  2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router for 
gw_fixed_ip in gateway['fixed_ips']:
  2016-05-01 18:09:57.907 733 TRACE neutron.agent.l3.dvr_router TypeError: 
'NoneType' object has no attribute '__getitem__'

  It doesn't seem to make any functional impact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564144] [NEW] Sometimes L3 rpc update operations produce deadlock errors

2016-03-30 Thread Eugene Nikanorov
Public bug reported:

When the DB backend works in multi-master mode and L3 agents are setting up
floating IPs and routers, they often produce deadlock errors when changing
floating IP or router statuses.

A db_retry decorator needs to be added to the corresponding RPC handlers.
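
For illustration only, the retry could be hooked up roughly as follows. This
is a sketch, not the actual patch: it assumes oslo.db's wrap_db_retry
decorator, and the handler shown mirrors the L3 RPC status-update callback
purely as an example:

from oslo_db import api as oslo_db_api


class L3RpcCallback(object):
    # Retry the update instead of letting a DBDeadlock raised by the
    # multi-master backend bubble up to the calling agent.
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def update_floatingip_statuses(self, context, router_id, fip_statuses):
        # perform the floating IP status updates inside a DB transaction
        pass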

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564144

Title:
  Sometimes L3 rpc update operations produce deadlock errors

Status in neutron:
  In Progress

Bug description:
  When the DB backend works in multi-master mode and L3 agents are setting up
  floating IPs and routers, they often produce deadlock errors when changing
  floating IP or router statuses.

  A db_retry decorator needs to be added to the corresponding RPC handlers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560724] [NEW] Deadlock errors when updating agents table

2016-03-22 Thread Eugene Nikanorov
Public bug reported:

This well-known error may appear when updating the agents table with heartbeat
information. The reason for this error is a MySQL backend configured as a
multi-master cluster.

The usual solution is to apply retries here, which was not done for this
particular case.

As a result of this deadlock, various side effects can arise, such as port
binding failures or resource rescheduling.

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Description changed:

  This well-known error may appear when updating agents table with heartbeat 
information.
  The reason for this error is mysql backend configured as multi-master cluster.
  
  Usual solution is to apply retries here, which was not done for this
  particular case.
+ 
+ As a result of this deadlock various side effects could arise such as
+ port binding failures or  resource rescheduling.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560724

Title:
  Deadlock errors when updating agents table

Status in neutron:
  In Progress

Bug description:
  This well-known error may appear when updating the agents table with
  heartbeat information. The reason for this error is a MySQL backend
  configured as a multi-master cluster.

  The usual solution is to apply retries here, which was not done for this
  particular case.

  As a result of this deadlock, various side effects can arise, such as port
  binding failures or resource rescheduling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541191] Re: Limit to create only one external network

2016-02-03 Thread Eugene Nikanorov
Multiple external networks are supported.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541191

Title:
  Limit to create only one external network

Status in neutron:
  Invalid

Bug description:
  Since the L3 agent supports only one external network, see
  https://github.com/openstack/neutron/blob/master/neutron/db/external_net_db.py#L149

  But right now we can create any number of external networks, which will
  cause a TooManyExternalNetworks exception to be raised in the agent RPC
  get_external_network_id().

  We should modify the create_network API to allow only one external network
  to be created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540555] Re: Nobody listens to network delete notifications

2016-02-01 Thread Eugene Nikanorov
It's a correct observation, Nate.
The Linux bridge agent uses this notification to delete the network's bridge,
unlike the OVS agent, which only needs port delete notifications.
I think it's not a big deal if notifications are simply lost when nobody
consumes them.

Closing as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540555

Title:
  Nobody listens to network delete notifications

Status in neutron:
  Invalid

Bug description:
  Here it can be seen that agents are notified of the network delete event:
  https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/rpc.py#L304

  But on the agent side only network update events are listened for:
  https://github.com/openstack/neutron/blob/8.0.0.0b2/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L374

  This was uncovered while testing the Pika driver, because it does not allow
  sending messages to queues that do not exist, unlike the current Rabbit
  (Kombu) driver. That behaviour will probably be changed in the Pika driver,
  but it is still worthwhile to get rid of unnecessary notifications on the
  Neutron side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531381] Re: Neutron API allows for creating multiple tenant networks with the same subnet address CIDR

2016-01-06 Thread Eugene Nikanorov
Having multiple subnets sharing the same CIDR is fine even for one tenant,
so you're observing by-design behavior.
If you want unique CIDRs for the tenant, you need to use subnet pools.
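
For example, with python-neutronclient, something along these lines keeps a
tenant's CIDRs unique. This is a sketch: the credentials, names and prefixes
are placeholders, not values from this report:

import os

from neutronclient.v2_0 import client

neutron = client.Client(username=os.environ['OS_USERNAME'],
                        password=os.environ['OS_PASSWORD'],
                        tenant_name=os.environ['OS_TENANT_NAME'],
                        auth_url=os.environ['OS_AUTH_URL'])

# One subnet pool per tenant; allocations from the same pool never overlap.
pool = neutron.create_subnetpool(
    {'subnetpool': {'name': 'demo-pool',
                    'prefixes': ['10.10.0.0/16'],
                    'default_prefixlen': 24}})['subnetpool']

# Ask for a subnet from the pool instead of passing an explicit CIDR.
net = neutron.create_network({'network': {'name': 'N1'}})['network']
subnet = neutron.create_subnet(
    {'subnet': {'network_id': net['id'],
                'ip_version': 4,
                'subnetpool_id': pool['id']}})['subnet']
print(subnet['cidr'])  # e.g. 10.10.0.0/24, unique within the pool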

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531381

Title:
  Neutron API allows for creating multiple tenant networks with the same
  subnet address CIDR

Status in neutron:
  Opinion

Bug description:
  When logged in as a demo tenant from Horizon, I created a network N1 with a
  subnet CIDR of 10.10.10.0/24, which got created successfully. Next, I
  created a network N2 with the same subnet CIDR of 10.10.10.0/24, expecting
  an error message indicating that a network with the subnet CIDR
  10.10.10.0/24 already exists; however, the network got created successfully.
  It is understandable that two networks with the same subnet CIDR can exist
  across different tenants in the same OpenStack instance. However, I am
  curious what the use case or rationale is for allowing two networks within
  the same tenant to be created with the same subnet CIDR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529100] [NEW] OVSBridge.delete_port method performance could be improved

2015-12-24 Thread Eugene Nikanorov
Public bug reported:

Using transactions to execute multiple OVS commands can be beneficial if the
python-ovs library is used.
In particular, deletion of multiple OVS ports can be made significantly faster
if executed in a transaction as a single operation.
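
A rough sketch of the idea is below; the transaction()/del_port() names follow
neutron's native OVSDB interface of that period, but treat the exact
signatures as an assumption of this sketch:

def delete_ports(bridge, port_names):
    # Batch all deletions into one OVSDB transaction instead of issuing a
    # separate delete_port() round trip per port. 'bridge' is assumed to be
    # an OVSBridge-like object exposing the native ovsdb API.
    with bridge.ovsdb.transaction(check_error=True) as txn:
        for name in port_names:
            txn.add(bridge.ovsdb.del_port(name, bridge.br_name,
                                          if_exists=True))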

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529100

Title:
  OVSBridge.delete_port method performance could be improved

Status in neutron:
  New

Bug description:
  Using transactions to execute multiple OVS commands can be beneficial if the
  python-ovs library is used.
  In particular, deletion of multiple OVS ports can be made significantly
  faster if executed in a transaction as a single operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1529100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526785] [NEW] Remove unused parameter gw_info of _update_router_db

2015-12-16 Thread Eugene Nikanorov
Public bug reported:

It's being passed here and there without actual use.
Tests also pass None or mock.ANY.
It makes sense to clean up the code a little bit.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526785

Title:
  Remove unused parameter gw_info of _update_router_db

Status in neutron:
  In Progress

Bug description:
  It's being passed here and there without actual use.
  Tests also pass None or mock.ANY.
  It makes sense to clean up the code a little bit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525295] [NEW] subnet listing is too slow with rbac

2015-12-11 Thread Eugene Nikanorov
Public bug reported:

Listing 100 subnets takes about 2 seconds on powerful hardware.
60% of the time is consumed by the calculation of the 'shared' attribute of
the subnet, which involves RBAC rules.

This makes Horizon barely usable as the number of networks grows.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  subnet listing of 100 subnets takes about 2 seconds on a powerfull hardware.
- 60% of the time is consumed by the calculation of 'shared' attribute of the 
subnet.
+ 60% of the time is consumed by the calculation of 'shared' attribute of the 
subnet which involves rbac rules.
  
  This makes horizon barely usable as number of networks grow.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525295

Title:
  subnet listing is too slow with rbac

Status in neutron:
  New

Bug description:
  Listing 100 subnets takes about 2 seconds on powerful hardware.
  60% of the time is consumed by the calculation of the 'shared' attribute of
  the subnet, which involves RBAC rules.

  This makes Horizon barely usable as the number of networks grows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514017] Re: Sec Group IPtables dropping all TCP ACK as Invalid for all connections between instances on different networks on same libvirt host.

2015-11-16 Thread Eugene Nikanorov
Removing duplicate to perform additional checks


** This bug is no longer a duplicate of bug 1478925
   Instances on the same compute node unable to connect to each other's ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514017

Title:
  Sec Group IPtables dropping all TCP ACK as Invalid for all connections
  between instances on different networks on same libvirt host.

Status in neutron:
  New

Bug description:
  Sec Group rules are set to allow all TCP connections in both
  directions.

  When 2 instances on the same host but in different networks try to
  start a TCP session, the TCP handshake never completes, as the ACK
  packet gets dropped as invalid by the security group iptables rules on
  the Linux bridge. The SYN and SYN_ACK packets get through. This
  happens on all libvirt hosts when the 2 instances are on the same host.
  When the instances are on separate hosts, the ACK packet gets through
  and the session is established.

  VM1 --> SYN --> VM2
  VM1 <-- SYN_ACK <-- VM2
  VM1 --> ACK --> ???

  In the case of the above running tcpdump on the host on the VM1 tap
  interface shows the ACK packet entering the bridge, a tcpdump on qvb
  interface on the far side of the bridge show the packet missing.

  The following is a sample tcpdump from the client tap interface
  showing SYN, SYN_ACK and ACK packets.

  12:08:34.685284 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 74: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [S], seq 942942122, win 14600, options [mss 
1460,sackOK,TS val 1211694066 ecr 0,nop,wscale 6], length 0
  12:08:34.686254 a2:30:cf:00:00:1e (oui Unknown) > fa:16:3e:6d:ff:9f (oui 
Unknown), ethertype IPv4 (0x0800), length 74: 10.0.115.11.ssh > 
10.0.116.15.58374: Flags [S.], seq 2764187879, ack 942942123, win 5792, options 
[mss 1380,sackOK,TS val 1212626150 ecr 1211694066,nop,wscale 7], length 0
  12:08:34.686425 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 66: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [.], ack 1, win 229, options [nop,nop,TS val 1211694067 
ecr 1212626150], length 0
  12:08:38.096191 a2:30:cf:00:00:1e (oui Unknown) > fa:16:3e:6d:ff:9f (oui 
Unknown), ethertype IPv4 (0x0800), length 74: 10.0.115.11.ssh > 
10.0.116.15.58374: Flags [S.], seq 2764187879, ack 942942123, win 5792, options 
[mss 1380,sackOK,TS val 1212629559 ecr 1211694066,nop,wscale 7], length 0
  12:08:38.096379 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 66: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [.], ack 1, win 229, options [nop,nop,TS val 1211697477 
ecr 1212626150], length 0

  After examining iptable rule counters while attempting to open a TCP
  session, the following rule in the neutron-openvswi-od080d3d6-c chain
  is dropping the packets.

  DROP   all  --  anywhere anywhere state
  INVALID /* Drop packets that appear related to an existing connection
  (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */

  conntrack shows the TCP session as being in the correct SYN_RECV
  state.

  conntrack -L -s 10.0.116.15 -d 10.0.115.11 -p tcp
  tcp  6 57 SYN_RECV src=10.0.116.15 dst=10.0.115.11 sport=58619 dport=22 
src=10.0.115.11 dst=10.0.116.15 sport=22 dport=58619 mark=0 use=1

  Bypassing Sec Groups firewalls entirely by disabling IPtable filtering
  on Linux bridges resolves the issue.

  net.bridge.bridge-nf-call-iptables=0

  Software Versions:
  Neutron: Kilo 2015.1.1
  Nova: Kilo 2015.1.1
  OS: CentOS 7.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514017] Re: Sec Group IPtables dropping all TCP ACK as Invalid for all connections between instances on different networks on same libvirt host.

2015-11-13 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1478925 ***
https://bugs.launchpad.net/bugs/1478925

** This bug has been marked a duplicate of bug 1478925
   Instances on the same compute node unable to connect to each other's ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514017

Title:
  Sec Group IPtables dropping all TCP ACK as Invalid for all connections
  between instances on different networks on same libvirt host.

Status in neutron:
  New

Bug description:
  Sec Group rules are set to allow all TCP connections in both
  directions.

  When 2 instances on the same host but in different networks try to
  start a TCP session, the TCP handshake never completes, as the ACK
  packet gets dropped as invalid by the security group iptables rules on
  the Linux bridge. The SYN and SYN_ACK packets get through. This
  happens on all libvirt hosts when the 2 instances are on the same host.
  When the instances are on separate hosts, the ACK packet gets through
  and the session is established.

  VM1 --> SYN --> VM2
  VM1 <-- SYN_ACK <-- VM2
  VM1 --> ACK --> ???

  In the case of the above running tcpdump on the host on the VM1 tap
  interface shows the ACK packet entering the bridge, a tcpdump on qvb
  interface on the far side of the bridge show the packet missing.

  The following is a sample tcpdump from the client tap interface
  showing SYN, SYN_ACK and ACK packets.

  12:08:34.685284 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 74: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [S], seq 942942122, win 14600, options [mss 
1460,sackOK,TS val 1211694066 ecr 0,nop,wscale 6], length 0
  12:08:34.686254 a2:30:cf:00:00:1e (oui Unknown) > fa:16:3e:6d:ff:9f (oui 
Unknown), ethertype IPv4 (0x0800), length 74: 10.0.115.11.ssh > 
10.0.116.15.58374: Flags [S.], seq 2764187879, ack 942942123, win 5792, options 
[mss 1380,sackOK,TS val 1212626150 ecr 1211694066,nop,wscale 7], length 0
  12:08:34.686425 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 66: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [.], ack 1, win 229, options [nop,nop,TS val 1211694067 
ecr 1212626150], length 0
  12:08:38.096191 a2:30:cf:00:00:1e (oui Unknown) > fa:16:3e:6d:ff:9f (oui 
Unknown), ethertype IPv4 (0x0800), length 74: 10.0.115.11.ssh > 
10.0.116.15.58374: Flags [S.], seq 2764187879, ack 942942123, win 5792, options 
[mss 1380,sackOK,TS val 1212629559 ecr 1211694066,nop,wscale 7], length 0
  12:08:38.096379 fa:16:3e:6d:ff:9f (oui Unknown) > 00:00:0c:07:ac:47 (oui 
Cisco), ethertype IPv4 (0x0800), length 66: 10.0.116.15.58374 > 
10.0.115.11.ssh: Flags [.], ack 1, win 229, options [nop,nop,TS val 1211697477 
ecr 1212626150], length 0

  After examining iptable rule counters while attempting to open a TCP
  session, the following rule in the neutron-openvswi-od080d3d6-c chain
  is dropping the packets.

  DROP   all  --  anywhere anywhere state
  INVALID /* Drop packets that appear related to an existing connection
  (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */

  conntrack shows the TCP session as being in the correct SYN_RECV
  state.

  conntrack -L -s 10.0.116.15 -d 10.0.115.11 -p tcp
  tcp  6 57 SYN_RECV src=10.0.116.15 dst=10.0.115.11 sport=58619 dport=22 
src=10.0.115.11 dst=10.0.116.15 sport=22 dport=58619 mark=0 use=1

  Bypassing Sec Groups firewalls entirely by disabling IPtable filtering
  on Linux bridges resolves the issue.

  net.bridge.bridge-nf-call-iptables=0

  Software Versions:
  Neutron: Kilo 2015.1.1
  Nova: Kilo 2015.1.1
  OS: CentOS 7.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514769] Re: qrouter losing iptables entry after some time.

2015-11-10 Thread Eugene Nikanorov
That looks like a support request rather than a bug.

You should not add iptables rules directly to neutron namespaces, because
they're managed by neutron.
There's no guarantee that a manually added rule will persist.

You should be doing this via security groups or floating IPs using the
neutron API.
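
For illustration, the kind of rule the reporter adds by hand can instead be
expressed through the API with python-neutronclient. This is only a sketch
with placeholder credentials, and whether a security group rule or a floating
IP is the right fit for their SNAT exemption is for them to judge:

import os

from neutronclient.v2_0 import client

neutron = client.Client(username=os.environ['OS_USERNAME'],
                        password=os.environ['OS_PASSWORD'],
                        tenant_name=os.environ['OS_TENANT_NAME'],
                        auth_url=os.environ['OS_AUTH_URL'])

sg_id = neutron.list_security_groups(
    name='default')['security_groups'][0]['id']

# Allow TCP/3000 from 10.30.0.0/24 via the API rather than editing the
# qrouter namespace by hand; the agents then keep the rule in place.
neutron.create_security_group_rule(
    {'security_group_rule': {'security_group_id': sg_id,
                             'direction': 'ingress',
                             'protocol': 'tcp',
                             'port_range_min': 3000,
                             'port_range_max': 3000,
                             'remote_ip_prefix': '10.30.0.0/24'}})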

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514769

Title:
  qrouter losing iptables entry after some time.

Status in neutron:
  Invalid

Bug description:
  Hi everyone,

  We have added an iptables entry to the qrouter namespace to get access to
  outside public instances, but we found that the qrouter loses the iptables
  entry after some time; because of that, the instances lose their connection
  to the outside instance.

  We are using DevStack stable/liberty.

  
  After adding iptable Rule
  
  $ sudo ip netns exec qrouter-b74e8aec-2d7d-4f4f-823e-bc12ae0040e4 iptables -I 
neutron-l3-agent-snat -t nat -d 10.30.0.0/24 -j RETURN

  $ sudo ip netns exec qrouter-b74e8aec-2d7d-4f4f-823e-bc12ae0040e4  sudo 
iptables -t nat -L --line-numbers
  Chain PREROUTING (policy ACCEPT)
  num  target prot opt source   destination
  1neutron-l3-agent-PREROUTING  all  --  anywhere anywhere
  2DNAT   tcp  --  ubuntu492e9c.ubuntusjc.com  anywhere tcp 
dpt:3000 to:10.20.0.115:3000
  3DNAT   tcp  --  anywhere anywhere tcp 
dpt:3000 to:10.20.0.124:3000

  Chain INPUT (policy ACCEPT)
  num  target prot opt source   destination

  Chain OUTPUT (policy ACCEPT)
  num  target prot opt source   destination
  1neutron-l3-agent-OUTPUT  all  --  anywhere anywhere

  Chain POSTROUTING (policy ACCEPT)
  num  target prot opt source   destination
  1neutron-l3-agent-POSTROUTING  all  --  anywhere anywhere
  2neutron-postrouting-bottom  all  --  anywhere anywhere

  Chain neutron-l3-agent-OUTPUT (1 references)
  num  target prot opt source   destination
  1DNAT   all  --  anywhere 172.24.4.129 
to:10.20.0.125
  2DNAT   all  --  anywhere 172.24.4.130 
to:10.20.0.126
  3DNAT   all  --  anywhere 172.24.4.131 
to:10.20.0.127

  Chain neutron-l3-agent-POSTROUTING (1 references)
  num  target prot opt source   destination
  1ACCEPT all  --  anywhere anywhere ! ctstate 
DNAT

  Chain neutron-l3-agent-PREROUTING (1 references)
  num  target prot opt source   destination
  1REDIRECT   tcp  --  anywhere 169.254.169.254  tcp 
dpt:http redir ports 9697
  2DNAT   all  --  anywhere 172.24.4.129 
to:10.20.0.125
  3DNAT   all  --  anywhere 172.24.4.130 
to:10.20.0.126
  4DNAT   all  --  anywhere 172.24.4.131 
to:10.20.0.127

  Chain neutron-l3-agent-float-snat (1 references)
  num  target prot opt source   destination
  1SNAT   all  --  10.20.0.125  anywhere 
to:172.24.4.129
  2SNAT   all  --  10.20.0.126  anywhere 
to:172.24.4.130
  3SNAT   all  --  10.20.0.127  anywhere 
to:172.24.4.131

  Chain neutron-l3-agent-snat (1 references)
  num  target prot opt source   destination
  1RETURN all  --  anywhere 10.30.0.0/24
  2neutron-l3-agent-float-snat  all  --  anywhere anywhere
  3SNAT   all  --  anywhere anywhere 
to:172.24.4.3
  4SNAT   all  --  anywhere anywhere mark match 
! 0x2/0x ctstate DNAT to:172.24.4.3

  Chain neutron-postrouting-bottom (1 references)
  num  target prot opt source   destination
  1neutron-l3-agent-snat  all  --  anywhere anywhere
 /* Perform source NAT on outgoing traffic. */  

  
  After some time
  =

  $ sudo ip netns exec qrouter-b74e8aec-2d7d-4f4f-823e-bc12ae0040e4  sudo 
iptables -t nat -L --line-numbers
  Chain PREROUTING (policy ACCEPT)
  num  target prot opt source   destination
  1neutron-l3-agent-PREROUTING  all  --  anywhere anywhere
  2DNAT   tcp  --  ubuntu492e9c.ubuntussjc.com  anywhere 
tcp dpt:3000 to:10.20.0.115:3000
  3DNAT   tcp  --  anywhere anywhere tcp 
dpt:3000 to:10.20.0.124:3000

  Chain INPUT (policy ACCEPT)
  num  target prot opt source   destination

  Chain OUTPUT (policy ACCEPT)
  num  target prot opt source   destination
  1neutron-l3-agent-OUTPUT  all  --  anywhere anywhere

  Chain POSTROUTING 

[Yahoo-eng-team] [Bug 1514056] [NEW] Set agent timestamp aka cookie to physical bridges

2015-11-07 Thread Eugene Nikanorov
Public bug reported:

Currently the OVS agent explicitly sets the agent timestamp (cookie) only on
the br-int and br-tun bridges.
Other configured physical bridges receive cookie=0x0 for their flows because
the agent doesn't set the timestamp for these bridges.
Currently that doesn't lead to any malfunction, but it's better to provide
consistency across operations with the bridges' flows.

** Affects: neutron
 Importance: Low
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514056

Title:
  Set agent timestamp aka cookie to physical bridges

Status in neutron:
  New

Bug description:
  Currently the OVS agent explicitly sets the agent timestamp (cookie) only on
  the br-int and br-tun bridges.
  Other configured physical bridges receive cookie=0x0 for their flows because
  the agent doesn't set the timestamp for these bridges.
  Currently that doesn't lead to any malfunction, but it's better to provide
  consistency across operations with the bridges' flows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513041] Re: need to wait more than 30 seconds before the network namespace can be checked on the network node when creating a network (with 860 tenants/networks/instances created)

2015-11-04 Thread Eugene Nikanorov
I would question the issue itself.
Processing 860 networks in 30 seconds is much better than what Juno had.
Since we got the rootwrap daemon, processing time has been reduced
significantly.

You can increase num_sync_threads in dhcp_agent.ini from its default value
of 4 to a higher value and see if it helps.

** Changed in: neutron
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513041

Title:
  need to wait more than 30 seconds before the network namespace can be
  checked on the network node when creating a network (with 860
  tenants/networks/instances created)

Status in neutron:
  Opinion

Bug description:
  [Summary]
  Need to wait more than 30 seconds before the network namespace can be
  checked on the network node when creating a network (with 860
  tenants/networks/instances created).

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python 

  library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  The network namespace can be checked on the network node immediately when
  creating a network.

  [Reproducible or not]
  Reproducible when a large number of tenants/networks/instances are
  configured.

  [Recreate Steps]
  1) Use a script to create 860 tenants, 1 network/router in each tenant and
  1 cirros container in each network; all containers are associated with a
  floating IP.

  2) Create one more network; the namespace of this network can only be
  checked on the network node 30 seconds later  ISSUE

  
  [Configration]
  config files for controller/network/compute are attached

  [logs]
  Post logs here.

  [Root cause analysis or debug info]
  High load on the controller and network node.

  [Attachment]
  log files attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394165] Re: L3 and DHCP agents may hang while processing a router or a network

2015-10-26 Thread Eugene Nikanorov
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394165

Title:
  L3 and DHCP agents may hang while processing a router or a network

Status in neutron:
  Invalid

Bug description:
  In some cases when the L3 agent is restarted, it can't detect that an
  ns-metadata-proxy process is already running in the qrouter namespace, so
  the L3 agent spawns it again with each restart.

  After some number of ns-metadata-proxy processes are running in a namespace,
  the L3 agent hangs while spawning an additional process.
  The symptom is that the router processing loop gets stuck on one of the
  routers, so the remaining routers are not processed.

  The workaround is to kill the ns-metadata-proxy processes in that router's
  namespace and restart the L3 agent.

  A similar problem appears with the DHCP agent when metadata is turned on
  and the network is isolated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509959] [NEW] Stale DHCP ports remain in DHCP namespace

2015-10-26 Thread Eugene Nikanorov
Public bug reported:

Consider the following case: 2 or more DHCP agents host a network, then for
some reason they go offline and both (or all) DHCP ports become reserved. If
the DHCP agents then start or revive at nearly the same time, each may fetch a
reserved port different from the one it owned previously.

At the same time, the previous DHCP port remains in the namespace and does not
get cleaned up.
As a result it is possible to get two hosts with the same DHCP namespace, each
having a pair of ports that duplicate the ports on the other host.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509959

Title:
  Stale DHCP ports remain in DHCP namespace

Status in neutron:
  In Progress

Bug description:
  Consider the following case: 2 or more DHCP agents host a network, then for
  some reason they go offline and both (or all) DHCP ports become reserved. If
  the DHCP agents then start or revive at nearly the same time, each may fetch
  a reserved port different from the one it owned previously.

  At the same time, the previous DHCP port remains in the namespace and does
  not get cleaned up.
  As a result it is possible to get two hosts with the same DHCP namespace,
  each having a pair of ports that duplicate the ports on the other host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506092] [NEW] Count all network-agent bindings during scheduling

2015-10-14 Thread Eugene Nikanorov
Public bug reported:

Currently the code in the DHCP agent scheduler counts only the active agents
that host a network.
In that case it may allow more agents to host the network than are configured.

This creates the possibility of a race condition when several DHCP agents
start up at the same time and try to get active networks.
The network gets hosted by several agents even though it might already be
hosted by other agents.
This just wastes ports/fixed IPs from the tenant's network range and increases
the load on the controllers.

It's better to let the rescheduling mechanism sort out active/dead agents
for each network.
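
A sketch of counting every binding rather than only the live ones is below
(SQLAlchemy; the NetworkDhcpAgentBinding model matches the agent-scheduler
schema of that period, but treat the exact import path as an assumption):

from neutron.db import agentschedulers_db


def bindings_for_network(session, network_id):
    # Count every network<->DHCP-agent binding row, regardless of whether
    # the bound agent is currently reported as alive, so scheduling cannot
    # exceed dhcp_agents_per_network during an agent start-up race.
    return (session.query(agentschedulers_db.NetworkDhcpAgentBinding)
            .filter_by(network_id=network_id)
            .count())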

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506092

Title:
  Count all network-agent bindings during scheduling

Status in neutron:
  New

Bug description:
  Currently the code in the DHCP agent scheduler counts only the active agents
  that host a network.
  In that case it may allow more agents to host the network than are
  configured.

  This creates the possibility of a race condition when several DHCP agents
  start up at the same time and try to get active networks.
  The network gets hosted by several agents even though it might already be
  hosted by other agents.
  This just wastes ports/fixed IPs from the tenant's network range and
  increases the load on the controllers.

  It's better to let the rescheduling mechanism sort out active/dead agents
  for each network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506092] Re: Count all network-agent bindings during scheduling

2015-10-14 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1388698 ***
https://bugs.launchpad.net/bugs/1388698

** This bug has been marked a duplicate of bug 1388698
   dhcp_agents_per_network does not work appropriately.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506092

Title:
  Count all network-agent bindings during scheduling

Status in neutron:
  In Progress

Bug description:
  Currently the code in the DHCP agent scheduler counts only the active agents
  that host a network.
  In that case it may allow more agents to host the network than are
  configured.

  This creates the possibility of a race condition when several DHCP agents
  start up at the same time and try to get active networks.
  The network gets hosted by several agents even though it might already be
  hosted by other agents.
  This just wastes ports/fixed IPs from the tenant's network range and
  increases the load on the controllers.

  It's better to let the rescheduling mechanism sort out active/dead agents
  for each network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505166] [NEW] Resync OVS, L3, DHCP agents upon revival

2015-10-12 Thread Eugene Nikanorov
Public bug reported:

In some cases on a loaded cloud, when neutron is working over rabbitmq in
clustered mode, one of the rabbitmq cluster members can get stuck replicating
queues.
During that period, agents that connect via that instance can't communicate or
send heartbeats.

Neutron-server will reschedule resources away from such agents in that case.
After that, when rabbitmq finishes syncing, the agents will "revive", but will
not do anything to clean up the resources which were rescheduled during their
"sleep".

As a result, there could be resources in a failed or conflicting state
(dhcp/router namespaces, ports with binding_failed).
They should be either deleted or synchronized with the server state.

** Affects: neutron
 Importance: Undecided
     Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505166

Title:
  Resync OVS, L3, DHCP agents upon revival

Status in neutron:
  In Progress

Bug description:
  In some cases on a loaded cloud, when neutron is working over rabbitmq in
  clustered mode, one of the rabbitmq cluster members can get stuck
  replicating queues.
  During that period, agents that connect via that instance can't communicate
  or send heartbeats.

  Neutron-server will reschedule resources away from such agents in that case.
  After that, when rabbitmq finishes syncing, the agents will "revive", but
  will not do anything to clean up the resources which were rescheduled
  during their "sleep".

  As a result, there could be resources in a failed or conflicting state
  (dhcp/router namespaces, ports with binding_failed).
  They should be either deleted or synchronized with the server state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505217] [NEW] Spawn dedicated rpc workers for state reports queue

2015-10-12 Thread Eugene Nikanorov
Public bug reported:


This change will address the case when RPC workers are loaded with heavy
requests, like L3 router sync, and have no capacity to process state reports.
In that case agents will flap, rescheduling will occur, and that will load
neutron-server even more and potentially disrupt connectivity.

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505217

Title:
  Spawn dedicated rpc workers for state reports queue

Status in neutron:
  In Progress

Bug description:
  
  This change will address the case when RPC workers are loaded with heavy
  requests, like L3 router sync, and have no capacity to process state
  reports. In that case agents will flap, rescheduling will occur, and that
  will load neutron-server even more and potentially disrupt connectivity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501198] Re: When network is rescheduled from one DHCP agent to another, DHCP port binding (host) doesn't change

2015-10-01 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1411163 ***
https://bugs.launchpad.net/bugs/1411163

** This bug has been marked a duplicate of bug 1411163
   No fdb entries added when failover dhcp and l3 agent together

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501198

Title:
  When network is rescheduled from one DHCP agent to another, DHCP port
  binding (host) doesn't change

Status in neutron:
  In Progress

Bug description:
  During network failover the DHCP port doesn't change its port binding
  information, the host in particular.
  This prevents external SDNs such as Cisco's from configuring the port
  properly, because they need correct binding information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500990] Re: dnsmasq responds with NACKs to requests from unknown hosts

2015-09-30 Thread Eugene Nikanorov
I'm not sure the whole use case is valid.
DHCP for the subnet should be managed by neutron only; otherwise the subnet
should have DHCP disabled and IPs should be allocated by other means.

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500990

Title:
  dnsmasq responds with NACKs to requests from unknown hosts

Status in neutron:
  Opinion

Bug description:
   When a request comes in from a host not managed by neutron, dnsmasq
  responds with a NACK. This causes a race condition where if the wrong
  DHCP server responds to the request, your request will not be honored.
  This can be inconvenient if you  are sharing a subnet with other DHCP
  servers.

  Our team recently ran into this in our Ironic development environment
  and were stepping on each other's DHCP requests. A solution is to
  provide an option that ignores unknown hosts rather than NACKing them.

  The symptom of this was the repeated DISCOVER,OFFER,REQUEST,PACK cycle
  with no acceptance from the host. (Sorry for all the omissions, this
  may be overly cautious)

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205   

  ...And so on

  I did a dhcpdump and saw NACKs  coming from my two teammates'
  machines.

  Of course multiple DHCP servers on a subnet is not a standard or
  common case, but we've needed this case in our Ironic development
  environment and have found the fix to be useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500365] Re: neutron port API does not support atomicity

2015-09-28 Thread Eugene Nikanorov
From an API perspective, there is no restriction on port ownership within one
tenant.
If the tenant wants to change the ownership, it can do that. Also, there is no
problem with atomicity, because the API calls themselves act atomically.
What you are looking for is transactional semantics, where such a client-side
problem could be resolved, but I don't think neutron is going to provide such
an ability any time soon, and I also don't think it is on the roadmap.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500365

Title:
  neutron port API does not support atomicity

Status in neutron:
  Opinion

Bug description:
  The neutron port API offers an update method where the user of the API can
  say "I use this port" by setting the device_owner and device_id fields of
  the port. However, the neutron API does not prevent port allocation race
  conditions.
  The API semantics are that a port is used if the device_id and device_owner
  fields are set, and not used if they aren't. Now let's have two clients that
  both want to take ownership of the port. Both clients first have to check
  whether the port is free by reading the device_owner and device_id fields of
  the port, and then they have to set those fields to express ownership.
  If the two clients act in parallel, it is quite possible that both clients
  see that the fields are empty and both issue the port update command. This
  leads to race conditions between clients.
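
  To make the window concrete, here is a sketch of the racy check-then-set
  sequence with python-neutronclient (the client setup, port ID and device
  values are placeholders):

  import os

  from neutronclient.v2_0 import client

  neutron = client.Client(username=os.environ['OS_USERNAME'],
                          password=os.environ['OS_PASSWORD'],
                          tenant_name=os.environ['OS_TENANT_NAME'],
                          auth_url=os.environ['OS_AUTH_URL'])

  port_id = 'PORT-UUID'  # placeholder

  port = neutron.show_port(port_id)['port']
  if not port['device_id'] and not port['device_owner']:
      # A second client can run the same check in this window and also
      # "win": nothing ties the update below to the read above.
      neutron.update_port(port_id,
                          {'port': {'device_id': 'my-device-uuid',
                                    'device_owner': 'compute:nova'}})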

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500430] [NEW] Access to floating IP can take 30+ seconds during boot_runcommand_delete test on scale.

2015-09-28 Thread Eugene Nikanorov
Public bug reported:

For some reason a simple GET to a floating IP on a 200-node cluster
during the boot_runcommand_delete test takes too much time, ranging from 0
to 35 seconds with a median of ~15 seconds.

That leads to timeouts on the nova side.
Also, some traces appear in the RPC workers: http://paste.openstack.org/show/474339/
That may indicate that the table involved is a contention point.
The slowdown may be caused by constant retries.

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed


** Tags: scale

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500430

Title:
  Access to floating IP can take 30+ seconds during
  boot_runcommand_delete test on scale.

Status in neutron:
  Confirmed

Bug description:
  For some reason a simple GET to a floating IP on a 200-node
  cluster during the boot_runcommand_delete test takes too much time,
  ranging from 0 to 35 seconds with a median of ~15 seconds.

  That leads to timeouts on the nova side.
  Also, some traces appear in the RPC workers:
  http://paste.openstack.org/show/474339/
  That may indicate that the table involved is a contention point.
  The slowdown may be caused by constant retries.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500430] Re: Access to floating IP can take 30+ seconds during boot_runcommand_delete test on scale.

2015-09-28 Thread Eugene Nikanorov
** Also affects: mos
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Changed in: mos
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Changed in: mos
   Importance: Undecided => High

** Changed in: mos
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500430

Title:
  Access to floating IP can take 30+ seconds during
  boot_runcommand_delete test on scale.

Status in Mirantis OpenStack:
  Confirmed

Bug description:
  For some reason a simple GET to a floating IP on a 200-node
  cluster during the boot_runcommand_delete test takes too much time,
  ranging from 0 to 35 seconds with a median of ~15 seconds.

  That leads to timeouts on the nova side.
  Also, some traces appear in the RPC workers:
  http://paste.openstack.org/show/474339/
  That may indicate that the table involved is a contention point.
  The slowdown may be caused by constant retries.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1500430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500121] Re: openSUSE install error - DHCP neutron agent fails

2015-09-26 Thread Eugene Nikanorov
Please file a bug against SUSE; the upstream project doesn't track packaging
for particular distros.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500121

Title:
  openSUSE install error - DHCP neutron agent fails

Status in neutron:
  Invalid

Bug description:
  System: openSUSE LEAP 42.1 (although I doubt the error has anything to do 
with the OS version)
  Repository: openSUSE (not SUSE)

  Summary:
  Attempting to install OpenStack Kilo on a Network Node, first using the 
network-nodes pattern and then the faulty package.

  Attempted first
  zypper in patterns-OpenStack-network-node
  Loading repository data...
  Reading installed packages...
  Resolving package dependencies...

  Problem: patterns-OpenStack-network-node-2015.1-4.1.noarch requires 
openstack-neutron-dhcp-agent >= 2015.1, but this requirement cannot be provided
uninstallable providers: 
openstack-neutron-dhcp-agent-2015.1.2~a0~dev65-1.1.noarch[OpenStack_Kilo]
   Solution 1: deinstallation of 
patterns-openSUSE-minimal_base-conflicts-20150505-6.5.x86_64
   Solution 2: do not install patterns-OpenStack-network-node-2015.1-4.1.noarch
   Solution 3: break patterns-OpenStack-network-node-2015.1-4.1.noarch by 
ignoring some of its dependencies

  Choose from above solutions by number or cancel [1/2/3/c] (c):

  Then, tried to resolve by attempting to install the neutron DHCP agent 
package first
  zypper in openstack-neutron-dhcp-agent
  Loading repository data...
  Reading installed packages...
  Resolving package dependencies...

  Problem: openstack-neutron-dhcp-agent-2015.1.2~a0~dev65-1.1.noarch requires 
openstack-neutron = 2015.1.2~a0~dev65, but this requirement cannot be provided
uninstallable providers: 
openstack-neutron-2015.1.2~a0~dev65-1.1.noarch[OpenStack_Kilo]
   Solution 1: deinstallation of 
patterns-openSUSE-minimal_base-conflicts-20150505-6.5.x86_64
   Solution 2: do not install 
openstack-neutron-dhcp-agent-2015.1.2~a0~dev65-1.1.noarch
   Solution 3: break openstack-neutron-dhcp-agent-2015.1.2~a0~dev65-1.1.noarch 
by ignoring some of its dependencies

  Choose from above solutions by number or cancel [1/2/3/c] (c):

  The error looks weird, as if the agent package has some kind of
  circular dependency requiring itself?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499381] Re: Openstack Kilo Nova-Docker: IP of container will be lost after stop/start of container via docker cli

2015-09-24 Thread Eugene Nikanorov
Seems to be invalid for neutron

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499381

Title:
  Openstack Kilo Nova-Docker: IP of container will be lost after
  stop/start of container via docker cli

Status in neutron:
  Invalid

Bug description:
  Bug description:
[Summary]
  When a container is stopped and restarted via the docker cli (docker stop xx /
docker start xx) on a compute node, the IP of this container is lost. For
details please refer to the following logs.
[Topo]
Ubuntu 14.04 OS, Kilo docker setup, 1 controller, 2 network nodes, 6
compute nodes

[Reproducible or not]
Can be reproduced.

[Recreate Steps]

  Meanwhile, we also tried two other ways to check whether a container can get
an IP:
  a. launching a new container: it gets an IP successfully;
  b. stopping and restarting via nova stop xxx / nova start xxx: the container
gets its IP successfully.

  
  # Steps:
  1, Build up a openstack & docker setup based on Ubuntu 14.04 trusty; 
  2, 1 controller, 2 network node and 7 computer node ; 
  3, Launch docker containers; 

  root@quasarucn3:~# nova list
  
+--+--+-++-+---+
  | ID   | Name | Status  | Task State 
| Power State | Networks  |
  
+--+--+-++-+---+
  | ae6d1bc5-635a-453d-8cec-143a2b8240f6 | d7   | ACTIVE  | -  
| Running | vmn=12.1.1.20 |
  | 914c2f05-b56a-4689-9c1b-63f5bb36c7db | d8_sdn1  | ACTIVE  | -  
| Running | vmn=12.1.1.22 |
  | 925df284-3339-4af7-86e9-f66c41d947ce | d9_220   | ACTIVE  | -  
| Running | vmn=12.1.1.26 |
  | 518f7d49-07fb-474b-b18c-647d558562ea | d9_sdnve | ACTIVE  | -  
| Running | vmn=12.1.1.25 |
  | 59b3512a-8f6b-443d-ae1a-ca1ff615dba4 | leo_ubuntu   | ACTIVE  | -  
|  Running | vmn=12.1.1.15 |   << container launched. 
  
+--+--+-++-+---+
  root@quasarucn3:~# 

  
  root@quasarsdn2:~# docker ps | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Up 37 seconds   
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  root@quasarsdn2:~# 

  
  root@quasarsdn2:~# docker exec -i -t 90a1510e9dfd /bin/sh 
  # ifconfig 
  loLink encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0 
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  nsaeda5cee-7b Link encap:Ethernet  HWaddr fa:16:3e:6c:88:96  
inet addr:12.1.1.15  Bcast:0.0.0.0  Mask:255.255.255.0  
<<< ip got. 
inet6 addr: fe80::f816:3eff:fe6c:8896/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000 
RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

  # 
  # 

  root@quasarsdn2:~# docker stop 90a1510e9dfd
  90a1510e9dfd

  root@quasarsdn2:~#  docker start 90a1510e9dfd <<< stop then 
restart container. 
  90a1510e9dfd

  root@quasarsdn2:~# docker ps
  CONTAINER IDIMAGE   COMMAND   CREATED 
STATUS  PORTS   NAMES
  c3464a45cd94cirros  "/sbin/init"  47 hours ago
Up 23 minutes   
nova-ae6d1bc5-635a-453d-8cec-143a2b8240f6
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   47 hours ago
Up 16 seconds   
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  root@quasarsdn2:~# 

  root@quasarsdn2:~# docker exec -i -t 90a1510e9dfd /bin/sh 
  # ifconfig 
  # ifconfig << no ip can be seen after container rebooting. 
  # 

  
[Log]
  There is no error; this is a docker-related issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499383] Re: Openstack Kilo Nova-Docker:Container will be hung in "powering-on" state.

2015-09-24 Thread Eugene Nikanorov
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499383

Title:
  Openstack Kilo  Nova-Docker:Container will be hung in "powering-on"
  state.

Status in OpenStack Compute (nova):
  New

Bug description:
  Bug description:
[Summary]
When nova-compute service on compute node is down, issuing "nova start xxx" 
to start SHUTOFF container,  container will be hung in "powering-on" state, 
even the nova-compute service has been started on compute node.  
  # Expected behavior:
  Cmd "Nova start xx" of start container should not be issued and prompt an 
error notice to inform that compute node nova-compute service is down; 
  or 
  The state of container should be changed to ACTIVE from powering-on after 
compute node's nova-compute service up. 

[Topo]
Unbuntu Kilo 14.04 OS , Kilo docker setup , 1 controller ,2 network node,6 
computenode

[Reproduceable or not]
Can be reproduced, 

[Recreate Steps]
  # Steps:
  1, Build up a openstack & docker setup based on Ubuntu 14.04 trusty;
  2, 1 controller, 2 network node and 7 computer node ;
  3, Launch docker containers;

  root@quasarucn3:~# nova list
  
+--+--+-+--+-+---+
  | ID   | Name | Status  | Task State  
 | Power State | Networks  |
  
+--+--+-+--+-+---+
  | 3f62d6c5-a3e4-420d-b6b6-9a719f60c580 | cirros31 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.27 |
  | d65433d0-001b-4e24-9553-42ec5baeb056 | cirros32 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.28 |
  | 28914c07-a0cc-40a0-b743-48136ee997e3 | cirros33 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.29 |
  | 8c1c6f0d-c385-45b3-aef5-eead0dbbd54b | cirros34 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.30 |
  | 40d40e57-7bdb-4a97-8e49-b2b6f7fbb8b5 | cirros35 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.31 |
  | 6dc4c963-124b-4264-adbc-3a32f705318f | cirros36 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.32 |
  | af4ec54b-dc90-410d-87e4-170b8a643bba | d5   | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.14 |
  | ae6d1bc5-635a-453d-8cec-143a2b8240f6 | d7   | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.20 |
  | 914c2f05-b56a-4689-9c1b-63f5bb36c7db | d8_sdn1  | ACTIVE  | -   
 | Running | vmn=12.1.1.22 |
  | 925df284-3339-4af7-86e9-f66c41d947ce | d9_220   | ACTIVE  | -   
 | Running | vmn=12.1.1.26 |
  | 518f7d49-07fb-474b-b18c-647d558562ea | d9_sdnve | ACTIVE  | -   
 | Running | vmn=12.1.1.25 |
  | 90cc6a5e-e8ce-4a75-a9c5-f0393b68560d | spark_docker | ACTIVE  | -   
 | Running | vmn=12.1.1.18 |
  
+--+--+-+--+-+---+
  root@quasarucn3:~# 


  
  root@quasarucn3:~# nova show d7
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | quasarsdn2   
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | quasarsdn2   
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0017
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2015-09-19T07:03:41.00   
|
  | OS-SRV-USG:terminated_at | -
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | config_drive |  
|
  | created  | 2015-09-19T07:03:39Z 
|
  | flavor   | m1.tiny (1)   

[Yahoo-eng-team] [Bug 1499393] Re: Openstack Kilo Nova-Docker: Container states are not synced up between docker cli and nova cli.

2015-09-24 Thread Eugene Nikanorov
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499393

Title:
  Openstack Kilo Nova-Docker: Container states are not synced up between
  docker cli and nova cli.

Status in OpenStack Compute (nova):
  New

Bug description:
  Bug description:
[Summary]
  Container states are not synced up between the docker cli and the nova cli.
  The container state reported by the docker cli and by the nova cli is not the
same: for example, a container that is running/up is shown by nova list in the
SHUTOFF state.

  root@quasarucn3:~# nova list | grep leo
  | 59b3512a-8f6b-443d-ae1a-ca1ff615dba4 | leo_ubuntu   | SHUTOFF | -  
| Shutdown| vmn=12.1.1.15 |

  root@quasarsdn2:~# docker ps -a | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Up 2 minutes
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4

  # Expected behavior:
  The container state should be the same from the docker cli and the nova cli.

  
[Topo]
Ubuntu 14.04 OS, Kilo docker setup, 1 controller, 2 network nodes, 6
compute nodes

[Reproducible or not]
Can be reproduced.

  
[Recreate Steps]
1, Build up a openstack & docker setup based on Ubuntu 14.04 trusty;
  2, 1 controller, 2 network node and 7 computer node ;
  3, Check the down state container.
  root@quasarsdn2:~#  docker ps -a | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Exited (0) 28 minutes ago   
  nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  root@quasarucn3:~# nova list | grep ubuntu
  | 59b3512a-8f6b-443d-ae1a-ca1ff615dba4 | leo_ubuntu   | SHUTOFF | -  
| Shutdown| vmn=12.1.1.15 |
  root@quasarucn3:~# 

  
  4, Start the down-state container via docker start
  root@quasarsdn2:~# docker start 90a1510e9dfd
  90a1510e9dfd
  root@quasarsdn2:~#  docker ps -a | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Up 12 seconds   
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4

  
  root@quasarucn3:~# ^C
  root@quasarucn3:~# date
  Mon Sep 21 05:13:25 CDT 2015
  root@quasarucn3:~# nova list | grep ubuntu
  | 59b3512a-8f6b-443d-ae1a-ca1ff615dba4 | leo_ubuntu   | SHUTOFF | -  
| Shutdown| vmn=12.1.1.15 |
  root@quasarucn3:~# 

  root@quasarsdn2:~# docker ps -a | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Exited (0) 10 seconds ago   
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  root@quasarsdn2:~# ^C

  root@quasarsdn2:~# docker start 90a1510e9dfd  
  90a1510e9dfd
  root@quasarsdn2:~# docker ps -a | grep 59b3512a-8f6b-443d-ae1a-ca1ff615dba4
  90a1510e9dfdleo_ubuntu  "/usr/sbin/sshd -D"   2 days ago  
Up 2 minutes
nova-59b3512a-8f6b-443d-ae1a-ca1ff615dba4

  root@quasarucn3:~# date
  Mon Sep 21 05:15:39 CDT 2015
  root@quasarucn3:~# nova list | grep ubuntu
  | 59b3512a-8f6b-443d-ae1a-ca1ff615dba4 | leo_ubuntu   | SHUTOFF | -  
| Shutdown| vmn=12.1.1.15 |<<< Container still in shutoff state via 
nova list. 
  root@quasarucn3:~# 

[Log]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1499393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499397] Re: openstack Kilo Nova-Docker:Container will be hung in "powering-on" state.

2015-09-24 Thread Eugene Nikanorov
** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1499397

Title:
  openstack Kilo  Nova-Docker:Container will be hung in "powering-on"
  state.

Status in OpenStack Compute (nova):
  New

Bug description:
  Bug description:
[Summary]
  When the nova-compute service on a compute node is down, issuing "nova start
xxx" to start a SHUTOFF container leaves the container hung in the "powering-on"
state, even after the nova-compute service has been started again on the compute node.

  # Expected behavior:
  Cmd "Nova start xx" of start container should not be issued and prompt an 
error notice to inform that compute node nova-compute service is down; 
  or 
  The state of container should be changed to ACTIVE from powering-on after 
compute node's nova-compute service up. 

[Topo]
Ubuntu 14.04 OS, Kilo docker setup, 1 controller, 2 network nodes, 6
compute nodes

[Reproducible or not]
Can be reproduced.


[Recreate Steps]
  1, Build up a openstack & docker setup based on Ubuntu 14.04 trusty;
  2, 1 controller, 2 network node and 7 computer node ;
  3, Launch docker containers;

  root@quasarucn3:~# nova list
  
+--+--+-+--+-+---+
  | ID   | Name | Status  | Task State  
 | Power State | Networks  |
  
+--+--+-+--+-+---+
  | 3f62d6c5-a3e4-420d-b6b6-9a719f60c580 | cirros31 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.27 |
  | d65433d0-001b-4e24-9553-42ec5baeb056 | cirros32 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.28 |
  | 28914c07-a0cc-40a0-b743-48136ee997e3 | cirros33 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.29 |
  | 8c1c6f0d-c385-45b3-aef5-eead0dbbd54b | cirros34 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.30 |
  | 40d40e57-7bdb-4a97-8e49-b2b6f7fbb8b5 | cirros35 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.31 |
  | 6dc4c963-124b-4264-adbc-3a32f705318f | cirros36 | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.32 |
  | af4ec54b-dc90-410d-87e4-170b8a643bba | d5   | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.14 |
  | ae6d1bc5-635a-453d-8cec-143a2b8240f6 | d7   | SHUTOFF | -   
 | Shutdown| vmn=12.1.1.20 |
  | 914c2f05-b56a-4689-9c1b-63f5bb36c7db | d8_sdn1  | ACTIVE  | -   
 | Running | vmn=12.1.1.22 |
  | 925df284-3339-4af7-86e9-f66c41d947ce | d9_220   | ACTIVE  | -   
 | Running | vmn=12.1.1.26 |
  | 518f7d49-07fb-474b-b18c-647d558562ea | d9_sdnve | ACTIVE  | -   
 | Running | vmn=12.1.1.25 |
  | 90cc6a5e-e8ce-4a75-a9c5-f0393b68560d | spark_docker | ACTIVE  | -   
 | Running | vmn=12.1.1.18 |
  
+--+--+-+--+-+---+
  root@quasarucn3:~# 


  
  root@quasarucn3:~# nova show d7
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | quasarsdn2   
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | quasarsdn2   
|
  | OS-EXT-SRV-ATTR:instance_name| instance-0017
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2015-09-19T07:03:41.00   
|
  | OS-SRV-USG:terminated_at | -
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | config_drive |  
|
  | created  | 2015-09-19T07:03:39Z 
|
  | flavor   | m1.tiny (1)  

[Yahoo-eng-team] [Bug 1498844] [NEW] Service plugin queues should be consumed by all RPC workers

2015-09-23 Thread Eugene Nikanorov
Public bug reported:

Currently only the parent neutron-server process consumes messages from service
queues such as the l3-plugin queue.
In a DVR deployment with many L3 agents on compute nodes, that can quickly lead
to various issues because RPC requests are not processed fast enough.

That is actually a problem for other service plugins too: metering,
lbaas, fwaas, vpnaas.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-core

** Tags added: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498844

Title:
  Service plugin queues should be consumed by all RPC workers

Status in neutron:
  New

Bug description:
  Currently only the parent neutron-server process consumes messages from
  service queues such as the l3-plugin queue.
  In a DVR deployment with many L3 agents on compute nodes, that can quickly
  lead to various issues because RPC requests are not processed fast enough.

  That is actually a problem for other service plugins too: metering,
  lbaas, fwaas, vpnaas.
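
  A minimal sketch of what consuming a service-plugin topic from every RPC
  worker could look like with oslo.messaging; the topic string, endpoint and
  function names are assumptions for illustration, not neutron's actual wiring:

      import oslo_messaging
      from oslo_config import cfg

      class L3PluginEndpoint(object):
          # hypothetical endpoint exposing the RPC methods L3 agents call
          def sync_routers(self, context, **kwargs):
              return []

      def start_l3_consumer(host):
          # Run this in each RPC worker process so the l3-plugin queue is
          # drained by all workers instead of only the parent process.
          transport = oslo_messaging.get_transport(cfg.CONF)
          target = oslo_messaging.Target(topic='q-l3-plugin', server=host)
          server = oslo_messaging.get_rpc_server(
              transport, target, [L3PluginEndpoint()], executor='eventlet')
          server.start()
          return server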

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498506] Re: LBaas-linux kernel soft lockup after create 200 LB on single tenant with 30M traffic on each LB

2015-09-23 Thread Eugene Nikanorov
That can't be a neutron/lbaas bug. Please file for CentOS/haproxy

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498506

Title:
  LBaas-linux  kernel soft lockup after create 200 LB on single tenant
  with 30M traffic on each LB

Status in neutron:
  Invalid

Bug description:
  linux kernel soft lockup after creating 200 LBs on a single tenant with 30M of
traffic on each LB
  CentOS Linux release 7.1.1503 (Core)
  reproduce step
  1 create 1 client  and  1 backend server by docker container
  2 create a lb and add backend as member
  3 send 30M traffic with 100 session by siege
  setsid  siege -t1000H  {vip_ip_address}/test_150KB.iso -c 100
  4 repeat step 1 to 3 for create 200 LB with traffic
  after step 4, you will see the prompt below on the LB agent host node,
  and some traffic can no longer pass through the LB from the client; the vip
address cannot be pinged from the client container

  log
  [root@scalenetwork1 ~]# cat /etc/redhat-release
  CentOS Linux release 7.1.1503 (Core)
  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]#
  Message from syslogd@scalenetwork1 at Sep 13 23:24:05 ...
   kernel:BUG: soft lockup - CPU#5 stuck for 23s! [rcuos/6:24] 


  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]#
  [root@scalenetwork1 ~]# sar -P ALL 60 1
  Linux 3.10.0-229.11.1.el7.x86_64 (scalenetwork1)09/13/2015  
_x86_64_(8 CPU)

  11:24:22 PM CPU %user %nice   %system   %iowait%steal 
%idle
  11:25:22 PM all 11.71  0.00 43.13  0.00  0.00 
45.17
  11:25:22 PM   0 13.81  0.00 34.69  0.00  0.00 
51.49
  11:25:22 PM   1  6.11  0.00 19.80  0.00  0.00 
74.10
  11:25:22 PM   2 14.09  0.00 34.71  0.00  0.00 
51.21
  11:25:22 PM   3 13.66  0.00 35.00  0.00  0.00 
51.34
  11:25:22 PM   4 15.34  0.00 38.34  0.00  0.00 
46.32
  11:25:22 PM   5  0.05  0.00 99.78  0.00  0.00  
0.17
  11:25:22 PM   6 14.95  0.00 41.34  0.00  0.00 
43.71
  11:25:22 PM   7 15.80  0.00 39.70  0.00  0.00 
44.50

  Average:CPU %user %nice   %system   %iowait%steal 
%idle
  Average:all 11.71  0.00 43.13  0.00  0.00 
45.17
  Average:  0 13.81  0.00 34.69  0.00  0.00 
51.49
  Average:  1  6.11  0.00 19.80  0.00  0.00 
74.10
  Average:  2 14.09  0.00 34.71  0.00  0.00 
51.21
  Average:  3 13.66  0.00 35.00  0.00  0.00 
51.34
  Average:  4 15.34  0.00 38.34  0.00  0.00 
46.32
  Average:  5  0.05  0.00 99.78  0.00  0.00  
0.17
  Average:  6 14.95  0.00 41.34  0.00  0.00 
43.71
  Average:  7 15.80  0.00 39.70  0.00  0.00 
44.50

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496410] [NEW] Create separate queue for state reports with dedicated workers

2015-09-16 Thread Eugene Nikanorov
Public bug reported:

In big clusters with hundreds of nodes, neutron RPC workers can be so
busy with RPC requests that they can't process state reports from agents
on time.

That leads to a condition where agents begin to "flap", appearing dead and
alive in turn. This in turn causes rescheduling, which loads neutron-server
even more, creating a self-sustaining loop.

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496410

Title:
  Create separate queue for state reports with dedicated workers

Status in neutron:
  New

Bug description:
  In big clusters with hundreds of nodes, neutron RPC workers can be so
  busy with RPC requests that they can't process state reports from agents
  on time.

  That leads to a condition where agents begin to "flap", appearing dead and
  alive in turn. This in turn causes rescheduling, which loads neutron-server
  even more, creating a self-sustaining loop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496406] [NEW] Let RPC workers serve L3 queues too

2015-09-16 Thread Eugene Nikanorov
Public bug reported:

This is important for DVR clusters with lots of L3 agents.

Right now the L3 queue is only consumed by the parent process of neutron-server.

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496406

Title:
  Let RPC workers serve L3 queues too

Status in neutron:
  New

Bug description:
  This is important for DVR clusters with lots of L3 agents.

  Right now the L3 queue is only consumed by the parent process of
  neutron-server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492254] Re: neutron should not try to bind port on compute with hypervisor_type ironic

2015-09-04 Thread Eugene Nikanorov
** No longer affects: mos

** No longer affects: mos/7.0.x

** No longer affects: mos/8.0.x

** Description changed:

  Neutron tries to bind port on compute where instance is launched.  It
- doesn't make sense when hypervisor_type is ironic, since VM  not lives
- on hypervisor in this case.  Furthermore it lead to failed provisioning
- of baremetal node, when neutron is not configured on ironic compute
- node.
+ doesn't make sense when hypervisor_type is ironic, since VM  does not
+ live on hypervisor in this case.  Furthermore it leads to failed
+ provisioning of baremetal node, when neutron is not configured on ironic
+ compute node.
  
  Setup:
  node-1: controller
  node-2: ironic-compute without neutron
  
  neutron-server.log: http://paste.openstack.org/show/445388/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492254

Title:
  neutron should not try to bind port on compute with hypervisor_type
  ironic

Status in neutron:
  New

Bug description:
  Neutron tries to bind port on compute where instance is launched.  It
  doesn't make sense when hypervisor_type is ironic, since VM  does not
  live on hypervisor in this case.  Furthermore it leads to failed
  provisioning of baremetal node, when neutron is not configured on
  ironic compute node.

  Setup:
  node-1: controller
  node-2: ironic-compute without neutron

  neutron-server.log: http://paste.openstack.org/show/445388/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491361] [NEW] Trace seen in L3 DVR agent during router interface removal

2015-09-02 Thread Eugene Nikanorov
Public bug reported:

The trace below is from the Kilo code, but the issue persists in the Liberty
code as of now.

37749 ERROR neutron.agent.l3.dvr_router [-] DVR: removed snat failed
 37749 TRACE neutron.agent.l3.dvr_router Traceback (most recent call last):
 37749 TRACE neutron.agent.l3.dvr_router File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 260, in 
_snat_redirect_modify
 37749 TRACE neutron.agent.l3.dvr_router for gw_fixed_ip in 
gateway['fixed_ips']:
 37749 TRACE neutron.agent.l3.dvr_router TypeError: 'NoneType' object has no 
attribute '__getitem__'

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491361

Title:
  Trace seen in L3 DVR agent during router interface removal

Status in neutron:
  In Progress

Bug description:
  The trace below is from the Kilo code, but the issue persists in the Liberty
  code as of now.

  37749 ERROR neutron.agent.l3.dvr_router [-] DVR: removed snat failed
   37749 TRACE neutron.agent.l3.dvr_router Traceback (most recent call last):
   37749 TRACE neutron.agent.l3.dvr_router File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_router.py", line 260, in 
_snat_redirect_modify
   37749 TRACE neutron.agent.l3.dvr_router for gw_fixed_ip in 
gateway['fixed_ips']:
   37749 TRACE neutron.agent.l3.dvr_router TypeError: 'NoneType' object has no 
attribute '__getitem__'
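
  A minimal sketch of a guard that would avoid this trace; the method signature
  is approximated from the traceback above and the logger setup is added here
  for completeness, so treat it as an assumption rather than the actual fix:

      import logging

      LOG = logging.getLogger(__name__)

      # Hedged sketch: bail out early when no gateway port is available, which
      # is exactly the condition the TypeError above points at (gateway is None).
      def _snat_redirect_modify(self, gateway, sn_port, sn_int, is_add):
          if not gateway or not gateway.get('fixed_ips'):
              LOG.warning('DVR: no gateway port information, skipping SNAT '
                          'redirect update for %s', sn_int)
              return
          for gw_fixed_ip in gateway['fixed_ips']:
              pass  # existing redirect-rule handling would continue here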

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1491361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174657] Re: metadata IP 169.254.169.254 routing breaks RFC3927 and does not work on Windows starting from WS 2008

2015-09-01 Thread Eugene Nikanorov
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1174657

Title:
  metadata IP 169.254.169.254 routing breaks RFC3927 and does not work
  on Windows starting from WS 2008

Status in neutron:
  Fix Released

Bug description:
  The Quantum L3 Linux Agent handles metadata IP access with the
  following rule:

  -A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp
  --dport 80 -j REDIRECT --to-ports 9697

  obtained with:  sudo ip netns exec qrouter- iptables-save

  
  169.254.x.x link local addresses are described in RFC3927 whose section 2.6.2 
clearly states:

  "The host MUST NOT send a packet with an IPv4 Link-Local destination
  address to any router for forwarding."

  And on section 2.7:

  "An IPv4 packet whose source and/or destination address is in the
  169.254/16 prefix MUST NOT be sent to any router for forwarding, and
  any network device receiving such a packet MUST NOT forward it,
  regardless of the TTL in the IPv4 header."

  Ref: http://tools.ietf.org/html/rfc3927#section-2.6.2

  
  Linux does not enforce this rule, but Windows starting with 2008 and Vista 
does, which means that the metadata IP 169.254.169.254 is not accessible from a 
Windows guest (tested on Windows Server 2012 on Hyper-V).

  
  The current workaround consists in adding explicitly a static route on the 
Windows guest with:

  route add 169.254.169.254 mask 255.255.255.255 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1174657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490961] [NEW] Fix doubled error message for a single event

2015-09-01 Thread Eugene Nikanorov
Public bug reported:

Writing two error messages with different text for a single event is not good
practice.
It reduces readability and searchability, and may also affect UX when log
monitoring is enabled.

 def _bind_centralized_snat_port_on_dvr_subnet(self, port, lvm, fixed_ips,
                                               device_owner):
     if port.vif_id in self.local_ports:
         # throw an error if CSNAT port is already on a different
         # dvr routed subnet
         ovsport = self.local_ports[port.vif_id]
         subs = list(ovsport.get_subnets())
         LOG.error(_LE("Centralized-SNAT port %s already seen on "),
                   port.vif_id)
         LOG.error(_LE("a different subnet %s"), subs[0])
         return

** Affects: neutron
 Importance: Medium
     Assignee: Eugene Nikanorov (enikanorov)
 Status: Invalid


** Tags: l3-dvr-backlog

** Changed in: neutron
   Status: New => Confirmed

** Tags added: l3-dvr-backlog

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490961

Title:
  Fix doubled error message for a single event

Status in neutron:
  Invalid

Bug description:
  Writing two error messages with different text for a single event is not good
  practice.
  It reduces readability and searchability, and may also affect UX when log
  monitoring is enabled.

   def _bind_centralized_snat_port_on_dvr_subnet(self, port, lvm, fixed_ips,
                                                 device_owner):
       if port.vif_id in self.local_ports:
           # throw an error if CSNAT port is already on a different
           # dvr routed subnet
           ovsport = self.local_ports[port.vif_id]
           subs = list(ovsport.get_subnets())
           LOG.error(_LE("Centralized-SNAT port %s already seen on "),
                     port.vif_id)
           LOG.error(_LE("a different subnet %s"), subs[0])
           return
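
  A minimal sketch of the single-message variant this report is asking for (the
  formatting dict is illustrative; the bug itself ended up marked Invalid):

      # Hedged sketch: emit one complete error record instead of two
      # half-sentences, so log search and monitoring see a single event.
      LOG.error(_LE("Centralized-SNAT port %(port)s already seen on a "
                    "different subnet %(subnet)s"),
                {'port': port.vif_id, 'subnet': subs[0]})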

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475938] [NEW] create_security_group code may get into endless loop

2015-07-18 Thread Eugene Nikanorov
Public bug reported:

That damn piece of code again.

In some cases, when a network is created for a tenant and the default security
group is created in the process, there may be concurrent network or security
group creation happening.
That leads to a condition where the code fetches the default security group and
it's not there, tries to add it and finds it's already there, then tries to
fetch it again, but due to the REPEATABLE READ isolation level the query still
returns an empty result.
As a result, such logic will hang in the loop forever.

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: sg-fw

** Changed in: fuel
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Tags added: sg-fw

** Project changed: fuel => neutron

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475938

Title:
  create_security_group code may get into endless loop

Status in neutron:
  New

Bug description:
  That damn piece of code again.

  In some cases, when a network is created for a tenant and the default
  security group is created in the process, there may be concurrent network or
  security group creation happening.
  That leads to a condition where the code fetches the default security group
  and it's not there, tries to add it and finds it's already there, then tries
  to fetch it again, but due to the REPEATABLE READ isolation level the query
  still returns an empty result.
  As a result, such logic will hang in the loop forever.
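
  A minimal sketch of the failure mode and the usual way out; 'session' is a
  SQLAlchemy session and get_default_sg / create_default_sg / DuplicateEntry
  are hypothetical stand-ins for the real helpers. The point is that each retry
  has to run in a new transaction so REPEATABLE READ takes a fresh snapshot:

      # Hedged sketch, not neutron's code: bound the retries and start a fresh
      # transaction per attempt so the concurrently inserted row becomes visible.
      sg = None
      for attempt in range(10):
          with session.begin():
              sg = get_default_sg(session, tenant_id)
              if sg is None:
                  try:
                      sg = create_default_sg(session, tenant_id)
                  except DuplicateEntry:
                      # lost the race; retry with a fresh snapshot next time
                      sg = None
          if sg is not None:
              break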

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465687] Re: use fixed ip address cannot ping/ssh instance from controller node

2015-06-23 Thread Eugene Nikanorov
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465687

Title:
  use fixed ip address cannot ping/ssh instance from controller node

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I found an issue with openstack networking. I configured nova.conf with
multi_host = true on the controller node and the compute node, and
  set the security group as below:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

  Then I created one neutron network of type gre, with subnet
  10.1.2.0/24. I can boot an instance successfully and the booted
  instance can also get an ip address from the 10.1.2.0/24 subnet.

  From this instance I can ping/ssh the controller node's ip address,
  but from the controller node I cannot ping/ssh the instance's ip
  address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1465687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467518] Re: neutron --debug port-list --binding:vif_type=binding_failed returns wrong ports

2015-06-23 Thread Eugene Nikanorov
binding:vif_type is an attribute from a different table, not 'ports'
table, you can't filter by that attribute.

** Changed in: neutron
   Status: New => Opinion

** Tags added: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467518

Title:
  neutron --debug port-list --binding:vif_type=binding_failed returns
  wrong ports

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  neutron --debug port-list --binding:vif_type=binding_failed displays
  all ports with all vif_type, not only with binding_failed.

  vif_type=binding_failed is set when something bad happens on a compute
  host during port configuration (no local vlans in ml2 conf, etc)

  We intended to monitor for such ports, but the request to neutron
  returns some irrelevant ports:

  REQ: curl -i -X GET
  
https://neutron.lab.internal:9696/v2.0/ports.json?binding%3Avif_type=binding_failed
  -H User-Agent: python-neutronclient -H Accept: application/json -H
  X-Auth-Token: 52c0c1ee1f764c408977f41c9f3743ca

  RESP BODY: {ports: [{status: ACTIVE, binding:host_id:
  compute2, name: , admin_state_up: true, network_id:
  5c399fb7-67ac-431d-9965-9586dbcec1c9, tenant_id:
  3e6b1fc20da346838f93f124cb894d0f, extra_dhcp_opts: [],
  binding:vif_details: {port_filter: false, ovs_hybrid_plug:
  false}, binding:vif_type: ovs, device_owner: network:dhcp,
  mac_address: fa:16:3e:ad:6f:22, binding:profile: {},
  binding:vnic_type: normal, fixed_ips: [{subnet_id:
  c10a3520-17e2-4c04-94c6-a4419d79cca9, ip_address: 192.168.0.3}],
  .

  If the request is sent as neutron --debug port-list
  --binding:host_id=compute1, filtering works as expected.

  Neutron version - 2014.2.4
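
  A hedged client-side workaround for the monitoring use case above, given that
  the server ignores the binding:vif_type query parameter; 'neutron' is assumed
  to be an already-authenticated python-neutronclient Client instance:

      # Fetch all ports and filter locally on binding:vif_type.
      ports = neutron.list_ports()['ports']
      failed = [p for p in ports
                if p.get('binding:vif_type') == 'binding_failed']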

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467504] Re: Health monitor Admin state up False inactive

2015-06-23 Thread Eugene Nikanorov
** Changed in: neutron
   Importance: Undecided => Medium

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467504

Title:
  Health monitor Admin state up False inactive

Status in Python client library for Neutron:
  New

Bug description:
  We have LbaasV2 running. 
  I configured health monitor. 
  When executing tcpdump on VM I see it receives the http traffic. 

  I executed the following 
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-create 
--delay 1 --max-retries 3 --timeout 3 --type PING --pool 
1ac828d0-0064-446e-a7cc-5f4eacaf37de
  Created a new healthmonitor:
  +++
  | Field  | Value  |
  +++
  | admin_state_up | True   |
  | delay  | 1  |
  | expected_codes | 200|
  | http_method| GET|
  | id | 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c   |
  | max_retries| 3  |
  | pools  | {id: 1ac828d0-0064-446e-a7cc-5f4eacaf37de} |
  | tenant_id  | bee13f9e436e4d78b9be72b8ec78d81c   |
  | timeout| 3  |
  | type   | PING   |
  | url_path   | /  |
  +++
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update
  usage: neutron lbaas-healthmonitor-update [-h] [--request-format {json,xml}]
HEALTHMONITOR
  neutron lbaas-healthmonitor-update: error: too few arguments
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_down True
  Unrecognized attribute(s) 'admin_state_down'
  [root@puma09 neutron(keystone_redhat)]# neutron lbaas-healthmonitor-update 
46fd07b4-94b1-4494-8ee2-0a6c9803cf2c --admin_state_up False
  Updated healthmonitor: 46fd07b4-94b1-4494-8ee2-0a6c9803cf2c
  [root@puma09 neutron(keystone_redhat)]# 

  I executed tcpdump again on the VM and could see that the
  health-monitoring traffic continues.

  After deleting the healthmonitor, no traffic is captured on the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1467504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464554] Re: instance failed to spawn with external network

2015-06-23 Thread Eugene Nikanorov
The only way to get external connectivity via a tenant network is to set up a
so-called provider network.
It would be a tenant network going through specifically configured bridges and
having a fixed CIDR which is part of the global IPv4 pool.

Other than that, VMs can't be plugged into an external network.
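
A minimal sketch of the provider-network setup mentioned above, via
python-neutronclient ('neutron' is an authenticated Client instance); the
physical network name, CIDR and gateway are assumptions, and the matching
bridge/ml2 configuration must already exist:

    # Hedged sketch: create a provider network plus a subnet carved out of the
    # routable address space; VMs plugged into it get external connectivity.
    net = neutron.create_network({'network': {
        'name': 'provider-ext',
        'provider:network_type': 'flat',
        'provider:physical_network': 'physnet1',
        'shared': True}})['network']
    neutron.create_subnet({'subnet': {
        'network_id': net['id'],
        'ip_version': 4,
        'cidr': '203.0.113.0/24',
        'gateway_ip': '203.0.113.1'}})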

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464554

Title:
  instance failed to spawn with external network

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I'm trying to launch an instance with an external network, but it results in a
  failed status. Instances with an internal network are fine.

  Following is the nova-compute.log from the compute node:

  2015-06-12 15:22:50.899 3121 INFO nova.compute.manager 
[req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 
700e680640e0415faf591e950cdb42d0 - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Starting instance...
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Attempting claim: memory 2048 MB, disk 50 
GB
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Total memory: 515884 MB, used: 2560.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] memory limit: 773826.00 MB, free: 
771266.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Total disk: 1144 GB, used: 50.00 GB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] disk limit not specified, defaulting to 
unlimited
  2015-06-12 15:22:51.023 3121 INFO nova.compute.claims [-] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Claim successful
  2015-06-12 15:22:51.134 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.270 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.470 3121 INFO nova.virt.libvirt.driver 
[req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Creating image
  2015-06-12 15:22:51.760 3121 INFO nova.scheduler.client.report [-] 
Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.993 3121 ERROR nova.compute.manager 
[req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Instance failed to spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] Traceback (most recent call last):
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2442, in 
_build_resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] yield resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2314, in 
_build_and_run_instance
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] block_device_info=block_device_info)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2351, in 
spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] write_to_disk=True)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4172, in 
_get_guest_xml
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] context)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4043, in 
_get_guest_config
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772] flavor, virt_type)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 
8da458b4-c064-47c8-a1bb-aad4e4400772]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py, line 374, in 
get_config
  

[Yahoo-eng-team] [Bug 1419760] Re: nova secgroup-add-default-rule support dropped?

2015-06-23 Thread Eugene Nikanorov
Please use ask.openstack.org for usage questions.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419760

Title:
  nova secgroup-add-default-rule  support dropped?

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Versions:  rhel7  , python-novaclient-2.20.0-1.el7ost.noarch
  Same error on two separate deployments.
  I've been trying to add a sec rule to default group getting error:

  # nova secgroup-add-default-rule icmp -1 -1 0.0.0.0/0
  ERROR (HTTPNotImplemented): Network driver does not support this function. 
(HTTP 501) (Request-ID: req-0a4f1a29-cd70-483b-aa3d-8de978f57da0)

  I've also tested it with tcp port and other IP's still same error.

  Nova api.log

  2015-02-09 09:24:30.685 23064 INFO nova.api.openstack.wsgi 
[req-4c6742f1-2c25-4e61-889b-dfa0f5071322 None] HTTP exception thrown: Network 
driver does not support this function.
  2015-02-09 09:24:30.685 23064 DEBUG nova.api.openstack.wsgi 
[req-4c6742f1-2c25-4e61-889b-dfa0f5071322 None] Returning 501 to user: Network 
driver does not support this function. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1201

  
  Seeing this "Network driver does not support this function": if this has been
deprecated and the command moved to Neutron, we should mention that in the
error notice to the user.

  Also consider updating the guide - https://bugs.launchpad.net/openstack-
  manuals/+bug/1419739

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461714] [NEW] Do not remove dhcp agent bindings if no alive agents are available

2015-06-03 Thread Eugene Nikanorov
Public bug reported:

In some cases all DHCP agents become unavailable to neutron-server
(the primary cause of that is an issue with amqp), so network rescheduling
will remove networks from all agents. When the agents become available
again, autoscheduling will not happen because there is no reason for
DHCP agents to do a full sync or request available networks.

This can be solved from the agent side, but the preferable way is to not
remove networks from agents if there are no alive agents available.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461714

Title:
  Do not remove dhcp agent bindings if no alive agents are available

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some cases all DHCP agents become unavailable to neutron-server
  (the primary cause of that is an issue with amqp), so network rescheduling
  will remove networks from all agents. When the agents become available
  again, autoscheduling will not happen because there is no reason for
  DHCP agents to do a full sync or request available networks.

  This can be solved from the agent side, but the preferable way is to not
  remove networks from agents if there are no alive agents available.
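
  A minimal sketch of the proposed guard on the server side; the helper is a
  hypothetical illustration (the agent-type string and get_agents call mirror
  neutron's agent API), not the actual patch:

      # Hedged sketch: if every DHCP agent looks dead (e.g. during an AMQP
      # outage), keep the existing network-to-agent bindings instead of
      # removing them and rescheduling.
      def should_reschedule_dhcp_networks(plugin, context):
          agents = plugin.get_agents(
              context, filters={'agent_type': ['DHCP agent']})
          alive = [agent for agent in agents if agent['alive']]
          return bool(alive)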

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460793] [NEW] Metadata IP is resolved locally by Windows by default, causing big delay in URL access

2015-06-01 Thread Eugene Nikanorov
Public bug reported:

With a private network plugged into a router and the router serving metadata:

When Windows accesses the metadata URL, it tries to resolve the MAC address of
the metadata IP directly, despite the routing table telling it to go through
the default gateway. That is because of the nature of 169.254.0.0/16, which is
considered link-local by default.
Such behavior causes a big delay before the connection can be established.
This, in turn, causes lots of issues during the cloud-init phase: slowness,
timeouts, etc.

The workaround could be to add an explicit route to the subnet, e.g.
169.254.169.254/32 via the subnet's default gateway.

It makes sense to let the DHCP agent inject such a route by default via the
dnsmasq config.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460793

Title:
  Metadata IP is resolved locally by Windows by default, causing big
  delay in URL access

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  With a private network plugged into a router and the router serving
  metadata:

  When Windows accesses the metadata URL, it tries to resolve the MAC address
  of the metadata IP directly, despite the routing table telling it to go
  through the default gateway. That is because of the nature of 169.254.0.0/16,
  which is considered link-local by default.
  Such behavior causes a big delay before the connection can be established.
  This, in turn, causes lots of issues during the cloud-init phase: slowness,
  timeouts, etc.

  The workaround could be to add an explicit route to the subnet, e.g.
  169.254.169.254/32 via the subnet's default gateway.

  It makes sense to let the DHCP agent inject such a route by default via the
  dnsmasq config.
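
  A hedged illustration of the route-injection idea above, done per subnet via
  the host_routes attribute (delivered to guests through DHCP classless static
  route options); the subnet id and gateway address are assumptions, and
  'neutron' is an authenticated python-neutronclient Client instance:

      # Push 169.254.169.254/32 via the subnet's default gateway so Windows
      # guests stop treating the metadata IP as link-local.
      neutron.update_subnet('SUBNET-UUID', {'subnet': {'host_routes': [
          {'destination': '169.254.169.254/32', 'nexthop': '10.0.0.1'}]}})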

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456809] Re: L3-agent not recreating missing fg- device

2015-05-26 Thread Eugene Nikanorov
Agree with Itzik's analysis.
Closing as 'Invalid'

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456809

Title:
  L3-agent not recreating missing fg- device

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When using DVR, the fg device on a compute node is needed to access VMs on
  that compute node.  If for any reason the fg- device is deleted, users
  will not be able to access the VMs on the compute node.

  On a single node system where the L3-agent is running in 'dvr-snat'
  mode, a VM is booted up and assigned a floating-ip.  The VM is
  pingable using the floating IP.  Now I go into the fip namespace and
  delete the fg device using the command ovs-vsctl del-port br-ex fg-
  ccbd7bcb-75.  Now the VM can no longer be pinged.

  Then another VM is booted up and it is also assigned a Floating IP.
  The new VM is not pingable either.

  The L3-agent log shows it reported that it cannot find fg-ccbd7bcb-75
  when setting up the qrouter and fip namespaces for the new floating
  IP.  But it didn't not go and re-create the fg- device.

  Given that this is a deliberate act to cause the cloud  to fail, the
  L3-agent could have gone ahead and re-create the fg device to make it
  more fault tolerant.

  The problem can be reproduced with the latest neutron code.
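
  For illustration, the kind of self-healing asked for here boils down to an
  existence check before the agent wires up a floating IP. A sketch under
  the assumption that a recreate callback (re-plugging the gateway port the
  same way the agent did at setup time) is available; it is not an existing
  neutron helper:

    from neutron.agent.linux import ip_lib

    def ensure_fg_device(device_name, namespace, recreate):
        # Re-plug the fip gateway port if its fg- device went missing,
        # e.g. after a manual 'ovs-vsctl del-port'.
        if ip_lib.device_exists(device_name, namespace=namespace):
            return
        recreate(device_name, namespace)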

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439870] Re: Fixed IPs not being recorded in database

2015-05-26 Thread Eugene Nikanorov
This is not a neutron bug; please be more attentive.

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439870

Title:
  Fixed IPs not being recorded in database

Status in OpenStack Compute (Nova):
  New

Bug description:
  When new VMs are spawned after deleting previous VMs, the new VMs obtain
  completely new IPs and the old ones are not recycled for reuse. I looked
  into the MySQL database to see where IPs may be stored and accessed by
  OpenStack to determine what the next in line should be, but didn't manage
  to find any IP information there. Has the location of this storage changed
  out of the fixed_ips table? Currently, this table is entirely empty:

  MariaDB [nova]> select * from fixed_ips;
  Empty set (0.00 sec)

  despite having many VMs running on two different networks:

   mysql -e "select uuid, deleted, power_state, vm_state, display_name, host from nova.instances;"

  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | uuid                                 | deleted | power_state | vm_state | display_name | host         |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+
  | 14600536-7ce1-47bf-8f01-1a184edb5c26 |       0 |           4 | error    | Ctest        | r001ds02.pcs |
  | abb38321-5b74-4f36-b413-a057897b8579 |       0 |           4 | stopped  | cent7        | r001ds02.pcs |
  | 31cbb003-42d0-468a-be4d-81f710e29aef |       0 |           1 | active   | centos7T2    | r001ds02.pcs |
  | 4494fd8d-8517-4f14-95e6-fe5a6a64b331 |       0 |           1 | active   | selin_test   | r001ds02.pcs |
  | 25505dc4-2ba9-480d-ba5a-32c2e91fc3c9 |       0 |           1 | active   | 2NIC         | r001ds02.pcs |
  | baff8cef-c925-4dfb-ae90-f5f167f32e83 |       0 |           4 | stopped  | kepairtest   | r001ds02.pcs |
  | 317e1fbf-664d-43a8-938a-063fd53b801d |       0 |           1 | active   | test         | r001ds02.pcs |
  | 3a8c1a2d-1a4b-4771-8e62-ab1982759ecd |       0 |           1 | active   | 3            | r001ds02.pcs |
  | c4b2175a-296c-400c-bd54-16df3b4ca91b |       0 |           1 | active   | 344          | r001ds02.pcs |
  | ac02369e-b426-424d-8762-71ca93eacd0c |       0 |           4 | stopped  | 333          | r001ds02.pcs |
  | 504d9412-e2a3-492a-8bc1-480ce6249f33 |       0 |           1 | active   | libvirt      | r001ds02.pcs |
  | cc9f6f06-2ba6-4ec2-94f7-3a795aa44cc4 |       0 |           1 | active   | arger        | r001ds02.pcs |
  | 0a247dbf-58b4-4244-87da-510184a92491 |       0 |           1 | active   | arger2       | r001ds02.pcs |
  | 4cb85bbb-7248-4d46-a9c2-fee312f67f96 |       0 |           1 | active   | gh           | r001ds02.pcs |
  | adf9de81-3986-4d73-a3f1-a29d289c2fe3 |       0 |           1 | active   | az           | r001ds02.pcs |
  | 8396eabf-d243-4424-8ec8-045c776e7719 |       0 |           1 | active   | sdf          | r001ds02.pcs |
  | 947905b5-7a2c-4afb-9156-74df8ed699c5 |      55 |           1 | deleted  | yh           | r001ds02.pcs |
  | f690d7ed-f8d5-45a1-b679-e79ea4d3366f |      56 |           1 | deleted  | tr           | r001ds02.pcs |
  | dd1aa5b1-c0ac-41f6-a6de-05be8963242f |      57 |           1 | deleted  | ig           | r001ds02.pcs |
  | 42688a7d-2ba2-4d5a-973f-e87f87c32326 |      58 |           1 | deleted  | td           | r001ds02.pcs |
  | 7c1014d8-237d-48f0-aa77-3aa09fff9101 |      59 |           1 | deleted  | td2          | r001ds02.pcs |
  +--------------------------------------+---------+-------------+----------+--------------+--------------+

  I am using neutron networking with OVS.  It is my understanding that
  the mysql sqlalchemy is setup to leave old information accessible in
  mysql, but deleting the associated information manually doesn't seem
  to make a difference as to the fixed_ips issue I am experiencing. Are
  there solutions for this?

  nova --version : 2.20.0 ( 2014.2.1-1.el7 running on centOS7, epel-juno
  release)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439870/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458119] [NEW] Improve stability and robustness of periodic agent checks

2015-05-22 Thread Eugene Nikanorov
Public bug reported:

In some cases, due to a DB controller failure, DB connections can be
interrupted. This causes exceptions that sneak into the looping-call method,
effectively shutting the loop down and preventing any further failover for
the affected resources.

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458119

Title:
  Improve stability and robustness of periodic agent checks

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some cases, due to a DB controller failure, DB connections can be
  interrupted. This causes exceptions that sneak into the looping-call
  method, effectively shutting the loop down and preventing any further
  failover for the affected resources.
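
  A minimal sketch of the intended hardening: wrap the periodic check so that
  a transient DB error is logged and retried on the next interval instead of
  escaping and killing the looping call. Illustrative only, not the actual
  agentschedulers_db code; check_agents below is a stand-in:

    import random

    from oslo_log import log as logging
    from oslo_service import loopingcall

    LOG = logging.getLogger(__name__)

    def safe_periodic_task(func):
        # An exception escaping from func would terminate the
        # FixedIntervalLoopingCall for good -- exactly the failure mode
        # described above -- so log it and carry on instead.
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                LOG.exception("Periodic check failed; retrying next interval")
        return wrapper

    @safe_periodic_task
    def check_agents():
        if random.random() < 0.3:  # stand-in for a lost DB connection
            raise RuntimeError("DB connection interrupted")
        LOG.info("agent check completed")

    loopingcall.FixedIntervalLoopingCall(check_agents).start(interval=60)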

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457711] Re: Improve user-facing log message of ovs agent when it can't find an interface

2015-05-21 Thread Eugene Nikanorov
Doesn't apply for master, only applies for Juno+

** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

** Summary changed:

- Improve user-facing log message of ovs agent when it can't find an interface
+ message 'Unable to parse interface details' in ovs-agent logs is misleading

** Summary changed:

- message 'Unable to parse interface details' in ovs-agent logs is misleading
+ message 'Unable to parse interface details. Exception: list index out of 
range' in ovs-agent logs is misleading

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457711

Title:
  message 'Unable to parse interface details. Exception: list index out
  of range' in ovs-agent logs is misleading

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  In case the ovs-agent can't find an interface for a given VM id, it prints
  the warning "Unable to parse interface details. Exception: list index out
  of range".

  That doesn't look user-friendly and is misleading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457711] [NEW] message 'Unable to parse interface details. Exception: list index out of range' in ovs-agent logs is misleading

2015-05-21 Thread Eugene Nikanorov
Public bug reported:

In case the ovs-agent can't find an interface for a given VM id, it prints
the warning "Unable to parse interface details. Exception: list index out
of range".

That doesn't look user-friendly and is misleading.

** Affects: neutron
 Importance: Medium
 Status: Won't Fix


** Tags: ovs

** Tags added: ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457711

Title:
  message 'Unable to parse interface details. Exception: list index out
  of range' in ovs-agent logs is misleading

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  In case the ovs-agent can't find an interface for a given VM id, it prints
  the warning "Unable to parse interface details. Exception: list index out
  of range".

  That doesn't look user-friendly and is misleading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456399] Re: Add config option that allows separate configuration of router and gateway interfaces

2015-05-19 Thread Eugene Nikanorov
After digging deeper into the recent network MTU feature, it turned out that
it solves this issue by allowing the MTU to be configured for the external
network.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456399

Title:
  Add config option that allows separate configuration of router and
  gateway interfaces

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  There is an option to configure the MTU for interfaces created by the L3
  agent; however, when setting up jumbo frames on instances, there can be a
  problem with outbound packets going from the jumbo-frame subnet to the
  router interface on that subnet. Depending on the OVS version such packets
  can be damaged or dropped, which effectively prevents VMs from having
  external connectivity.

  The issue is not IP packet fragmentation on the gateway interface, but
  rather OVS handling jumbo frames on the qr- interface.

  We need to allow setting the MTU for the qr- interface while preserving
  the MTU on the qg- interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456399] [NEW] Add config option that allows separate configuration of router and gateway interfaces

2015-05-18 Thread Eugene Nikanorov
Public bug reported:

There is an option to configure the MTU for interfaces created by the L3
agent; however, when setting up jumbo frames on instances, there can be a
problem with outbound packets going from the jumbo-frame subnet to the
router interface on that subnet. Depending on the OVS version such packets
can be damaged or dropped, which effectively prevents VMs from having
external connectivity.

The issue is not IP packet fragmentation on the gateway interface, but
rather OVS handling jumbo frames on the qr- interface.

We need to allow setting the MTU for the qr- interface while preserving the
MTU on the qg- interface.

** Affects: neutron
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456399

Title:
  Add config option that allows separate configuration of router and
  gateway interfaces

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is an option to configure the MTU for interfaces created by the L3
  agent; however, when setting up jumbo frames on instances, there can be a
  problem with outbound packets going from the jumbo-frame subnet to the
  router interface on that subnet. Depending on the OVS version such packets
  can be damaged or dropped, which effectively prevents VMs from having
  external connectivity.

  The issue is not IP packet fragmentation on the gateway interface, but
  rather OVS handling jumbo frames on the qr- interface.

  We need to allow setting the MTU for the qr- interface while preserving
  the MTU on the qg- interface.
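
  To illustrate what the requested option would do on the agent side, it
  boils down to applying a per-role MTU right after each interface is
  plugged. A sketch using neutron's ip_lib; the two MTU values stand in for
  hypothetical config options, they are not existing knobs:

    from neutron.agent.linux import ip_lib

    ROUTER_IF_MTU = 9000   # qr- ports facing the jumbo-frame subnet
    GATEWAY_IF_MTU = 1500  # qg- port facing the external network

    def apply_mtu(device_name, namespace):
        mtu = ROUTER_IF_MTU if device_name.startswith('qr-') \
            else GATEWAY_IF_MTU
        ip_lib.IPDevice(device_name, namespace=namespace).link.set_mtu(mtu)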

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454408] [NEW] ObectDeletedError while removing port

2015-05-12 Thread Eugene Nikanorov
Public bug reported:

The following trace could be observed running rally tests on multi-
server environment:

2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 476, in delete
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 671, in 
delete_network
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self._delete_ports(context, ports)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 587, in 
_delete_ports
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource port.id)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource value = 
callable_(state, passive)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self.manager.deferred_scalar_loader(self, toload)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource raise 
orm_exc.ObjectDeletedError(state)
2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource ObjectDeletedError: 
Instance 'Port at 0x7fb50067e290' has been deleted, or its row is otherwise 
not present.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454408

Title:
  ObectDeletedError while removing port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following trace could be observed running rally tests on multi-
  server environment:

  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 476, in delete
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 671, in 
delete_network
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self._delete_ports(context, ports)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 587, in 
_delete_ports
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource port.id)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource value = 
callable_(state, passive)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource 
self.manager.deferred_scalar_loader(self, toload)
  2015-05-12 11:41:20.503 14172 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line

[Yahoo-eng-team] [Bug 1454434] [NEW] NoNetworkFoundInMaximumAllowedAttempts during concurrent network creation

2015-05-12 Thread Eugene Nikanorov
Public bug reported:

NoNetworkFoundInMaximumAllowedAttempts could be thrown if networks are created
by multiple threads simultaneously.
This is related to https://bugs.launchpad.net/bugs/1382064
The DB logic currently works correctly; however, the 11 attempts the code
makes right now might not be enough in some rare, unlucky cases under extreme
concurrency.

We need to randomize the segmentation_id selection to avoid such issues.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454434

Title:
  NoNetworkFoundInMaximumAllowedAttempts during concurrent network
  creation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  NoNetworkFoundInMaximumAllowedAttempts could be thrown if networks are
  created by multiple threads simultaneously.
  This is related to https://bugs.launchpad.net/bugs/1382064
  The DB logic currently works correctly; however, the 11 attempts the code
  makes right now might not be enough in some rare, unlucky cases under
  extreme concurrency.

  We need to randomize the segmentation_id selection to avoid such issues.
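
  The proposed change is easy to sketch: pick a random candidate from the
  free segments instead of probing them in a fixed order, so concurrent
  workers stop racing for the same id. Illustrative only; the real
  allocation lives in the ML2 type drivers, and try_reserve below stands in
  for the "allocate if still free" DB update:

    import random

    class NoNetworkFoundInMaximumAllowedAttempts(Exception):
        pass

    def allocate_segment(free_ids, try_reserve, max_attempts=11):
        # Randomizing the candidate keeps concurrent workers from colliding
        # on the same segmentation_id attempt after attempt.
        for _ in range(max_attempts):
            if not free_ids:
                break
            candidate = random.choice(tuple(free_ids))
            if try_reserve(candidate):
                return candidate
        raise NoNetworkFoundInMaximumAllowedAttempts()

    # Toy usage with an in-memory pool standing in for the DB table.
    pool = set(range(100, 111))

    def reserve(seg_id):
        if seg_id in pool:
            pool.remove(seg_id)
            return True
        return False

    print(allocate_segment(pool, reserve))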

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453008] [NEW] Deadlock on update_port_status

2015-05-08 Thread Eugene Nikanorov
Public bug reported:

A deadlock trace was found in neutron-server logs when updating a router
gateway port to BUILD status while the port binding process is in progress.
The trace:

http://logs.openstack.org/32/181132/5/check/gate-rally-dsvm-neutron-
neutron/c90deb5/logs/screen-q-svc.txt.gz?#_2015-05-08_00_54_04_657

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

** Tags added: db

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453008

Title:
  Deadlock on update_port_status

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  A deadlock trace was found in neutron-server logs when updating a router
  gateway port to BUILD status while the port binding process is in
  progress. The trace:

  http://logs.openstack.org/32/181132/5/check/gate-rally-dsvm-neutron-
  neutron/c90deb5/logs/screen-q-svc.txt.gz?#_2015-05-08_00_54_04_657

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453320] [NEW] Create periodic task that logs agent's state

2015-05-08 Thread Eugene Nikanorov
Public bug reported:

When analyzing logs it would be convenient to have information about the
liveness of agents, which would create a short-term context of cluster
health.

That could also then be used by automated log-analysis tools.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453320

Title:
  Create periodic task that logs agent's state

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When analyzing logs it would be convenient to have information about the
  liveness of agents, which would create a short-term context of cluster
  health.

  That could also then be used by automated log-analysis tools.
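
  In practice this amounts to one grep-friendly summary line per interval. A
  rough sketch; get_agents is assumed to return objects exposing 'host' and
  'is_active', as Neutron's Agent DB model does:

    from oslo_log import log as logging
    from oslo_service import loopingcall

    LOG = logging.getLogger(__name__)

    def log_agent_health(get_agents):
        # One concise line describing cluster health at this moment.
        agents = get_agents()
        dead = [a for a in agents if not a.is_active]
        LOG.info("agent health: %(total)d total, %(dead)d dead (%(hosts)s)",
                 {'total': len(agents), 'dead': len(dead),
                  'hosts': ','.join(a.host for a in dead) or '-'})

    # Started from the server side, for example:
    # loopingcall.FixedIntervalLoopingCall(
    #     log_agent_health, get_agents_callable).start(interval=60)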

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452582] [NEW] PluginReportStateAPI.report_state should provide searchable identifier

2015-05-07 Thread Eugene Nikanorov
Public bug reported:

When troubleshooting problems with a cluster it would be very convenient to
have agent heartbeats logged with a searchable identifier that creates a
1-to-1 mapping between events in the agent's logs and the server's logs.

Currently the agent's heartbeats are not logged at all on the server side.
Since on a large cluster that could create too much logging (even for
troubleshooting cases), it might make sense to make this configurable on
both the neutron-server side and the agent side.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452582

Title:
  PluginReportStateAPI.report_state should provide searchable identifier

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When troubleshooting problems with a cluster it would be very convenient
  to have agent heartbeats logged with a searchable identifier that creates
  a 1-to-1 mapping between events in the agent's logs and the server's logs.

  Currently the agent's heartbeats are not logged at all on the server side.
  Since on a large cluster that could create too much logging (even for
  troubleshooting cases), it might make sense to make this configurable on
  both the neutron-server side and the agent side.
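
  For illustration, the correlation could be as simple as stamping every
  heartbeat with a UUID that both sides log. A sketch only; the real
  report_state payload is defined by Neutron's agent RPC API:

    import uuid

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def build_heartbeat(agent_state):
        # The same id shows up in the agent's log and, if the server logs
        # heartbeats, in neutron-server's log: a 1-to-1 mapping for grep.
        heartbeat_id = uuid.uuid4().hex
        agent_state['heartbeat_id'] = heartbeat_id
        LOG.debug("sending heartbeat %s for host %s",
                  heartbeat_id, agent_state.get('host'))
        return agent_state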

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452582/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451391] Re: --router:external=True syntax is invalid

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451391

Title:
  --router:external=True syntax is invalid

Status in Python client library for Neutron:
  New

Bug description:
  The Kilo syntax is not backward compatible:

  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external=True
  usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
[--max-width integer] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID] [--admin-state-down]
[--shared] [--router:external]
[--provider:network_type network_type]
[--provider:physical_network 
physical_network_name]
[--provider:segmentation_id segmentation_id]
[--vlan-transparent {True,False}]
NAME
  neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'

  Current syntax supports:
  [root@puma14 ~(keystone_admin)]# neutron net-create public 
--provider:network_type vlan --provider:physical_network physnet 
--provider:segmentation_id 193 --router:external
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | mtu   | 0|
  | name  | public   |
  | provider:network_type | vlan |
  | provider:physical_network | physnet  |
  | provider:segmentation_id  | 193  |
  | router:external   | True |
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1451391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451399] Re: subnet-create arguments order is too strict (not backward compatibility)

2015-05-04 Thread Eugene Nikanorov
I believe the CIDR should go last, per the command help, shouldn't it?

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451399

Title:
  subnet-create arguments order is too strict (not backward
  compatibility)

Status in Python client library for Neutron:
  Incomplete

Bug description:
  Changing the argument order causes a CLI error:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 
10.35.178.254  10.35.178.0/24 --name public_subnet
  Invalid values_specs 10.35.178.0/24

  Changing order:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 
10.35.178.254 --name public_subnet public 10.35.178.0/24
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {start: 10.35.178.1, end: 10.35.178.253} |
  | cidr  | 10.35.178.0/24   |
  | dns_nameservers   |  |
  | enable_dhcp   | True |
  | gateway_ip| 10.35.178.254|
  | host_routes   |  |
  | id| 19593f99-b13c-4624-9755-983d7406cb47 |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | public_subnet|
  | network_id| df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | subnetpool_id |  |
  | tenant_id | 7e8736e9aba546e98be4a71a92d67a77 |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1451399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450625] Re: common service chaining driver API

2015-05-04 Thread Eugene Nikanorov
Usually new features are tracked via blueprints, not bugs, and require a
spec to be merged.

Is there a blueprint regarding this feature?

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450625

Title:
  common service chaining driver API

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  This feature/bug is related to bug #1450617 (Neutron extension to
  support service chaining)

  Bug #1450617 is to add a neutron port chaining API and an associated
  neutron port chain manager to support service chaining functionality.
  Between the neutron port manager and the underlying service chain drivers,
  a common service chain driver API shim layer is needed to allow for
  different types of service chain drivers (e.g. an OVS driver, different
  SDN controller drivers) to be integrated into Neutron. Different service
  chain drivers may have different ways of constructing the service chain
  path and may use different data path encapsulation and transport to steer
  the flow through the chain path. With one common interface between the
  Neutron service chain manager and various vendor-specific drivers, the
  driver design/implementation can be changed without changing the Neutron
  service chain manager and the interface APIs.

  This interface should include the following entities (a rough sketch of
  such a driver interface follows the list):

   * An ordered list of service function instance clusters. Each service
     instance cluster represents a group of like service function instances
     which can be used for load distribution.
   * Traffic flow classification rules: a set of flow descriptors.
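
  As a purely illustrative sketch (class and method names are invented here,
  not an agreed Neutron API), the shim layer could be an abstract driver
  class that each backend implements:

    import abc

    class ServiceChainDriverBase(metaclass=abc.ABCMeta):
        """Common interface between the chain manager and backend drivers."""

        @abc.abstractmethod
        def create_port_chain(self, context, instance_clusters, flow_rules):
            # instance_clusters: ordered list of groups of like service
            # function instances (each group usable for load distribution).
            # flow_rules: flow descriptors classifying traffic into the chain.
            pass

        @abc.abstractmethod
        def delete_port_chain(self, context, chain_id):
            pass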

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450414] Re: can't get authentication with os-token and os-url

2015-05-04 Thread Eugene Nikanorov
What auth_url are you providing when making the call?

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450414

Title:
  can't get authentication with os-token and os-url

Status in Python client library for Neutron:
  Incomplete

Bug description:
  Hi, I can't get authentication with os-token and os-url on Juno
  pythone-neutronclient.

  On Icehouse, with os-token and os-url, we can get authentication.
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list
  
+--+--+--+
  | id   | name | subnets   
   |
  
+--+--+--+
  | 06c5d426-ec2c-4a19-a5c9-cfd21cfb5a0c | ext-net  | 
38d87619-9c76-481f-bfe8-b301e05693d9 193.160.15.0/24 |
  
+--+--+--+

  But on Juno, it failed. The detail :
  [root@compute01 ~]# neutron --os-token $token --os-url http://controller:9696 
net-list --debug 
  ERROR: neutronclient.shell Unable to determine the Keystone version to 
authenticate with using the given auth_url. Identity service may not support 
API version discovery. Please provide a versioned auth_url instead. 
  Traceback (most recent call last):
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 666, 
in run 
   self.initialize_app(remainder)
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 808, 
in initialize_app
   self.authenticate_user()
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 761, 
in authenticate_user
   auth_session = self._get_keystone_session()
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 904, 
in _get_keystone_session
   auth_url=self.options.os_auth_url)
 File /usr/lib/python2.6/site-packages/neutronclient/shell.py, line 889, 
in _discover_auth_versions
   raise exc.CommandError(msg)
   CommandError: Unable to determine the Keystone version to authenticate with 
using the given auth_url. Identity service may not support API version 
discovery. Please provide a versioned auth_url instead. 
  Unable to determine the Keystone version to authenticate with using the given 
auth_url. Identity service may not support API version discovery. Please 
provide a versioned auth_url instead. 

  
  my solution is this:
  On /usr/lib/python2.6/site-packages/neutronclient/shell.py, modify the 
authenticate_user(self) method.

   Origin:
   auth_session = self._get_keystone_session()

  Modified: 
  auth_session = None
  auth = None
  if not self.options.os_token:
  auth_session = self._get_keystone_session()
  auth = auth_session.auth

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1450414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449544] Re: Neutron-LB Health monitor association mismatch in horizon and CLI

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449544

Title:
  Neutron-LB Health monitor association mismatch in horizon and CLI

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a new pool is created, all the health monitors that are available
  are shown in the LB-Pool information in the Horizon dashboard.

  But in the CLI,

  neutron lb-pool-show <pool-id> shows no monitors associated with the newly
  created pool.
  Please refer to LB_HM_default_assoc_UI and LB_HM_default_assoc_CLI.

  After associating any health monitor with the pool via the CLI, the
  correct information is displayed in both the Horizon dashboard and the
  CLI. So it is only right after creating a new pool that the Horizon
  dashboard lists all the health monitors, which is wrong and needs to be
  corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449546] Re: Neutron-LB Health monitor association not listed in Horizon Dashboard

2015-05-04 Thread Eugene Nikanorov
** Project changed: neutron => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449546

Title:
  Neutron-LB Health monitor association not listed in Horizon Dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the LB-Pool Horizon dashboard,
  LB Pool --> Edit Pool --> Associate Monitor,

  it is expected that all the available health monitors are listed.
  But the list box is empty.

  Please find the attached screen shot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449344] Re: When VM security group is not choosen, the packets are still blocked by default

2015-05-04 Thread Eugene Nikanorov
It's by-design behavior; marking the bug as Invalid.

** Summary changed:

- When VM security group is not choose,the packets is still block by security 
group
+ When VM security group is not choosen, the packets are still blocked by 
default

** Summary changed:

- When VM security group is not choosen, the packets are still blocked by 
default
+ When VM security group is not chosen, the packets are still blocked by default

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449344

Title:
  When VM security group is not chosen, the packets are still blocked by
  default

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  1.1 Under the test tenant, create network net1 with subnet subnet1
  (192.168.1.0/24); keep the other settings at their defaults
  1.2 Create router R1, attach its internal interface to subnet1 and set the
  external network as R1's gateway
  1.3 Create VM1-1 on subnet1 with the default security group; the guest
  firewall is disabled
  1.4 Edit the security groups of VM1-1 and remove the default security
  group, so VM1-1 now has no security group
  1.5 VM1-1 fails to ping the subnet1 gateway 192.168.1.1

  Capturing on the tap.xxx device of the Linux bridge connected to VM1-1, we
  can see the ICMP request packets going from VM1-1 to 192.168.1.1.
  Capturing on qvb.xxx, we see no packets. Therefore the packets are dropped
  by the security group rules, even though no security group is selected for
  VM1-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419613] Re: Fails to run unit test in red hat 6.5 with tox

2015-05-04 Thread Eugene Nikanorov
I'm not sure this bug should be filed against upstream neutron. Marking
as Invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419613

Title:
  Fails to run unit test in red hat 6.5 with tox

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I did the following steps to run the neutron unit tests on Red Hat 6.5 and
  failed to run tox:

  # Fresh instllation of RED HAT 6.5
  # Installed the dependencies mentioned in the doc 
http://docs.openstack.org/developer/nova/devref/development.environment.html;

   sudo yum install python-devel openssl-devel python-pip git gcc 
libxslt-devel mysql-devel postgresql-devel libffi-devel libvirt-devel graphviz 
sqlite-devel
   pip install tox

  # cd neutron
  # tox 
  Failed with following error: 
  [root@localhost neutron]# tox -v neutron.tests.unit.api
  Traceback (most recent call last):
File /usr/bin/tox, line 5, in module
  from pkg_resources import load_entry_point
File /usr/lib/python2.6/site-packages/pkg_resources.py, line 2655, in 
module
  working_set.require(__requires__)
File /usr/lib/python2.6/site-packages/pkg_resources.py, line 648, in 
require
  needed = self.resolve(parse_requirements(requirements))
File /usr/lib/python2.6/site-packages/pkg_resources.py, line 546, in 
resolve
  raise DistributionNotFound(req)
  pkg_resources.DistributionNotFound: argparse
  [root@localhost neutron]# 

  
  Even though argparse module is installed successfully

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450006] Re: Create port in both subnets ipv4 and ipv6 when specified only one subnet

2015-05-04 Thread Eugene Nikanorov
This behavior is a result of commit
https://review.openstack.org/#/c/113339/

It should have had DocImpact in the commit message.
Anyway, marking the bug as Invalid since the behavior is by design.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450006

Title:
  Create port in both subnets ipv4 and ipv6 when specified only one
  subnet

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When I try to create port on fresh devstack in specific subnet:

  neutron port-create private --fixed-ip subnet_id=3f80621c-
  4b88-48bd-8794-2372ef56485c

  It is created in both ipv4 and ipv6 subnets:

  Created a new port:
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | binding:vnic_type | normal  
|
  | device_id | 
|
  | device_owner  | 
|
  | fixed_ips | {subnet_id: 
3f80621c-4b88-48bd-8794-2372ef56485c, ip_address: 10.0.0.18}  
  |
  |   | {subnet_id: 
f7166154-5064-4e85-9ab2-10877ca4bfad, ip_address: 
fd67:d76a:29be:0:f816:3eff:feda:83a0} |
  | id| b9811513-9a6c-4f2d-b7d4-583f32bfd8e5
|
  | mac_address   | fa:16:3e:da:83:a0   
|
  | name  | 
|
  | network_id| 73c8160b-647a-4cbe-81a4-0adab9ac67c8
|
  | security_groups   | f5915ed5-67e6-425b-913d-38193ca16e22
|
  | status| DOWN
|
  | tenant_id | 9574611fa7c04fab87617412a409f606 

  skr ~ $ neutron port-list
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 

  |
  
+--+--+---+-+
  | b9811513-9a6c-4f2d-b7d4-583f32bfd8e5 |  | fa:16:3e:da:83:a0 | 
{subnet_id: 3f80621c-4b88-48bd-8794-2372ef56485c, ip_address: 
10.0.0.18}|
  |  |  |   | 
{subnet_id: f7166154-5064-4e85-9ab2-10877ca4bfad, ip_address: 
fd67:d76a:29be:0:f816:3eff:feda:83a0} |
  
+--+--+---+-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448268] [NEW] Need to factor out test helper methods of db plugin test suite into reusable class

2015-04-24 Thread Eugene Nikanorov
Public bug reported:

Currently NeutronDbPluginV2TestCase contains both the generic API tests for
core plugins and a set of helper methods.
This test suite is then inherited by the test classes of all the particular
core plugins.

We need to extract the helper methods into a mixin class so they can be
reused by tests that exercise the base plugin as a DB mixin.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448268

Title:
  Need to factor out test helper methods of db plugin test suite into
  reusable class

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently NeutronDbPluginV2TestCase contains both the generic API tests
  for core plugins and a set of helper methods.
  This test suite is then inherited by the test classes of all the
  particular core plugins.

  We need to extract the helper methods into a mixin class so they can be
  reused by tests that exercise the base plugin as a DB mixin.
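
  The refactoring amounts to something like the following; class and helper
  names here are illustrative, not the actual Neutron test code:

    import unittest

    class PluginTestHelpersMixin(object):
        """Helper methods only -- no test_* methods live here."""

        def _make_network(self, name='net1'):
            # Stand-in for a real helper that drives the plugin API.
            return {'network': {'name': name}}

    class GenericPluginApiTestCase(PluginTestHelpersMixin, unittest.TestCase):
        # Plays the role of NeutronDbPluginV2TestCase: generic API tests
        # inherited by the core-plugin test classes.
        def test_create_network(self):
            self.assertEqual('net1', self._make_network()['network']['name'])

    class DbMixinTestCase(PluginTestHelpersMixin, unittest.TestCase):
        # Reuses the helpers without dragging in the generic API tests.
        def test_helper_reuse(self):
            self.assertIn('network', self._make_network())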

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446911] [NEW] Improve naming of methods in l3_dvr_db

2015-04-21 Thread Eugene Nikanorov
Public bug reported:

Currently many of the l3_dvr_db mixin method names do not start with an
underscore even though they are really local to the L3 DVR mixin class.

Let's make the code a little clearer for the reader and clean up the
naming.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446911

Title:
  Improve naming of methods in l3_dvr_db

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Currently many of the l3_dvr_db mixin method names do not start with an
  underscore even though they are really local to the L3 DVR mixin class.

  Let's make the code a little clearer for the reader and clean up the
  naming.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397087] Re: Bulk port create fails with conflict with some addresses fixed

2015-04-17 Thread Eugene Nikanorov
IMO, this just needs to be documented.
Obviously the ports that go first get their fixed IPs allocated, and a later
port creation may fail if that port specifies an already-allocated IP.

I don't think it makes sense to rearrange the input port list in any way.

** Changed in: neutron
   Status: Confirmed => Incomplete

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397087

Title:
  Bulk port create fails with conflict with some addresses fixed

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In the bulk version of the port create request, multiple port
  creations may be requested.

  If there is a port that does not specify a fixed_ip address, one is
  assigned to it. If a later port requests the same address, a conflict is
  detected and raised. The overall call succeeds or fails depending on which
  addresses from the pool are set to be assigned next and on the order of
  the requested ports.

  Steps to reproduce:

  # neutron net-create test_fixed_ports
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e |
  | name  | test_fixed_ports |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 4|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | d42d65485d674e0a9d007a06182e46f7 |
  +---+--+

  # neutron subnet-create test_fixed_ports 10.0.0.0/24
  Created a new subnet:
  +--++
  | Field| Value  |
  +--++
  | allocation_pools | {start: 10.0.0.2, end: 10.0.0.254} |
  | cidr | 10.0.0.0/24|
  | dns_nameservers  ||
  | enable_dhcp  | True   |
  | gateway_ip   | 10.0.0.1   |
  | host_routes  ||
  | id   | 5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a   |
  | ip_version   | 4  |
  | name ||
  | network_id   | af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e   |
  | tenant_id| d42d65485d674e0a9d007a06182e46f7   |
  +--++

  # cat ports.data
  {ports: [
  {
  name: A,
  network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e
  }, {
  fixed_ips: [{ip_address: 10.0.0.2}],
  name: B,
  network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e
  }
  ]}

  # TOKEN='a valid keystone token'

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST "http://127.0.1.1:9696/v2.0/ports" -d @ports.data
  {NeutronError: {message: Unable to complete operation for network 
af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e. The IP address 10.0.0.2 is in use., 
type: IpAddressInUse, detail: }}

  Positive case:

  # cat ports.data.rev
  {ports: [
  {
  name: A,
  fixed_ips: [{ip_address: 10.0.0.2}],
  network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e
  }, {
  name: B,
  network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e
  }
  ]}

  # curl -H 'Content-Type: application/json' -H 'X-Auth-Token: '$TOKEN -X POST "http://127.0.1.1:9696/v2.0/ports" -d @ports.data.rev
  {ports: [{status: DOWN, binding:host_id: , name: A, 
admin_state_up: true, network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e, 
tenant_id: 7b3e2f49d1fc4154ac5af10a4b9862c5, binding:vif_details: {}, 
binding:vnic_type: normal, binding:vif_type: unbound, device_owner: 
, mac_address: fa:16:3e:16:1e:50, binding:profile: {}, fixed_ips: 
[{subnet_id: 5369fb82-8ff6-4ec5-acf7-1d86d0ec9d2a, ip_address: 
10.0.0.2}], id: 75f5cdb7-5884-4583-9db1-73b946f94a04, device_id: }, 
{status: DOWN, binding:host_id: , name: B, admin_state_up: true, 
network_id: af3b3cc8-b556-4604-a4fe-ae9bffa8cd9e, tenant_id: 
7b3e2f49d1fc4154ac5af10a4b9862c5, binding:vif_details: {}, 
binding:vnic_type: normal, 

[Yahoo-eng-team] [Bug 1442272] [NEW] functional.agent.test_ovs_flows.ARPSpoofTestCase.test_arp_spoof_disable_port_security fails

2015-04-09 Thread Eugene Nikanorov
Public bug reported:

Logstash query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlNraXBwaW5nIEFSUCBzcG9vZmluZyBydWxlcyBmb3IgcG9ydFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjg1OTc0OTExMTN9


 Captured pythonlogging:
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.414 | ~~~
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.416 | 2015-04-09 16:18:04,042 
INFO [neutron.plugins.openvswitch.agent.ovs_neutron_agent] Skipping ARP 
spoofing rules for port 'test-port202660' because it has port security disabled
2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.417 | 2015-04-09 16:18:05,359 
   ERROR [neutron.agent.linux.utils] 
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.419 | Command: ['ip', 
'netns', 'exec', 'func-89a1f22f-b789-4b12-a70c-0f8dde1baf42', 'ping', '-c', 1, 
'-W', 1, '192.168.0.2']
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.420 | Exit code: 1
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.422 | Stdin: 
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.423 | Stdout: PING 
192.168.0.2 (192.168.0.2) 56(84) bytes of data.
2015-04-09 16:18:32.471 | 2015-04-09 16:18:32.425 | 
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.426 | --- 192.168.0.2 ping 
statistics ---
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.428 | 1 packets transmitted, 
0 received, 100% packet loss, time 0ms
2015-04-09 16:18:32.472 | 2015-04-09 16:18:32.429 | 
2015-04-09 16:18:32.666 | 2015-04-09 16:18:32.431 | 
2015-04-09 16:18:32.667 | 2015-04-09 16:18:32.432 | Stderr: 
2015-04-09 16:18:32.667 | 2015-04-09 16:18:32.439 | 
2015-04-09 16:18:32.668 | 2015-04-09 16:18:32.440 | 
2015-04-09 16:18:32.669 | 2015-04-09 16:18:32.442 | Captured traceback:
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.443 | ~~~
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.445 | Traceback (most recent 
call last):
2015-04-09 16:18:32.779 | 2015-04-09 16:18:32.447 |   File 
neutron/tests/functional/agent/test_ovs_flows.py, line 79, in 
test_arp_spoof_disable_port_security
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.448 | 
pinger.assert_ping(self.dst_addr)
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.450 |   File 
neutron/tests/functional/agent/linux/helpers.py, line 113, in assert_ping
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.451 | 
self._ping_destination(dst_ip)
2015-04-09 16:18:32.780 | 2015-04-09 16:18:32.453 |   File 
neutron/tests/functional/agent/linux/helpers.py, line 110, in 
_ping_destination
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.454 | '-W', 
self._timeout, dest_address])
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.456 |   File 
neutron/agent/linux/ip_lib.py, line 580, in execute
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.457 | 
extra_ok_codes=extra_ok_codes, **kwargs)
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.459 |   File 
neutron/agent/linux/utils.py, line 137, in execute
2015-04-09 16:18:32.781 | 2015-04-09 16:18:32.461 | raise 
RuntimeError(m)
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.462 | RuntimeError: 
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.464 | Command: ['ip', 
'netns', 'exec', 'func-89a1f22f-b789-4b12-a70c-0f8dde1baf42', 'ping', '-c', 1, 
'-W', 1, '192.168.0.2']
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.465 | Exit code: 1
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.467 | Stdin: 
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.469 | Stdout: PING 
192.168.0.2 (192.168.0.2) 56(84) bytes of data.
2015-04-09 16:18:32.782 | 2015-04-09 16:18:32.470 | 
2015-04-09 16:18:32.783 | 2015-04-09 16:18:32.472 | --- 192.168.0.2 ping 
statistics ---
2015-04-09 16:18:32.783 | 2015-04-09 16:18:32.473 | 1 packets transmitted, 
0 received, 100% packet loss, time 0ms

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442272

Title:
  
functional.agent.test_ovs_flows.ARPSpoofTestCase.test_arp_spoof_disable_port_security
  fails

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Logstash query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlNraXBwaW5nIEFSUCBzcG9vZmluZyBydWxlcyBmb3IgcG9ydFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjg1OTc0OTExMTN9

  
   Captured pythonlogging:
  2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.414 | ~~~
  2015-04-09 16:18:32.470 | 2015-04-09 16:18:32.416 | 2015-04-09 
16:18:04,042 INFO 

[Yahoo-eng-team] [Bug 1424593] Re: ObjectDeleted error when network already removed during rescheduling

2015-04-09 Thread Eugene Nikanorov
Changing back to High, as another condition was discovered that could lead
the auto-rescheduling loop to fail.

** Changed in: neutron
   Status: Fix Released => In Progress

** Changed in: neutron
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424593

Title:
  ObjectDeleted error when network already removed during rescheduling

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In some cases when concurrent rescheduling occurs, the following trace
  is observed:

  ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
  TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py, 
line 76, in _inner
  TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 269, 
in remove_networks_from_down_agents
  TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
  TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
  TRACE neutron.openstack.common.loopingcall value = callable_(state, 
passive)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
  TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
  TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
  TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance 
'NetworkDhcpAgentBinding at 0x52b1850' has been deleted, or its row is 
otherwise not present.

  We need to avoid accessing the DB object after it has been deleted from
  the DB, as attribute access may trigger this exception.
  This issue terminates the periodic network-rescheduling task.
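
  A sketch of the defensive pattern the fix needs: read the attributes that
  will be used before the row can disappear, and treat ObjectDeletedError as
  "another worker already handled it" so the rescheduling loop keeps
  running. Illustrative only; attribute names follow the trace above:

    from oslo_log import log as logging
    from sqlalchemy.orm import exc as orm_exc

    LOG = logging.getLogger(__name__)

    def reschedule_bindings(bindings, reschedule):
        for binding in bindings:
            try:
                # Touching a lazily-loaded attribute later may refresh a row
                # that no longer exists, so copy the scalars up front.
                net_id = binding.network_id
                agent_id = binding.dhcp_agent_id
            except orm_exc.ObjectDeletedError:
                LOG.debug("binding already removed concurrently, skipping")
                continue
            reschedule(net_id, agent_id)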

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427432] Re: lbaas related(?) check-grenade-dsvm-neutron failure

2015-04-07 Thread Eugene Nikanorov
This bug was apparently fixed by
https://review.openstack.org/#/c/160913/

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => Fix Committed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427432

Title:
  lbaas related(?) check-grenade-dsvm-neutron failure

Status in devstack - openstack dev environments:
  Fix Committed

Bug description:
  https://review.openstack.org/#/c/160523/  (purely doc-only change)
  
http://logs.openstack.org/23/160523/2/check/check-grenade-dsvm-neutron/6f82325/logs/new/screen-q-svc.txt.gz#_2015-03-02_23_28_04_319

  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config cls._instance = 
cls()
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/opt/stack/new/neutron/neutron/manager.py, line 128, in __init__
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config 
self._load_service_plugins()
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/opt/stack/new/neutron/neutron/manager.py, line 175, in _load_service_plugins
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config provider)
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/opt/stack/new/neutron/neutron/manager.py, line 133, in _get_plugin_instance
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config mgr = 
driver.DriverManager(namespace, plugin_provider)
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/usr/local/lib/python2.7/dist-packages/stevedore/driver.py, line 45, in 
__init__
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config 
verify_requirements=verify_requirements,
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/usr/local/lib/python2.7/dist-packages/stevedore/named.py, line 55, in 
__init__
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config 
verify_requirements)
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/usr/local/lib/python2.7/dist-packages/stevedore/extension.py, line 170, in 
_load_plugins
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config 
self._on_load_failure_callback(self, ep, err)
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File 
/usr/local/lib/python2.7/dist-packages/stevedore/driver.py, line 50, in 
_default_on_load_failure
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config raise err
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config ImportError: No 
module named neutron_lbaas.services.loadbalancer.plugin
  2015-03-02 23:28:04.319 15268 TRACE neutron.common.config

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkltcG9ydEVycm9yOiBObyBtb2R1bGUgbmFtZWQgbmV1dHJvbl9sYmFhcy5zZXJ2aWNlcy5sb2FkYmFsYW5jZXIucGx1Z2luXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MjUzNDM5MzMzNjN9

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1427432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412903] Re: Cisco Nexus VxLAN ML2: ping fails with fixed IP with remote Network Node

2015-04-07 Thread Eugene Nikanorov
I'm afraid it's not correct to file this bug against neutron, since it
refers to the cisco-openstack repository.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412903

Title:
  Cisco Nexus VxLAN ML2: ping fails with fixed IP with remote Network
  Node

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Testbed configuration to reproduce:
  - 1 TOR with 1 or 2 compute nodes
  - 1 TOR with 1 network node
  - 1 Spine connecting the 2 TOR

  data path: VM1-TOR1-Spine-TOR2-router-TOR2-Spine-TOR1-VM2
  VM1 and VM2 are on different networks.
  The MTU has been forced to 1400 on all VMs.
  The TOR are Nexus 9K (C9396PX NXOS 6.1)

  Ping from VM1 to VM2 using fixed IP fails 100% of the time.
  The test was performed with VM1 and VM2 running on different compute nodes 
under TOR1. It will also probably fail if they run on the same compute node 
under TOR1.

  Other data paths where ping is working:
  VM1-TOR-router-TOR-VM2 (VM1 and VM2 are in the same rack as the network 
node and run on the same compute node)
  VM1-TOR1-Spine-TOR2-router-TOR2-VM2 (VM2 is on the same rack as the 
network node)

  When using floating IPs (ie going through NAT) ping on:
  - VM in same rack as network node is OK: VM1-TOR-router--TOR-default 
gateway-TOR-VM2
  - at least one VM in a different rack than network node fails

  These issues can be reproduced manually using ping or in an automated
  way using the VMTP tool.

  Versioning:
  https://github.com/cisco-openstack/neutron.git (staging/junoplus branch)
  openstack and devstack: stable/juno

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440700] Re: Creating members with invalid/empty tenantid is not throwing error

2015-04-06 Thread Eugene Nikanorov
This has been raised multiple times in the past.
Neutron doesn't verify tenant_id against keystone. It's assumed that a regular
user calls neutron already authenticated, and an admin presumably knows what
they are doing.

I don't think we need to do anything about this issue, especially not in the
scope of a bug like this.

** Changed in: neutron
   Status: New => Opinion

** Tags added: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440700

Title:
  Creating members with invalid/empty tenantid is not throwing error

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Creating members with an invalid/empty tenant_id is successful (with the
logging_noop driver). It should throw an error during validation.
  The following are tempest test logs (with the logging_noop driver backend):

  
  0} 
neutron_lbaas.tests.tempest.v2.api.test_members.MemberTestJSON.test_create_member_empty_tenant_id
 [0.590837s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File neutron_lbaas/tests/tempest/v2/api/test_members.py, line 244, in 
test_create_member_empty_tenant_id
  self.pool_id, **member_opts)
File 
/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: bound method type._create_member 
of class 'neutron_lbaas.tests.tempest.v2.api.test_members.MemberTestJSON' 
returned {u'protocol_port': 80, u'weight': 1, u'admin_state_up': True, 
u'subnet_id': u'e20c013e-33d0-4752-883d-b78bd45ef0ea', u'tenant_id': u'', 
u'address': u'127.0.0.1', u'id': u'3f8d811f-ab69-44f8-ae18-8fc20a94b228'}

  
--
  {0} 
neutron_lbaas.tests.tempest.v2.api.test_members.MemberTestJSON.test_create_member_invalid_tenant_id
 [0.478688s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File neutron_lbaas/tests/tempest/v2/api/test_members.py, line 181, in 
test_create_member_invalid_tenant_id
  self.pool_id, **member_opts)
File 
/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 422, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/opt/stack/neutron-lbaas/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: bound method type._create_member 
of class 'neutron_lbaas.tests.tempest.v2.api.test_members.MemberTestJSON' 
returned {u'protocol_port': 80, u'weight': 1, u'admin_state_up': True, 
u'subnet_id': u'65676412-9961-4260-a3e0-8fbed62641a7', u'tenant_id': 
u'$232!$pw', u'address': u'127.0.0.1', u'id': 
u'8bde4a0c-8bb6-4c36-b8db-18b9228a1ade'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413231] Re: Traceback when creating VxLAN network using CSR plugin

2015-04-06 Thread Eugene Nikanorov
It's hard to triage this kind of bug without knowledge of the particular
backend.
Is there any evidence that neutron needs to be fixed and that this is not a
backend issue?

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1413231

Title:
  Traceback when creating VxLAN network using CSR plugin

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  OpenStack Version: Kilo

  localadmin@qa1:~$ nova-manage version
  2015.1
  localadmin@qa1:~$ neutron --version
  2.3.10

  I’m trying to run the vxlan tests on my multi node setup and I’m
  seeing the following  error/traceback in the
  screen-q-ciscocfgagent.log when creating a network with a vxlan
  profile.

  The error complains that it can’t find the nrouter-56f2cf VRF but it
  is present on the CSR.

  VRF is configured on the CSR – regular VLAN works fine

  csr#show run | inc vrf
  vrf definition Mgmt-intf
  vrf definition nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
   vrf forwarding nrouter-56f2cf
  ip nat inside source list acl_756 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source list acl_758 interface GigabitEthernet3.757 vrf 
nrouter-56f2cf overload
  ip nat inside source static 10.11.12.2 172.29.75.232 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.4 172.29.75.233 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 10.11.12.5 172.29.75.234 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.2 172.29.75.235 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.4 172.29.75.236 vrf nrouter-56f2cf 
match-in-vrf
  ip nat inside source static 210.168.1.5 172.29.75.237 vrf nrouter-56f2cf 
match-in-vrf
  ip route vrf nrouter-56f2cf 0.0.0.0 0.0.0.0 172.29.75.225
  csr#


  2015-01-19 12:22:09.896 DEBUG neutron.agent.linux.utils [-] 
  Command: ['ping', '-c', '5', '-W', '1', '-i', '0.2', '10.0.100.10']
  Exit code: 0
  Stdout: 'PING 10.0.100.10 (10.0.100.10) 56(84) bytes of data.\n64 bytes from 
10.0.100.10: icmp_seq=1 ttl=255 time=1.74 ms\n64 bytes from 10.0.100.10: 
icmp_seq=2 ttl=255 time=1.09 ms\n64 bytes from 10.0.100.10: icmp_seq=3 ttl=255 
time=0.994 ms\n64 bytes from 10.0.100.10: icmp_seq=4 ttl=255 time=0.852 ms\n64 
bytes 
  from 10.0.100.10: icmp_seq=5 ttl=255 time=0.892 ms\n\n--- 10.0.100.10 ping 
statistics ---\n5 packets transmitted, 5 received, 0% packet loss, time 
801ms\nrtt min/avg/max/mdev = 0.852/1.116/1.748/0.328 ms\n'
  Stderr: '' from (pid=13719) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:79
  2015-01-19 12:22:09.897 DEBUG neutron.plugins.cisco.cfg_agent.device_status 
[-] Hosting device: 27b14fc6-b1c9-4deb-8abe-ae3703a4af2d@10.0.100.10 is 
reachable. from (pid=13719) is_hosting_device_reachable 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_status.py:115
  2015-01-19 12:22:10.121 INFO 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRFs:[]
  2015-01-19 12:22:10.122 ERROR 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
VRF nrouter-56f2cf not present
  2015-01-19 12:22:10.237 DEBUG 
neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver [-] 
RPCReply for CREATE_SUBINTERFACE is ?xml version=1.0 
encoding=UTF-8?rpc-reply 
message-id=urn:uuid:b3476042-9fff-11e4-b07d-f872eaedf376 
xmlns=urn:ietf:params:netconf:base:1.0rpc-errorerror-typeprotocol/error-typeerror-tagoperation-failed/error-tagerror-severityerror/error-severity/rpc-error/rpc-reply
 from (pid=13719) _check_response 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/device_drivers/csr1kv/csr1kv_routing_driver.py:676
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.
  2015-01-19 12:22:10.238 ERROR 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper [-] Driver 
Exception on router:56f2cfbc-61c6-45dc-94d5-0cbb08b05053. Error is Error 
executing snippet:CREATE_SUBINTERFACE. ErrorType:protocol 
ErrorTag:operation-failed.

  
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper Traceback 
(most recent call last):
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 
/opt/stack/neutron/neutron/plugins/cisco/cfg_agent/service_helpers/routing_svc_helper.py,
 line 379, in _process_routers
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper 
self._process_router(ri)
  2015-01-19 12:22:10.238 TRACE 
neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper   File 

[Yahoo-eng-team] [Bug 1437762] Re: portbindingsport does not have all ports causing ml2 migration failure

2015-04-05 Thread Eugene Nikanorov
That might be a bug in the Icehouse migrations, which probably will not be
fixed.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1437762

Title:
  portbindingsport does not have all ports causing ml2 migration failure

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  I am trying to move from Havana to Icehouse on Ubuntu 14.04. The
  migration is failing because the ml2 migration expects the
  portbindingsport table to contain all the ports. However, my record
  count in ports is 460 and in portsportbinding just 192, so only 192
  records get added to ml2_port_bindings.

  The consequence of this is that the network node and the compute nodes
  add unbound interfaces to the ml2_port_bindings table. Additionally,
  nova-compute updates its network info with wrong information, causing a
  subsequent restart of nova-compute to fail with a vif_type=unbound
  error. Besides that, the instances on the nodes do not get network
  connectivity.

  Let's just say I am happy that I made a backup, because the DB now gets
  into an inconsistent state every time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1437762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1039304] Re: make ip + interface util apis indepedent of DictModel

2015-04-04 Thread Eugene Nikanorov
This report is probably too old to be relevant to the current state of the
project.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1039304

Title:
  make ip + interface util apis indepedent of DictModel

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  I noticed at least one agent library method (get_device_name() in
  interface.py) that seems to depend on the DictModel stuff from the
  DHCP work.  These APIs should be independent of DictModel, and if a
  data representation of a port needs to be passed, it should just be
  the standard dictionary formats for port/network/subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1039304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397489] Re: VM boot failure since nova to neutron port notification fails

2015-04-04 Thread Eugene Nikanorov
This is a keystone problem which in turn causes failures in neutron-nova
communication.
I don't think neutron can be changed to avoid it.

** Changed in: neutron
   Status: In Progress => Confirmed

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397489

Title:
  VM boot failure  since nova to neutron port notification fails

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When I run the latest devstack and use nova boot to create a VM, it
  fails.

  nova show for the VM reports: message: Build of instance cb509a04-ca8a-
  491f-baf1-be01b15f4946 aborted: Failed to allocate the network(s), not
  rescheduling., code: 500, details:   File
  \"/opt/stack/nova/nova/compute/manager.py\", line 2030, in
  _do_build_and_run_instance

  and the following error in the nova-compute.log:

  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1714, 
in _spawn
  block_device_info)
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
2266, in spawn
  block_device_info)
File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3681, in _create_domain_and_network
  raise exception.VirtualInterfaceCreateException()
  VirtualInterfaceCreateException: Virtual Interface creation failed

  Adding vif_plugging_is_fatal = False and vif_plugging_timeout = 5
  to the compute nodes stops the missing message from being fatal and
  guests can then be spawned normally and accessed over the network.

  The bug https://bugs.launchpad.net/nova/+bug/1348103 says this happened
  in a cells environment, but it does not happen only in cells
  environments. This problem deserves more of our attention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440309] Re: Fwaas - update/create firewall will use the associated policy no matter whether it's audited or not

2015-04-04 Thread Eugene Nikanorov
This is better discussed with the FWaaS team first. The description doesn't
look like a valid bug report.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440309

Title:
  Fwaas - update/create firewall will use the associated policy no
  matter whether it's audited or not

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  New Firewall Rules cannot be directly added to a virtual Firewall. The
  rules have to be first added to a Firewall Policy and the Firewall
  Policy has to be reapplied for the rules to take effect

  This two-step process allows the Firewall Policy to be audited after
  the new rules are added and before the policy is reapplied to a
  Firewall.

  However, in the implementation, create/update firewall uses the
  associated firewall policy whether it is audited or not, which
  makes these design decisions meaningless.

  I consider the right implementation to be similar to a git workflow.

  1. The audited firewall policy is the master branch and create/update 
firewall can only use the master branch.
  2. A modification to a firewall policy is just like a feature branch. Once
its audited attribute is set to True, it gets merged back into the master branch

  So this implies:

  1. Create a firewall policy must have audited set to True
  2. we should support version control for firewall policy, so rollback is 
available

  It's a lot of work, which suggests that we should rethink the
  necessity of the audit step.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439067] [NEW] use db retry decorator from oslo.db

2015-04-01 Thread Eugene Nikanorov
Public bug reported:

Use oslo.db common retrying decorator rather than retrying library. 
Also, get rid of home-grown code in favor of oslo.db
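
A minimal sketch of the direction proposed here, assuming oslo.db's
wrap_db_retry decorator; the wrapped function, its arguments and the retry
values are illustrative, not taken from the Nova code base:

    from oslo_db import api as oslo_db_api


    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True,
                               retry_interval=0.5, inc_retry_interval=True)
    def update_instance_row(context, instance_uuid, values):
        # On a deadlock (or other retriable DB error) the decorator re-invokes
        # the whole function with increasing back-off, replacing home-grown
        # retry loops and the external `retrying` library.
        pass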

** Affects: nova
 Importance: Undecided
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Eugene Nikanorov (enikanorov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439067

Title:
  use db retry decorator from oslo.db

Status in OpenStack Compute (Nova):
  New

Bug description:
  Use oslo.db common retrying decorator rather than retrying library. 
  Also, get rid of home-grown code in favor of oslo.db

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391806] Re: 'neutron port-list' is missing binding:vnic_type filter

2015-03-31 Thread Eugene Nikanorov
Looking at the discussion on the review, I doubt we should fix this at all.

** Tags added: api

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391806

Title:
  'neutron port-list' is missing binding:vnic_type filter

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  An example usage would be to filter the ports that have 
binding:vnic_type=direct
  # neutron port-list --binding:vnic_type=direct

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438159] Re: Made neutron agents silent by using AMQP

2015-03-30 Thread Eugene Nikanorov
This looks like a deep rework of the current messaging strategy and should
probably be handled within the scope of a blueprint rather than a bug.

So I would suggest filing a blueprint for this and providing a spec
explaining these ideas, since spec review would be a better place to
discuss them.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438159

Title:
  Made neutron agents silent by using AMQP

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Problem: Neutron agents do a lot of periodic tasks, each of which leads to an
rpc call + database transaction that often does not even provide any new
information, because nothing has changed.
  At scale this behaviour can be called a `DDOS attack`; generally this kind
of architecture scales badly and can be considered an anti-pattern.

  Instead of periodic polling, we can leverage the AMQP broker's binding
capabilities.
  Neutron has many situations, like security group rule changes or DVR-related
changes, which need to be communicated to multiple agents, but usually not
to all agents.

  At startup the agent needs to synchronise as usual, but during the
  sync it can subscribe to the interesting events to avoid the
  periodic tasks. (Note: after the first subscribe loop a second one is
  needed so as not to miss changes made during the subscribe process.)

  AMQP queues with 'auto-delete' can be considered a reliable source of
information which does not miss any event notification.
  On connection loss or a full broker cluster failure the agent needs to re-sync
everything guarded in this way;
  in these cases the queue will disappear, so the situation is easily detectable.

  1. Create a direct exchange for each kind of resource type that needs
  to be synchronised in this way, for example 'neutron.securitygroups'.
  The exchange declaration needs to happen at q-svc start-up time or
  after a full broker cluster failure (a not-found exception will
  indicate it). The exchange SHOULD NOT be redeclared or verified on
  every message publish.

  2. Every agent creates a dedicated per-agent queue with the auto-delete flag;
if the agent already maintains a queue with this property it MAY reuse that one.
The agents SHOULD avoid creating multiple queues per resource type. The
messages MUST contain type information.
  3. Each agent creates a binding between its queue and the resource type
exchange when it realises it depends on the resource, for example when it
maintains at least one port with the given security group. (The agent needs
to remove the binding when it stops using it.)
  4. The q-svc publishes just a single message when a resource-related
change happens. The routing key is the resource uuid.

  Alternatively a topic exchange can be used, with a single exchange.
  In this case the routing keys MUST contain the resource type, like:
neutron.resource_type.uuid;
  this type of exchange is generally more expensive than a direct exchange
(pattern matching), and is only useful if some agents need to listen
to ALL events related to a type while others are interested in just a few of
them.
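
  As a rough illustration of the direct-exchange variant sketched above, a
  minimal kombu-based example follows; the exchange name, queue name and
  routing keys are made-up placeholders, and real Neutron code would go
  through oslo.messaging rather than raw kombu:

    from kombu import Connection, Exchange, Queue

    sg_exchange = Exchange('neutron.securitygroups', type='direct')

    with Connection('amqp://guest:guest@localhost//') as conn:
        channel = conn.channel()

        # Per-agent queue; auto_delete means the queue vanishes if the agent
        # (or the whole broker cluster) dies, making the need for a full
        # resync easy to detect.
        queue = Queue('sg-agent-compute-1', exchange=sg_exchange,
                      routing_key='sec-group-uuid-1', auto_delete=True)
        bound = queue(channel)
        bound.declare()

        # Subscribe to one more security group this agent now depends on;
        # the binding is removed again when the last port using it goes away.
        bound.bind_to(exchange=sg_exchange, routing_key='sec-group-uuid-2')

        # Server side: publish a single message per change, routed by the
        # resource uuid, so only interested agents receive it.
        producer = conn.Producer(serializer='json')
        producer.publish({'event': 'updated', 'id': 'sec-group-uuid-2'},
                         exchange=sg_exchange,
                         routing_key='sec-group-uuid-2',
                         declare=[sg_exchange])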

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418635] Re: Neutron API reference is inconsistent and differs from implementation

2015-03-30 Thread Eugene Nikanorov
** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1418635

Title:
  Neutron API reference is inconsistent and differs from implementation

Status in OpenStack API documentation site:
  New

Bug description:
  I'm implementing modules for SaltStack (not yet on GitHub) to create entities 
in OpenStack Neutron and came across quite a few problems 
  with the API documentation:

  * Link Filtering and Column Selection to 
http://docs.openstack.org/api/openstack-network/2.0/content/filtering.html
    404s

  * Section http://developer.openstack.org/api-ref-
  networking-v2.html#subnets

    * POST /v2.0/subnets aka Create subnet
  - 'id' is listed to be an optional parameter but the Neutron-API in 
Icehouse refuses to set a user-selected ID
  - parameters 'dns_nameservers' and 'host_routes' missing from 
documentation (undocumented extensions?)

    * GET /v2.0/subnets aka List subnets
  - can't filter by allocation_pools or enable_dhcp

    * PUT /v2.0/subnets/​{subnet_id}​ aka Update subnet
  - parameters allocation_pools, network_id, tenant_id, id listed 
as optional request parameters but Neutron-API in Icehouse returns 
Cannot update read-only attribute $PARAMETER

  * Section http://developer.openstack.org/api-ref-
  networking-v2.html#networks

    * GET /v2.0/networks aka List networks
  - parameter shared is ignored as a filter

  * Section http://developer.openstack.org/api-ref-
  networking-v2.html#layer3

    * POST /v2.0/routers aka Create router
  - Description states router:external and external_gateway_info are 
valid request parameters but they're not listed in the table of 
request parameters
  - What is the parameter \"router\", described as \"A router object.\",
supposed to be? A router object in JSON/XML notation inside a router object in
JSON/XML notation?

  I'll probably add more when implementing functions for managing
  routers in Neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1418635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436922] [NEW] Improve readability of l3_agent_scheduler module

2015-03-26 Thread Eugene Nikanorov
Public bug reported:

Start protected method names with underscores to indicate how they're
going to be used.

This is convenient when understanding class relationships.

** Affects: neutron
 Importance: Wishlist
 Assignee: Eugene Nikanorov (enikanorov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1436922

Title:
  Improve readability of l3_agent_scheduler module

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Start protected method names with underscores to indicate how they're
  going to be used.

  This is convenient when understanding class relationships.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1436922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435919] [NEW] Traceback on listing security groups

2015-03-24 Thread Eugene Nikanorov
Public bug reported:

The following traceback has been observed in the gate jobs (it doesn't
lead to a job's failure though):

 TRACE neutron.api.v2.resource Traceback (most recent call last):
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
 TRACE neutron.api.v2.resource result = method(request=request, **args)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 311, in index
 TRACE neutron.api.v2.resource return self._items(request, True, parent_id)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 245, in _items
 TRACE neutron.api.v2.resource obj_list = obj_getter(request.context, 
**kwargs)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 178, in 
get_security_groups
 TRACE neutron.api.v2.resource self._ensure_default_security_group(context, 
tenant_id)
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 553, in 
_ensure_default_security_group
 TRACE neutron.api.v2.resource return default_group['security_group_id']
 TRACE neutron.api.v2.resource   File /usr/lib/python2.7/contextlib.py, line 
24, in __exit__
 TRACE neutron.api.v2.resource self.gen.next()
 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/api.py, line 59, in autonested_transaction
 TRACE neutron.api.v2.resource yield tx
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 482, 
in __exit__
 TRACE neutron.api.v2.resource self.rollback()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
 TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 479, 
in __exit__
 TRACE neutron.api.v2.resource self.commit()
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 382, 
in commit
 TRACE neutron.api.v2.resource self._assert_active(prepared_ok=True)
 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 218, 
in _assert_active
 TRACE neutron.api.v2.resource This Session's transaction has been rolled 
back 
 TRACE neutron.api.v2.resource InvalidRequestError: This Session's transaction 
has been rolled back by a nested rollback() call.  To begin a new transaction, 
issue Session.rollback() first.

Example:

http://logs.openstack.org/17/165117/6/check/check-tempest-dsvm-neutron-
pg/7017248/logs/screen-q-svc.txt.gz?level=TRACE

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: Confirmed

** Description changed:

- The following traceback has been observed in the gate jobs:
+ The following traceback has been observed in the gate jobs (it doesn't
+ lead to a job's failure though):
  
-  TRACE neutron.api.v2.resource Traceback (most recent call last):
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
-  TRACE neutron.api.v2.resource result = method(request=request, **args)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 311, in index
-  TRACE neutron.api.v2.resource return self._items(request, True, 
parent_id)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 245, in _items
-  TRACE neutron.api.v2.resource obj_list = obj_getter(request.context, 
**kwargs)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 178, in 
get_security_groups
-  TRACE neutron.api.v2.resource 
self._ensure_default_security_group(context, tenant_id)
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/securitygroups_db.py, line 553, in 
_ensure_default_security_group
-  TRACE neutron.api.v2.resource return default_group['security_group_id']
-  TRACE neutron.api.v2.resource   File /usr/lib/python2.7/contextlib.py, 
line 24, in __exit__
-  TRACE neutron.api.v2.resource self.gen.next()
-  TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/api.py, line 59, in autonested_transaction
-  TRACE neutron.api.v2.resource yield tx
-  TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 482, 
in __exit__
-  TRACE neutron.api.v2.resource self.rollback()
-  TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
-  TRACE neutron.api.v2.resource compat.reraise(exc_type, exc_value, exc_tb)
-  TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist

[Yahoo-eng-team] [Bug 1430832] Re: Rally error Failed to delete network for tenant with scenario VMTasks.boot_runcommand_delete

2015-03-11 Thread Eugene Nikanorov
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430832

Title:
  Rally error Failed to delete network for tenant with scenario
  VMTasks.boot_runcommand_delete

Status in Mirantis OpenStack:
  New

Bug description:
  # fuel --fuel-version
  api: '1.0'
  astute_sha: 1be5b9b827f512d740fe907c7ff72486d4030938
  auth_required: true
  build_id: 2015-03-02_14-00-04
  build_number: '154'
  feature_groups:
  - mirantis
  fuellib_sha: b17e3810dbca407fca2a231c26f553a46e166343
  fuelmain_sha: baf24424a4e056c6753913de5f8c94851903f718
  nailgun_sha: f034fbb4b68be963e4dc5b5d680061b54efbf605
  ostf_sha: 103d6cf6badd57b791cfaf4310ec8bd81c7a8a46
  production: docker
  python-fuelclient_sha: 3ebfa9c14a192d0298ff787526bf990055a23694
  release: '6.1'
  release_versions:
2014.2-6.1:
  VERSION:
api: '1.0'
astute_sha: 1be5b9b827f512d740fe907c7ff72486d4030938
build_id: 2015-03-02_14-00-04
build_number: '154'
feature_groups:
- mirantis
fuellib_sha: b17e3810dbca407fca2a231c26f553a46e166343
fuelmain_sha: baf24424a4e056c6753913de5f8c94851903f718
nailgun_sha: f034fbb4b68be963e4dc5b5d680061b54efbf605
ostf_sha: 103d6cf6badd57b791cfaf4310ec8bd81c7a8a46
production: docker
python-fuelclient_sha: 3ebfa9c14a192d0298ff787526bf990055a23694
release: '6.1'

  Ubuntu, HA, neutron + VLAN, Ceph for volumes, images, ephemeral, objects
  Controllers: 3, computes: 47

  Scenario VMTasks.boot_runcommand_delete fails with a timeout:

  Traceback (most recent call last):
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/runners/base.py, 
line 77, in _run_scenario_once
  method_name)(**kwargs) or scenario_output
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/scenarios/vm/vmtasks.py,
 line 108, in boot_runcommand_delete
  password, interpreter, script)
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/scenarios/vm/utils.py,
 line 63, in run_command
  self.wait_for_ping(server_ip)
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/scenarios/base.py,
 line 254, in func_atomic_actions
  f = func(self, *args, **kwargs)
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/scenarios/vm/utils.py,
 line 51, in wait_for_ping
  timeout=120
File 
/opt/stack/.venv/lib/python2.7/site-packages/rally/benchmark/utils.py, line 
109, in wait_for
  raise exceptions.TimeoutException()
  TimeoutException: Timeout exceeded.

  Rally's log contain errors like:

  2015-03-10 22:15:55.181 32756 DEBUG neutronclient.client [-] RESP:409 
{'date': 'Tue, 10 Mar 2015 22:15:55 GMT', 'connection': 'close', 
'content-type': 'application/json; charset=UTF-8', 'content-length': '204', 
'x-openstack-request-id': 'req-5f0bce57-ed0c-4fff-904d-a71ba37a30d6'} 
{NeutronError: {message: Unable to complete operation on subnet 
ef8cae84-92d1-4c1d-ac2d-9c48a8af6d58. One or more ports have an IP allocation 
from this subnet., type: SubnetInUse, detail: }}
   http_log_resp 
/opt/stack/.venv/lib/python2.7/site-packages/neutronclient/common/utils.py:139
  2015-03-10 22:15:55.181 32756 DEBUG neutronclient.v2_0.client [-] Error 
message: {NeutronError: {message: Unable to complete operation on subnet 
ef8cae84-92d1-4c1d-ac2d-9c48a8af6d58. One or more ports have an IP allocation 
from this subnet., type: SubnetInUse, detail: }} 
_handle_fault_response 
/opt/stack/.venv/lib/python2.7/site-packages/neutronclient/v2_0/client.py:173
  2015-03-10 22:15:55.183 32756 ERROR rally.benchmark.context.network [-] 
Failed to delete network for tenant 01c5e2e04cad4ec586ac736f4e9c1433
   reason: Unable to complete operation on subnet 
ef8cae84-92d1-4c1d-ac2d-9c48a8af6d58. One or more ports have an IP allocation 
from this subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1430832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429753] [NEW] Improve performance of security groups rpc-related code

2015-03-09 Thread Eugene Nikanorov
Public bug reported:

In cases when a large number of VMs (2-3 thousand or more) reside in one L2
network, security group listing for the ports requested by OVS agents
consumes a significant amount of CPU.

When a VM is spawned on such a network, every OVS agent requests updated
security group info for each of its devices.
The total time needed to process all the RPC requests caused by a single VM
spawn may reach tens of CPU-seconds.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429753

Title:
  Improve performance of security groups rpc-related code

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In cases when a large number of VMs (2-3 thousand or more) reside in one L2
  network, security group listing for the ports requested by OVS agents
  consumes a significant amount of CPU.

  When a VM is spawned on such a network, every OVS agent requests updated
security group info for each of its devices.
  The total time needed to process all the RPC requests caused by a single VM
spawn may reach tens of CPU-seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429753/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429737] [NEW] Do not notify dead DHCP agent about removed network

2015-03-09 Thread Eugene Nikanorov
Public bug reported:

In cases when networks are removed from a dead DHCP agent in the process of
autorescheduling, notifying the dead agent leaves messages in its queue.
If that agent is started again, these messages are the first it will
process.
If there are a dozen such messages, their processing may overlap with the
processing of active networks, so the DHCP agent may potentially disable DHCP
for active networks that it hosts.

An example of such a problem is a system with one DHCP agent that is
stopped, has its networks removed and is then started again. The more
networks the agent hosts, the higher the chance that some
network.delete.end notification is processed after DHCP has been enabled on
that network.

** Affects: neutron
 Importance: Medium
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429737

Title:
  Do not notify dead DHCP agent about removed network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In cases when networks are removed from a dead DHCP agent in the process of
autorescheduling, notifying the dead agent leaves messages in its queue.
  If that agent is started again, these messages are the first it will
process.
  If there are a dozen such messages, their processing may overlap with the
processing of active networks, so the DHCP agent may potentially disable DHCP
for active networks that it hosts.

  An example of such a problem is a system with one DHCP agent that is
  stopped, has its networks removed and is then started again. The more
  networks the agent hosts, the higher the chance that some
  network.delete.end notification is processed after DHCP has been enabled on
  that network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424593] [NEW] ObjectDeleted error when network already removed during rescheduling

2015-02-23 Thread Eugene Nikanorov
Public bug reported:

In some cases when concurrent rescheduling occurs, the following trace
is observed:

ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py, 
line 76, in _inner
TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 269, 
in remove_networks_from_down_agents
TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
TRACE neutron.openstack.common.loopingcall value = callable_(state, passive)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance 
'NetworkDhcpAgentBinding at 0x52b1850' has been deleted, or its row is 
otherwise not present.

Need to avoid accessing db object after it has been deleted from db as 
attribute access may trigger this exception.
This issue terminates periodic task of rescheduling networks.

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424593

Title:
  ObjectDeleted error when network already removed during rescheduling

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some cases when concurrent rescheduling occurs, the following trace
  is observed:

  ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
  TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py, 
line 76, in _inner
  TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 269, 
in remove_networks_from_down_agents
  TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 239, in 
__get__
  TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py, line 589, in 
get
  TRACE neutron.openstack.common.loopingcall value = callable_(state, 
passive)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py, line 424, in 
__call__
  TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
  TRACE neutron.openstack.common.loopingcall   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py, line 614, in 
load_scalar_attributes
  TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
  TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance 
'NetworkDhcpAgentBinding at 0x52b1850' has been deleted, or its row is 
otherwise not present.

  Need to avoid accessing db object after it has been deleted from db as 
attribute access may trigger this exception.
  This issue terminates periodic task of rescheduling networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424578] [NEW] DetachedInstanceError when binding network to agent

2015-02-23 Thread Eugene Nikanorov
Public bug reported:

TRACE neutron.db.agentschedulers_db Traceback (most recent call last):
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 192, 
in _schedule_network
TRACE neutron.db.agentschedulers_db agents = self.schedule_network(context, 
network)
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 400, 
in schedule_network
TRACE neutron.db.agentschedulers_db self, context, created_network)
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 91, in schedule
TRACE neutron.db.agentschedulers_db self._schedule_bind_network(context, 
chosen_agents, network['id'])
TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 51, in _schedule_bind_network
TRACE neutron.db.agentschedulers_db LOG.info(_('Agent %s already present'), 
agent)
...
TRACE neutron.db.agentschedulers_db DetachedInstanceError: Instance Agent at 
0x5ff1710 is not bound to a Session; attribute refresh operation cannot proceed
2015-02-21 14:45:15.927 1417 TRACE neutron.db.agentschedulers_db

Need to print saved agent_id instead of using db object.
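
A minimal sketch of the intended fix (names are illustrative, not the actual
Neutron scheduler code): capture the plain id while the object is still
session-bound and log that, since string-formatting the ORM object itself can
trigger a lazy refresh and raise DetachedInstanceError.

    import logging

    LOG = logging.getLogger(__name__)

    def log_existing_agent(agent):
        # Read the scalar while the session is still usable.
        agent_id = agent.id
        # Logging the plain string is always safe; logging `agent` could
        # attempt an attribute refresh on a detached instance.
        LOG.info('Agent %s already present', agent_id)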

** Affects: neutron
 Importance: Low
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424578

Title:
  DetachedInstanceError when binding network to agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  TRACE neutron.db.agentschedulers_db Traceback (most recent call last):
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 192, 
in _schedule_network
  TRACE neutron.db.agentschedulers_db agents = 
self.schedule_network(context, network)
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py, line 400, 
in schedule_network
  TRACE neutron.db.agentschedulers_db self, context, created_network)
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 91, in schedule
  TRACE neutron.db.agentschedulers_db self._schedule_bind_network(context, 
chosen_agents, network['id'])
  TRACE neutron.db.agentschedulers_db   File 
/usr/lib/python2.7/dist-packages/neutron/scheduler/dhcp_agent_scheduler.py, 
line 51, in _schedule_bind_network
  TRACE neutron.db.agentschedulers_db LOG.info(_('Agent %s already 
present'), agent)
  ...
  TRACE neutron.db.agentschedulers_db DetachedInstanceError: Instance Agent at 
0x5ff1710 is not bound to a Session; attribute refresh operation cannot proceed
  2015-02-21 14:45:15.927 1417 TRACE neutron.db.agentschedulers_db

  Need to print saved agent_id instead of using db object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417267] Re: neutron-ovs-agent on compute couldn't create interface

2015-02-22 Thread Eugene Nikanorov
So apparently this issue, along with some others, was caused by unstable
RabbitMQ behaviour under certain conditions in a particular environment.

I don't see an issue on the neutron side for now.

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417267

Title:
  neutron-ovs-agent on compute couldn't create interface

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Steps to reproduce:

  1. Create an env with two powerful compute nodes and start provisioning
instances.
  2. After 130 instances one of the compute nodes starts failing to provision
because it is unable to create interfaces. Here is the trace from the logs:
http://paste.openstack.org/show/164308/
  3. Restarting the agent restores the compute node's functionality.
  4. After some more VMs are provisioned, the ovs-agent on another node stops
working with the same symptoms.
  5. Restarting the agent restores the compute node's functionality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424099] Re: Unable to pass additional parameters to update_router tempest test case

2015-02-22 Thread Eugene Nikanorov
Removing Neutron project.

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424099

Title:
  Unable to pass additional parameters to update_router tempest test
  case

Status in Tempest:
  New

Bug description:
  While writing a tempest test case, I encountered the following:

  Consider the following scenario:

  Suppose a third-party plugin has additional attributes that can be
  passed during router creation and router update.

  Now, the _update_router method in our JSON network client does not
  consider these additional parameters.

  See the method _update_router in json network client. (
  tempest/services/network/json/network_client.py )

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1424099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382064] Re: Failure to allocate tunnel id when creating networks concurrently

2015-02-20 Thread Eugene Nikanorov
** Changed in: neutron
   Status: Fix Released => In Progress

** Changed in: neutron
 Milestone: kilo-2 => kilo-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382064

Title:
  Failure to allocate tunnel id when creating networks concurrently

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When multiple networks are created concurrently, the following trace
  is observed:

  WARNING neutron.plugins.ml2.drivers.helpers 
[req-34103ce8-b6d0-459b-9707-a24e369cf9de None] Allocate gre segment from pool 
failed after 10 failed attempts
  DEBUG neutron.context [req-2995f877-e3e6-4b32-bdae-da6295e492a1 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  DEBUG neutron.plugins.ml2.drivers.helpers 
[req-3541998d-44df-468f-b65b-36504e893dfb None] Allocate gre segment from pool, 
attempt 1 failed with segment {'gre_id': 300L} 
allocate_partially_specified_segment 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py:138
  DEBUG neutron.context [req-6dcfb91d-2c5b-4e4f-9d81-55ba381ad232 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  ERROR neutron.api.v2.resource [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] 
create failed
  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
  TRACE neutron.api.v2.resource result = method(request=request, **args)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 448, in create
  TRACE neutron.api.v2.resource obj = obj_creator(request.context, **kwargs)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py, line 497, in 
create_network
  TRACE neutron.api.v2.resource tenant_id)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py, line 160, 
in create_network_segments
  TRACE neutron.api.v2.resource segment = self.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py, line 189, 
in allocate_tenant_segment
  TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/type_tunnel.py, 
line 115, in allocate_tenant_segment
  TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  TRACE neutron.api.v2.resource File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py, line 
143, in allocate_partially_specified_segment
  TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  TRACE neutron.api.v2.resource NoNetworkFoundInMaximumAllowedAttempts: Unable 
to create the network. No available network found in maximum allowed attempts.
  TRACE neutron.api.v2.resource

  Additional conditions: multi-server deployment and MySQL.
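
  The trace shows allocate_partially_specified_segment giving up after 10
  attempts when several API workers race for the same rows. A stripped-down
  illustration of that retry-on-collision pattern follows; it is not the
  actual ml2 helper code, and select_unallocated / try_claim are
  hypothetical callables:

  MAX_ATTEMPTS = 10

  class NoSegmentFound(Exception):
      pass

  def allocate_segment(select_unallocated, try_claim):
      # select_unallocated() returns a free candidate (e.g. a gre_id) or
      # None if the pool is exhausted; try_claim(seg) atomically marks it
      # allocated and returns False if another worker got there first.
      for _ in range(MAX_ATTEMPTS):
          candidate = select_unallocated()
          if candidate is None:
              break
          if try_claim(candidate):
              return candidate
          # Lost the race for this row; pick another candidate and retry.
      raise NoSegmentFound(
          'unable to allocate a segment in %d attempts' % MAX_ATTEMPTS)

  Under heavy concurrent network creation all ten attempts can collide,
  which appears to be how the NoNetworkFoundInMaximumAllowedAttempts
  error in the trace surfaces.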

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179468] Re: Refactor lbaas plugin to not inherit from Lbaas Db plugin

2015-02-19 Thread Eugene Nikanorov
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Eugene Nikanorov (enikanorov) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179468

Title:
  Refactor lbaas plugin to not inherit from Lbaas Db plugin

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Currently the lbaas plugin follows common quantum practice and inherits
from the Db plugin.
  However, unlike core plugins, which extend the core Db object model, the
lbaas plugin is responsible for coupling Db operations with the interaction
with agents/drivers.
  It would be convenient to be able to mock the entire Db plugin's operations
and test only the plugin-specific logic.
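
  In other words, the suggestion amounts to composition instead of
  inheritance: the service plugin holds a Db component it delegates to,
  and unit tests can hand it a mock. A rough sketch under that
  assumption; LoadBalancerDb and LbaasPluginSketch are illustrative
  names, not the real lbaas classes:

  class LoadBalancerDb(object):
      # Real persistence would live here.
      def create_pool(self, context, pool):
          raise NotImplementedError

  class LbaasPluginSketch(object):
      def __init__(self, db, driver):
          self.db = db          # can be replaced with a mock in unit tests
          self.driver = driver  # handles agent/driver interaction

      def create_pool(self, context, pool):
          # Plugin-specific logic: persist first, then notify the driver.
          db_pool = self.db.create_pool(context, pool)
          self.driver.deploy_pool(context, db_pool)
          return db_pool

  A unit test could then construct
  LbaasPluginSketch(db=mock.Mock(), driver=mock.Mock()) (using the mock
  library) and verify only the dispatch logic, without touching a
  database.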

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419745] Re: RPC method in OVS agent attempts to access uninitialized attribute

2015-02-09 Thread Eugene Nikanorov
The observed behavior applies to the Juno release and was fixed later.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419745

Title:
  RPC method in OVS agent attempts to access uninitialized attribute

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The following trace was observed during OVS agent startup.

  2015-02-09 07:32:54.512 25702 ERROR oslo.messaging.rpc.dispatcher 
[req-e7ffbedc-3e8e-4699-a341-9e14ec04f231 ] Exception during message handling: 
'OVSNeutronAgent' object has no attribute 'enable_tunneling'
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 137, 
in _dispatch_and_reply
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 180, 
in _dispatch
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 126, 
in _do_dispatch
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 321, in tunnel_update
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher if not 
self.enable_tunneling:
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'OVSNeutronAgent' object has no attribute 'enable_tunneling'

  The failure is caused by a tunnel_update RPC message being handled
  during OVS agent initialization.

  This failure at agent startup leads to a connectivity failure of the
  whole node, because tunnels are not set up properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1419745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419745] [NEW] RPC method in OVS agent attempts to access uninitialized attribute

2015-02-09 Thread Eugene Nikanorov
Public bug reported:

The following trace was observed during OVS agent startup.

2015-02-09 07:32:54.512 25702 ERROR oslo.messaging.rpc.dispatcher 
[req-e7ffbedc-3e8e-4699-a341-9e14ec04f231 ] Exception during message handling: 
'OVSNeutronAgent' object has no attribute 'enable_tunneling'
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 137, 
in _dispatch_and_reply
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 180, 
in _dispatch
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 126, 
in _do_dispatch
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 321, in tunnel_update
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher if not 
self.enable_tunneling:
2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'OVSNeutronAgent' object has no attribute 'enable_tunneling'

The failure is caused by a tunnel_update RPC message being handled
while the OVS agent constructor is still running.

This failure at agent startup leads to a connectivity failure of the
whole node, because tunnels are not set up properly.
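
A way to picture the ordering problem: any attribute an RPC handler reads
has to exist before consumers are started from the constructor, otherwise
an early tunnel_update lands on a half-built object. A minimal sketch of
that defensive ordering; AgentSketch and _setup_rpc are illustrative, not
the actual OVSNeutronAgent code, and conf is assumed to be dict-like:

class AgentSketch(object):
    def __init__(self, conf):
        # Everything the RPC handlers read must exist *before* consumers
        # are started, otherwise an early tunnel_update hits the
        # AttributeError shown in the traceback above.
        self.enable_tunneling = conf.get('enable_tunneling', False)
        self.tun_br = None
        self._setup_rpc()  # only now may messages reach this object

    def _setup_rpc(self):
        pass  # placeholder: start consumers / register endpoints here

    def tunnel_update(self, context, **kwargs):
        # Extra guard in case a message still arrives too early.
        if not getattr(self, 'enable_tunneling', False):
            return
        # ... tunnel port setup would go here ...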

** Affects: neutron
 Importance: High
 Assignee: Eugene Nikanorov (enikanorov)
 Status: New


** Tags: ovs

** Description changed:

- The following trace was observed:
+ The following trace was observed during OVS agent startup.
  
  2015-02-09 07:32:54.512 25702 ERROR oslo.messaging.rpc.dispatcher 
[req-e7ffbedc-3e8e-4699-a341-9e14ec04f231 ] Exception during message handling: 
'OVSNeutronAgent' object has no attribute 'enable_tunneling'
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 137, 
in _dispatch_and_reply
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 180, 
in _dispatch
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 126, 
in _do_dispatch
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 321, in tunnel_update
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher if not 
self.enable_tunneling:
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'OVSNeutronAgent' object has no attribute 'enable_tunneling'
+ 
+ The failure is caused by a tunnel_update RPC message being handled
+ while the OVS agent constructor is still running.
+ 
+ This failure at agent startup leads to a connectivity failure of the
+ whole node, because tunnels are not set up properly.

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1419745

Title:
  RPC method in OVS agent attempts to access uninitialized attribute

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following trace was observed during OVS agent startup.

  2015-02-09 07:32:54.512 25702 ERROR oslo.messaging.rpc.dispatcher 
[req-e7ffbedc-3e8e-4699-a341-9e14ec04f231 ] Exception during message handling: 
'OVSNeutronAgent' object has no attribute 'enable_tunneling'
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 137, 
in _dispatch_and_reply
  2015-02-09 07:32:54.512 25702 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015
