[Yahoo-eng-team] [Bug 1706078] Re: Code is breaking if try to create health monitor with --delay=3000000000000

2017-08-01 Thread Nir Magnezi
** Project changed: neutron => octavia

https://bugs.launchpad.net/bugs/1706078

Title:
  Code is breaking if try to create health monitor with
  --delay=3000000000000

Status in octavia:
  In Progress

Bug description:
  During execution of health monitor creation with delay 3000000000000, an
  exception appears in the neutron-server logs, but a proper error message
  is not displayed at the prompt.

  #neutron lbaas-healthmonitor-create --max-retries=3 --delay=3000000000000 --timeout=10 --type=PING --pool=check-pool
  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: ['req-556bc474-64ce-40f0-88a4-0795a363949c']

  Logs:-
  http://paste.openstack.org/show/616304/
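
  For reference, the 500 here is consistent with the huge delay value
  overflowing the database integer column rather than being rejected at the
  API layer. A minimal sketch of the kind of range check that would turn
  this into a clean validation error (names are illustrative, not Octavia's
  actual code):

DB_INT_MAX = 2**31 - 1  # assumes the delay column is a signed 32-bit INT

def validate_delay(delay):
    """Reject delay values that cannot be stored in the DB column."""
    delay = int(delay)
    if not 0 <= delay <= DB_INT_MAX:
        raise ValueError("delay must be between 0 and %d" % DB_INT_MAX)
    return delay

# validate_delay(3000000000000) would raise ValueError instead of a 500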



[Yahoo-eng-team] [Bug 1660088] Re: Huge number of deprecation warnings in oslo.context 2.12.0

2017-02-27 Thread Nir Magnezi
** Changed in: oslo.context
   Status: New => Fix Released

https://bugs.launchpad.net/bugs/1660088

Title:
  Huge number of deprecation warnings in oslo.context 2.12.0

Status in neutron:
  Fix Released
Status in oslo.context:
  Fix Released

Bug description:
  I found in Neutron's functional tests (like
  http://logs.openstack.org/29/426429/3/check/gate-neutron-dsvm-functional-ubuntu-xenial/7079dc5/console.html)
  a huge number of deprecation warnings coming from the oslo.context module.
  It causes trouble with finishing the tests (timeouts are reached) and is
  very "noisy" in the logs.
  I also described it in
  https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg99752.html
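
  As a stopgap while the call sites are updated, the noise can be muted with
  Python's standard warnings machinery; a minimal sketch (the generic
  mechanism only, not the fix that was actually merged):

import warnings

# Crude but effective for a test run: hide DeprecationWarnings entirely.
# A narrower filter (e.g. module=r"oslo_context\..*") is possible too, but
# only matches if the warning is attributed to that module.
warnings.simplefilter("ignore", DeprecationWarning)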



[Yahoo-eng-team] [Bug 1619466] Re: Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line

2016-09-07 Thread Nir Magnezi
** Project changed: devstack => neutron

https://bugs.launchpad.net/bugs/1619466

Title:
  Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc
  command line

Status in neutron:
  In Progress

Bug description:
  When q-lbaasv2 is enabled in your devstack local.conf, this implies
  that LBaaS v2 is going to be used, and neutron-lbaas's corresponding
  devstack plugin.sh script creates a new
  /etc/neutron/neutron_lbaas.conf file with some configuration
  parameters. However, under several circumstances, some of the options
  in this file are needed by other neutron daemons, such as the q-svc
  daemon.

  So, if q-lbaasv2 is enabled in devstack local.conf, then the command-
  line for the q-svc agent should also include '--config-file
  /etc/neutron/neutron_lbaas.conf' so that these configuration
  parameters are pulled in for that daemon.



[Yahoo-eng-team] [Bug 1619466] Re: Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc command line

2016-09-04 Thread Nir Magnezi
This can only be fixed in the devstack codebase, since the plugin has no
notion of the q-svc --config-file options.

** Changed in: neutron
 Assignee: (unassigned) => Nir Magnezi (nmagnezi)

** Project changed: neutron => devstack

https://bugs.launchpad.net/bugs/1619466

Title:
  Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc
  command line

Status in devstack:
  Confirmed




[Yahoo-eng-team] [Bug 1614443] Re: LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

2016-08-18 Thread Nir Magnezi
This is a duplicate of bug 1613251.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Nir Magnezi (nmagnezi) => (unassigned)

https://bugs.launchpad.net/bugs/1614443

Title:
  LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

Status in neutron:
  Invalid

Bug description:
  Found this while working on bug 1613251.

  Example for that error:
  
http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html



[Yahoo-eng-team] [Bug 1614443] [NEW] LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

2016-08-18 Thread Nir Magnezi
Public bug reported:

Found this while working on bug 1613251.

Example for that error:
http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html

** Affects: neutron
 Importance: Undecided
 Assignee: Nir Magnezi (nmagnezi)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Nir Magnezi (nmagnezi)

https://bugs.launchpad.net/bugs/1614443

Title:
  LBaaSv1: HAproxy scenario tests cleanup fail with a StaleDataError

Status in neutron:
  In Progress




[Yahoo-eng-team] [Bug 1613251] [NEW] HAproxy scenario tests cleanup fail with a StaleDataError

2016-08-15 Thread Nir Magnezi
Public bug reported:

Example for that error:
http://logs.openstack.org/90/351490/4/check/gate-neutron-lbaasv2-dsvm-scenario-namespace-nv/8f1255a/logs/screen-q-svc.txt.gz#_2016-08-07_06_00_57_478

This is easily reproduced locally on my devstack with
neutron_lbaas.tests.tempest.v2.scenario.test_load_balancer_basic.
Moreover, even if I narrow the above-mentioned scenario to only create a
loadbalancer (with a listener and a pool) and then run the cleanup, the
issue reproduces.

This is blocking the gate-neutron-lbaasv2-dsvm-scenario-namespace-nv job
from properly indicating whether or not scenario tests fail for the
haproxy-in-namespace lbaas driver.
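
For context, StaleDataError is SQLAlchemy reporting that a flush matched
fewer rows than expected, which usually means another session updated or
deleted the same row concurrently. A generic sketch of the usual mitigation,
assuming nothing about the actual neutron-lbaas code (session_factory and
model are supplied by the caller):

from sqlalchemy.orm.exc import StaleDataError

def delete_with_retry(session_factory, model, obj_id, attempts=3):
    """Retry a delete that can race with concurrent updates to the row."""
    for _ in range(attempts):
        session = session_factory()
        try:
            obj = session.query(model).get(obj_id)
            if obj is None:
                return  # already gone; nothing to do
            session.delete(obj)
            session.commit()
            return
        except StaleDataError:
            session.rollback()  # another session touched the row; try again
    raise RuntimeError("could not delete %s %s" % (model.__name__, obj_id))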

** Affects: neutron
 Importance: High
 Assignee: Nir Magnezi (nmagnezi)
 Status: Confirmed


** Tags: db lbaas

https://bugs.launchpad.net/bugs/1613251

Title:
  HAproxy scenario tests cleanup fail with a StaleDataError

Status in neutron:
  Confirmed




[Yahoo-eng-team] [Bug 1565801] Re: Add process monitor for haproxy

2016-07-20 Thread Nir Magnezi
** Changed in: neutron
   Status: Fix Released => In Progress

https://bugs.launchpad.net/bugs/1565801

Title:
  Add process monitor for haproxy

Status in neutron:
  In Progress

Bug description:
  Bug 1565511 aims to solve cases where the lbaas agent goes offline.
  To have a complete high-availability solution for lbaas agent with haproxy 
running in namespace, we would also want to handle a case where the haproxy 
process itself stopped. 

  This[1] neutron spec offers the following approach:  
  "We propose monitoring those processes, and taking a configurable action, 
making neutron more resilient to external failures."
   
  [1] 
http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html



[Yahoo-eng-team] [Bug 1565801] [NEW] [RFE] Add process monitor for haproxy

2016-04-04 Thread Nir Magnezi
Public bug reported:

Bug 1565511 aims to solve cases where the lbaas agent goes offline.
To have a complete high-availability solution for the lbaas agent with
haproxy running in a namespace, we would also want to handle the case where
the haproxy process itself stops.

This[1] neutron spec offers the following approach:
"We propose monitoring those processes, and taking a configurable action,
making neutron more resilient to external failures."

[1] http://specs.openstack.org/openstack/neutron-specs/specs/juno/agent-child-processes-status.html
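
A generic sketch of the approach the spec describes (watch the child process
and take a configurable action when it dies). This is plain Python for
illustration, not neutron's actual process-monitoring implementation; the
pid-file path and respawn command are supplied by the caller:

import os
import subprocess
import time

def is_alive(pid_file):
    """Check whether the process recorded in pid_file still exists."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # signal 0 only tests for process existence
        return True
    except (OSError, IOError, ValueError):
        return False

def monitor(pid_file, respawn_cmd, interval=5):
    """Poll the pid file and respawn haproxy if its process died."""
    while True:
        if not is_alive(pid_file):
            subprocess.call(respawn_cmd)  # the "configurable action": respawn
        time.sleep(interval)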

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

https://bugs.launchpad.net/bugs/1565801

Title:
  [RFE] Add process monitor for haproxy

Status in neutron:
  New




[Yahoo-eng-team] [Bug 1565511] [NEW] Loadbalancers should be rescheduled when a LBaaS agent goes offline

2016-04-03 Thread Nir Magnezi
Public bug reported:

Currently, when a LBaaS agent goes offline, the loadbalancers remain under
that agent.
Following similar logic to 'allow_automatic_l3agent_failover', the neutron
server should reschedule loadbalancers away from dead lbaas agents.

This should be gated by a config option as well, such as:
allow_automatic_lbaas_agent_failover
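
A rough sketch of the proposed rescheduling loop, mirroring how the L3 agent
failover option works. Every plugin helper name below is hypothetical; none
of them is guaranteed to exist in neutron:

from oslo_config import cfg

# Hypothetical option, mirroring allow_automatic_l3agent_failover.
OPTS = [cfg.BoolOpt('allow_automatic_lbaas_agent_failover', default=False)]
cfg.CONF.register_opts(OPTS)

def reschedule_lbs_from_down_agents(plugin, context):
    """Move loadbalancers off agents that stopped sending heartbeats."""
    if not cfg.CONF.allow_automatic_lbaas_agent_failover:
        return
    for agent in plugin.get_lbaas_agents(context):      # hypothetical helper
        if plugin.is_agent_down(agent):                 # hypothetical helper
            for lb_id in plugin.list_loadbalancer_ids_on_agent(context,
                                                               agent['id']):
                plugin.reschedule_loadbalancer(context, lb_id)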

** Affects: neutron
 Importance: Undecided
 Assignee: Nir Magnezi (nmagnezi)
 Status: In Progress

https://bugs.launchpad.net/bugs/1565511

Title:
  Loadbalancers should be rescheduled when a LBaaS agent goes offline

Status in neutron:
  In Progress




[Yahoo-eng-team] [Bug 1510126] [NEW] neutron-lbaas local.conf should not use an old version of cirros

2015-10-26 Thread Nir Magnezi
Public bug reported:

The local.conf file contains[1] a hardcoded version of cirros 0.3.0.
This should be replaced with the latest version of Cirros, as it contains
important improvements (such as IPv6 support).
It is preferable to refrain from hardcoding the version.

[1] https://github.com/openstack/neutron-lbaas/blob/stable/liberty/devstack/samples/local.conf#L43

** Affects: neutron
 Importance: Undecided
 Assignee: Nir Magnezi (nmagnezi)
 Status: In Progress

https://bugs.launchpad.net/bugs/1510126

Title:
  neutron-lbaas local.conf should not use an old version of cirros

Status in neutron:
  In Progress




[Yahoo-eng-team] [Bug 1499406] [NEW] The option force_metadata = True is broken

2015-09-24 Thread Nir Magnezi
Public bug reported:

Initially found here: https://bugzilla.redhat.com/show_bug.cgi?id=1256816#c9
Patch https://review.openstack.org/#/c/211963 introduces a regression with force_metadata = True.
Using the option force_metadata = True will cause neutron to fail:

ERROR neutron.agent.dhcp.agent [-] Unable to enable dhcp for ef93213d-2525-4088-abdd-9f7854ca68e7.
TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 205, in enable
TRACE neutron.agent.dhcp.agent     self.spawn_process()
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 413, in spawn_process
TRACE neutron.agent.dhcp.agent     self._spawn_or_reload_process(reload_with_HUP=False)
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 422, in _spawn_or_reload_process
TRACE neutron.agent.dhcp.agent     self._output_config_files()
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 449, in _output_config_files
TRACE neutron.agent.dhcp.agent     self._output_opts_file()
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 745, in _output_opts_file
TRACE neutron.agent.dhcp.agent     options, subnet_index_map = self._generate_opts_per_subnet()
TRACE neutron.agent.dhcp.agent   File "/opt/openstack/neutron/neutron/agent/linux/dhcp.py", line 797, in _generate_opts_per_subnet
TRACE neutron.agent.dhcp.agent     subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]
TRACE neutron.agent.dhcp.agent UnboundLocalError: local variable 'subnet_to_interface_ip' referenced before assignment

Meaning, neutron won't be able to spawn a new instance of dnsmasq.
The reason: when 'force_metadata = True' and 'enable_isolated_metadata = False' (default), subnet_to_interface_ip is never assigned.
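
Reduced to a minimal sketch, the failure pattern looks like this
(simplified; get_isolated_subnet_ips is a hypothetical stand-in for the real
lookup). Initializing the dict unconditionally, or using .get(), avoids the
UnboundLocalError:

def _generate_opts_per_subnet(subnets, enable_isolated_metadata,
                              force_metadata):
    subnet_to_interface_ip = {}  # assign up front, not only inside the branch
    if enable_isolated_metadata:
        subnet_to_interface_ip = get_isolated_subnet_ips(subnets)  # hypothetical
    for subnet in subnets:
        if force_metadata or enable_isolated_metadata:
            # .get() also guards the force_metadata-only path
            subnet_dhcp_ip = subnet_to_interface_ip.get(subnet.id)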

** Affects: neutron
 Importance: Medium
 Assignee: Nir Magnezi (nmagnezi)
 Status: Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Nir Magnezi (nmagnezi)

https://bugs.launchpad.net/bugs/1499406

Title:
  The option force_metadata = True is broken

Status in neutron:
  Confirmed



[Yahoo-eng-team] [Bug 1403001] [NEW] LBaaS VIP does not work with IPv6 addresses because haproxy cannot bind socket

2014-12-16 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
IPv6 VIP remains in ERROR state because haproxy cannot bind the socket.

neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/agent/agent_manager.py, line 214, in create_vip
neutron.services.loadbalancer.agent.agent_manager     driver.create_vip(vip)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py, line 318, in create_vip
neutron.services.loadbalancer.agent.agent_manager     self._refresh_device(vip['pool_id'])
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py, line 315, in _refresh_device
neutron.services.loadbalancer.agent.agent_manager     self.deploy_instance(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/openstack/common/lockutils.py, line 249, in inner
neutron.services.loadbalancer.agent.agent_manager     return f(*args, **kwargs)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py, line 311, in deploy_instance
neutron.services.loadbalancer.agent.agent_manager     self.create(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py, line 92, in create
neutron.services.loadbalancer.agent.agent_manager     self._spawn(logical_config)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py, line 115, in _spawn
neutron.services.loadbalancer.agent.agent_manager     ns.netns.execute(cmd)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 550, in execute
neutron.services.loadbalancer.agent.agent_manager     check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
neutron.services.loadbalancer.agent.agent_manager   File /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 84, in execute
neutron.services.loadbalancer.agent.agent_manager     raise RuntimeError(m)
neutron.services.loadbalancer.agent.agent_manager RuntimeError:
neutron.services.loadbalancer.agent.agent_manager Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec',


Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2.1-2.el7ost.noarch
haproxy-1.5.2-3.el7_0.x86_64

How reproducible:
=
2/2

Steps to Reproduce:
===
1. Spawn Two instances and wait for them to become active
Via tenant_a:
   nova boot tenant_a_instance --flavor m1.small --image image_id --min-count 2 --key-name tenant_a_keypair --security-groups default --nic net-id=internal_ipv4_a_id --nic net-id=tenant_a_radvd_stateful_id

2. Retrieve your instances' IPv6 addresses, tenant id and the subnet id you are about to use.
   You may use any IPv6 subnet; in this example we'll use tenant_a_radvd_stateful_subnet
   # nova list | awk '/tenant_a_instance/ {print $12}' | cut -d= -f2 | sed -e s/\;\//
   # neutron subnet-list | awk '/tenant_a_radvd_stateful_subnet/ {print $2}'
   # neutron subnet-list | awk '/tenant_a_radvd_stateful_subnet/ {print $2}'

3. Create a LBaaS pool
   # neutron lb-pool-create --lb-method ROUND_ROBIN --name Ipv6_LBaaS --protocol HTTP --subnet-id c54f8745-2aba-42da-8845-15050db1d5d1

4. Add members to the pool
   # neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:feda:b05e --protocol-port 80
   # neutron lb-member-create Ipv6_LBaaS --address 2001:65:65:65:f816:3eff:fe82:5d8 --protocol-port 80

5. Create a VIP:
   # neutron lb-vip-create Ipv6_LBaaS --name Ipv6_LBaaS_VIP --protocol-port 80 --protocol HTTP --subnet-id 0458273a-efe8-4d37-b2a0-e11cbd5e4d13

6. Check the VIP status:
   # neutron lb-vip-show Ipv6_LBaaS_VIP | grep status

Actual results:
===
1. status = ERROR

2. lbaas-agent.log (attached):

TRACE neutron.services.loadbalancer.agent.agent_manager Stderr: '[ALERT] 349/101731 (20878) : Starting frontend fcb9db64-e877-4e95-a86f-fed6d1b244c2: cannot bind socket [2001:64:64:64::a:80]\n'

Expected results:
=
IPv6 VIP should work.

Additional info:

1. Tested with RHEL7
2. haproxy configuration:
global
daemon
user nobody
group haproxy
log /dev/log local0
log /dev/log local1 notice
stats socket /var/lib/neutron/lbaas/2c18a738-05f4-4099-8348-94575c9ed290/sock mode 0666 level user
defaults
log global
retries 3

[Yahoo-eng-team] [Bug 1403034] [NEW] Horizon should accept an IPv6 address as a VIP Address for LB Pool

2014-12-16 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
Horizon should accept an IPv6 address as a VIP Address for LB Pool.

Version-Release number of selected component (if applicable):
=
RHEL-OSP6: 2014-12-12.1
python-django-horizon-2014.2.1-2.el7ost.noarch

How reproducible:
=
Always

Steps to Reproduce:
===
0. Have an IPv6 subnet.

1. Browse to: http://FQDN/dashboard/project/loadbalancers/

2. Create a Load Balancing Pool.

3. Add VIP as follows:
   - Name: test
   - VIP Subnet: Select your IPv6 subnet
   - Specify a free IP address from the selected subnet: IPv6 address such as: 2001:65:65:65::a
   - Protocol Port: 80
   - Protocol: HTTP
   - Admin State: UP

Actual results:
===
Error: Invalid version for IP address

Expected results:
=
Should work OK, this is supported in the python-neutronclient.
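
A minimal sketch of version-agnostic validation with netaddr, assuming the
form currently validates the VIP address against IPv4 only (illustrative,
not Horizon's actual form code):

import netaddr

def validate_vip_address(value):
    """Accept both IPv4 and IPv6 literals, reject anything else."""
    try:
        return netaddr.IPAddress(value)
    except (netaddr.AddrFormatError, ValueError):
        raise ValueError("Invalid IP address: %s" % value)

validate_vip_address("2001:65:65:65::a")  # accepted
validate_vip_address("192.168.0.10")      # accepted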

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ipv6

** Attachment added: screenshot
   https://bugs.launchpad.net/bugs/1403034/+attachment/4282117/+files/Screenshot%20from%202014-12-16%2014%3A07%3A07.png

https://bugs.launchpad.net/bugs/1403034

Title:
  Horizon should accept an IPv6 address as a VIP Address for LB Pool

Status in OpenStack Dashboard (Horizon):
  New




[Yahoo-eng-team] [Bug 1402640] [NEW] IP addresses are not properly reported with radvd stateful subnets

2014-12-15 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I created a radvd IPv6 subnet with:
 - ipv6_ra_mode: dhcpv6-stateful
 - ipv6_address_mode: dhcpv6-stateful

Meaning:
a. The Neutron DHCP agent (dnsmasq) provides IP addresses and additional parameters.
b. Neutron router (radvd) sends out RAs.

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2.1-2

How reproducible:
=
2/2

Steps to Reproduce:
===
1. Create an IPv6 neutron network:
   # neutron net-create tenant_a_radvd_stateful --shared --provider:physical_network=ipv6_vlan_range --provider:network_type=vlan --provider:segmentation_id=64

2. Create an IPv6 subnet:
   # neutron subnet-create IPv6_net_id 2001:65:65:65::1/64 --name tenant_a_radvd_stateless_subnet --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless --dns-nameserver 2001:4860:4860:: --ip-version 6

3. Create a neutron router:
   # neutron router-create router1

4. Attach subnet to the router
   # neutron router-interface-add router_id ipv6_subnet

5. boot an instance with that network
   # nova boot tenant_a_instance_radvd_stateful --flavor m1.small --image image_id --key-name keypair --security-groups default --nic net-id=ipv6_net_id

Actual results:
===
# nova show instance_id | grep network
| tenant_a_radvd_stateful network  | 2001:64:64:64::5 |

# neutron port-show port_id | grep fixed_ips
| fixed_ips | {subnet_id: subnet_id, ip_address: 2001:64:64:64::5} |

Despite the fact that the port mac address matches the instance NIC,
the IP obtained by the instance is different.

Expected results:
=
In this case, the NIC IP address is: 2001:64:64:64:f816:3eff:fe2b:6197

Additional info:

1. Instance guest image: rhel7
2. Hypervisor: rhel7

radvd:
==
# cat /var/lib/neutron/ra/76f98730-bb40-40e7-bc57-e89f120efd4a.radvd.conf
interface qr-23b0614a-cd
{
   AdvSendAdvert on;
   MinRtrAdvInterval 3;
   MaxRtrAdvInterval 10;
   prefix 2001:65:65:65::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};
interface qr-9741429b-bf
{
   AdvSendAdvert on;
   MinRtrAdvInterval 3;
   MaxRtrAdvInterval 10;
   prefix 2001:63:63:63::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};
interface qr-ace4c312-6c
{
   AdvSendAdvert on;
   MinRtrAdvInterval 3;
   MaxRtrAdvInterval 10;
};


dnsmasq:

nobody   12197 1  0 Dec14 ?00:00:01 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap74f92a94-90 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/host --addn-hosts=/var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/opts --leasefile-ro --dhcp-range=set:tag0,2001:64:64:64::,static,86400s --dhcp-lease-max=16777216 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal


# cat /var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/host
fa:16:3e:07:2d:5d,host-2001-64-64-64--1.openstacklocal,[2001:64:64:64::1]
fa:16:3e:2b:61:97,host-2001-64-64-64--5.openstacklocal,[2001:64:64:64::5]
fa:16:3e:31:94:88,host-2001-64-64-64--6.openstacklocal,[2001:64:64:64::6]
fa:16:3e:a9:e0:e8,host-2001-64-64-64--7.openstacklocal,[2001:64:64:64::7]


# cat /var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/addn_hosts
2001:64:64:64::1    host-2001-64-64-64--1.openstacklocal host-2001-64-64-64--1
2001:64:64:64::5    host-2001-64-64-64--5.openstacklocal host-2001-64-64-64--5
2001:64:64:64::6    host-2001-64-64-64--6.openstacklocal host-2001-64-64-64--6
2001:64:64:64::7    host-2001-64-64-64--7.openstacklocal host-2001-64-64-64--7


# cat /var/lib/neutron/dhcp/308b1dd1-185c-451b-8912-2a323616acce/opts
tag:tag0,option6:dns-server,[2001:4860:4860::]
tag:tag0,option6:domain-search,openstacklocal

# cat /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1402640

Title:
  IP addresses are not properly reported with radvd stateful subnets

Status in OpenStack Neutron (virtual network service):
  New


[Yahoo-eng-team] [Bug 1380238] [NEW] Instances won't obtain IPv6 address if they have additional IPv4 interface

2014-10-12 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I booted an instance with both IPv4 and IPv6 interfaces, yet that instance
did not obtain any IPv6 address.
To make sure nothing is wrong with my IPv6 configuration (which is RADVD
SLAAC), I booted an additional instance with an IPv6 interface only, which
obtained an IPv6 address with no issues.

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=
Always

Steps to Reproduce:
===
0. Prior to the test, configure the following:
   a. Neutron router
   b. IPv4 Network & Subnet
   c. IPv6 Network & Subnet (SLAAC in my specific case)
  -- Created with: --ipv6-address-mode slaac --ipv6_ra_mode slaac
   d. Add router interfaces with those networks.

1. Spawn an instance with both IPv4 & IPv6 interfaces.

2. Spawn an instance with an IPv6 interface only.

Actual results:
===
1. The instance spawned in step 1 obtained an IPv4 address and an IPv6
link-local address only.
2. The instance spawned in step 2 obtained an IPv6 address properly.

Expected results:
=
Instances should obtain all IP addresses in both scenarios mentioned above.

Additional info:

Using tcpdump from within the instances I noticed that ICMPv6 Router
Advertisements did not reach the NIC.

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1380238

Title:
  Instances won't obtain IPv6 address if they have additional IPv4
  interface

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1378783] [NEW] IPv6 namespaces are not updated upon router interface deletion

2014-10-08 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
In case the namespace contains both IPv4 and IPv6 interfaces, they will not
be deleted when the interfaces are detached from the router.

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=

Steps to Reproduce:
===
1. Create a neutron Router
2. Attach an IPv6 interface
3. Attach an IPv4 interface
4. Delete both interfaces
5. Check if interfaces were deleted from the router namespace:
   # ip netns exec qrouter-id ifconfig | grep inet

Actual results:
===
Interfaces were not deleted.

Expected results:
=
Interfaces should be deleted.

Additional info:

Tested with RHEL7

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1378783

Title:
  IPv6 namespaces are not updated upon router interface deletion

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1377843] [NEW] Instances won't obtain IPv6 address and gateway when using Stateful DHCPv6 provided by OpenStack

2014-10-06 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I Created an IPv6 subnet with:
1. ipv6_ra_mode: dhcpv6-stateful
2. ipv6_address_mode: dhcpv6-stateful

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=
100%

Steps to Reproduce:
===
1. create a neutron network
2. create an IPv6 subnet:
# neutron subnet-create IPv6_net_id 2001:db2::/64 --name 
usecase2_ipv6_stateles_subnet --ipv6-address-mode dhcpv6-stateful 
--ipv6_ra_mode dhcpv6-stateful --ip-version 6
3. boot an instance with that network

Actual results:
===
1. Instance did not obtain IPv6 address
2. default gw is not set

Expected results:
=
The instance should have IPv6 address a default gw configured.

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1377843

Title:
  Instances won't obtain IPv6 address and gateway when using Stateful
  DHCPv6 provided by OpenStack

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1377841] [NEW] Instances won't obtain IPv6 address and gateway when using SLAAC provided by OpenStack

2014-10-06 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I Created an IPv6 subnet with:
1. ipv6_ra_mode: slaac
2. ipv6_address_mode: slaac

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=
100%

Steps to Reproduce:
===
1. create a neutron network
2. create an IPv6 subnet:
# neutron subnet-create IPv6_net_id 2001:db1::/64 --name usecase1_ipv6_slaac --ipv6-address-mode slaac --ipv6_ra_mode slaac --ip-version 6
3. boot an instance with that network

Actual results:
===
1. Instance did not obtain an IPv6 address
2. default gw is not set

Expected results:
=
The instance should have an IPv6 address and a default gw configured.

** Affects: neutron
 Importance: Undecided
 Status: New


https://bugs.launchpad.net/bugs/1377841

Title:
  Instances won't obtain IPv6 address and gateway when using SLAAC
  provided by OpenStack

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1377839] [NEW] Instances won't obtain IPv6 address and default gateway when using stateless DHCPv6 provided by OpenStack

2014-10-06 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I Created an IPv6 subnet with:
1. ipv6_ra_mode: dhcpv6-stateless
2. ipv6_address_mode: dhcpv6-stateless

Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=
100%

Steps to Reproduce:
===
1. create a neutron network
2. create an IPv6 subnet:
# subnet-create IPv6_net_id 2001:db1:0::2/64 --name internal_ipv6_a_subnet --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless --ip-version 6
3. boot an instance with that network

Actual results:
===
1. Instance did not obtain an IPv6 address
2. default gw is not set

Expected results:
=
The instance should have an IPv6 address and a default gw configured.

** Affects: neutron
 Importance: Undecided
 Status: New


https://bugs.launchpad.net/bugs/1377839

Title:
  Instances won't obtain IPv6 address and default gateway when using
  stateless DHCPv6 provided by OpenStack

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1377985] [NEW] IPv6 Router Advertisments should be allowed by default

2014-10-06 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
Router Advertisements are blocked and can't reach instances.
As a direct result, all IPv6 networking won't function for instances.
This was tested both with a provider IPv6 router and radvd.
 
Version-Release number of selected component (if applicable):
=
openstack-neutron-2014.2-0.7.b3

How reproducible:
=
100%

Steps to Reproduce:
===
1. Create a Neutron network

2. Create a Neutron IPv6 subnet; don't forget to specify:
   a. --ipv6-address-mode
   b. --ipv6_ra_mode
   c. --gateway , in case you create this subnet for a provider router; specify the IPv6 link-local address.

3. Spawn an instance

4. Check if the instance obtained an IPv6 address and default gw (not
expected to work)

5. Use tcpdump to see if Router Advertisements reach the instance.

6. Add a rule to allow all ICMP from fe80::/4 (see the sketch after this
list)

7. Repeat steps 4 & 5; it should work OK now.
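
A sketch of the step-6 workaround done through python-neutronclient rather
than the CLI (the credentials and security group ID are placeholders; the
prefix matches the report's fe80::/4, though the link-local range proper is
fe80::/10):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='PASSWORD',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_security_group_rule({
    'security_group_rule': {
        'security_group_id': 'SEC-GROUP-UUID',  # placeholder
        'direction': 'ingress',
        'ethertype': 'IPv6',
        'protocol': 'icmp',
        'remote_ip_prefix': 'fe80::/4',
    }
})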

Actual results:
===
Router Advertisements are blocked and can't reach instances.

Expected results:
=
Router Advertisements should be allowed by default.

Additional Info:

Might be related to: https://review.openstack.org/#/c/72252/

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1377985

Title:
  IPv6 Router Advertisments should be allowed by default

Status in OpenStack Neutron (virtual network service):
  New




[Yahoo-eng-team] [Bug 1372359] [NEW] Security Groups: Add Rule dialog does not specify the option to create an IPv6 rule.

2014-09-22 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
The Add Rule dialog does not allow you to specify the 'Ether Type' for the rule.
Instead, it auto-detects whether the CIDR is IPv4 or IPv6 and creates the
rule accordingly.
Given that approach, I would suggest that the IPv4/IPv6 auto-detection be
better reflected to the user.

Currently:
a. The default CIDR is: 0.0.0.0/0
b. The CIDR field help: Classless Inter-Domain Routing (e.g. 192.168.0.0/24)
c. IPv6 is not described as valid input in the dialog description.

Steps to Reproduce:
===
See the dialog at: 
http://FQDN/project/access_and_security/security_groups/sec_group_id/add_rule/
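
The auto-detection described above boils down to inferring the ether type
from the CIDR's IP version; a minimal sketch (illustrative, not Horizon's
actual code):

import netaddr

def ether_type_for(cidr):
    """Infer a security group rule's ether type from its CIDR."""
    return "IPv6" if netaddr.IPNetwork(cidr).version == 6 else "IPv4"

assert ether_type_for("0.0.0.0/0") == "IPv4"
assert ether_type_for("::/0") == "IPv6"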

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ipv6

** Attachment added: addrule dialog screenshot
   https://bugs.launchpad.net/bugs/1372359/+attachment/4211222/+files/addrule.jpg

https://bugs.launchpad.net/bugs/1372359

Title:
  Security Groups: Add Rule dialog does not specify the option to create
  an IPv6 rule.

Status in OpenStack Dashboard (Horizon):
  New




[Yahoo-eng-team] [Bug 1338470] Re: LBaaS Round Robin does not work as expected

2014-09-03 Thread Nir Magnezi
Unfortunately, and despite repeated efforts, the issue won't reproduce.
Will reopen the bug in case of reproduction.
Thanks for looking into it.

** Changed in: neutron
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1338470

Title:
  LBaaS Round Robin does not work as expected

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Description of problem:
  ===
  I configured a load balancing pool with 2 members using the round robin
  mechanism.
  My expectation was that each request would be directed to the next
  available pool member.
  Meaning, the expected result was:
  Req #1 - Member #1
  Req #2 - Member #2
  Req #3 - Member #1
  Req #4 - Member #2

  etc..

  I configured the instances' guest image to reply to the request with the
  private ip address of the instance, and by that I can easily see who
  handled the request.
  This is the result I witnessed:

  # for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
  192.168.208.4
  192.168.208.4
  192.168.208.2
  192.168.208.2
  192.168.208.4
  192.168.208.4
  192.168.208.2
  192.168.208.4
  192.168.208.2
  192.168.208.4

  Details about the pool: http://pastebin.com/index/MwRX7HCR

  Version-Release number of selected component (if applicable):
  =
  Icehouse:
  python-neutronclient-2.3.4-2
  python-neutron-2014.1-35
  openstack-neutron-2014.1-35
  openstack-neutron-openvswitch-2014.1-35
  haproxy-1.5-0.3.dev22.el7

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. As detailed above, configure a LB pool with round robin and two members.
  2.

  Additional info:
  
  Tested with RHEL7
  haproxy.cfg: http://pastebin.com/vuNe1p7H



[Yahoo-eng-team] [Bug 1338470] [NEW] LBaaS Round Robin does not work as expected

2014-07-07 Thread Nir Magnezi
Public bug reported:

Description of problem:
===
I configured a load balancing pool with 2 members using the round robin mechanism.
My expectation was that each request would be directed to the next available
pool member.
Meaning, the expected result was:
Req #1 - Member #1
Req #2 - Member #2
Req #3 - Member #1
Req #4 - Member #2

etc..

I configured the instances' guest image to reply to the request with the
private ip address of the instance, and by that I can easily see who handled
the request.
This is the result I witnessed:

# for i in {1..10} ; do curl -s 192.168.170.9 ; echo ; done
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.2
192.168.208.4
192.168.208.4
192.168.208.2
192.168.208.4
192.168.208.2
192.168.208.4

Details about the pool: http://pastebin.com/index/MwRX7HCR

Version-Release number of selected component (if applicable):
=
Icehouse:
python-neutronclient-2.3.4-2
python-neutron-2014.1-35
openstack-neutron-2014.1-35
openstack-neutron-openvswitch-2014.1-35
haproxy-1.5-0.3.dev22.el7

How reproducible:
=
100%

Steps to Reproduce:
===
1. As detailed above, configure a LB pool with round robin and two members.
2.

Additional info:

Tested with RHEL7
haproxy.cfg: http://pastebin.com/vuNe1p7H

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1338470

Title:
  LBaaS Round Robin does not work as expected

Status in OpenStack Neutron (virtual network service):
  New

