** Project changed: neutron => octavia
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1706078
Title:
Code breaks when trying to create a health monitor with --delay=3
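For reference, creating a health monitor with that delay would look roughly like this with the neutron LBaaS v2 CLI (a sketch only; the pool name and the remaining values are placeholders, and the excerpt above does not show which client the reporter actually used):
    neutron lbaas-healthmonitor-create --delay 3 --timeout 3 --max-retries 3 \
        --type HTTP --pool mypool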
** Changed in: oslo.context
Status: New => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1660088
Title:
Huge number of deprecation warnings in oslo.context 2.12.0
** Project changed: devstack => neutron
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619466
Title:
Enabling q-lbaasv2 in devstack should add neutron_lbaas.conf to q-svc
command
This can only be fixed in the devstack codebase, since the plugin has no
notion of the q-svc --config-file options.
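For illustration only, the devstack-side fix would amount to adding the LBaaS config file to the q-svc (neutron-server) invocation, roughly like this (a sketch; the paths are typical devstack defaults, not quoted from the bug):
    /usr/local/bin/neutron-server \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/neutron_lbaas.conf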
** Changed in: neutron
Assignee: (unassigned) => Nir Magnezi (nmagnezi)
** Project changed: neutron => devstack
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
This is a duplicate of bug 1613251.
** Changed in: neutron
Status: In Progress => Invalid
** Changed in: neutron
Assignee: Nir Magnezi (nmagnezi) => (unassigned)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
Public bug reported:
Found this while working on bug 1613251.
Example of that error:
http://logs.openstack.org/90/351490/10/gate/gate-neutron-lbaasv1-dsvm-api/fa4d806/console.html
** Affects: neutron
Importance: Undecided
Assignee: Nir Magnezi (nmagnezi)
Status: In Progress
tests fail for the haproxy-in-namespace lbaas driver.
** Affects: neutron
Importance: High
Assignee: Nir Magnezi (nmagnezi)
Status: Confirmed
** Tags: db lbaas
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
** Changed in: neutron
Status: Fix Released => In Progress
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1565801
Title:
Add process monitor for haproxy
Status in neutron:
In Progress
Public bug reported:
Bug 1565511 aims to solve cases where the lbaas agent goes offline.
To have a complete high-availability solution for the lbaas agent with haproxy
running in a namespace, we would also want to handle the case where the haproxy
process itself stops.
This[1] neutron spec proposes configuration options to cover that, such as:
allow_automatic_lbaas_agent_failover
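Assuming the option keeps the name quoted above and lands in the [DEFAULT] section of the neutron-lbaas service configuration (an assumption, mirroring the existing allow_automatic_l3agent_failover option for the L3 agent), enabling it would look like:
    [DEFAULT]
    allow_automatic_lbaas_agent_failover = True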
** Affects: neutron
Importance: Undecided
Assignee: Nir Magnezi (nmagnezi)
Status: In Progress
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https
/openstack/neutron-lbaas/blob/stable/liberty/devstack/samples/local.conf#L43
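For context, the sample local.conf referenced above enables the LBaaS v2 plugin and service roughly as follows (a sketch of the usual devstack settings, not a verbatim copy of the linked line):
    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
    ENABLED_SERVICES+=,q-lbaasv2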
** Affects: neutron
Importance: Undecided
Assignee: Nir Magnezi (nmagnezi)
Status: In Progress
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
instance of dnsmasq.
The reason for this:
when 'force_metadata = True' and 'enable_isolated_metadata = False' (default),
the subnet_to_interface_ip won't be assigned.
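The combination described above corresponds to the following DHCP agent settings (a minimal sketch of /etc/neutron/dhcp_agent.ini; enable_isolated_metadata is shown explicitly even though False is its default):
    [DEFAULT]
    force_metadata = True
    enable_isolated_metadata = False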
** Affects: neutron
Importance: Medium
Assignee: Nir Magnezi (nmagnezi)
Status: Confirmed
** Changed in: neutron
Assignee: (unassigned) => Nir Magnezi (nmagnezi)
Public bug reported:
Description of problem:
===
IPv6 VIP remains in ERROR state because haproxy cannot bind its socket.
neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call last):
neutron.services.loadbalancer.agent.agent_manager File
Public bug reported:
Description of problem:
===
Horizon should accept an IPv6 address as a VIP address for an LB pool.
Version-Release number of selected component (if applicable):
=
RHEL-OSP6: 2014-12-12.1
Public bug reported:
Description of problem:
===
I created a radvd IPv6 subnet with:
- ipv6_ra_mode: dhcpv6-stateful
- ipv6_address_mode: dhcpv6-stateful
Meaning:
a. The Neutron DHCP agent (dnsmasq) provides IP addresses and additional
parameters.
b. Neutron router
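A subnet like the one described above can be created with the neutron CLI roughly as follows (a sketch; the network name and CIDR are placeholders). The same command applies to the slaac and dhcpv6-stateless reports below, with the two mode flags changed accordingly:
    neutron subnet-create --ip-version 6 \
        --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
        private 2001:db8:1234::/64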
Public bug reported:
Description of problem:
===
I booted an instance with both IPv4 and IPv6 interfaces, yet that instance did
not obtain any IPv6 address.
In order to make sure nothing is wrong with my IPv6 configuration (which is RADVD
SLAAC), I booted an additional instance
Public bug reported:
Description of problem:
===
If the namespace contains both IPv4 and IPv6 interfaces, they will not be
deleted when the interfaces are detached from the router.
Version-Release number of selected component (if applicable):
Public bug reported:
Description of problem:
===
I created an IPv6 subnet with:
1. ipv6_ra_mode: dhcpv6-stateful
2. ipv6_address_mode: dhcpv6-stateful
Version-Release number of selected component (if applicable):
=
Public bug reported:
Description of problem:
===
I created an IPv6 subnet with:
1. ipv6_ra_mode: slaac
2. ipv6_address_mode: slaac
Version-Release number of selected component (if applicable):
=
Public bug reported:
Description of problem:
===
I created an IPv6 subnet with:
1. ipv6_ra_mode: dhcpv6-stateless
2. ipv6_address_mode: dhcpv6-stateless
Version-Release number of selected component (if applicable):
=
Public bug reported:
Description of problem:
===
Router Advertisements are blocked and can't reach instances.
As a direct result, all IPv6 networking won't function for instances.
This was tested both with a provider IPv6 router and with radvd.
Version-Release number
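One way to confirm the symptom from inside an affected instance is to watch for RA packets directly (a diagnostic sketch, not taken from the bug; the interface name is a placeholder, and ICMPv6 type 134 is the Router Advertisement type):
    sudo tcpdump -i eth0 -vvv 'icmp6 and ip6[40] == 134'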
Public bug reported:
Description of problem:
===
The Add Rule dialog does not allow you to specify the 'Ether Type' for the rule.
Instead, it auto-detects whether the CIDR is IPv4 or IPv6 and creates the rule
accordingly.
Given that approach, I would suggest that the IPv4/IPv6
Unfortunately, and despite repeated efforts, the issue won't reproduce.
I will reopen the bug if it reproduces.
Thanks for looking into it.
** Changed in: neutron
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
Public bug reported:
Description of problem:
===
I configured a load-balancing pool with 2 members using the round-robin mechanism.
My expectation was that each request would be directed to the next available
pool member.
Meaning, the expected result was:
Req #1 - Member #1
Req #2 - Member #2
and so on, alternating between the two members.
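Assuming the haproxy-based LBaaS driver discussed elsewhere in this list, that round-robin expectation maps to a backend along these lines (an illustrative sketch with placeholder names and member addresses, not the generated config from the bug):
    backend pool_example
        balance roundrobin
        server member1 10.0.0.11:80 check
        server member2 10.0.0.12:80 check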