Public bug reported:
I have tested L3 HA on an environment with 3 controllers and 1 compute
(Kilo) with this simple scenario:
1) ping a VM by its floating IP
2) disable the master l3-agent (the one whose ha_state is active)
3) wait for pings to resume once another agent becomes active
4) check the number of packets that were lost
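For context, the failover behavior under test is controlled by the max_l3_agents_per_router option in neutron.conf; a minimal fragment covering the two cases compared below (values only, not a full config):

```
[DEFAULT]
# Number of L3 agents each HA router is scheduled on.
# 2 gave fast failover in this test; 3 (or 0 = all agents) was slower.
max_l3_agents_per_router = 2
```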
My results are as follows:
1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
2) When max_l3_agents_per_router=3 or 0 (meaning the router is scheduled
on every agent), 10 to 70 packets were lost.
I should mention that in both cases there was only one HA router.
Fewer (or at least no more) packets are expected to be lost when
max_l3_agents_per_router=3 (or 0).
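For reference, the packet-loss figure in step 4 can be read off ping's summary line (iputils format); a minimal sketch, where the sample summary string is an assumption for illustration:

```python
import re

def lost_packets(ping_summary: str) -> int:
    """Return transmitted - received, parsed from ping's summary line."""
    m = re.search(r"(\d+) packets transmitted, (\d+) received", ping_summary)
    if m is None:
        raise ValueError("not a ping summary line")
    tx, rx = int(m.group(1)), int(m.group(2))
    return tx - rx

# Example summary as printed by iputils ping:
summary = "100 packets transmitted, 96 received, 4% packet loss, time 99123ms"
print(lost_packets(summary))  # → 4
```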
** Affects: neutron
Importance: Undecided
Status: New
** Tags: l3-ha
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272
Title:
L3 HA: Unstable rescheduling time
Status in neutron:
New
Bug description:
  (same as above)
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp