Hi tibz, it shouldn't happen. Let me do some testing and I'll let you know.
Regarding the fix, we've implemented an L4 scheduler in order to deactivate
backends dynamically, but for now it will only be available for the least
connections algorithm.
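For reference, least connections just means picking the enabled backend
with the fewest active connections. A minimal sketch of the idea (purely
illustrative, not ZLB's actual implementation):

    def pick_backend(backends, active_conns):
        # choose the enabled backend with the fewest active connections
        up = [b for b in backends if b.get("up", True)]
        return min(up, key=lambda b: active_conns.get(b["ip"], 0))
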
Regards.
On Mon, Dec 8, 2014 at 3:01 PM, tibz <ti...@tibir.net> wrote:
> Hello,
>
> Can you please let me know the status of this? I've seen some bugfixes
> related to L4 and farmguardian in the changelog, but it's not clear
> whether the issue below has been fixed or not.
>
> Today I had a similar problem. A backend was marked as down in one farm,
> but it seems to have also disturbed/lost the persistence of the other L4
> farm we have... I was not expecting this.
>
> Thanks
> tibz
>
>
> On 09/06/2014 13:32, Laura Garcia wrote:
>
> Hi Tibz, session persistence across L4 farm reloads is not implemented
> yet, but some enhancements regarding service disruption will be included
> very soon.
>
> Kind Regards.
>
> On Fri, Jun 6, 2014 at 9:04 AM, tibz <ti...@tibir.net> wrote:
>
>> Hello,
>>
>> I have an L4 farm with persistence enabled. On the backend servers, we
>> collect errors whenever a client arrives believing it is authenticated
>> while it is not (i.e. when it has been switched from one server to the
>> other). We see almost none of these errors for a while, and then
>> sometimes we get many at once, as if all persistence were suddenly lost.
>>
>> I've received word that yesterday between 10:19 and 11:19 there were a
>> lot of errors. I checked the logs on the ZLB, and I see in
>> zenloadbalancer.log that at 10:29 there was some activity on this farm:
>> "running 'Stop write false' for ZLB-ULG farm l4xnat" and "running 'Start
>> write false' for ZLB-ULG farm l4xnat" (see the attached file).
>>
>> I've seen that farmguardian detected a backend being down, and then back
>> up again afterward. While that is good and I'll check with the backend's
>> owner to fix it, I'm concerned about losing all persistence whenever
>> farmguardian removes/adds a backend.
>>
>> I have 2 backends, and when farmguardian removes one of them it in fact
>> deletes all iptables entries for this farm and re-adds only those for
>> the surviving backend. That part is fine; with only 2 backends I can
>> live with it (with more backends it would be the same problem as below:
>> all persistence is lost).
>> When the backend comes back alive, again all iptables rules are deleted
>> and re-added for both backends. This is bad: while running with 1
>> backend, persistence has attached all users to that backend, but when
>> the 2nd backend rejoins, all that persistence is lost and the users are
>> split across both backends, which in our case means a disconnection for
>> half of them.
>> I was expecting that only new connections would be associated with the
>> rejoining backend, and that all others would remain on the first
>> backend. That way there would be no disruption.
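>> As an illustration only (hypothetical chain and list names, not ZLB's
>> actual rules), a rule layout that would allow this: one xt_recent
>> --rcheck rule per backend pins already-recorded clients to their
>> backend, so a rejoining backend only needs its own rules appended
>> instead of a full flush:
>>
>>     import subprocess
>>
>>     def iptables(*args):
>>         # thin wrapper over the iptables binary (nat table)
>>         subprocess.run(("iptables", "-t", "nat") + args, check=True)
>>
>>     def add_backend(chain, backend_ip):
>>         # Returning clients recorded in this backend's xt_recent list
>>         # keep being DNATed to it; new clients fall through to the
>>         # scheduling rule (omitted), which would --set the list entry.
>>         name = "farm_" + backend_ip.replace(".", "_")
>>         iptables("-A", chain,
>>                  "-m", "recent", "--rcheck", "--name", name,
>>                  "-j", "DNAT", "--to-destination", backend_ip)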
>>
>> I can imagine that it's probably easier to remove and re-add everything,
>> but is there any way to keep the persistence? Maybe before you re-add a
>> backend coming back alive, you could dump the /proc/net/xt_recent/ file
>> associated with the running backend and re-inject the associations while
>> you re-add the iptables entries?
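>> Something along these lines, as a rough sketch (the list name is
>> illustrative; the list must already exist, i.e. the rules must be
>> re-added first, and only list membership survives, since the last-seen
>> timestamps would be reset):
>>
>>     import re
>>
>>     def dump_recent(name):
>>         # each line of /proc/net/xt_recent/<name> starts with "src=<ip>"
>>         with open("/proc/net/xt_recent/" + name) as f:
>>             return [m.group(1) for m in
>>                     (re.match(r"src=(\S+)", line) for line in f) if m]
>>
>>     def reinject_recent(name, ips):
>>         # writing "+<ip>" re-adds an address; one command per write(),
>>         # since the kernel parses each write separately
>>         for ip in ips:
>>             with open("/proc/net/xt_recent/" + name, "w") as f:
>>                 f.write("+" + ip)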
>>
>> If not, what other way could you suggest?
>>
>> This makes me think of an enhancement to farmguardian for the next
>> version: consider a backend as down only if X consecutive checks fail,
>> not after a single failure.
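>> In pseudocode, the proposed behavior is just a per-backend counter
>> (names illustrative):
>>
>>     MAX_FAILED = 3  # X, the threshold
>>     failed = {}     # backend -> consecutive failed checks
>>
>>     def record_check(backend, ok):
>>         # return True only when the backend should be marked down
>>         if ok:
>>             failed[backend] = 0
>>             return False
>>         failed[backend] = failed.get(backend, 0) + 1
>>         return failed[backend] >= MAX_FAILED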
>>
>> Thanks
>> tibz
>>
>
_______________________________________________
Zenloadbalancer-support mailing list
Zenloadbalancer-support@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/zenloadbalancer-support