Hi,

Thank you for your response.

To be very precise, the feature I am looking for from HA-Proxy is that when
HA-Proxy does a redispatch, it also adds a header telling the server that
receives the request that HA-Proxy has performed a redispatch. This is the
critical feature we are looking for.
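
For what it is worth, the closest approximation I have come up with so far
(untested; the header names, backend name, addresses and IDs below are just
placeholders) is to expose the server ID currently recorded in the stick
table in a request header, along with the name of the server that actually
receives the request, and let the application compare the two:

    backend bk_app
        balance roundrobin
        option redispatch
        retries 3
        # stickiness keyed on the client source address
        stick-table type ip size 200k expire 30m
        stick on src
        # server ID recorded in the stick table for this source, if any
        # (assumes a version that has the table_server_id converter)
        http-request set-header X-Sticky-Server-Id %[src,table_server_id(bk_app)]
        # name of the server the request is sent to (should follow a redispatch)
        http-send-name-header X-Target-Server
        server srv1 10.0.0.1:8080 check id 1
        server srv2 10.0.0.2:8080 check id 2

If the application sees an X-Sticky-Server-Id that does not match its own ID
(or a zero/empty value for a brand new source), it can treat the request as a
redispatch and flush its caches. I have not verified this in all redispatch
cases, which is why a native "redispatch happened" header would still be
preferable.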

This feature is important both to type 1 systems, in order to minimize the
load on the shared session storage, and to type 3 systems, in order to allow
them to flush local caches of potentially stale data. We run both kinds of
systems.

Best regards,

Gisle


From: Igor Cicimov <ig...@encompasscorporation.com>
Date: Thursday, 22 March 2018 at 07:48
To: Gisle Grimen <gisle.gri...@evry.com>
Cc: Willy Tarreau <w...@1wt.eu>, "haproxy@formilux.org" <haproxy@formilux.org>
Subject: Re: Can HA-Proxy set an header when he "breaks" stick routing

Hi,

On Wed, Mar 21, 2018 at 8:57 PM, Gisle Grimen <gisle.gri...@evry.com> wrote:
Hi,

I'll try to be more specific:

The functionality I was looking for in HA-Proxy in connection with sticky
routing is the following:

Normal flow, all servers up (this is functionality available today):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be
sent to Server1
3. HA-Proxy forwards the request to Server1

Sticky server is down (this is functionality I would like HA-Proxy to have,
or would like to figure out how to configure; a rough sketch of the parts I
can already configure follows the list):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be
sent to Server1
3. HA-Proxy determines that Server1 is down and selects Server2 for the
request
4. HA-Proxy adds an HTTP header to the request, for example:
sticky-destination-updated=true
5. HA-Proxy updates the sticky table so that further requests from this
source are from now on sent to Server2
6. HA-Proxy forwards the request to Server2
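
As far as I can tell, steps 1-3, 5 and 6 map onto a plain stick-table
backend with redispatch enabled (a rough sketch with made-up names and
addresses); step 4, the header, is the part I cannot find a directive for:

    backend bk_app
        balance roundrobin
        # when the sticky server is down, retry on another one (steps 3 and 6)
        option redispatch
        retries 3
        # sticky routing keyed on the client source address (step 2)
        stick-table type ip size 200k expire 30m
        stick on src
        server srv1 10.0.0.1:8080 check
        server srv2 10.0.0.2:8080 check

As far as I understand, the stick-table entry is updated to the newly chosen
server once the connection to it succeeds, which covers step 5.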

It does have this, of course. See "option redispatch":
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20redispatch
If it didn't, many implementations would be broken, don't you think?

I must say, though, that the use of the header you insist on is not really
clear to me, except maybe for statistics purposes on the backend. You can
have two types of backends (in terms of sessions): 1) one where each server
is aware of the other servers' sessions (shared session storage in memory or
on disk), or 2) one where each server has its own sessions. There is a third
one where no sessions are needed at all, but that is not of interest here.

The second case is the one for which you most probably need stickiness. In
that case, if Server1 goes down and HAProxy redistributes its connections
between, let's say, Server2 and Server3, those servers will by definition
reset the sessions (since they have no idea about them) and the users will
have to, say, log in to the application again on their side. Once done, they
will stick to the newly elected server. Which brings me to the point where I
don't understand the use of the mentioned header in the first place: header
or not, what you need/want is going to happen anyway.

In the first case, with shared sessions, you can use stickiness as well if
you like, but it is not as critical as in the case described above: Server2
and Server3 will have knowledge of Server1's sessions and it will be business
as usual.

The next request from the same source would be processed as follows by
HA-Proxy (assuming Server2 is still up):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be
sent to Server2
3. HA-Proxy forwards the request to Server2

That is already the case with HAProxy.
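
For what it's worth, you can verify this by dumping the stick table on the
runtime socket (assuming a stats socket is configured; the socket path and
table name below are just examples) and checking that the entry for that
source now carries the new server's ID:

    echo "show table bk_app" | socat stdio /var/run/haproxy.sock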

The assumption here is that selecting a new stickiness target, because the
existing stickiness server is unavailable, is something that happens rarely.

What happens in the application when the header is set:
The application will flush all relevant local caches connected to that
user/session, ensuring that the server does not work on stale data.

This allows one instance of an application to handle all requests from one
user/session, which in turn allows the application to cache data aggressively
within that specific instance. If for some reason a request is forwarded by
HA-Proxy to another application instance, that instance will be able to
determine that an instance switch has occurred and can flush its potentially
stale cache entries.
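
As a side note on our configuration, to key those per-instance caches
consistently we could also have HA-Proxy pass the same value it uses for
stickiness to the application (the header name below is arbitrary):

    # in the backend: expose the stickiness key (client source address)
    http-request set-header X-Client-Key %[src]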

You can get into trouble in the following case:
1. You are first on Server1
2. For some reason you are sent to Server2
3. For some reason you are sent to Server1 again; without the described
functionality we would risk Server1 operating on stale data

This scenario is something that could happen, for example, during high-load
situations.

Best regards,

Gisle

On 21/03/2018, 09:57, "Willy Tarreau" <w...@1wt.eu> wrote:

    On Wed, Mar 21, 2018 at 08:20:44AM +0000, Gisle Grimen wrote:
    > Hi,
    >
    > Thanks for the information. That was sad to hear. In our case the
    > traffic is coming from servers and not a web browser, so solving this
    > with cookies is not an option. The communication between the servers is
    > based on international standards, so we cannot add additional
    > requirements to the servers sending the requests. As such we have to
    > solve it within our infrastructure. With a little help from HA-Proxy we
    > could then create very efficient local caches on each node, but without
    > it we need complicated and resource-intensive shared caches or
    > databases.
    >
    > I hope this is a feature that could be added in the future, as it would
    > help to develop simpler and more efficient applications behind
    > HA-Proxy, which can in large part rely on local caches.

    The problem I'm having is that you don't describe exactly what you're
    trying to achieve, nor how you want to use that information about the
    broken stickiness, so it's very hard for me to try to figure out a
    working solution. I proposed one involving sending the initial server ID
    in a header, for example, but I have no idea whether this can work in
    your case.

    So could you please enlighten us on your architecture, the problem that
    broken stickiness causes, and how you'd like it to be addressed?

    Thanks,
    Willy




--
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000
