Hi,

On Wed, Mar 21, 2018 at 8:57 PM, Gisle Grimen <gisle.gri...@evry.com> wrote:

> Hi,
>
> I'll try to be more specific:
>
> The functionality I was looking for on HA-Proxy in connection with
> sticky-routing is the following:
>
> Normal flow all servers up (this is functionality available today):
> 1. HA-Proxy receives a request
> 2. HA-Proxy checks the sticky table and determines that that request
> should be sent to Server1
> 3. HA-Proxy forwards the request to Server1
>
> Sticky Server is down: (this is functionality I would like HA-proxy to
> have or figure out how to configure)
> 1. HA-Proxy receives a request
> 2. HA-Proxy checks the sticky table and determines that that request
> should be sent to Server1
> 3. HA-Proxy determines that Server1 is down and selects to send the
> request to Server2
> 4. HA-Proxy adds an HTTP header to the request. Example:
> sticky-destination-updated=true
> 5. HA-Proxy updates the sticky table so that further requests from this
> source are from now on sent to Server2
> 6. HA-Proxy forwards the request to Server2
>
>
It does have this, of course; see
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20redispatch
for example. If it didn't, many implementations would be broken, don't you
think?
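
Something like this minimal sketch is all that's needed for the flow you
describe (the backend name, server names/addresses and the X-Target-Server
header name are just placeholders, adjust to your setup):

    backend app_servers
        balance roundrobin
        # if the server a client is stuck to is down, re-select a live
        # one instead of failing the request
        option redispatch
        # stick clients to the server that serves them, keyed on the
        # source address; the entry follows the server that actually
        # handles the traffic, so after a redispatch further requests
        # go to the newly selected server
        stick-table type ip size 200k expire 30m
        stick on src
        # optionally tell each server which backend entry the request
        # was routed to (mostly useful for logging/statistics)
        http-send-name-header X-Target-Server
        server server1 192.168.1.11:8080 check
        server server2 192.168.1.12:8080 check
        server server3 192.168.1.13:8080 check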

I must say, though, that the use of the header you insist on is not really
clear to me, except maybe for statistics purposes on the backend. You can have
two types of backends (in terms of sessions): 1) one where each server is
aware of the other servers' sessions (shared session storage in memory or on
disk), or 2) one where each server has its own sessions. There is a third one
where no sessions are needed, but that's not of interest here.

The second case is the one for which you most probably need stickiness. In
that case, if Server1 goes down and HAProxy redistributes its connections
between, let's say, Server2 and Server3, those servers will by definition
reset the sessions (since they have no idea about them) and the user will have
to log in to the application again on their side. Once done, they will stick
to the newly elected server. Which brings me to the point where I don't
understand the usage of the mentioned header in the first place: header or
not, what you need/want is going to happen anyway.

In the first case, with shared sessions, you can use stickiness as well if you
like, but it is not as critical as in the case described above: Server2 and
Server3 will have knowledge of Server1's sessions and it will be business as
usual.


> Next request from same source would be processed as follows on HA-Proxy
> (assuming server3 is still up):
> 1. HA-Proxy receives a request
> 2. HA-Proxy checks the sticky table and determines that that request
> should be sent to Server2
> 3. HA-Proxy forwards the request to Server2
>
>
That is already the case with HAProxy.

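You can watch this happen on the runtime socket if you have one configured
(this assumes a "stats socket /var/run/haproxy.sock" line in the global
section and the backend name from the sketch above):

    echo "show table app_servers" | socat stdio /var/run/haproxy.sock

The server_id recorded for the client's entry changes from Server1's id to
Server2's id after the first redispatched request, and stays there.
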
>
> The assumption here is that selecting a new stickiness target, because the
> existing sticky server is not available, is something that happens rarely.
>
> What happens in the application when the header is set:
> The application will then flush all relevant local caches connected to
> that user/session and so on, ensuring that the server does not work on
> stale data.
>
> This allows one instance of an application to handle all requests from one
> user/session, which allows the application to apply aggressive caching of
> data within the specific instance of the application. If for some reason a
> request is forwarded by HA-Proxy to another application instance, the
> instance will be able to determine that an instance switch has occurred and
> can flush its potentially stale cache entries.
>
> You get into an issue here in the following case:
> 1. You are first on Server 1
> 2. For some reason you are sent to Server 2
> 3. For some reason you are sent to Server 1 again; without the described
> functionality we would risk Server 1 operating on stale data
>
> This scenario is something that could, for example, happen during high-load
> situations.
>
> Best regards,
>
> Gisle
>
> On 21/03/2018, 09:57, "Willy Tarreau" <w...@1wt.eu> wrote:
>
>     On Wed, Mar 21, 2018 at 08:20:44AM +0000, Gisle Grimen wrote:
>     > Hi,
>     >
>     > Thanks for the information. That was sad to hear. In our case the
>     > traffic is coming from servers and not a web browser, so solving this
>     > with cookies is not an option. The communication between the servers
>     > is based on international standards, so we cannot add additional
>     > requirements to the servers sending the requests. As such we have to
>     > solve it within our infrastructure. With a little help from HA-Proxy
>     > you could then create very efficient local caches on each node, but
>     > without it we need complicated and resource-intensive shared caches
>     > or databases.
>     >
>     > I hope this is a feature that could be added in the future, as it
>     > would help to develop simpler and more efficient applications behind
>     > HA-Proxy, which in large part can rely on local caches.
>
>     The problem I'm having is that you don't describe exactly what you're
>     trying to achieve nor how you want to use that information about the
>     broken stickiness, so it's very hard for me to figure out a working
>     solution. I proposed one involving sending the initial server ID in a
>     header, for example, but I have no idea whether this can work in your
>     case.
>
>     So could you please enlighten us on your architecture, the problem that
>     broken stickiness causes, and how you'd like it to be addressed?
>
>     Thanks,
>     Willy
>
>
>


-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000
