Hi, 

I'll try to be more specific:

The functionality I am looking for in HA-Proxy in connection with 
sticky routing is the following:

Normal flow, all servers up (this functionality is available today):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be 
sent to Server1
3. HA-Proxy forwards the request to Server1
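The normal flow above corresponds to an ordinary source-based stick table; a minimal sketch of such a backend (server names and addresses are just placeholders):

```
backend app
    balance roundrobin
    # Track which server each source address was last routed to
    stick-table type ip size 200k expire 30m
    stick on src
    server server1 192.0.2.1:8080 check
    server server2 192.0.2.2:8080 check
```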

Sticky server is down (this is functionality I would like HA-Proxy to have, or 
that I would like to figure out how to configure):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be 
sent to Server1
3. HA-Proxy determines that Server1 is down and decides to send the request to 
Server2
4. HA-Proxy adds an HTTP header to the request. Example: 
sticky-destination-updated=true
5. HA-Proxy updates the sticky table so that further requests from this source 
are sent to Server2
6. HA-Proxy forwards the request to Server2
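As far as I understand, step 4 is the part that does not exist today. The closest existing building block I am aware of is the `http-send-name-header` directive, which makes HA-Proxy add the name of the server the request is ultimately routed to as a request header; the application could compare that name against its own identity to detect a switch, which is close to the server-ID-in-a-header idea you proposed. A rough sketch (the header name and addresses are arbitrary):

```
backend app
    stick-table type ip size 200k expire 30m
    stick on src
    # Insert the name of the server actually chosen for this request;
    # an application instance that sees a name other than its own knows
    # a switch has occurred and can flush its local caches
    http-send-name-header X-Target-Server
    server server1 192.0.2.1:8080 check
    server server2 192.0.2.2:8080 check
```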

Next request from the same source would be processed as follows on HA-Proxy 
(assuming Server2 is still up):
1. HA-Proxy receives a request
2. HA-Proxy checks the sticky table and determines that the request should be 
sent to Server2
3. HA-Proxy forwards the request to Server2


The assumption here is that selecting a new stickiness target because the 
existing stickiness server is unavailable is something that happens rarely.

What happens in the application when the header is set:
The application flushes all relevant local caches connected to that 
user/session, ensuring that the server does not work on stale data.
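As an illustration, the application-side reaction could look roughly like this (Python-style sketch; the header name follows the proposed `sticky-destination-updated` header, and `load_session` is a hypothetical helper for fetching authoritative state):

```python
# Per-instance local cache, keyed by session id
local_cache = {}

def load_session(session_id):
    # Hypothetical placeholder: fetch authoritative state
    # (database, shared store, etc.)
    return {"session": session_id}

def handle_request(headers, session_id):
    # If the proxy signals that this session was just re-routed to this
    # instance, drop everything we cached for it: it may be stale.
    if headers.get("sticky-destination-updated") == "true":
        local_cache.pop(session_id, None)
    # Reuse the cached entry if present, otherwise load fresh state
    return local_cache.setdefault(session_id, load_session(session_id))
```

The point is that the flush only happens on the rare re-routing event; all other requests hit the warm local cache.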

This allows one instance of an application to handle all requests from one 
user/session, which allows the application to apply aggressive caching of 
data within that specific instance. If for some reason a request is 
forwarded by HA-Proxy to another application instance, the instance will be 
able to determine that an instance switch has occurred and can flush its 
potentially stale cache entries.

You get into issues in the following case:
1. You are first on Server1
2. For some reason you are sent to Server2
3. For some reason you are sent to Server1 again; without the described 
functionality we would risk that Server1 operates on stale data

This scenario is something that could happen, for example, during high-load 
situations.

Best regards,

Gisle 
 
On 21/03/2018, 09:57, "Willy Tarreau" <w...@1wt.eu> wrote:

    On Wed, Mar 21, 2018 at 08:20:44AM +0000, Gisle Grimen wrote:
    > Hi,
    > 
    > Thanks for the information. That was sad to hear. In our case the traffic is
    > coming from servers and not a web browser so solving this with cookies are
    > not an option. The communication between the servers are based on
    > international standards as such we cannot add additional requirements to the
    > server sending the requests. As such we have to solve it within our
    > infrastructure. With a little help from HA-proxy you could then create very
    > efficient local caches on each node, but without we need complicated and
    > resource intensive shared caches or databases.
    > 
    > I hope this would be a feature that is possible to add in the future as it
    > would help to develop simpler and more efficient applications behind
    > HA-Proxy, which in large part can rely in local caches.
    
    The problem I'm having is that you don't describe exactly what you're
    trying to achieve nor how you want to use that information about the
    broken stickiness, so it's very hard for me to try to figure a working
    solution. I proposed one involving sending the initial server ID in a
    header for example but I have no idea whether this can work in your case.
    
    So could you please enlighten us on your architecture, the problem that
    broken stickiness causes and how you'd like it to be addressed ?
    
    Thanks,
    Willy
    
