Re: haproxy not creating stick-table entries fast enough

2017-05-12 Thread redundantl y
On Fri, May 12, 2017 at 10:46 AM, Willy Tarreau  wrote:

> On Fri, May 12, 2017 at 10:20:02AM -0700, redundantl y wrote:
> > As I've said before, the issue here is these objects aren't hosted on the
> > same server that they're being called from.
> >
> > "A separately hosted application will generate HTML with several (20-30)
> > elements that will be loaded simultaneously by the end user's browser."
> >
> > So a user might go to www.example.com and that page will load the
> > objects from assets.example.com, which is a wholly separate server.
>
> OK but *normally* if there's parallelism when downloading objects from
> assets.example.com, then there's no dependency between them.
>
> > > The principle of stickiness is to ensure that subsequent requests will
> > > go to the same server that served the previous ones. The main goal is to
> > > ensure that all requests carrying a session cookie will end up on the
> > > server which holds this session.
> > >
> > > Here as Lukas explained, you're simulating a browser sending many
> > > totally independent requests in parallel. There's no reason (nor any
> > > way) that any equipment in the chain would guess they are related since
> > > they could arrive in any order, and even end up on multiple nodes.
> > >
> > >
> > Well, all of these requests will have the url_param email=, so the load
> > balancer has the ability to know they are related.  The issue here, at
> > least as it appears to me, is that they come in so fast the stick-table
> > entry doesn't get generated quickly enough, and the requests get
> > distributed to multiple backend servers before eventually sticking to
> > just one.
>
> It's not fast WRT the stick table but WRT the time to connect to the
> server.
> As I mentioned, the principle of stickiness is to send subsequent requests
> to the same server which *served* the previous ones. So if the first
> request is sent to server 1 and the connection fails several times, then
> it's redispatched to server 2 and succeeds, it will be server 2 which gets
> put into the table so that subsequent connections will go there as well.
>
> In your workload, there isn't even the time to validate the connection to
> the server, and *this* is what causes the problem you're seeing.
>
>
Thank you for explaining what I'm seeing. This makes a lot of sense.


> > Since changing to load balancing on the url_param our issue has been
> > resolved.
>
> So indeed you're facing the type of workload that requires a hash.
>
> > > Also, most people prefer not to apply stickiness for static objects so
> > > that they can be retrieved in parallel from all static servers instead
> > > of all hammering the same server. It might possibly not be your case
> > > based on your explanation, but this is what people usually do for a
> > > better user experience.
> > >
> > >
> > The objects aren't static.  When they're loaded the application makes
> > some calls to external services (3rd party application, database server)
> > to produce the desired objects and links.
>
> OK I see. Then you'd better stick to the hash using url_param. You can
> improve this by combining it with stick anyway if your url_params are
> frequently reused (eg: many requests per client). This will avoid
> redistributing innocent connections in the event a server is added or
> removed and the hash is recomputed. That can be especially true if your
> 3rd party application sometimes has long response times and the
> probability of a server outage between the first and the last request for
> a client becomes high.
>
>
Thank you for pointing this out, we hadn't considered this scenario.


> > > In conclusion, your expected use case still seems quite obscure to me
> > > :-/
> > >
> > > Willy
> > >
> >
> > I agree, our use case is fairly unique.
>
> It looks so :-)
>
> Willy
>

Thanks for taking the time to read and respond.  It was very informative
and helpful.


Re: haproxy not creating stick-table entries fast enough

2017-05-12 Thread Willy Tarreau
On Fri, May 12, 2017 at 10:20:02AM -0700, redundantl y wrote:
> As I've said before, the issue here is these objects aren't hosted on the
> same server that they're being called from.
> 
> "A separately hosted application will generate HTML with several (20-30)
> elements that will be loaded simultaneously by the end user's browser."
> 
> So a user might go to www.example.com and that page will load the objects
> from assets.example.com, which is a wholly separate server.

OK but *normally* if there's parallelism when downloading objects from
assets.example.com, then there's no dependency between them.

> > The principle of stickiness is to ensure that subsequent requests will go
> > to the same server that served the previous ones. The main goal is to
> > ensure that all requests carrying a session cookie will end up on the
> > server which holds this session.
> >
> > Here as Lukas explained, you're simulating a browser sending many totally
> > independent requests in parallel. There's no reason (nor any way) that
> > any equipment in the chain would guess they are related since they could
> > arrive in any order, and even end up on multiple nodes.
> >
> >
> Well, all of these requests will have the url_param email=, so the load
> balancer has the ability to know they are related.  The issue here, at
> least how it appears to me, is since they come in so fast the stick-table
> entry doesn't get generated quickly enough and the requests get distributed
> to multiple backend servers and eventually stick to just one.

It's not fast WRT the stick table but WRT the time to connect to the server.
As I mentioned, the principle of stickiness is to send subsequent requests
to the same server which *served* the previous ones. So if the first request
is sent to server 1 and the connection fails several times, then it's
redispatched to server 2 and succeeds, it will be server 2 which gets put
into the table so that subsequent connections will go there as well.

In your workload, there isn't even the time to validate the connection to the
server, and *this* is what causes the problem you're seeing.
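The retry/redispatch behaviour described here is governed by haproxy's
connection retry settings. A minimal sketch of the relevant knobs (the
timeout and retry values are illustrative, not taken from this thread):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Retry a failed connection to the chosen server a few times...
    retries 3
    # ...then allow the request to be redispatched to another server.
    # The server that finally *serves* the request is the one recorded
    # for stickiness.
    option redispatch
```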

> Since changing to load balancing on the url_param our issue has been
> resolved.

So indeed you're facing the type of workload that requires a hash.

> > Also, most people prefer not to apply stickiness for static objects so that
> > they can be retrieved in parallel from all static servers instead of all
> > hammering the same server. It might possibly not be your case based on your
> > explanation, but this is what people usually do for a better user
> > experience.
> >
> >
> The objects aren't static.  When they're loaded the application makes some
> calls to external services (3rd party application, database server) to
> produce the desired objects and links.

OK I see. Then you'd better stick to the hash using url_param. You can
improve this by combining it with stick anyway if your url_params are
frequently reused (eg: many requests per client). This will avoid
redistributing innocent connections in the event a server is added or
removed and the hash is recomputed. That can be especially true if your
3rd party application sometimes has long response times and the probability
of a server outage between the first and the last request for a client
becomes high.
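Combining the url_param hash with stickiness, as suggested here, might look
roughly like the following backend fragment (a sketch only; the addresses
and table sizing are assumptions, not from this thread):

```
backend backend
    # Hash on the email URL parameter so that all requests carrying the
    # same value map to the same server, regardless of arrival order.
    balance url_param email

    # Once a server has actually served a request for a given email,
    # record it, so clients are not redistributed when the hash is
    # recomputed after a server is added or removed.
    stick-table type string len 64 size 100k expire 30m
    stick on url_param(email)

    server server-1 10.0.0.1:80 check
    server server-2 10.0.0.2:80 check
```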

> > In conclusion, your expected use case still seems quite obscure to me :-/
> >
> > Willy
> >
> 
> I agree, our use case is fairly unique.

It looks so :-)

Willy



Re: haproxy not creating stick-table entries fast enough

2017-05-12 Thread redundantl y
On Fri, May 12, 2017 at 12:51 AM, Willy Tarreau  wrote:

> On Tue, May 09, 2017 at 09:43:22PM -0700, redundantl y wrote:
> > For example, I have tried with the latest versions of Firefox, Safari,
> > and Chrome.  With 30 elements on the page being loaded from the server
> > they're all being loaded within 70ms of each other, the first 5 or so
> > happening on the same millisecond.  I'm seeing similar behaviour, being
> > sent to alternating backend servers until it "settles" and sticks to
> > just one.
>
> That's only true after the browser starts to retrieve the main page which
> gives it the indication that it needs to request such objects. You *always*
> have a first request before all other ones. The browser cannot guess it
> will have to retrieve many objects out of nowhere.
>
>
As I've said before, the issue here is these objects aren't hosted on the
same server that they're being called from.

"A separately hosted application will generate HTML with several (20-30)
elements that will be loaded simultaneously by the end user's browser."

So a user might go to www.example.com and that page will load the objects
from assets.example.com, which is a wholly separate server.


> The principle of stickiness is to ensure that subsequent requests will go
> to the same server that served the previous ones. The main goal is to
> ensure that all requests carrying a session cookie will end up on the
> server which holds this session.
>
> Here as Lukas explained, you're simulating a browser sending many totally
> independent requests in parallel. There's no reason (nor any way) that
> any equipment in the chain would guess they are related since they could
> arrive in any order, and even end up on multiple nodes.
>
>
Well, all of these requests will have the url_param email=, so the load
balancer has the ability to know they are related.  The issue here, at
least as it appears to me, is that they come in so fast the stick-table
entry doesn't get generated quickly enough, and the requests get distributed
to multiple backend servers before eventually sticking to just one.


> If despite this that's what you need (for a very obscure reason), then
> you'd be better off using hashing for this. It will ensure that the same
> distribution algorithm is applied to all these requests regardless of
> their ordering. But let me tell you that it still makes me feel like
> you're trying to address the wrong problem.
>
>
Since changing to load balancing on the url_param our issue has been
resolved.


> Also, most people prefer not to apply stickiness for static objects so that
> they can be retrieved in parallel from all static servers instead of all
> hammering the same server. It might possibly not be your case based on your
> explanation, but this is what people usually do for a better user
> experience.
>
>
The objects aren't static.  When they're loaded the application makes some
calls to external services (3rd party application, database server) to
produce the desired objects and links.


> In conclusion, your expected use case still seems quite obscure to me :-/
>
> Willy
>

I agree, our use case is fairly unique.


Re: haproxy not creating stick-table entries fast enough

2017-05-12 Thread Willy Tarreau
On Tue, May 09, 2017 at 09:43:22PM -0700, redundantl y wrote:
> For example, I have tried with the latest versions of Firefox, Safari, and
> Chrome.  With 30 elements on the page being loaded from the server they're
> all being loaded within 70ms of each other, the first 5 or so happening on
> the same millisecond.  I'm seeing similar behaviour, being sent to
> alternating backend servers until it "settles" and sticks to just one.

That's only true after the browser starts to retrieve the main page which
gives it the indication that it needs to request such objects. You *always*
have a first request before all other ones. The browser cannot guess it
will have to retrieve many objects out of nowhere.

The principle of stickiness is to ensure that subsequent requests will go
to the same server that served the previous ones. The main goal is to
ensure that all requests carrying a session cookie will end up on the
server which holds this session.

Here as Lukas explained, you're simulating a browser sending many totally
independent requests in parallel. There's no reason (nor any way) that
any equipment in the chain would guess they are related since they could
arrive in any order, and even end up on multiple nodes.

If despite this that's what you need (for a very obscure reason), then
you'd be better off using hashing for this. It will ensure that the same
distribution algorithm is applied to all these requests regardless of their
ordering. But let me tell you that it still makes me feel like you're trying
to address the wrong problem.

Also, most people prefer not to apply stickiness for static objects so that
they can be retrieved in parallel from all static servers instead of all
hammering the same server. It might possibly not be your case based on your
explanation, but this is what people usually do for a better user experience.

In conclusion, your expected use case still seems quite obscure to me :-/

Willy



Re: haproxy not creating stick-table entries fast enough

2017-05-09 Thread redundantl y
On Tue, May 9, 2017 at 2:11 PM, Lukas Tribus  wrote:

> Hello,
>
>
> Am 09.05.2017 um 02:52 schrieb redundantl y:
> > The way ab is being executed is in line with our real-world use.  A
> > separately hosted application will generate HTML with several (20-30)
> > elements that will be loaded simultaneously by the end user's
> > browser.  There isn't a delay, the elements aren't loaded sequentially.
>
> I understand that, but I still doubt that a browser will open 10
> concurrent TCP connections within a millisecond. Also, browsers ALWAYS
> use keepalive, which you don't consider here (I don't know if you enabled
> keepalive in haproxy though). I strongly suggest you look at the actual
> browsers' behavior (when keepalive is enabled in haproxy) before you
> continue investing your time in a problem that may be hypothetical.
>
>
It does open a lot of concurrent connections, within tens of milliseconds
of each other.

For example, I have tried with the latest versions of Firefox, Safari, and
Chrome.  With 30 elements on the page being loaded from the server they're
all being loaded within 70ms of each other, the first 5 or so happening on
the same millisecond.  I'm seeing similar behaviour, being sent to
alternating backend servers until it "settles" and sticks to just one.

From what I've read, keep-alive is enabled by default with haproxy 1.5 on
CentOS 7.


>
> > So is this behaviour we're seeing with haproxy expected?  There's no
> > additional options we can try to make it create and use a stick-table
> > entry faster?
>
> This is not about the stick-table being slow. It's about requests being
> processed concurrently in haproxy, so one cannot depend on the other.
> To get what you need, haproxy would have to block/queue incoming
> requests (I guess you could set "maxconn 1" in your frontend, but don't
> run this in production).
>
>
From what I can tell this is about the stick-table entry not being created
fast enough. I'd love to be wrong about this.


>
> > In the meantime, we've changed the load balancing method from round
> > robin to URI and are seeing the behaviour we desire, it'll just carry
> > the risk of not having an even distribution of load to the backend
> > servers.
>
> Concurrent requests from ab combined with concurrent request handling in
> haproxy with roundrobin balancing will lead to this behavior. I don't
> see how haproxy could behave differently in this configuration.
>
>
> cheers,
> lukas
>
>
I misspoke, I didn't change it to URI.

Instead of sticking on a url_param I'm now load balancing on the url_param
desired.  Specifically I have the following in my backend configuration:

balance url_param email

haproxy is creating a hash based on the value of that parameter and
deciding which backend server to send it to, so all connections (whether I
use ab or one of three web browsers) are being sent to a single backend
server.  My concern with this was whether the hashing method being used was
random enough to create an even split between the backend servers. So far
in our testing it appears that it is pretty evenly split.
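The "balance url_param email" line lives in a backend section; a minimal
sketch of the surrounding configuration (the server names match the logs
earlier in the thread, the addresses are invented for illustration):

```
backend backend
    # haproxy hashes the value of the "email" URL parameter and uses the
    # result to pick a server, so all requests carrying the same value go
    # to the same server as long as the server set is unchanged.
    balance url_param email
    server server-1 10.0.0.1:80 check
    server server-2 10.0.0.2:80 check
```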


Re: haproxy not creating stick-table entries fast enough

2017-05-09 Thread Lukas Tribus
Hello,


Am 09.05.2017 um 02:52 schrieb redundantl y:
> The way ab is being executed is in line with our real-world use.  A
> separately hosted application will generate HTML with several (20-30)
> elements that will be loaded simultaneously by the end user's
> browser.  There isn't a delay, the elements aren't loaded sequentially.

I understand that, but I still doubt that a browser will open 10
concurrent TCP connections within a millisecond. Also, browsers ALWAYS
use keepalive, which you don't consider here (I don't know if you enabled
keepalive in haproxy though). I strongly suggest you look at the actual
browsers' behavior (when keepalive is enabled in haproxy) before you
continue investing your time in a problem that may be hypothetical.



> So is this behaviour we're seeing with haproxy expected?  There's no
> additional options we can try to make it create and use a stick-table
> entry faster?

This is not about the stick-table being slow. It's about requests being
processed concurrently in haproxy, so one cannot depend on the other.
To get what you need, haproxy would have to block/queue incoming
requests (I guess you could set "maxconn 1" in your frontend, but don't
run this in production).



> In the meantime, we've changed the load balancing method from round
> robin to URI and are seeing the behaviour we desire, it'll just carry
> the risk of not having an even distribution of load to the backend
> servers.

Concurrent requests from ab combined with concurrent request handling in
haproxy with roundrobin balancing will lead to this behavior. I don't
see how haproxy could behave differently in this configuration.


cheers,
lukas




Re: haproxy not creating stick-table entries fast enough

2017-05-08 Thread redundantl y
On Mon, May 8, 2017 at 4:01 PM, Lukas Tribus  wrote:

> Hello,
>
>
> Am 09.05.2017 um 00:38 schrieb redundantl y:
> > I am running haproxy 1.5.18-3 on CentOS 7 and need to use the
> > stick-table feature to make sure traffic for a specific user persists
> > to a given server.
> >
> > Things work fine when connections come in slowly, however when there's
> > numerous simultaneous connections and a stick-table entry doesn't
> > exist yet some requests will be sent to both backend servers until
> > they eventually stick to just one.
> >
> > For example, using Apache Bench to test:
> >
> > ab -c 10 -n 30 'http://example.com/index.php?email=a...@example.com'
> >
> > I see this in the haproxy log:
> >
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50812
> >  [08/May/2017:14:49:10.934] http_front
> > backend/server-1 0/0/0/7/7 200 222 - -  9/9/9/4/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50811
> >  [08/May/2017:14:49:10.933] http_front
> > backend/server-2 0/0/0/8/8 200 222 - -  8/8/8/4/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50816
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-1 0/0/0/7/7 200 222 - -  7/7/7/1/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50819
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-2 0/0/1/6/7 200 222 - -  6/6/6/1/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50814
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-1 0/0/0/7/7 200 222 - -  5/5/5/1/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50810
> >  [08/May/2017:14:49:10.933] http_front
> > backend/server-1 0/0/0/9/9 200 222 - -  4/4/4/0/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50813
> >  [08/May/2017:14:49:10.934] http_front
> > backend/server-2 0/0/0/8/8 200 222 - -  3/3/3/0/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50815
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-2 0/0/0/7/7 200 222 - -  2/2/2/0/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50817
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-2 0/0/0/7/8 200 222 - -  1/1/1/0/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50818
> >  [08/May/2017:14:49:10.935] http_front
> > backend/server-1 0/0/1/6/8 200 222 - -  0/0/0/0/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50820
> >  [08/May/2017:14:49:10.967] http_front
> > backend/server-1 0/0/0/5/5 200 222 - -  3/3/2/2/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50821
> >  [08/May/2017:14:49:10.968] http_front
> > backend/server-1 0/0/0/4/4 200 222 - -  2/2/2/2/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50823
> >  [08/May/2017:14:49:10.972] http_front
> > backend/server-1 0/0/1/5/6 200 222 - -  7/7/7/7/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50822
> >  [08/May/2017:14:49:10.972] http_front
> > backend/server-1 0/0/0/8/8 200 222 - -  6/6/6/6/0 0/0 "GET
> > /index.php?email=a...@example.com  HTTP/1.0"
> > [...]
> >
> > After this point haproxy correctly sends all traffic to server-1. When
> > the stick-table entry expires the problem occurs again.
> >
> > I have tried persisting off a url parameter and source address, both
> > exhibit the same issue.
> >
> > Is haproxy unable to properly handle numerous simultaneous
> > (concurrent) requests like this? Is there something I can do to get
> > this to work as desired?
>
> Those 10 concurrent requests are not processed sequentially; haproxy is
> an event-loop based application and handles those requests in parallel.

Re: haproxy not creating stick-table entries fast enough

2017-05-08 Thread Lukas Tribus
Hello,


Am 09.05.2017 um 00:38 schrieb redundantl y:
> I am running haproxy 1.5.18-3 on CentOS 7 and need to use the
> stick-table feature to make sure traffic for a specific user persists
> to a given server.
>
> Things work fine when connections come in slowly, however when there's
> numerous simultaneous connections and a stick-table entry doesn't
> exist yet some requests will be sent to both backend servers until
> they eventually stick to just one.
>
> For example, using Apache Bench to test:
>
> ab -c 10 -n 30 'http://example.com/index.php?email=a...@example.com'
>
> I see this in the haproxy log:
>
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50812
>  [08/May/2017:14:49:10.934] http_front
> backend/server-1 0/0/0/7/7 200 222 - -  9/9/9/4/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50811
>  [08/May/2017:14:49:10.933] http_front
> backend/server-2 0/0/0/8/8 200 222 - -  8/8/8/4/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50816
>  [08/May/2017:14:49:10.935] http_front
> backend/server-1 0/0/0/7/7 200 222 - -  7/7/7/1/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50819
>  [08/May/2017:14:49:10.935] http_front
> backend/server-2 0/0/1/6/7 200 222 - -  6/6/6/1/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50814
>  [08/May/2017:14:49:10.935] http_front
> backend/server-1 0/0/0/7/7 200 222 - -  5/5/5/1/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50810
>  [08/May/2017:14:49:10.933] http_front
> backend/server-1 0/0/0/9/9 200 222 - -  4/4/4/0/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50813
>  [08/May/2017:14:49:10.934] http_front
> backend/server-2 0/0/0/8/8 200 222 - -  3/3/3/0/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50815
>  [08/May/2017:14:49:10.935] http_front
> backend/server-2 0/0/0/7/7 200 222 - -  2/2/2/0/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50817
>  [08/May/2017:14:49:10.935] http_front
> backend/server-2 0/0/0/7/8 200 222 - -  1/1/1/0/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50818
>  [08/May/2017:14:49:10.935] http_front
> backend/server-1 0/0/1/6/8 200 222 - -  0/0/0/0/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50820
>  [08/May/2017:14:49:10.967] http_front
> backend/server-1 0/0/0/5/5 200 222 - -  3/3/2/2/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50821
>  [08/May/2017:14:49:10.968] http_front
> backend/server-1 0/0/0/4/4 200 222 - -  2/2/2/2/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50823
>  [08/May/2017:14:49:10.972] http_front
> backend/server-1 0/0/1/5/6 200 222 - -  7/7/7/7/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50822
>  [08/May/2017:14:49:10.972] http_front
> backend/server-1 0/0/0/8/8 200 222 - -  6/6/6/6/0 0/0 "GET
> /index.php?email=a...@example.com  HTTP/1.0"
> [...]
>
> After this point haproxy correctly sends all traffic to server-1. When
> the stick-table entry expires the problem occurs again.
>
> I have tried persisting off a url parameter and source address, both
> exhibit the same issue.
>
> Is haproxy unable to properly handle numerous simultaneous
> (concurrent) requests like this? Is there something I can do to get
> this to work as desired?

Those 10 concurrent requests are not processed sequentially; haproxy is
an event-loop based application and handles those requests in parallel,
meaning in this case all 10 of those requests are load-balanced without
stickiness.

Use "-c 1" and maybe "-k" to enable keepalive in ab, which will be more in
line with what happens in the real world.


Regards,
Lukas




haproxy not creating stick-table entries fast enough

2017-05-08 Thread redundantl y
I am running haproxy 1.5.18-3 on CentOS 7 and need to use the stick-table
feature to make sure traffic for a specific user persists to a given server.

Things work fine when connections come in slowly, however when there's
numerous simultaneous connections and a stick-table entry doesn't exist yet
some requests will be sent to both backend servers until they eventually
stick to just one.

For example, using Apache Bench to test:

ab -c 10 -n 30 'http://example.com/index.php?email=a...@example.com'

I see this in the haproxy log:

May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50812
[08/May/2017:14:49:10.934] http_front backend/server-1 0/0/0/7/7 200 222 -
-  9/9/9/4/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50811
[08/May/2017:14:49:10.933] http_front backend/server-2 0/0/0/8/8 200 222 -
-  8/8/8/4/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50816
[08/May/2017:14:49:10.935] http_front backend/server-1 0/0/0/7/7 200 222 -
-  7/7/7/1/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50819
[08/May/2017:14:49:10.935] http_front backend/server-2 0/0/1/6/7 200 222 -
-  6/6/6/1/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50814
[08/May/2017:14:49:10.935] http_front backend/server-1 0/0/0/7/7 200 222 -
-  5/5/5/1/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50810
[08/May/2017:14:49:10.933] http_front backend/server-1 0/0/0/9/9 200 222 -
-  4/4/4/0/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50813
[08/May/2017:14:49:10.934] http_front backend/server-2 0/0/0/8/8 200 222 -
-  3/3/3/0/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50815
[08/May/2017:14:49:10.935] http_front backend/server-2 0/0/0/7/7 200 222 -
-  2/2/2/0/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50817
[08/May/2017:14:49:10.935] http_front backend/server-2 0/0/0/7/8 200 222 -
-  1/1/1/0/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50818
[08/May/2017:14:49:10.935] http_front backend/server-1 0/0/1/6/8 200 222 -
-  0/0/0/0/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50820
[08/May/2017:14:49:10.967] http_front backend/server-1 0/0/0/5/5 200 222 -
-  3/3/2/2/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50821
[08/May/2017:14:49:10.968] http_front backend/server-1 0/0/0/4/4 200 222 -
-  2/2/2/2/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50823
[08/May/2017:14:49:10.972] http_front backend/server-1 0/0/1/5/6 200 222 -
-  7/7/7/7/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
May  8 14:49:10 localhost haproxy[4996]: 1.2.3.4:50822
[08/May/2017:14:49:10.972] http_front backend/server-1 0/0/0/8/8 200 222 -
-  6/6/6/6/0 0/0 "GET /index.php?email=a...@example.com HTTP/1.0"
[...]

After this point haproxy correctly sends all traffic to server-1. When the
stick-table entry expires the problem occurs again.

I have tried persisting off a url parameter and source address, both
exhibit the same issue.

Is haproxy unable to properly handle numerous simultaneous (concurrent)
requests like this? Is there something I can do to get this to work as
desired?

Thanks.
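For reference, the stick-table setup described at the top of this message
might look roughly like the following backend fragment (a sketch only; the
table sizing, expire value, and addresses are assumptions). Note, as the
rest of the thread explains, that an entry is only stored once a server has
served a request, so a burst of simultaneous first requests can still be
round-robined across both servers:

```
backend backend
    balance roundrobin
    # Track clients by the "email" URL parameter; the entry is written
    # after a request has been served, not when it is received.
    stick-table type string len 64 size 100k expire 10m
    stick on url_param(email)
    server server-1 10.0.0.1:80 check
    server server-2 10.0.0.2:80 check
```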