Re: Haproxy - time to split traffic on servers

2014-11-14 Thread Jonathan Matthews
On 14 November 2014 22:59, Gorj Design ( Dragos )  wrote:
> Hello,
>
> I have been using Haproxy to split the traffic between my servers.
> I have a haproxy server and 2 servers that receive the traffic using
> round robin.
> The traffic is usually split very well: 50% on one server and 50% on the
> other.
>
> But at some point, the traffic comes in very fast, for example:
> 2014-11-14T20:43:15.702Z
> 2014-11-14T20:43:15.703Z
> 2014-11-14T20:43:15.704Z
> 2014-11-14T20:43:15.705Z
> 2014-11-14T20:43:15.706Z
> ...
> From 15.702 to 15.706, hundreds of incoming requests arrive,
> and all are sent to server one.
>
> Can I set it somehow so the traffic is split evenly even when requests
> arrive only milliseconds apart?

I don't believe you /should/ be seeing this pattern/problem with a
simple round-robin setup. Are you *positive* that neither server was
marked down for any period, no matter how small?

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#balance
describes your load balancing algorithm choices. I know it warns
against leastconn with short-lived connections, but I've never had any
problems with using that algorithm for HTTP :-)
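For reference, switching the algorithm is a one-line change in the
backend or listen section; a minimal sketch (names and addresses here
are placeholders, not from your setup):

    backend webfarm
        balance leastconn
        server web1 192.0.2.10:80 check
        server web2 192.0.2.11:80 check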

Jonathan



Multi-line reqirep?

2014-11-14 Thread Rodney Smith
Is there a way to do a multi-line reqirep? I want to set the host and port
based on elements in the path. Something like this where some path parts
end up in a rewritten host:

http://some.domain.com:80/dynamic/evaluation/1400/path ->
http://evaluation.domain.com:1400/dynamic/path

(The 'evaluation' parts aren't known ahead of time).

Or is there another way to do it?
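For what it's worth, each reqirep rule matches a single line of the
request, and regex captures do not carry over from one rule to the next,
so a sketch like the one below can strip the path, but can only set Host
to a fixed value, not one derived from the path (names and values are
illustrative):

    # rewrite the request line, dropping the dynamic segments
    reqirep ^([^\ ]+)\ /dynamic/[^/]+/[0-9]+/(.*) \1\ /dynamic/\2
    # a second rule can rewrite Host:, but only to a fixed value,
    # since the captures from the rule above are out of scope here
    reqirep ^Host:\ .* Host:\ evaluation.domain.com:1400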


Haproxy - time to split traffic on servers

2014-11-14 Thread Gorj Design ( Dragos )
Hello,

I have been using Haproxy to split the traffic between my servers. I have a
haproxy server and 2 servers that receive the traffic using round robin. The
traffic is usually split very well: 50% on one server and 50% on the other.

But at some point, the traffic comes in very fast, for example:
2014-11-14T20:43:15.702Z
2014-11-14T20:43:15.703Z
2014-11-14T20:43:15.704Z
2014-11-14T20:43:15.705Z
2014-11-14T20:43:15.706Z
...
From 15.702 to 15.706, hundreds of incoming requests arrive, and all are
sent to server one.

Can I set it somehow so the traffic is split evenly even when requests
arrive only milliseconds apart?



Re: Haproxy SSL termination - will it be fast enough?

2014-11-14 Thread Shawn Heisey
On 11/14/2014 11:09 AM, Shawn Heisey wrote:
> I have a co-worker that is concerned with the idea of moving SSL
> termination to haproxy, rather than using LVS to NAT the SSL to back end
> servers directly.  It would be handled by one machine, with
> corosync/pacemaker providing responsive failover to a redundant host.

I got a reply off-list:

On 11/14/2014 1:43 PM, Malcolm Turnbull wrote:
> I would say 100 to 200 TPS would be your sensible maximum - 300 would
> kill the box:
> http://blog.loadbalancer.org/ssl-offload-testing/
>
> You can just use apache bench for load testing.

Looking at that URL, a CPU similar to mine but a little faster, an
Intel(R) Celeron(R) CPU 440 @ 2.00GHz, shows a termination rate of over
300 per second.  My CPU is 1.8 GHz.  Another difference is that I have a
2048-bit certificate, while the test used a 1024-bit one.

By sampling our http logs on one of our busier sites, I have concluded
that our request rate is very likely below 100 per second, so I suspect
that this server will easily handle our traffic, especially if we use
http for the back end and make sure keepalive is enabled.  Right now
most of our traffic is unencrypted, so if we migrate everything to SSL,
we probably will want upgraded load balancer hardware.

Thanks,
Shawn




Support for fair share concurrent request scheduling?

2014-11-14 Thread Jesse Hathaway
Does haproxy have support for fair share concurrent request scheduling?

Description:

Give each user at least their fair share of concurrent connections, based
on the current number of users; if capacity exceeds those allotments,
share the excess capacity fairly among the users that have requested more
than their share.

Example 1:

  10 backend connections
  5 users (a, b, c, d, e) who each request 2 concurrent connections

  Result: a, b, c, d and e each get 2 concurrent connections

Example 2:

  10 backend connections
  3 users (a, b, c) who each request 2 concurrent connections
  1 user (d) who requests 4 concurrent connections

  Result: a, b, c each get 2 concurrent connections
  d gets 4 concurrent connections

Example 3:

  10 backend connections
  3 users (a, b, c) who each request 1 concurrent connection
  2 users (d, e) who each request 5 concurrent connections

  Result: a, b, c each get 1 concurrent connection
  d gets 3 concurrent connections
  e gets 4 concurrent connections
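What these examples describe looks like integer max-min fair allocation.
As a sketch of the arithmetic only (this is not an HAProxy feature, and
the function below is my own illustration):

```python
def max_min_fair(capacity, demands):
    """Allocate `capacity` integer units among users with the given
    demands (dict of name -> requested units), max-min fairly: hand
    out one unit at a time to the unsatisfied user with the fewest."""
    alloc = {user: 0 for user in demands}
    remaining = capacity
    while remaining > 0:
        unsatisfied = [u for u in demands if alloc[u] < demands[u]]
        if not unsatisfied:
            break  # every demand is met; leftover capacity stays unused
        neediest = min(unsatisfied, key=lambda u: alloc[u])
        alloc[neediest] += 1
        remaining -= 1
    return alloc

# Example 2 above: a, b, c want 2 each, d wants 4, capacity 10
print(max_min_fair(10, {"a": 2, "b": 2, "c": 2, "d": 4}))
# -> {'a': 2, 'b': 2, 'c': 2, 'd': 4}
```

Note that in Example 3 the odd leftover unit (d gets 3, e gets 4) comes
down to an arbitrary tie-break between two equally loaded users.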






Re: HTTP Status Code of -1 in the logs

2014-11-14 Thread Tait Clarridge
To add to this: it only happens in http-keep-alive mode, and HAProxy
appears to close the connection without sending a response back if it is
not the first transaction in the keep-alive session and there is a
server-side connection timeout. If I use option httpclose there are no
status codes of -1, but that is not a solution for us.

As our use of HAProxy in this case is server-to-server, we need to always
respond with valid HTTP, otherwise we get throttled because it looks like
an HTTP timeout. We use errorfile to inject a Connection: close header
(and a 204/200 response) for these cases, which the calling server's
(client) library should respect by re-establishing a connection.
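For context, that injection is just a plain errorfile directive plus a
hand-written response file, roughly as below (the path and the choice of
503 are illustrative, from our setup):

    # in the backend section: serve a pre-built response instead of the
    # usual 503 page when the server connection fails
    errorfile 503 /etc/haproxy/errors/204-close.http

    # contents of /etc/haproxy/errors/204-close.http (raw HTTP bytes,
    # sent verbatim, ending with an empty line):
    #   HTTP/1.1 204 No Content
    #   Connection: close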

We are continuing to dig, just thought I'd update.

On Wed, Nov 12, 2014 at 9:28 PM, Tait Clarridge  wrote:
> Just saw this initial message in its plain text form - that's the last
> time I send a mailing list message with that mail client.
>
> Here is the original message:
> --
> Hello,
>
> Recently I noticed log lines with a status code of -1. I'm not sure how
> long this has been going on for, though, so I don't know whether it is
> a new issue or an older one.
>
> In our environment we set strict timeouts as requests (once we get
> them) need to be processed in < 80ms.
>
> A cleaned up version of the config is located here:
> http://pastebin.com/FK3KG1Gi
>
>
> They always happen with a backend server connect timeout, but I also
> see 503 responded for the same type of timeout.
>
> Logs (custom log format removed):
>
> Nov 12 21:07:53 haproxy[13800]: x.x.x.x:xx [12/Nov/2014:21:07:53.646]
> incoming bproxy-client1/server1 0/0/-1/-1/4 503 27 - - sC--
> 2046/2046/3/1/0 0/0 "POST /url/here HTTP/1.1"
> Nov 12 21:07:53 haproxy[13800]: x.x.x.x:xx [12/Nov/2014:21:07:53.608]
> incoming bproxy-client1/server1 38/0/-1/-1/43 -1 0 - - sC--
> 2047/2047/2/1/0 0/0 "POST /url/here HTTP/1.1"
>
> I know the default status of each txn is set to -1, so I'm not sure
> where along the line this is bailing out.
>
> This issue is reproducible on 1.5.{dev24, 4, 8}
>
> I am working on reproducing this outside of production.
>
> Thanks!
> Tait



Re: Can I insert a prefix cookie rather than read an existing one?

2014-11-14 Thread Malcolm Turnbull
Baptiste,

I was hoping that was not the case :-).

My main goal was to make it completely application agnostic, never
mind I'll stick with the application cookie version.

Thanks very much.


On 14 November 2014 15:40, Baptiste  wrote:
> On Fri, Nov 14, 2014 at 1:01 PM, Malcolm Turnbull
>  wrote:
>> I was just playing around with the configuration from the excellent
>> blog entry on e-commerce overload protection:
>> http://blog.haproxy.com/2012/09/19/application-delivery-controller-and-ecommerce-websites/
>>
>> If you have a PHPSession or ASPsessionID cookie then you can track the
>> total number of users as follows:
>>
>> listen L7-Test
>> bind 192.168.64.27:80 transparent
>> mode http
>> acl maxcapacity table_cnt ge 5000
>> acl knownuser hdr_sub(cookie) MYCOOK
>> http-request deny if maxcapacity !knownuser
>>
>> stick-table type string len 32 size 10K expire 10m nopurge
>> stick store-response set-cookie(MYCOOK)
>> stick store-request cookie(MYCOOK)
>>
>>
>> balance leastconn
>> cookie SERVERID insert nocache
>> server backup 127.0.0.1:9081 backup non-stick
>> option http-keep-alive
>> option forwardfor
>> option redispatch
>> option abortonclose
>> maxconn 4
>> server Test1 192.168.64.12:80 weight 100 cookie Test1
>> server Test2 192.168.64.13:80 weight 100 cookie Test2
>>
>> But what if you only have a single source IP (so you still want to use
>> cookies to track the usage AND stickiness) but the application doesn't
>> have its own unique session id?
>>
>> Can you do something like using gpc0 to store a random haproxy session
>> id (for overload) and yet still using cookie SERVERID for the
>> persistence?
>> Or using a big IP stick table even though all the IPs will be the same?
>>
>> Or am I just being really stupid today, which is not unusual :-).
>>
>> Thanks in advance.
>>
>>
>> --
>> Regards,
>>
>> Malcolm Turnbull.
>>
>> Loadbalancer.org Ltd.
>> Phone: +44 (0)330 1604540
>> http://www.loadbalancer.org/
>>
>
> Hi Malcolm,
>
> I don't understand the question about the IP address.
> You don't use it at all in your conf, since you're using the cookie,
> which is a layer above.
>
> That said, the case you mentioned is very rare: all users behind a
> single IP and no cookie set by the application.
> Are you sure there is no X-Forwarded-For header, or any other header
> you could use to identify a user?
>
> There is no way for now in HAProxy to generate a random cookie...
> well, no "clean" way :)
>
> Baptiste



-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/



Haproxy SSL termination - will it be fast enough?

2014-11-14 Thread Shawn Heisey
I have a co-worker that is concerned with the idea of moving SSL
termination to haproxy, rather than using LVS to NAT the SSL to back end
servers directly.  It would be handled by one machine, with
corosync/pacemaker providing responsive failover to a redundant host.

Below is the CPU info from one of those hosts (a Dell PowerEdge R200).

Our websites are not being hit with an enormous traffic load.  They stay
busy, but we're definitely not Google or CNN.

At what traffic level would I need to be concerned about whether the
load balancer can handle SSL termination?  Hundreds of requests per
second?  Thousands?  You can assume that we will be disabling SSL on the
back end.

If I need to gather some info (like openssl benchmarks), just let me know
what command to run.
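In case it helps: the usual quick check is openssl speed, since one full
TLS handshake costs roughly one RSA private-key sign, so the sign/s
column for "rsa 2048 bits" approximates the handshake ceiling of a
single core (the -seconds flag just shortens each test run):

```shell
# Summary table goes to stdout; compare the "sign/s" column for
# "rsa 2048 bits" against your expected new-connection rate.
openssl speed -seconds 1 rsa2048
```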

[root@lb1 ~]# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 22
model name  : Intel(R) Celeron(R) CPU  430  @ 1.80GHz
stepping: 1
cpu MHz : 1800.067
cache size  : 512 KB
fpu : yes
fpu_exception   : yes
cpuid level : 10
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss tm syscall nx
lm constant_tsc up pni monitor ds_cpl tm2 cx16 xtpr lahf_lm
bogomips: 3602.87
clflush size: 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

Thanks,
Shawn



Re: mixup in stats summary (4xx/5xx)?

2014-11-14 Thread Philipp

If you missed it: http://marc.info/?l=haproxy&m=141561304511354&w=2

Anyone?

Am 10.11.2014 10:49 schrieb Philipp:

Hello,

this is confusing me, and maybe someone could shed some light (or
reasoning) on the counts/sums of the HTTP responses from the frontend
and backends.





Re: understanding halog output

2014-11-14 Thread marnik
Hi Willy,

I read the explanation, but cannot make any sense of the last column when
run on my data:

head:
0.1832 0  0 1  1
0.216650  0 1  5013
0.324980  0 1  5014
0.433310  0 1  5014

tail:
99.5   828754  1006 1383   35019
99.6   829587  1167 1702   37401
99.7   830420  136122630   43127
99.8   831253  162204247   50105
99.9   832086  211286036   67367
100.0  832919  30927  3084  48484  415624

Even for the very first percentiles, the data transfer time is 5s+. For
99.5%, it goes up to 35s. Does that time include transfer to the client,
or only from the server to haproxy? I cannot imagine that my fastest
end-to-end response to the client is 5s+.

Thanks,

Marnik




RE: Making TLS go faster

2014-11-14 Thread Lukas Tribus
>> Be advised that OCSP stapling is slowly dying, check [2] and
>> [3].
>
> I hope not. OCSP without stapling is dying, yes, but OCSP stapling along
> with the X.509 Must Staple extension [1], and most likely the X.509 TLS
> feature extension [2], are a scalable way of solving a real problem.
>
> [1] https://tools.ietf.org/html/draft-hallambaker-muststaple-00
> [2] https://tools.ietf.org/html/draft-hallambaker-tlsfeature-05

I don't see how those 2 drafts fix downgrade attacks if the browser is
connecting to the HTTPS site for the first time (and thus is not
aware of previous must-staple options), like a wireless login
page in a hotel.

I guess we can cover those things only with a DNSSEC chain
of trust, providing SSL related hints ("must-staple" and CA pinning
via DNS(SEC), similar to RFC6844).

Until then, Chrome will continue to use crlsets instead of OCSP, I suspect.



Regards,

Lukas

  


Re: Can I insert a prefix cookie rather than read an existing one?

2014-11-14 Thread Baptiste
On Fri, Nov 14, 2014 at 1:01 PM, Malcolm Turnbull
 wrote:
> I was just playing around with the configuration from the excellent
> blog entry on e-commerce overload protection:
> http://blog.haproxy.com/2012/09/19/application-delivery-controller-and-ecommerce-websites/
>
> If you have a PHPSession or ASPsessionID cookie then you can track the
> total number of users as follows:
>
> listen L7-Test
> bind 192.168.64.27:80 transparent
> mode http
> acl maxcapacity table_cnt ge 5000
> acl knownuser hdr_sub(cookie) MYCOOK
> http-request deny if maxcapacity !knownuser
>
> stick-table type string len 32 size 10K expire 10m nopurge
> stick store-response set-cookie(MYCOOK)
> stick store-request cookie(MYCOOK)
>
>
> balance leastconn
> cookie SERVERID insert nocache
> server backup 127.0.0.1:9081 backup non-stick
> option http-keep-alive
> option forwardfor
> option redispatch
> option abortonclose
> maxconn 4
> server Test1 192.168.64.12:80 weight 100 cookie Test1
> server Test2 192.168.64.13:80 weight 100 cookie Test2
>
> But what if you only have a single source IP (so you still want to use
> cookies to track the usage AND stickiness) but the application doesn't
> have its own unique session id?
>
> Can you do something like using gpc0 to store a random haproxy session
> id (for overload) and yet still using cookie SERVERID for the
> persistence?
> Or using a big IP stick table even though all the IPs will be the same?
>
> Or am I just being really stupid today, which is not unusual :-).
>
> Thanks in advance.
>
>
> --
> Regards,
>
> Malcolm Turnbull.
>
> Loadbalancer.org Ltd.
> Phone: +44 (0)330 1604540
> http://www.loadbalancer.org/
>

Hi Malcolm,

I don't understand the question about the IP address.
You don't use it at all in your conf, since you're using the cookie,
which is a layer above.

That said, the case you mentioned is very rare: all users behind a
single IP and no cookie set by the application.
Are you sure there is no X-Forwarded-For header, or any other header
you could use to identify a user?

There is no way for now in HAProxy to generate a random cookie...
well, no "clean" way :)

Baptiste



Re: Same servers in multiple backends hit by multiple health checks

2014-11-14 Thread Tait Clarridge
Hi Jeff,

What you want to use here is track.

backend be-1
    server server1 10.10.10.10:80 check inter 1000 rise 2 fall 2
    server server2 10.10.10.20:80 check inter 1000 rise 2 fall 2

backend be-2
    server server1 10.10.10.10:80 track be-1/server1
    server server2 10.10.10.20:80 track be-1/server2


Tait


On Fri, Nov 14, 2014 at 8:27 AM, jeff saremi  wrote:
> It looks like if I have a server which is a part of more than one backend,
> that server gets multiple health checks instead of one. Could someone
> confirm or deny that, please? If this is true, could you accept this as a
> change request to make the behavior a little smarter? Thanks,
> Jeff
>



Same servers in multiple backends hit by multiple health checks

2014-11-14 Thread jeff saremi
It looks like if I have a server which is a part of more than one backend,
that server gets multiple health checks instead of one. Could someone
confirm or deny that, please? If this is true, could you accept this as a
change request to make the behavior a little smarter? Thanks,
Jeff

Re: RE: Making TLS go faster

2014-11-14 Thread Remi Gacogne
On 11/13/2014 05:36 PM, Lukas Tribus wrote:

> Be advised that OCSP stapling is slowly dying, check [2] and
> [3].

I hope not. OCSP without stapling is dying, yes, but OCSP stapling along
with the X.509 Must Staple extension [1], and most likely the X.509 TLS
feature extension [2], are a scalable way of solving a real problem.

[1] https://tools.ietf.org/html/draft-hallambaker-muststaple-00
[2] https://tools.ietf.org/html/draft-hallambaker-tlsfeature-05






Can I insert a prefix cookie rather than read an existing one?

2014-11-14 Thread Malcolm Turnbull
I was just playing around with the configuration from the excellent
blog entry on e-commerce overload protection:
http://blog.haproxy.com/2012/09/19/application-delivery-controller-and-ecommerce-websites/

If you have a PHPSession or ASPsessionID cookie then you can track the
total number of users as follows:

listen L7-Test
bind 192.168.64.27:80 transparent
mode http
acl maxcapacity table_cnt ge 5000
acl knownuser hdr_sub(cookie) MYCOOK
http-request deny if maxcapacity !knownuser

stick-table type string len 32 size 10K expire 10m nopurge
stick store-response set-cookie(MYCOOK)
stick store-request cookie(MYCOOK)


balance leastconn
cookie SERVERID insert nocache
server backup 127.0.0.1:9081 backup non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 4
server Test1 192.168.64.12:80 weight 100 cookie Test1
server Test2 192.168.64.13:80 weight 100 cookie Test2

But what if you only have a single source IP (so you still want to use
cookies to track the usage AND stickiness) but the application doesn't
have its own unique session id?

Can you do something like using gpc0 to store a random haproxy session
id (for overload) and yet still using cookie SERVERID for the
persistence?
Or using a big IP stick table even though all the IPs will be the same?

Or am I just being really stupid today, which is not unusual :-).

Thanks in advance.


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/





RE: Making TLS go faster

2014-11-14 Thread Lukas Tribus
Hi,



> I actually suspect most of that time due to our own code running in
> Liferay/Tomcat, but I'd like to be able to say that I've done everything
> I can to eliminate TCP, HTTP, and SSL as bottlenecks. If haproxy with a
> recent openssl will automatically do dynamic record sizes without
> config, then I need to know that, or else I need the config required to
> turn it on. I did look in the documentation and came up empty.

You don't need recent openssl for this, it depends solely on haproxy and
how buffers are written to openssl.

As for the documentation, I did provide the link to tune.ssl.maxrecord
already.
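For completeness, that tunable lives in the global section; a common
starting point is roughly one TCP segment payload minus TLS record
overhead, though the exact value below is a judgment call, not something
from the docs:

    global
        tune.ssl.maxrecord 1419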



> I'd like to keep the ubuntu packaged openssl for any packaged software,
> and install an additional library (doesn't have to be an alternate
> implementation, I'd be OK with newer openssl) that I'll use when
> compiling haproxy. I'm only pursuing ALPN because everything says that
> it replaces NPN ... I don't think that would happen if it weren't better
> in some way. If upgrading haproxy's openssl and switching to ALPN won't
> achieve performance differences above single-digit milliseconds, then I
> can abandon this path.

ALPN has nothing to do at all with performance.



> I'd love to switch to non-SSL for the back end ... but I don't know what
> headers to include to convince the back end to construct https links.
> If you know of an accurate guide for Tomcat and Liferay/Tomcat, please
> give me any info you have. Right now the connection goes to Apache (a
> really old version) and Apache relays it to Tomcat via ajp, but if I
> could cut out that middleman, performance would likely go up.

That depends on your application.



> I'm fairly sure that haproxy does keepalive (I haven't disabled it), and
> I believe we have enabled keepalive on the back end, but I am not
> positive.

You need to actually verify all those things. These are very important
for performance.



> I would expect the latency on the Internet side of the equation would
> dwarf any overhead from opening multiple connections on the backend
> LAN, though.

Absolutely not; those things happen one after another (since HAProxy can't
predict a new connection coming from the client and doesn't pool backend
connections), so SSL, keep-alive and SSL session caching all have a large
impact on latency and performance.


Also keep in mind that every handshake, on both the frontend and the
backend, blocks the event loop in haproxy. The more handshakes you
have, the more stalls you are going to have for all the traffic.


You really want to focus on those things first.





Regards,

Lukas