unsubscribe

2014-09-07 Thread Sebastien Estienne
Sebastien Estienne



unsubscribe

2013-10-03 Thread Sebastien Estienne
Sebastien Estienne


Re: Proxy protocol patch for nginx 1.4.x

2013-09-19 Thread Sebastien Estienne
Hello Baptiste,

Does it mean that it should work with a module like nginx-rtmp?

thanx,

Sebastien Estienne


On Thu, Sep 19, 2013 at 9:30 PM, Baptiste bed...@gmail.com wrote:

 Hi all,

 I've just updated my patches for proxy protocol in nginx 1.4.x.
 They are available here:
 https://wiki.bedis.eu/nginx/nginx_proxy_protocol_patch

 Note that in this version, accept_proxy_protocol is no longer a server
 option; it is now a bind option.

 Please try it and report any issue / bug / success story.
 (the wiki hosting the page above uses the patch, of course)

 Baptiste




Re: agent-port / loadbalance with CPU usage

2013-09-06 Thread Sebastien Estienne
Ok, makes sense.

The use case is quite specific: the connection rate is really low (at most 5
per minute) and the CPU usage induced by each connection is really high
(between 10% and 20% per connection).

I'll try the leastconn algorithm + agent-port checks every 2 seconds.

thanx

Sebastien Estienne


On Fri, Sep 6, 2013 at 3:10 PM, Willy Tarreau w...@1wt.eu wrote:

 On Thu, Sep 05, 2013 at 12:39:42AM +0200, Sebastien Estienne wrote:
  Hello,
 
  I'm testing Simon's patch implementing agent-port.
 
  I'd like to load balance RTMP servers based on CPU usage, so I
  implemented a small TCP server that returns the percent of free CPU, and
  I use the agent-port feature.
 
  I want new connections to always go to the server with the lowest CPU
  usage; which balancing algorithm should I use to achieve this?

 In fact that does not make sense: it would mean checking every server's
 CPU usage before sending each connection, which would add huge overhead
 and would simply make all servers appear saturated by the measurements.

 In practice, you want to constantly monitor CPU usage (the agent is suited
 for this), so that this metric is used to increase or reduce the weight.
 That way, you can use whatever LB algorithm you want.

 The algorithm is simple in the agent :
   - if the server is overloaded, reduce the advertised weight
   - if the server is underloaded, increase the advertised weight

 This will result in all servers constantly running within the CPU usage
 window you define in your agent, whatever their power and the impact of
 each of your new connections.

 Regards,
 Willy
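Willy's two-rule agent loop can be sketched as follows. This is a sketch, not Simon's actual agent: the 10-100 weight clamp and the load-average heuristic are illustrative choices. What is grounded is the agent-check convention that a reply of the form "NN%" followed by a newline sets the server's effective weight relative to its configured one.

```python
import os

def weight_for_idle(idle_percent):
    """Map free-CPU percent to an advertised weight, clamped to 10-100
    so a briefly saturated server is throttled but never fully starved."""
    return max(10, min(100, int(idle_percent)))

def agent_reply():
    """Build the agent-check response string, e.g. '85%' plus a newline.
    Free CPU is roughly estimated from the 1-minute load average;
    a real agent would sample /proc/stat deltas instead."""
    load = os.getloadavg()[0]
    cores = os.cpu_count() or 1
    idle = max(0.0, 100.0 * (1.0 - load / cores))
    return "%d%%\n" % weight_for_idle(idle)
```

To use it, serve agent_reply() on the agent-port with any small TCP server (inetd, Python's socketserver, ...), one reply per connection; haproxy then scales the weight continuously, and any balancing algorithm (roundrobin, leastconn) works on top.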




agent-port / loadbalance with CPU usage

2013-09-04 Thread Sebastien Estienne
Hello,

I'm testing Simon's patch implementing agent-port.

I'd like to load balance RTMP servers based on CPU usage, so I
implemented a small TCP server that returns the percent of free CPU, and
I use the agent-port feature.

I want new connections to always go to the server with the lowest CPU usage;
which balancing algorithm should I use to achieve this?

thanx,
Sebastien Estienne


Re: haproxy - varnish - backend server

2012-06-05 Thread Sebastien Estienne
why not put varnish in front of haproxy, like this:
haproxy listens on public IP 1 and on localhost
varnish listens on public IP 2 and forwards to localhost

so cached traffic is served directly by varnish without hitting haproxy,

and when you don't need to cache the traffic you use public IP 1 (haproxy)

in our setup the varnish IP is s.mydomain.com, serving images/CSS/JS,
and haproxy is www.mydomain.com, serving dynamic content that contains URLs
pointing to s.mydomain.com.

as a bonus, no cookie is sent to s.mydomain.com

--
Sebastien E.


On Jun 5, 2012, at 9:06 PM, David Coulson da...@davidcoulson.net wrote:

 Is haproxy adding X-Forwarded-For to the request it sends varnish? If so, 
 just don't have varnish manipulate X-Forwarded-For and your app will use the 
 header added by HAProxy.
 
 David
 
 On 6/5/12 9:04 PM, hapr...@serverphorums.com wrote:
 Hi guys
 
 Originally we had haproxy in front and connecting to backend server
 
haproxy -> backend server
 
 and applications and backend server see the real client ip fine without any 
 issues
 
 But we decided to try adding Varnish cache in between
 
haproxy -> varnish -> backend server
 
 The problem now is that the backend server and apps are seeing the client IP
 of the haproxy server and not the real visitor client IPs.
 
 varnish has the appropriate forwarding of client ips,
 
remove req.http.X-Forwarded-For;
set req.http.X-Forwarded-For = client.ip;
 
 and it works when Varnish alone is in front of the backends.
 
 So what setting, if any, would I need to add or check in haproxy to get
 the proper client IP from haproxy through varnish into the backend?
 
 Using haproxy v1.3 here with Varnish 3.0.2.
 
 thanks
 
 ---
 posted at http://www.serverphorums.com
 http://www.serverphorums.com/read.php?10,508289,508289#msg-508289
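David's advice can be sketched in Varnish 3 VCL. This is a sketch under one assumption: haproxy is adding the header itself (e.g. via `option forwardfor`), so varnish should leave X-Forwarded-For untouched when it is present and set it only when missing:

```vcl
sub vcl_recv {
    # When haproxy sits in front, it has already added the real client
    # address; only set the header if it is missing entirely.
    if (!req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = client.ip;
    }
}
```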
 
 



Re: Can't git clone haproxy repo

2012-05-09 Thread Sebastien Estienne
On a side note, why not use GitHub?

I guess you have been asked this one already :)

--
Sebastien E.


On May 9, 2012, at 8:11 PM, Baptiste bed...@gmail.com wrote:

 Sometimes the HAProxy git server is unavailable (or very, very slow) :)
 Just try again a bit later.
 
 cheers
 
 On Wed, May 9, 2012 at 6:52 PM, Aleksandar Lazic al-hapr...@none.at wrote:
 Hi,
 
 I just copied the git command to clone the repo from
 
 http://haproxy.1wt.eu/git?p=haproxy.git;a=blob;f=README;h=f2465d2fe7600a63eb885493f1afc5e84f9ebcfc;hb=HEAD
 
 I got the following error.
 
 ###
 haproxy@external:~/dev$ git clone git://git.1wt.eu/git/haproxy.git/
 Initialized empty Git repository in /home/haproxy/dev/haproxy/.git/
 git.1wt.eu[0: 62.212.114.60]: errno=Connection timed out
 git.1wt.eu[0: 2001:7a8:363c:2::2]: errno=Network is unreachable
 fatal: unable to connect a socket (Network is unreachable)
 ###
 
 haproxy@external:~/dev$ git version
 git version 1.7.0.4
 
 haproxy@external:~/dev$ cat /etc/issue
 Ubuntu 10.04.4 LTS \n \l
 
 haproxy@external:~/dev$ uname -a
 Linux external 2.6.18-028stab092.1 #1 SMP Wed Jul 20 19:47:12 MSD 2011
 x86_64 GNU/Linux
 
 Please can you help me to get the current repo, thanks.
 
 Br
 
 Aleks
 
 



Re: nginx alone performs x2 than haproxy-nginx

2012-04-30 Thread Sebastien Estienne
Hi Pasi,

Do you know if ubuntu 12.04 has these optimized drivers or not?

thanx

--
Sebastien E.


On Apr 30, 2012, at 11:06 AM, Pasi Kärkkäinen pa...@iki.fi wrote:

 On Sun, Apr 29, 2012 at 06:18:52PM +0200, Willy Tarreau wrote:
 
 I'm using VPS machines from Linode.com, they are quite powerful. They're
 based on Xen. I don't see the network card saturated.
 
 OK I see now. There's no point searching anywhere else. Once again you're
 a victim of the high overhead of virtualization that vendors like to pretend
 is almost unnoticeable :-(
 
 As for nf_conntrack, I have iptables enabled with rules as a firewall on
 each machine, I stopped it on all involved machines and I still get those
 results. nf_conntrack is compiled to the kernel (it's a kernel provided by
 Linode) so I don't think I can disable it completely. Just not use it (and
 not use any firewall between them).
 
 It's having the module loaded with default settings which is harmful, so
 even unloading the rules will not change anything. Anyway, now I'm pretty
 sure that the overhead caused by the default conntrack settings is nothing
 compared with the overhead of Xen.
 
 Even if 6-7K is very low (for nginx directly), why is haproxy doing half
 than that?
 
 That's quite simple: it has two sides, so it must process twice the number
 of packets. Since you're virtualized, you're packet-bound. Most of the time
 is spent communicating with the host and with the network, so the more
 packets there are, the less performance you get. That's why you're seeing a
 2x increase even with nginx when enabling keep-alive.
 
 I'd say that your numbers are more or less in line with a recent benchmark
 we conducted at Exceliance and which is summarized below (each time the
 hardware was running a single VM) :
 
   
 http://blog.exceliance.fr/2012/04/24/hypervisors-virtual-network-performance-comparison-from-a-virtualized-load-balancer-point-of-view/
 
 (BTW you'll note that Xen was the worst performer here with 80% loss
 compared to native performance).
 
 
 Note that the Ubuntu 11.10 kernel is lacking important drivers such as the
 Xen ACPI power management / cpufreq drivers, so it's not able to use the
 better-performing CPU states. That driver was merged into recent upstream
 Linux 3.4 (-rc).
 Also the xen-netback dom0 driver is still unoptimized in the upstream Linux 
 kernel.
 
 Using RHEL5/CentOS5 as Xen host/dom0, or SLES11 or OpenSuse is a better idea 
 today
 for benchmarking because those have the fully optimized kernel/drivers. 
 Upstream Linux will get the optimizations in small steps (per the Linux 
 development model).
 
 Citrix XenServer 6 is using the optimized kernel/drivers so that explains the 
 difference 
 in the benchmark compared to Ubuntu Xen4.1.
 
 I just wanted to highlight that.
 
 -- Pasi
 
 



Re: PROXY protocol and setting headers X-Forwarded-Protocol=https or X-Forwarded-Ssl=on

2011-09-18 Thread Sebastien Estienne
thanx Willy,

last question: in proxy-protocol.txt it is written 'The receiver MUST
be configured to only receive this protocol and MUST not'

so if i have a line like this in haproxy:
bind 127.0.0.1:80 accept-proxy

only clients talking the PROXY protocol can use localhost port 80;
standard HTTP clients can't work, right?

so if I want both, I must do something like this:
bind 127.0.0.1:80
bind 127.0.0.1:81 accept-proxy

correct?

thanx,
Sebastien Estienne



On Sun, Sep 18, 2011 at 12:17, Willy Tarreau w...@1wt.eu wrote:
 Hello Sebastien,

 On Sat, Sep 17, 2011 at 01:27:22PM +0200, Sebastien Estienne wrote:
 Hello,

 I'm using stud with haproxy 1.5 in front of a gunicorn/django webapp.

  Django looks for X-Forwarded-Protocol=https or X-Forwarded-Ssl=on
 headers to know if the request is secure or not.

 I guess it is the job of haproxy to set and forward these headers, but
 i don't really know on which condition/acl i should set these headers?

 Maybe using the source port of the request (443) as set by stud in the
 PROXY protocol, but is it exposed in haproxy config?

 You mean the destination port, but yes I agree. You should proceed like
 this :

      reqadd X-Forwarded-Protocol:\ https  if { dst_port 443 }

 The dst_port will of course have been fed by stud using the proxy protocol.

 Regards,
 Willy
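Putting the two halves of this thread together, a frontend accepting both plain HTTP and PROXY-wrapped traffic from stud might look like this. A sketch only: the frontend/backend names and ports are illustrative, and it assumes a haproxy build with accept-proxy support (1.5-dev):

```haproxy
frontend ft_web
    bind 127.0.0.1:80                  # plain HTTP clients
    bind 127.0.0.1:81 accept-proxy     # stud forwards decrypted traffic here
    # stud relays the original destination port via the PROXY protocol,
    # so dst_port matches 443 for connections that arrived over SSL:
    reqadd X-Forwarded-Protocol:\ https if { dst_port 443 }
    default_backend bk_app
```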





PROXY protocol and setting headers X-Forwarded-Protocol=https or X-Forwarded-Ssl=on

2011-09-17 Thread Sebastien Estienne
Hello,

I'm using stud with haproxy 1.5 in front of a gunicorn/django webapp.

Django looks for X-Forwarded-Protocol=https or X-Forwarded-Ssl=on
headers to know if the request is secure or not.

I guess it is the job of haproxy to set and forward these headers, but
I don't really know under which condition/ACL I should set them.

Maybe using the source port of the request (443) as set by stud in the
PROXY protocol, but is it exposed in haproxy config?

thanx,
Sebastien Estienne



Re: Proxy Protocol in 1.4.x ?

2011-08-23 Thread Sebastien Estienne
New benchmark on this topic with haproxy:
http://vincent.bernat.im/en/blog/2011-ssl-benchmark.html


On Saturday, July 9, 2011, Willy Tarreau w...@1wt.eu wrote:
 Hello Sébastien,

 On Fri, Jul 08, 2011 at 11:17:12PM +0200, Sébastien Estienne wrote:
 yes we perfectly understand this, and that is what we like about haproxy.
 But the demand for SSL is growing; it's even mandatory for some use
 cases.
 Stud looks really promising and solid and a good match for haproxy as it
was designed to be used with haproxy ( http://devblog.bu.mp/introducing-stud).
 Today we have the choice between:
 - haproxy 1.4 + patched stunnel
 - haproxy 1.5 dev + stud
 - patched haproxy 1.4 + stud

 The last one seems the most stable with the best performance, so as the
 demand for SSL is growing, I think it would be a big plus if haproxy 1.4
 could work with stud without being patched.

 I see your point. Well, there is also a fourth solution. At Exceliance, we
 have an haproxy enterprise edition (hapee) packaging which includes a
 patched haproxy 1.4, patched stunnel etc... There's a free version you can
  register for. We decided to install it at some of our supported customers
 for free, just because it made maintenance easier for us, and rendered
 their infrastructure more stable.

  I don't know if it would make sense, but maybe stud could be integrated
 somehow into haproxy like this:
  Instead of starting stud and then haproxy separately, the main haproxy
 process could fork some stud-like processes (binding 443), as it already forks
 haproxy children for multicore, and they would talk the proxy protocol
 transparently for the end user, with no need to set up the link between both.

  It's amusing that you're saying that: when I looked at the code, I thought
  they use the same design model as haproxy and they have the same goals,
  maybe this could be merged. My goal with SSL in haproxy is that we can
  dedicate threads or processes to that task, thus some core changes are
  still needed, but a first step might precisely be to have totally
  independent processes communicating over a unix socket pair and the proxy
  protocol. It's just not trivial yet to reach the server in SSL, but one
  thing at a time...

  This would offer seamless SSL integration without hurting the haproxy
 codebase and stability for clear HTTP content.

 Exactly.

  Thanks for your insights, they reassure me that mine were not too
  eccentric :-)

 Willy



-- 
Sebastien Estienne


Proxy Protocol in 1.4.x ?

2011-07-07 Thread Sebastien Estienne
Hello,

I'd like to use stud https://github.com/bumptech/stud with Haproxy for
SSL support.
Stud implements the haproxy PROXY protocol, and I'd like to know if
this will be backported to haproxy 1.4?

thanx,
Sebastien Estienne



load patterns from a file

2010-11-26 Thread Sebastien Estienne
Hello,

When using the '-f' file option in an ACL, does the specified file have to be
in the same directory as haproxy?
Are there special rights to set on the file or on the folder?

Sebastien Estienne
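For reference, a sketch of how an ACL pattern file is typically wired up (the path and ACL name are illustrative). The path may be absolute, so the file need not sit next to the haproxy binary, but it must be readable by the user haproxy runs as, including after any chroot:

```haproxy
# /etc/haproxy/blocked.lst contains one pattern per line
acl blocked_path path_beg -i -f /etc/haproxy/blocked.lst
block if blocked_path
```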



Re: Support for SSL

2010-11-16 Thread Sebastien Estienne
On Nov 16, 2010, at 12:27 PM, Willy Tarreau w...@1wt.eu wrote:

 Hello,
 
 On Sun, Nov 07, 2010 at 04:15:18PM +0100, Sebastien Estienne wrote:
 Hello,
 
 Is there any news about SSL support?
 
 Yes there are some news, we'll have to work on it at Exceliance.

this is great news. Any early timeframe, even a fuzzy one?

 
 With current server's hardware having 8 cores or more, offering SSL is
 quite cheap.
 
 Hehe one thing at a time : haproxy right now only uses one core. Let's
 first have SSL and only then see how we can make use of more cores.
 The really difficult part is to try to use more cores without slowing
 down the overall processing due to very common synchronization. This
 implies massive changes to ensure that there's almost no shared data
 between processes or threads.
 

I thought haproxy could use more than one core with a prefork model, like nginx?

 Moreover with tools like firesheep getting widespread offering SSL to
 our users become an important feature
 
 Firesheep is doing nothing more than what has been done for decades with
 many other tools. The same people who believe their traffic cannot be
 sniffed by their coworker because they connect via a switch won't care
 about having their SSL session hijacked with an invalid certificate.
 

We all agree with this; IRC and newsgroups existed before eMule and
BitTorrent :) But with easy tools like this we can't hide the problem anymore
(like Adobe does with RTMPE).

 I know that it's possible to use stunnel, but it would be better to
 have SSL support built in haproxy
 
  Yes indeed. At least stunnel already lets us assemble the bricks to
  build whatever we want, even though the configs are sometimes tough!
 
 Regards,
 Willy
 

thanx.



Support for SSL

2010-11-07 Thread Sebastien Estienne
Hello,

Is there any news about SSL support?
With current server's hardware having 8 cores or more, offering SSL is
quite cheap.
Moreover with tools like firesheep getting widespread offering SSL to
our users become an important feature

I know that it's possible to use stunnel, but it would be better to
have SSL support built in haproxy

thanx,
Sebastien Estienne



X-Accel-Redirect / X-Sendfile From Remote Servers

2010-09-09 Thread Sebastien Estienne
Hello,

I'd like to know how to do something similar to this with haproxy:
http://kovyrin.net/2010/07/24/nginx-fu-x-accel-redirect-remote/

the idea is that haproxy proxies an application server that returns
headers like:
X-Accel-Redirect: http://10.0.0.1/some/file.jpg
When haproxy receives this header, it requests the file and serves it
instead of forwarding the response of the application server.

It's as if haproxy was handling the redirect on behalf of the client.

regards,
Sebastien Estienne
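For comparison, the nginx mechanism from the linked article works roughly like this (a sketch; the `/reproxy/` prefix and capture names are illustrative): the app's X-Accel-Redirect points at an internal location that proxies to the remote host named in the URI:

```nginx
# App responds with:
#   X-Accel-Redirect: /reproxy/10.0.0.1/some/file.jpg
location ~* ^/reproxy/(?<target_host>.+?)/(?<target_uri>.*) {
    internal;                                   # not reachable by clients
    proxy_pass http://$target_host/$target_uri;
}
```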



Re: X-Accel-Redirect / X-Sendfile From Remote Servers

2010-09-09 Thread Sebastien Estienne
It seems that this is also called X-Reproxy-Url.

By setting the X-Reproxy-Url header, a backend process tells
mod_reproxy to serve the response from another location, effectively
letting the request redirect transparently (and within the same
request). This can help to reduce load considerably.

http://github.com/jamis/mod_reproxy

Sebastien Estienne



On Thu, Sep 9, 2010 at 13:48, Sebastien Estienne
sebastien.estie...@gmail.com wrote:
 Hello,

 I'd like to know how to do something similar to this with haproxy:
 http://kovyrin.net/2010/07/24/nginx-fu-x-accel-redirect-remote/

 the idea is that haproxy proxies an application server that returns
 headers like:
 X-Accel-Redirect: http://10.0.0.1/some/file.jpg
 When haproxy receives this header, it requests the file and serves it
 instead of forwarding the response of the application server.

 It's as if haproxy was handling the redirect on behalf of the client.

 regards,
 Sebastien Estienne