Re: asking
On 29 Sep 2014 at 1:56 PM, Bot Budi roboteb...@gmail.com wrote: Can I use HAProxy as a caching server? Does it have a caching feature? Thanks. Nope, HAProxy is not a caching engine. Pavlos
RE: asking
Pavlos is right, though I believe it's quite common to run Varnish in front of HAProxy, or Varnish behind HAProxy. Varnish supports caching. Kind regards, Jon Arild Tørresdal IT-Arkitekt Frende Forsikring | Mobil 994 33 577 | e-post: jon.torres...@frende.no Frende Skadeforsikring AS | Krokatjønnveien 15 | 5147 Fyllingsdalen Postadresse: Postboks 3660 - Fyllingsdalen | 5845 Bergen | www.frende.no Foretaksregisteret | 991 436 960 From: Pavlos Parissis pavlos.paris...@gmail.com Sent: Tuesday, September 30, 2014 08:42 To: Bot Budi Cc: HAProxy Subject: Re: asking On 29 Sep 2014 at 1:56 PM, Bot Budi roboteb...@gmail.com wrote: Can I use HAProxy as a caching server? Does it have a caching feature? Thanks. Nope, HAProxy is not a caching engine. Pavlos
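For anyone curious how that combination typically looks, here is a minimal sketch (names, addresses and ports are all hypothetical) of HAProxy balancing across Varnish cache nodes, which in turn fetch from the application servers:

```haproxy
frontend fe_http
    bind :80
    default_backend be_varnish

# Varnish listens on its proxy port and does the caching;
# HAProxy only balances and health-checks the cache nodes.
backend be_varnish
    balance roundrobin
    server varnish1 10.0.0.11:6081 check
    server varnish2 10.0.0.12:6081 check
```

The reverse arrangement (Varnish in front) is also used when the cache should absorb traffic before any load-balancing decision is made.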
Re: shellshock and haproxy
On Mon, Sep 29, 2014 at 2:36 PM, Thomas Heil h...@terminal-consulting.de wrote: Hi, To mitigate the shellshock attack we added two lines in our frontends. -- frontend fe_80 -- reqideny ^[^:]+:\s*\(\s*\) reqideny ^[^:]+:\s+.*?(<<[^<]+){5,} -- and checked this via -- curl --referer "x() { :; }; ping 127.0.0.1" http://my-haproxy-url/ curl --referer "true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF" http://my-haproxy-url/ -- Any hints or further suggestions? cheers thomas Hi Thomas, Thanks for the tips. I blogged it with some differences: http://blog.haproxy.com/2014/09/30/mitigating-the-shellshock-vulnerability-with-haproxy/ Baptiste
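It can help to check such expressions offline before loading them. A small sketch, using Python's re module as a stand-in for the PCRE engine HAProxy compiles these rules with (the header values below are made-up examples), showing how the first pattern distinguishes a shellshock payload from a normal header:

```python
import re

# First reqideny pattern from the post: a header whose value
# begins with "()", the shellshock function-definition marker.
pattern = re.compile(r'^[^:]+:\s*\(\s*\)')

exploit = 'Referer: () { :; }; ping 127.0.0.1'
benign = 'Referer: http://example.com/page'

print(bool(pattern.search(exploit)))  # True: value starts with "()"
print(bool(pattern.search(benign)))   # False: ordinary URL value
```

The `^` anchor plus `[^:]+:` means the check runs once per header line, right after the header name, which keeps it cheap.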
Re: Server Sent Events on iOS
On Mon, Sep 29, 2014 at 9:15 PM, William Lewis m...@wlewis.co.uk wrote: Hi all, I have a problem with a website which uses Server-Sent Events, where the long-lived connection for the server events seems to be blocking other resources from loading, on iOS clients only, and only when I have haproxy between client and server. This is my test case. * Create a node express app which serves an html page which subscribes to an EventSource and asynchronously adds 200 300x100px images to the DOM * Node app is configured to serve resources with a 500ms delay to reliably reproduce the problem * Configure basic haproxy between node app and client * Reset cache on iOS device and connect to server Expected result: * Client opens 5 simultaneous http connections to the server * 1 connection is blocked listening for events from the EventSource * The remaining 4 connections are used to download the 200 images Actual result: * Connection to the EventSource is established and events start to be logged to the console * Images start to download on the page * Several of the images get blocked and never load. Clearing the device cache and connecting directly to the server, all resources load, although the loading pattern of images is significantly different. If anyone has any ideas I would greatly appreciate any suggestions. Sources and config included below.
** index.html **

<html>
<head>
<style>
img { width: 30px; height: 10px; border-style: solid; border-color: black; border-width: 1px; }
</style>
</head>
<body>
<script type="text/javascript">
var source = new EventSource('/events');
source.onmessage = function(e) { console.log(e.data); }
var body = document.querySelectorAll('body');
var createImage = function(i) {
  var element = document.createElement('img');
  element.src = '/' + i + '.png';
  body[0].appendChild(element);
}
window.setTimeout(function() {
  for (var i = 1; i <= 200; i++) { createImage(i); }
}, 1000);
</script>
</body>
</html>

** app.js **

var express = require('express');
var app = express();

app.get('/events', function(req, res) {
  // let request last as long as possible
  req.socket.setTimeout(Infinity);
  var messageCount = 0;
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  res.write('\n');
  var timeout;
  var emitEvent = function() {
    res.write('id:' + ++messageCount + '\n');
    res.write('data:' + new Date().getTime() + '\n\n');
    timeout = setTimeout(emitEvent, 3000);
  }
  req.on('close', function() { clearTimeout(timeout); });
  emitEvent();
});

var staticHandler = express.static(__dirname + '/public');
app.use(function serveStatic(req, res, next) {
  setTimeout(function() { staticHandler(req, res, next); }, 500);
});

var server = app.listen(3000, function() {
  console.log('Listening on port %d', server.address().port);
});

** haproxy config **

global
    daemon
    quiet
    maxconn 1024
    pidfile haproxy.pid
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    log global
    balance roundrobin
    mode http

frontend external
    bind :80
    default_backend test

backend test
    server test localhost:3000

Hi William, Could you please turn on option httplog and provide us the logs reported by HAProxy? Also, which version of HAProxy are you running? Baptiste
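For reference, enabling the logging Baptiste asks for is a small change to the posted config. A sketch follows; the timeout values are made-up additions, since the posted config defines none, which is itself worth fixing:

```haproxy
defaults
    log global
    mode http
    option httplog       # per-request HTTP log lines instead of TCP-level logs
    timeout connect 5s   # hypothetical values; the original config has no timeouts
    timeout client 50s
    timeout server 50s
```

With option httplog, each request line includes timers and termination flags, which is usually what is needed to see where a stalled request is stuck.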
Re: source based loadbalancing hash algorithm
On Thu, Sep 25, 2014 at 3:45 PM, Gerd Müller gmuel...@gmbd.de wrote: Hi list, we want to stress test our system. We have 8 nodes behind the haproxy and 8 servers in front to generate the requests. Since we are using source-based load balancing, I would like to know how the hash is built, so I can give the requesting servers the proper IPs. Thank you, Gerd Hi Gerd, What's your problem exactly? What do you want to test: performance, hash, etc.? Baptiste
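For what it's worth, with the default map-based hashing, "balance source" takes a hash of the source address modulo the total server weight. The sketch below is Python, not HAProxy's actual hash function; it uses the raw address value itself as the "hash" purely to illustrate the bucketing idea of picking generator IPs that land on distinct backends:

```python
import ipaddress

def pick_server(client_ip, n_servers):
    # Stand-in for HAProxy's hash: use the integer value of the
    # address, reduced modulo the number of equal-weight servers.
    return int(ipaddress.ip_address(client_ip)) % n_servers

# Eight consecutive generator addresses happen to map to
# eight distinct buckets under this toy model:
ips = ['10.0.0.%d' % i for i in range(1, 9)]
buckets = sorted(pick_server(ip, 8) for ip in ips)
print(buckets)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The real distribution depends on HAProxy's hash function and on hash-type (map-based vs consistent), so the safest way to verify pinning is to watch the per-server session counters on the stats page during the test.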
Re: sending traffic to one backend server based on which another backend server sticky session
On Sat, Sep 27, 2014 at 1:33 AM, Joseph Hardeman jwharde...@gmail.com wrote: So I have a need to send a remote visitor to one specific server on another port/backend, based on the first backend server they logged in to. It's really the same server, just different IPs. Is this possible? Joe Hi Joseph, This is possible with the dev version of HAProxy, using a common stick table between your two farms. Also, server order will be very important: each server and its peer must be in the same order in each farm. And it should do the trick. Baptiste
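A sketch of what Baptiste describes (hypothetical names and addresses; note his caveat that this needed the dev version at the time, and that each server and its peer must appear in the same order in both backends):

```haproxy
backend farm_a
    stick-table type ip size 200k expire 30m
    stick on src
    server node1 10.0.0.1:8080 check
    server node2 10.0.0.2:8080 check

# The second farm reuses farm_a's table, so a client stuck to
# node1 on port 8080 is also sent to node1 on port 9090.
backend farm_b
    stick on src table farm_a
    server node1 10.0.0.1:9090 check
    server node2 10.0.0.2:9090 check
```

The shared table stores a server index, not an address, which is why matching server order across the two backends matters.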
Re: shellshock and haproxy
On 30.09.2014 10:51, Baptiste wrote: [quoted message trimmed: Thomas Heil's shellshock reqideny rules and Baptiste's reply linking http://blog.haproxy.com/2014/09/30/mitigating-the-shellshock-vulnerability-with-haproxy/ ] Maybe you should add a warning that this may not catch all possible exploits of that bash bug. The parser seems to be quite bad, and even after two rounds of security releases from vendors the exploit in bash still isn't fixed all the way. The above fixes *some* of the attack vectors, but there may be others that bypass this. Regards, Dennis
Re: Server Sent Events on iOS
On 30 September 2014 18:54, Baptiste bed...@gmail.com wrote: [quoted message trimmed] Hi William, And, if possible, could you also please provide PCAP dumps for both scenarios: 1. from both sides of haproxy 2. between iOS and your backend server?
-- Benjamin Lee benjamin@realthought.net Melbourne, Australia http://www.realthought.net Linux / BSD / GNU tel: +61 4 16 BEN LEE
Re: Server Sent Events on iOS
Hi Manfred + list, Please accept my apologies for that, it was a stupid and careless thing to do :( On 30 Sep 2014, at 14:03, Manfred Hollstein mhollst...@t-online.de wrote: Hi William, please: NEVER POST such files to a public mailing list! Instead upload them to a site and then post the URLs for them. You just charged more than 3 MiB of a user's mobile limit without any benefit for most of them, at least not on a mobile tariff ;) Thx, cheers. l8er manfred On Tue, 30 Sep 2014, 14:14:11 +0200, William Lewis wrote: Hi Baptiste / Benjamin, I've attached the haproxy log and 3 pcap files. The test with haproxy ended with me killing the node process, so the event source request terminated and the hanging resource requests 503'd, as shown at the end of the log. Looking at the tcpdumps: 1. With haproxy * You can see that there are 6 concurrent http connections between iOS and haproxy. * In the first connection stream you can see the initial document, followed by the event stream * Then you can see the client has used http pipelining (pretty dumb, considering the browser should know this connection is occupied) to send requests for /21.png /22.png /23.png (the hanging resources) * The first connection stream carried on responding with data from the event source, and the stuck resources are eventually 503'd when the node app is killed 2. Without haproxy * This time there are 12 distinct http connections that have been made between iOS and node * Again, in the first connection stream you see the initial document, followed by the event stream and the pipelined requests for the same resources that got stuck above * However, this time after the next event is emitted by the event stream, the connection is terminated and the stream carries on with a new connection * And you see this in the browser console, but the event stream carries on seamlessly * The requests that were pipelined in that connection get dealt with in other streams, e.g. /21.png is in stream 8. I am by no means an expert at analysing tcpdumps or at how http pipelining is supposed to work, but it looks to me that without haproxy in the middle, node has managed to identify that there are requests stuck in an http pipeline and reset the connection to allow the browser to continue. Is there any way to achieve the same with haproxy? On 30 Sep 2014, at 12:21, Benjamin Lee benjamin@realthought.net wrote: [quoted thread trimmed]
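One thing that might be worth experimenting with here (an assumption on my part, not a confirmed fix) is stopping HAProxy from keeping the client connection open for reuse, so the browser has less incentive to pipeline requests behind the busy event-stream connection:

```haproxy
defaults
    mode http
    option forceclose   # actively close both sides once each response ends,
                        # so the client opens fresh connections instead of
                        # pipelining behind the event stream
```

This trades away keep-alive performance for every request, so it is only a diagnostic step; if it changes the behaviour, the culprit is the connection-reuse interaction rather than the SSE stream itself.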
Please remove me
From this list
RE: shellshock and haproxy
The second line throws a config error, whether you use reqdeny or reqideny, complaining that the regex is invalid when running version 1.5.3. This is the error that comes back from a configuration test: [ALERT] 272/080419 (29422) : parsing [/etc/haproxy.cfg:295] : 'reqdeny' : regular expression '^[^:]+:\s+.*?(<<[^<]+){5,}' : regex '^[^:]+:\s+.*?(<<[^<]+){5,}' is invalid Which version of haproxy were you able to use that regex with? -- Jeff Buchbinder Rave Mobile Safety, Inc jbuchbin...@ravemobilesafety.com From: Thomas Heil [h...@terminal-consulting.de] Sent: Monday, September 29, 2014 8:36 AM To: haproxy@formilux.org Subject: shellshock and haproxy Hi, To mitigate the shellshock attack we added two lines in our frontends: reqideny ^[^:]+:\s*\(\s*\) reqideny ^[^:]+:\s+.*?(<<[^<]+){5,} [rest of quoted message trimmed]
RE: shellshock and haproxy
Hi Jeff, [ALERT] 272/080419 (29422) : parsing [/etc/haproxy.cfg:295] : 'reqdeny' : regular expression '^[^:]+:\s+.*?(<<[^<]+){5,}' : regex '^[^:]+:\s+.*?(<<[^<]+){5,}' is invalid Which version of haproxy were you able to use that regex with? Make sure you compiled haproxy with PCRE (USE_PCRE=1). Regards, Lukas
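If rebuilding from source, the flag goes on the make command line; something like the following (the target name is an example from the 1.5 era, adjust for your platform), after which haproxy -vv should report the PCRE version it was built with:

```shell
make clean
make TARGET=linux2628 USE_PCRE=1
./haproxy -vv | grep -i pcre
```

Without PCRE, HAProxy falls back to the system's POSIX regex engine, which rejects constructs such as non-greedy quantifiers (the .*? in the rule above).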
RE: shellshock and haproxy
I think it was accidentally left out of my latest build -- thanks! -- Jeff Buchbinder Rave Mobile Safety, Inc jbuchbin...@ravemobilesafety.com From: Lukas Tribus [luky...@hotmail.com] Sent: Tuesday, September 30, 2014 11:30 AM To: Jeff Buchbinder Cc: haproxy@formilux.org Subject: RE: shellshock and haproxy [quoted message trimmed]
Re: shellshock and haproxy
I'm going to update the article as well :) Baptiste
hash mapping on x-forwarded-for header?
Hi, We have a backend cluster of 18 API servers which normally get hit from an HAProxy instance on the public subnet. We use hash-type consistent to load balance and pin clients to specific servers, in order to take advantage of the local cache on the API servers. We recently deployed a few frontend nginx servers on a new project which are load balanced in this manner as well. However, when these servers hit the API cluster internally via HAProxy, they get pinned to only 3 backend API servers and cause them to melt. Is it possible to use hash-type consistent on the x-forwarded-for information from the request hitting the frontend nginx servers? Thank you, Paul
Re: hash mapping on x-forwarded-for header?
On Tue, Sep 30, 2014 at 11:44 AM, Paul McIntire p...@skout.com wrote: Hi api servers and cause them to melt. Is it possible to use hash-type consistent on the x-forwarded-for information from the request hitting the frontend nginx servers? If you're using 1.5 the balance hdr(x-forwarded-for) option is probably what you want. http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#balance -Bryan
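A sketch of Bryan's suggestion applied to the internal listener (names and addresses are hypothetical); when the header is absent or empty, HAProxy falls back to round robin for that request:

```haproxy
backend be_api
    balance hdr(X-Forwarded-For)  # hash on the original client IP set by nginx
    hash-type consistent          # keep mappings stable when a server goes down
    server api1 10.0.1.1:8080 check
    server api2 10.0.1.2:8080 check
    # ... one line per API node, 18 in total
```

This way the internal hop distributes by the original client address rather than by the handful of nginx source IPs, which is what was collapsing the hash onto 3 servers.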
Re: retry new backend on http errors?
On 26/09/2014 11:46 AM, JCM wrote: On 25 September 2014 14:47, Klavs Klavsen k...@vsen.dk wrote: Any way to make haproxy retry requests with certain http response codes X times (or just until all backends have been tried)? Nope. You really don't want to do this. And I'd be sad if the devs added anything into HAProxy to enable this. I don't find his request unreasonable. There are cases where a short burst of 500s could lead to a successful request upon a retry. But I have to say that this is very tricky to decide: under which conditions do you want HAProxy to retry, or let the 500 get back to the client? Pavlos
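For context, what HAProxy 1.5 does offer here is retrying failed *connections*, not failed HTTP responses; a sketch (hypothetical names) of the related knobs:

```haproxy
backend be_app
    retries 3             # retries connection failures only, never 5xx responses
    option redispatch     # after retries, allow a different server to be tried
    # react to repeated 5xx responses by taking the server out of rotation:
    server s1 10.0.0.1:8080 check observe layer7 error-limit 10 on-error mark-down
```

The observe layer7 approach addresses the spirit of the request indirectly: instead of retrying an individual 500, a server that keeps returning errors stops receiving traffic.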
cookie persistence when a server is down
Hello, I'm testing HAProxy's cookie-based persistence feature(s) and I have a question. Currently I have 2 test servers set up behind HAProxy. They use a JSESSIONID cookie, like many Java application servers. In haproxy.cfg I have these persistence settings: server server1 127.0.0.1:9443 ssl verify none check cookie server1 server server2 172.28.128.3:9443 ssl verify none check cookie server2 cookie JSESSIONID prefix This works as expected. HAProxy adds the prefix to the cookie and this enables sticky sessions. When I put, for example, server1 into maintenance, HAProxy routes server1 clients to server2. I can see this in the HAProxy logs with termination flags --DN. When I put server1 back in service, it routes server1 clients back to server1, because the cookie has not changed (flag --VN). But what if I had a server3? When I put server1 in maint, will server1 clients be randomly routed to server2 or server3 on each request? Or are they somehow temporarily persisted to server2 or server3 until server1 becomes available again? Thank you, Colin Ingarfield
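For the three-server case the config would just grow by one line (a sketch; server3's address is made up):

```haproxy
backend be_app
    cookie JSESSIONID prefix
    server server1 127.0.0.1:9443 ssl verify none check cookie server1
    server server2 172.28.128.3:9443 ssl verify none check cookie server2
    server server3 10.0.0.3:9443 ssl verify none check cookie server3
```

My understanding (worth verifying against the docs): in prefix mode HAProxy only rewrites the cookie when the backend emits a Set-Cookie, so if the application does not re-issue JSESSIONID after the redispatch, the cookie keeps the server1 prefix and each request is re-balanced among the remaining servers while server1 is down.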
dynamically change maxconn of server
Hello, I'm trying to find a way to dynamically set the maxconn of a server at runtime, and haven't been able to find anything that seems right. I was hoping that maybe I'm missing something, or that someone can suggest an alternative. I'm using haproxy internally to proxy a microservice. Haproxy runs on the host running the microservice, with a backend containing a single server (localhost, with the microservice port). I like this setup, primarily because I've found that haproxy handles queueing very well, with a lot of options. I set a high maxconn on the frontend and a low maxconn on the server, and include a queue timeout so that in case the server is overloaded it can return a 503 to the client rapidly. This particular service needs a low server maxconn during the day, when we're okay with lots of 503 errors but really need to avoid over-consuming CPU. During the night, we could set a high maxconn and run the hosts at an extremely high resource utilization. I was hoping to compute maxconn dynamically and set the server maxconn via the unix control socket, but the socket currently seems to only set maxconn globally or for a frontend. Some extra info: we configure haproxy with chef, but I was hoping to decouple this. Hosted chef is not always available, and sometimes we disable chef-client. Thanks! -- e s
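As a point of reference, the frontend-level command does exist on the stats socket; a hypothetical session (socket path and frontend name are made up), with the caveat that a per-server equivalent ("set maxconn server <backend>/<server>") only arrived in later HAProxy releases:

```shell
# requires "stats socket /var/run/haproxy.sock level admin" in the global section
echo "set maxconn frontend fe_service 2000" | socat stdio /var/run/haproxy.sock
echo "show info" | socat stdio /var/run/haproxy.sock | grep -i maxconn
```

Until the per-server command is available, the common workaround is templating two configs (day/night maxconn) and doing a graceful reload (haproxy -sf) from cron, which avoids depending on chef being reachable.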