Re: SSL farm
On 2012-05-23 11:42:24, Hervé COMMOWICK wrote:
> Or you may use PROXY protocol and set send-proxy in your haproxy
> configuration and ask stud to merge this:
> https://github.com/bumptech/stud/pull/81

This is the single-SSL-server configuration that I explicitly wanted to
avoid, right?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Re: SSL farm
On 2012-05-23 16:21:35, Hervé COMMOWICK wrote:
> No, you may have multiple stud.

And how do you load balance between them? DNS round robin is not good
enough.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Re: SSL farm
On 2012-05-23 16:37:53, Hervé COMMOWICK wrote:
> just use HAProxy to load balance to multiple stud, with send-proxy on
> HAProxy side, and --read-proxy on stud side.

Thanks for the pointers, Hervé. stud is not in Debian stable, and the
haproxy and stunnel packages there are both too old to have this
feature. mod_gnutls doesn't support the PROXY protocol as far as I can
tell.

What happens to a browser session when an SSL connection moves from one
server to another mid-stream without the resume feature? Reload, at
least with stunnel, implies renegotiation of the SSL connection as far
as I can tell.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
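[For reference, a minimal sketch of the setup Hervé describes, assuming
a recent haproxy and a stud built with the --read-proxy pull request;
the names and addresses are made up. haproxy balances the raw TLS
stream in tcp mode and prepends a PROXY header so each stud instance
can recover the client IP:]

```haproxy
frontend ssl-in
    mode tcp
    bind :443
    default_backend stud-farm

backend stud-farm
    mode tcp
    balance source                      # keep a client on the same stud
    # Each stud instance must be started with --read-proxy so it can
    # parse the PROXY header that send-proxy prepends.
    server stud1 10.0.0.11:8443 check send-proxy
    server stud2 10.0.0.12:8443 check send-proxy
```

[Balancing on the source address also helps SSL session reuse, since a
returning client lands on the stud that holds its session cache entry.]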
SSL farm
I read through the last 6 months of the archive, and the usual answer
for SSL support is to put nginx/stunnel/stud in front. This, as far as
I can tell, means a single server handling SSL, which is what
http://haproxy.1wt.eu/#desi suggests is a non-scalable solution.

You can obviously configure haproxy to route SSL connections to a farm
via tcp mode, but you then lose the client IP. The transparent keyword
is promising but apparently requires the haproxy box to be the gateway.
Not sure that is possible in our cloud environment.

I understand from
http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html#setting-a-session-cache-with-apache-nginx
that session reuse (i.e. mod_gnutls in our case) would need to be
configured on the backend to permit SSL resume.

But how do you go about distributing traffic to an SSL farm without
losing the client IP?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Re: SSL farm
On 2012-05-22 19:46:45, Vincent Bernat wrote:
> Yes. And solve session problem by using some kind of persistence, for
> example source hashing load balancing algorithm.

Persistence here meaning that SSL packets for a given session go to the
same SSL server? If so, what happens if that SSL server dies?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
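[A minimal sketch of the source-hash persistence Vincent suggests, with
made-up server names and addresses. With a consistent hash, a dead
server only remaps its own clients, who then renegotiate their SSL
sessions; everyone else keeps their mapping:]

```haproxy
backend ssl-farm
    mode tcp
    balance source           # hash the client IP to pick a server
    hash-type consistent     # a failed server remaps only its own clients
    server ssl1 10.0.0.11:443 check
    server ssl2 10.0.0.12:443 check
```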
Re: balance by selecting host with lowest latency?
On 2011-12-06 21:38:40, Wout Mertens wrote:
> So if you're doing HTTP load balancing for app servers, it seems to me
> that the server that responded fastest last time should get the job.
> HAproxy is already capturing the response times at each request so I
> think this would allow for really responsive fair load balancing.

Would that algorithm not be subject to oscillations? First we send n
requests to backend 1, then we send n requests to backend 2 as 1 is now
slow. If n is big enough, would this not cause a cascade of backend
failures, as opposed to spreading out the load over all backends?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
cookie domain set based on request
I would like to set the cookie domain to the top-level domain of the
request. This is not currently possible, right? For example, if the
request is:

    Host: www.tld

haproxy should behave as if this configuration were set in the haproxy
config file:

    cookie ... domain=.tld

In this case I have 100s of domains on the backend, so I cannot just
list them out. We redirect to sub-domains and are seeing clients spread
requests over all backends instead of respecting the cookie-based
persistence.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Re: cookie domain set based on request
On 2011-11-29 22:58:05, Baptiste wrote:
> what you want to do is not doable. I mean taking a piece of the host
> header and inserting it into the Set-Cookie header.

Thanks.

> How have you currently setup your persistence in HAProxy?

cookie WILDCAT_SERVER insert

> do you have any application cookie that would stay constant despite
> the domain browsed and we could rely on to ensure persistence?

Yeah, we should be able to find a suitable application cookie rather
than inserting a new one. It has the domain set already and should
survive the redirect.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
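[A sketch of leaning on an existing application cookie instead of
inserting a new one; the cookie name and addresses below are
hypothetical. In prefix mode, haproxy piggy-backs the server id onto a
cookie the application already sets, so the domain the application
chose is preserved across the redirect:]

```haproxy
backend app
    # WILDCAT_SESSION is a made-up application cookie name; haproxy
    # prefixes its value with the server id on the way out and strips
    # the prefix from requests on the way in.
    cookie WILDCAT_SESSION prefix nocache
    server app1 10.0.0.21:80 cookie a1 check
    server app2 10.0.0.22:80 cookie a2 check
```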
Re: hashing + roundrobin algorithm
On 2011-11-26 01:30:41, Rerngvit Yanggratoke wrote:
> We have over three millions of files. Each static file is rather small
> (< 5MB) and has a unique identifier used as well as an URL. As a
> result, we are in the second case you mentioned. In particular, we
> should concern about if everybody downloads the same file
> simultaneously. We replicate each file at least two servers to provide
> fail over and load balancing. In particular, if a server temporary
> fails, users can retrieve the files kept on the failing server from
> another server.

In order for haproxy to route the request correctly, it needs to know,
per URL, which two backend servers hold the file. Or it needs to fail
over from the server that is temporarily down (make sure you define
what "down" means, and that haproxy has the same understanding) and
reroute traffic to the server that is up. Do you care about the request
that sees the first failure? I do not know enough about haproxy yet to
determine whether either option is available.

If you replicate all files on server a to server b, then each server
needs 200% capacity to handle failover. If you replicate 3 times it
would be 150%, and if you replicate a given resource to random servers
via a consistent hash you get much better behavior. Make sure you
consider hot spots.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
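[A sketch of per-URL routing in haproxy without a lookup table; server
addresses are made up. Hashing the URI with a consistent hash sends
each file to a stable server, and when a server fails its URLs spread
over the survivors while the rest keep their mapping:]

```haproxy
backend static-files
    balance uri              # hash the request URI to pick a server
    hash-type consistent     # a failed server's URLs redistribute
    server s1 10.0.0.31:80 check
    server s2 10.0.0.32:80 check
    server s3 10.0.0.33:80 check
```

[Note this picks a single primary per URL; the replica is only used
once a health check marks the primary down, so the request that sees
the first failure may still get an error.]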
Re: Increasing amount of spam on the mailing list
On 2011-07-26 09:25:42, Karl Kloppenborg wrote:
> Surely, like surely you don't need an entirely open mailinglist, it's
> so easy to implement a verification of identity confirmation these
> days?

I filter spam, so the main problem I see is bounce messages, which are
sent to the list for some strange reason. I noted this a few months
back.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Re: Rate limit per IP
On 2011-03-20 10:25:07, Baptiste wrote:
> Yes, Haproxy can limit rate connection. Please look for rate-limit
> sessions and fe_sess_rate in the configuration.txt documentation [1].

rate-limit and fe_sess_rate are global for all IPs, right?

> In HAproxy 1.5 [2], there are a few more options, like src_conn_
> which are more accurate and might help you better. Bear in mind that
> 1.5 is still in development.

src_conn_rate looks interesting. I apologize for not being very
familiar with haproxy's release schedule, but any hints on when 1.5
becomes stable?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
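[A sketch of per-IP limiting with the 1.5 stick-table machinery; the
table size and thresholds are arbitrary numbers. Each source IP is
tracked in the table, and new connections are rejected once that IP
exceeds the configured rate. Bucketing by CIDR rather than single IPs
is not covered here:]

```haproxy
frontend web
    bind :80
    # Track each source IP and its connection rate over 10 seconds.
    stick-table type ip size 200k expire 60s store conn_rate(10s)
    tcp-request connection track-sc1 src
    # Reject a client exceeding 20 connections per 10s (arbitrary cap).
    tcp-request connection reject if { src_conn_rate gt 20 }
    default_backend app
```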
Rate limit per IP
Is there a way to rate limit per IP (or CIDR)? In the sense that our
global capacity (rate-limit sessions) might be x requests/sec, but to
protect against abusive bots or DoS attacks we would like to also limit
any single IP, or ideally some bigger bucket like a CIDR block, to say
x/100 requests/sec.

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com
Bounces
Would it be possible to filter out bounce messages instead of having
them sent to the list?

/Allan

--
Allan Wind
Life Integrity, LLC
http://lifeintegrity.com