It's a violation of proper layer separation, but you can always have server 1
do it instead. It could be a maintenance problem, but if it works it could be
very reasonable, especially if the server can determine whether or not a
client needs to be sticky.
-Richard
On Jan 29,
We regularly get individual REST GET requests significantly over that length;
the only tuning we've done in that regard is:

tune.bufsize 128000

I don't actually recall whether this was strictly necessary to address the
issue, but I believe it was.
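For context, tune.bufsize belongs in the global section of the HAProxy configuration; a minimal sketch of where the value quoted above would sit (the defaults section and timeouts are illustrative, not from the original mail):

```
global
    tune.bufsize 128000   # enlarge the request buffer so very long GET URIs/headers fit

defaults
    mode http
    timeout client 30s    # illustrative timeouts only
    timeout server 30s
```

Note that a request larger than the buffer is rejected outright, which is why raising tune.bufsize is the usual remedy for oversized request lines.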
-Richard
Richard Stanford
CTO
Cyril, thanks for your response.
On Jul 26, 2012, at 4:46 PM, Cyril Bonté wrote:
Please add "log global" in your backend sections (or in your defaults);
this explains why your log files didn't give you any indication.
After adding this line everywhere (not only in the frontend), you'll see
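Concretely, the suggestion amounts to something like the following (the backend and server names and addresses are illustrative):

```
defaults
    log global            # backends inherit the log targets defined in the global section
    option httplog

backend app_servers
    # no per-backend "log" line needed once "log global" is in defaults
    server web1 192.0.2.10:80 check
```

Putting "log global" in defaults covers every frontend and backend at once, which is why the advice is to add it there rather than section by section.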
I'm having some trouble interpreting an HAProxy log file line - it doesn't seem
to make much sense to me, so I'm fairly certain that I'm reading it
incorrectly. My full configuration file is at the end of this email.
Symptomatically, I've been seeing situations in which people are briefly
With this approach you really want one fewer public IP than you have
public-facing servers; with 2 servers this means 1 IP. DNS is used to
distribute the load, and keepalived is used to move traffic when a server
fails. But you always want at least one server's worth of spare capacity in
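A minimal keepalived sketch of that failover arrangement (the interface, virtual_router_id, priorities, and VIP address are assumptions for illustration, not taken from the original mail):

```
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the standby server
    interface eth0
    virtual_router_id 51
    priority 150            # lower value (e.g. 100) on the standby
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # the shared public VIP; keepalived moves it on failure
    }
}
```

Each server normally answers for its own address; when one fails, keepalived moves that VIP to a survivor, which is why each box needs a full server's worth of headroom.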
would be great from a security
certification standpoint.
Richard Stanford
CTO | KIMBIA
512-474-4447 x777
On May 17, 2012, at 7:41 AM, Willy Tarreau wrote:
Hi,
On Wed, May 16, 2012 at 05:05:05PM +0200, hapr...@serverphorums.com wrote:
I think I am in this exact same boat. I have a site
(.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>
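The quoted fragment looks like part of an Apache mod_rewrite HTTPS redirect; a minimal sketch of the kind of VirtualHost it appears to come from (the ServerName and the [R=301,L] flags are assumptions):

```
<VirtualHost *:80>
    ServerName example.com
    RewriteEngine On
    # Redirect every request to the same host and path over HTTPS
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
```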
Richard Stanford
CTO, KIMBIA
512-474-4447