Hi,
I'm experiencing latency problems while running HAProxy 1.4.18.
Our backend servers reply to HAProxy almost instantly (~4ms), but some
of those replies are sent to the clients more than 100ms later.
We have approx. 50k sessions open at any time, with an HTTP request
coming in
Hi all,
haproxy is used for http and https load balancing with TLS termination
on haproxy side.
I'm using OpenBSD -stable on this box. The CPU saturates at around
250 Mbps of combined in/out traffic on the frontend NICs, with 3000
ESTABLISHED connections on the frontend interface to haproxy.
Remove:
Here is a brief example of the commands it would run for OpenSSL
and then HAProxy.
# openssl
Probably doesn't matter regarding the crash, but:
Why not build openssl statically instead of shipping all those shared
objects in the rpm/directory? Replacing shared with no-shared will
Please provide the output of haproxy -vv of the 1.5.11 executable.
I guess you have an ABI problem between openssl 1.0.1 and 1.0.2.
I wonder if we are not seeing a case not covered by CVE-2015-0290 :
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-0290
And linking haproxy 1.5.11
Hey folks,
I'm setting up HAProxy and keepalived on 2 nodes today. And I'm able
to start HAProxy on the first node, but not on the 2nd node.
If you bind to a virtual IP, it will by default only work if that virtual IP
is currently active on that particular box, which is not what you
Hello,
I recently upgraded from HAProxy 1.5.4 to HAProxy 1.5.11, which
included an upgrade from OpenSSL 1.0.1i to 1.0.2a, and my load
balancers have since been crashing 1-2 times a day.
[...]
Any idea what's going on? I have had to roll back to 1.5.4 in the meantime.
Please provide
Hi Sarvesh,
Dear Team,
We upgraded our haproxy from 1.5-dev21 to the 1.5.11 stable version with the
same configuration. At the backend, we are using jBoss.
As soon as we upgraded, we encountered a serious issue regarding jBoss
thread counts: they increased tremendously.
After rollback to
I have the following configuration:
frontend http1 127.0.0.10:1080
rspirep ^Location:\ http://(.*):80(.*) Location:\ https://\1:443\2
rspirep ^Location:\ http://(.*) Location:\ https://\1
default_backend ssl1
backend ssl1
server sslserver 192.168.68.100:443 ssl verify required
Hi Willy,
Put option http-tunnel in your default section, this will restore pre
1.5dev22
behavior. Read more about this here:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-tunnel
Hmmm no, there's option forceclose, so the two are supposed to close.
There's
This should make it work until there's a fix for this.
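For reference, a minimal sketch of what such a defaults section could look like (the timeouts are illustrative placeholders, not values from the thread):

```haproxy
defaults
    mode http
    # restore the pre-1.5-dev22 behavior: process only the first
    # request/response and tunnel the rest of the connection
    option http-tunnel
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```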
Currently, using only CN I'm unable to reproduce any issue.
I did my tests here as well; haproxy behaves correctly in all
the scenarios I've tested.
Peter, the traces and information you have provided off-list
draw a very different
HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau w...@1wt.eu
Build options :
TARGET = linux30
[...]
Available polling systems :
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 2 (2 usable), will use poll.
Also, please
In fact, I am sure it's a bug.
I also happen to have the following certs:
*.apps.mycompany.com.au
*.its.apps.mycompany.com.au
If I go to sitea.its.apps.mycompany.com.au, I get the
*.apps.mycompany.com.au certificate
The workaround in the meantime is to make sure haproxy
loads
Where should I log this?
Reporting here is enough. I
I will capture a wireshark trace. Do you want this running on the workstation
that is doing the testing?
Doesn't matter where, as long as it captures the complete TCP session (tcpdump
-s0, to avoid truncating the packets) from an OK and from a failed session.
strict-sni seems to help.
Not yet sure why,
Is this a feature of HAProxy? And if so, what is the corresponding
option to enable it?
Basically, I want HAProxy to resend a request which has already
received a 503 from one server to another server in the same backend.
That's not supported, no.
Lukas
I have confirmed the behavior. In both cases all new connections
receive a RST when a backend server is not available to service the
request. The behavior is Syn - RST in both cases. Any existing
connections timeout.
That doesn't change the fact that an application can't do this, the
I've tried twice in the past week to unsubscribe from the
haproxy@formilux.org mailing list, but it
doesn't appear to be working.
By writing to haproxy+unsubscr...@formilux.org, right?
I have tried this change already, by renaming them alphabetically.
Didn't make any difference.
It won't in 1.5.8. Only 1.5.11 respects alphabetical ordering of the
certificates in a folder. Please specify them manually:
crt /etc/haproxy/ssl/wildcard.mycompany.com.au.crt crt
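In other words, something along these lines (the exact filenames are hypothetical; the point is that the more specific wildcard is listed, and therefore matched, first):

```haproxy
# load the more specific wildcard cert first so it wins the SNI match
bind *:443 ssl crt /etc/haproxy/ssl/wildcard.its.apps.mycompany.com.au.crt \
               crt /etc/haproxy/ssl/wildcard.apps.mycompany.com.au.crt
```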
I am wondering if the ability exists in HAProxy to reply to an HTTP proxy
request with a reset (RST) if no backend server is available.
The scenario goes as such:
I have a proxy PAC file that assigns multiple proxies to all clients,
and through the logic tree in the PAC file, the proxies are
I am having some issues with sticky sessions. The sessions are not sticking.
I am using tcp mode with ssl. I have implemented the example out of the
manual.
The user is getting bounced back and forth between the two servers,
which is causing issues in the application.
Am I missing
haproxy is a TCP (layer 3/4) proxy that can perform application (layer
7) functions. I am already doing service checks against my proxies to
validate their availability. When no pool member is available, haproxy
knows it. There are no external helpers needed to make this
determination. The
Date: Wed, 18 Mar 2015 01:49:47 +0100
From: denni...@conversis.de
To: luky...@hotmail.com; jarno.huusko...@uef.fi
CC: haproxy@formilux.org
Subject: Re: send/accept-proxy over unix socket not working
On 13.03.2015 18:44, Lukas Tribus wrote:
What
What version of haproxy are you using? (And what OS?)
In the first frontend I set:
server clear /var/lib/haproxy/test send-proxy
In the second frontend I set:
bind /var/lib/haproxy/test accept-proxy
Are you able to connect to the /var/lib/haproxy/test socket with
netcat or socat ?
Did you declare your local host in the peers section? If I recall correctly,
that will make sure that the stick tables are synced from the old to the
new process.
Unfortunately that leads to an error:
[ALERT] 070/192546 (2839) : Proxy 'back1': peers can't be used in
multi-process mode
Hi,
until a moment ago I was under the impression that when performing a
reload using the init script (which uses the -sf option for the reload)
the stick tables would survive but apparently I was mistaken.
Is there a better way to perform a graceful restart that maintains the
stick table
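A hedged sketch of such a peers setup (names and addresses are hypothetical; note the caveat quoted above that peers can't be used in multi-process mode):

```haproxy
peers mypeers
    # the local peer name must match the hostname (or the -L argument),
    # so the old and new process can sync the table on reload
    peer lb1 192.168.0.10:1024

backend back1
    stick-table type ip size 200k expire 30m peers mypeers
    stick on src
```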
I've just re-upgraded my haproxy instances to 1.5.11 and added the
option http-tunnel as you suggested. I'm waiting for things to
stabilize before jumping to conclusions, but it looks a bit better at
the moment.
I'll return with verdict about performance in a while after monitoring
further!
Hi,
I have not been able to reproduce the behavior in my lab-environment,
and I would very much appreciate pointers as how to troubleshoot this
further, and where to look. I have a feeling that haproxy 1.5 have
default values which I've not taken into consideration that might
cause this.
So maybe it's time that we backport this patch into 1.5. We haven't
received any negative feedback for 1.6 yet after almost 2 months. What
do people think?
I think it would be a good thing to release 1.6-dev1, unless there are some
critical issues that still need work.
Even if a lot of
Using HA-Proxy version 1.5.6 2014/10/18 on CentOS 6, recently updated, etc.
Upgrade anyway, you may be hitting a bug that's already fixed:
~/haproxy-1.5$ git log --oneline v1.5.6.. | grep agent
bfb8f88 BUG/MEDIUM: Do not consider an agent check as failed on L7 error
8eccbf7 MEDIUM/BUG: Only
Hi Lukas,
Thank you for taking the time to reply.
Here are the global and defaults sections:
Add:
option prefer-last-server
to your default section.
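In full, the change is just one extra line in the defaults section (mode shown only for context):

```haproxy
defaults
    mode http
    # when possible, reuse the server that served the previous request
    option prefer-last-server
```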
Lukas
Here's my haproxy config:
This config is incomplete. We need at least all options and timeouts,
including global and default sections.
Lukas
HAProxy is used by normal browsers,
but also by cronjobs in various languages (Perl, Python, C, Go, etc.).
I was surprised about this very long inactivity period for TCP
connection on a system which has reasonable settings for TCP keepalive[3].
This is not what TCP keepalives do. First of
Hold on a second, let me get that right.
Without TCP keep-alive enabled, with a client which sends some data every
10 minutes and timeout client set to 30m, it more or less means that the
connection will only be dropped by the client. Am I right?
Without *HTTP* keep-alive you mean. Well, that depends what
On Tue, Feb 24, 2015 at 01:33:32PM -0700, NuSkooler wrote:
Thanks, this has all been very helpful.
Unfortunately it seems that some of the pieces to create a debuggable
version of these old clients are currently missing here. If I can get
that together I'll debug and hopefully find
If a site has N haproxy hosts, how should new ticket-keys be
distributed (and processes reloaded) and avoid the race condition of
some hosts using the new keys before those keys are on all hosts?
You distribute the new key to all instances for decryption, but use
the penultimate key for
That is a nice solution.
I didn't understand that was the behavior from reading the
documentation patch from the OP. This makes it sound like the last key
is used for encryption and not the next-to-last (penultimate).
Correct.
Currently there is no choice about which key to use, so maybe
-- Use the stats socket to update the list without a reload
-- Update the Session state at disconnection log schema to include
something useful in case the server receives a ticket which was encrypted
with a key that is no longer in the list. Debugging SSL problems is a
nightmare by definition and having
Attached are two captures:
1) ha_lukas-allow-allow.pcap: This is a capture of the bind line you provided:
bind *:443 ssl crt /home/bashby/Lukas/TEST_cert_and_key.pem ciphers \
AES128-SHA verify optional ca-ignore-err all crt-ignore-err all ca-file \
/etc/ssl/certs/cw_client_ca.pem
2)
It would be nice to add a note that without proper rotation, PFS is
compromised by the use of TLS tickets. People may not understand why
they need to put 3 keys in this file and may never change them.
Agreed, we have to clarify that a never-changing tls-tickets-keys
file is worse than no file
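As a sketch, an initial tls-ticket-keys file with three random keys could be generated like this (the path is illustrative; rotation then means appending a fresh key, dropping the oldest, and reloading):

```shell
# write three random 48-byte keys, base64-encoded, one per line;
# haproxy decrypts with all listed keys and, as discussed above,
# encrypts new tickets with the penultimate key
KEYFILE=${KEYFILE:-/tmp/tls-ticket-keys}
for i in 1 2 3; do
    openssl rand -base64 48
done > "$KEYFILE"
chmod 600 "$KEYFILE"
```

The file is then referenced from the bind line, e.g. `bind :443 ssl crt ... tls-ticket-keys /etc/haproxy/tls-ticket-keys`.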
Attached is a pcap with the bind line cut+paste from your link.
In this case I see an Encrypted Alert, but I'm struggling to decrypt it
in Wireshark with this setup.
Ok, so as expected, this didn't really change anything, but at least
we have a config/capture consistency.
Now, it looks like the client
Hi,
I'm not currently sure of the JRE version. These are Android clients
written with an old Android SDK. All new clients are C++ / OpenSSL
based.
I have set the DH param size to 1024 with the same results.
Additionally, I set up a bind statement that reflects that of the
backward
I apologize, the email was destroyed by the mailer...
Attached is the information you requested -- and hopefully performed
correctly :)
* no_haproxy.pcap: This is a successful connection + POST to the
original Mochiweb server. Note that here the port is 8443 not 443
(IP=10.3.3.3)
*
On CentOS, after you update openssl, this is one choice:
bind 0.0.0.0:443 ssl no-sslv3 crt /etc/ssl/certs/yourkey.pem ciphers [...]
On another OS, the Qualys page describes how to get the list of ciphers.
My suggestion is to always use the recommended cipher list from Mozilla.
If your
Hi,
Since we can't even properly connect to s_server, that may be the end
of the road for those clients. However, I'm hoping there may be
something that could be configured to allow them through HAProxy.
Below is a s_server log. Note the read failure at the end. A similar
capture in the view
When I connect to haproxy the client uses:
TLS_ECDHE_RSA_WITH_RC4_128_SHA
When I connect to google.com the client uses:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
Apart from the RC4 vs AES difference here, which you can
probably fix with an appropriate ciphers string, as long as you
are using a
Hi,
I'm trying to use the source option of HAProxy in order to see the
client's address from my web server.
So I added this option in defaults: source 0.0.0.0 usesrc clientip.
When I restart HAProxy, I get back this message: Some
configuration options require full privileges,
our setup(1 DC):
* active-active ECMP
* 4 loadbalancers + bird OSPF
* 2 routers + OSPF
* IPs are on loopback interface, added and removed when haproxy service
starts/stops
* OSPF distributes routes to these IPs to routers
* routers route by source IP so same IP always lands on same
Isn't that used more as a multiple-datacenter active/active setup thing,
being in the routing part and not the LAN side of things?
That's the only place I've seen it used. It's very cool though :)
As I understand anycast and ECMP (and I only know guys who use it and
know what they are
As I understand Wikipedia, it is discouraged to use ECMP for
load balancing: Load balancing by per-packet multipath routing is
generally deprecated due to the impact of rapidly changing latency,
packet reordering...
Nobody does per-packet multipathing anymore, in fact, when you use
ECMP for
In each proposition, there is a single master (DNS, LVS...), which
load-balances across two HAProxy instances.
Me, I try to choose a solution with two masters, which will be my two HAProxys.
Maybe it's impossible and I'm dreaming ^^, but this is what I need.
CDNs work with anycast and ECMP; that will solve
service haproxy status returns:
haproxy dead but pid file exists
and /var/log/messages shows:
Feb 12 22:46:20 ip-10-72-128-136 kernel: [100695.296333] haproxy[32373]:
segfault at 8 ip 0046b030 sp 7fff2372f4c0 error 6 in
haproxy[40+b5000]
Can you provide the output of
When I try to connect through haproxy with SSL, the browsers (Firefox and
Chrome) block the mixed content, in this case the http connections;
this does not happen when I connect without haproxy.
What can be the problem?
Your application is using hard-coded links with the http:// prefix.
Lukas
Is there a problem with health checks and haproxy? Again, using a machine
gun approach on the health check service, we see no problems, but for
whatever reason, occasionally (maybe 1 out 10, could be more), the haproxy
tcp expect fails. Using tcpdump, seems we're getting the right return
Hi Tod,
The only thing I found that I think may be causing this is Outlook
Anywhere / RPC over HTTPS. I did not find the http-no-delay option until
after testing, so I am wondering if this one setting could cause this
type of behavior?
Do you have problems with the actual application
I tried to implement these recommendations but didn't seem to get the
results I was expecting. How exactly does one reliably test that the
1-RTT handshake is actually working?
Enable TFO and announce http/1.1 via NPN and ALPN, that should
do it.
But your client will have to support all those
Hi,
Thanks for getting back to me, I found the issue - we are not using sslv3
for obvious reasons but the ssl health check only uses v3 and thus
the check handshake was failing.
My bad, I only noticed a little later; is there a plan to support a TLS-level
handshake?
Yes, use check (or
Summary: http:5000 to https:54443 with relative_uri as it is. (change
is in scheme and port number).
Use redirect prefix [1] instead of scheme or location:
redirect prefix https://mps-haproxy.int:5443
I am using: HA-Proxy version 1.5-dev25-a339395 2014/05/10
I strongly suggest to
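Put together, a minimal sketch (hostname and port as quoted in this thread; `code 302` is the default and shown only for clarity):

```haproxy
frontend http-in
    bind :5000
    # redirect prefix keeps the relative URI (path + query string) as-is,
    # only the scheme/host/port prefix is replaced
    redirect prefix https://mps-haproxy.int:5443 code 302
```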
There is no SSL protected repo. I'm surprised that you found the
haproxy.org
site slow, usually it's reasonably fast. Are you sure you weren't
cloning from 1wt.eu instead, which is the slow master?
Would it be possible to get the haproxy org on github to be synced with
your repos?
Hey,
I have run into an odd scenario where the backend is DOWN even though the
layer 7 checks are passing. I have included the check which we
received. The haproxy setup is fairly simple, using the proxy protocol. I
could only find one example of this issue here; however, no follow-up
was done on
OpenSSL sometimes acts stupidly like this inside a chroot. We've
encountered a few issues in the past with openssl doing totally crazy
stuff inside a chroot, including abort() on krb5-related things. From
what I understood (others, please correct me if I'm wrong), such
processing may be
The maxconn was set to 4096 before, and after 45 days, haproxy was
using 20gigs...
Ok, can you set maxconn back to 4096, reproduce the leak (to at least
a few gigabytes) and run show pools a few times to see where
exactly the memory consumption comes from?
Lukas
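For that you need a stats socket; a minimal sketch, assuming a socket path of /var/run/haproxy.sock:

```haproxy
global
    # admin-level UNIX socket for runtime commands such as "show pools"
    stats socket /var/run/haproxy.sock mode 600 level admin
```

Then something like `echo "show pools" | socat stdio /var/run/haproxy.sock` should dump the per-pool allocation counters.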
I have a situation where the no-sslv3 is being ignored using version
1.5.10 on centos 6.6 and my test backend Java Rest api test servers are
rejecting SSL handshakes with :
Please post your configuration, haproxy -vv output and an ssldump of the
failed SSL handshake.
This doesn't make any
With maxconn 5 this is expected behavior, because haproxy will use
RAM up to an amount that is justified for 5 concurrent connections.
Configure maxconn to a proper and real value and the RAM usage will be
predictable.
Lukas
Is there any way to use SNI + specify the server name
to certificate mapping?
To specifically tell it for requests coming in with SNI
of ttrss.neulinger.org, use this certificate instead.
Is there any way to do this equivalent with haproxy?
There is no need to, HAProxy does this
Lukas,
Thanks for the update. How do I verify whether haproxy is already linked
against OpenSSL?
Can you use SSL or does it fail saying that you need to enable OpenSSL?
If the former is the case, then SSL is enabled; otherwise, it's not.
Please note that I did not build haproxy from source
Similar question for certs with SANs - does it consider the
alternative names in the selection process?
Yes, as per the doc:
The certificates will be presented to clients who provide a
valid TLS Server Name Indication field matching one of
their CN or *alt subjects*.
And lastly, what if
SAN = Subject Alternative Name
Ah OK. We could double-check but I *believe* Emeric told me about
something like this when he implemented the SNI. But I could be
wrong and could confuse with something else. You could easily
check in the code if you feel at ease with openssl's API (I
I have installed haproxy from the ubuntu repo (haproxy version: 1.5.9).
Recently the OpenSSL security team released security patches for a
vulnerability (USN-2459-1). Please let me know how to rebuild
haproxy against the newly installed OpenSSL version.
Don't rebuild. Just use your operating systems
I don't see how. The socket is immediately close()'ed when it hits
tcp-request
connection reject, this is as cheap as it gets.
If you're getting attacked, you try to send as few unnecessary packets
as possible, I guess a silent drop could be nice.
Yes, but that can't be done in
Hi,
Port forwarding to a different IP on the same haproxy box causes haproxy
to misbehave.
This error happens when one uses the TPROXY target for portforwarding,
ie. like this:
Let's say the IP of the main interface is 192.168.100.100,
and traffic from outside to port 1234 shall be
Hi!
just a thought... wouldn't it make sense to add an option to tcp-request
connection reject to disable the actual TCP RST?
I don't see how. The socket is immediately close()'ed when it hits tcp-request
connection reject, this is as cheap as it gets.
So, an attacker tries to (keep) open
Imagine the 192.168.100.100 is a public IP (for example 1.2.3.4),
and the others are private IPs as they indeed are (192.168.*).
The reason for me is to use internally (ie. between the proxy server
and the backend server) only the private IPs.
The remote IP that your backend sees will be the
As said in the inital posting, the IP of the proxy server is 192.168.100.100
and public port 1234; it needs to be forwarded to the 2nd IP 192.168.100.101
port 5678, and from there to the backend server 192.168.100.102:.
The key question is: what is the reason you don't bind to
Hi Team,
I have an issue with HAProxy SSL redirection. Whenever a request is
redirected by HAProxy, two redirect responses are sent, one
with http and the other with https, while the URL for both is the same.
For example,
when the URL is redirected to
Hi,
As far as I can see, HAProxy removes that header if it's contained in the
response from the backend (nginx with keepalive enabled) because
RFC 7230 says so:
The Connection: keep-alive header doesn't exist in HTTP/1.1,
as HTTP/1.1 implies keep-alive by default. If you don't want to
keep-alive in
Hi Shane,
I have run into a problem using HAProxy with SSL termination that has
me completely stumped.
40-60% of the requests fail and I cannot seem to determine the reason
for the failure/inconsistent results.
Looks like haproxy is interpreting those failed requests as
Hi folks,
there are some API changes (most structures will become opaque)
in OpenSSL that will break the haproxy build. It appears
that we will see a 1.1.0 release with those changes by the end
of 2015 [1].
Some of this can be reproduced by linking haproxy against current
openssl
On 22/12/2014 19:00, Lukas Tribus wrote:
OK, got it.
What should I do with HAProxy to handle with them? (option
accept-invalid-http-request is set already)
You can't. HAProxy will not support those URIs.
accept-invalid-http-request allows a limited set of forbidden
chars
Hi Alex,
I have a website, https://mytest.com (faked for testing). I intend to
use haproxy in front of it with the send-proxy option (using the
proxy protocol with an SSL connection). The ideal case is that haproxy just
passes the TCP packets through without decoding them, and somehow the
Thanks.
It seems that haproxy needs to terminate the SSL connection and start
another SSL connection to the backend to get it to work. (HAProxy in HTTP
mode?)
Using the tcp mode of haproxy, it seems that SSL connection negotiation
cannot be started.
There is no need to. In TCP mode,
Hi Lucas,
Thank you very much for assuring me that tcp mode is OK. I used ported
code to patch up the backend server to support this.
Could you tell me at what step of the SSL negotiation the proxy protocol
line is inserted, so I can debug the code a little bit?
Before the
Hi Kevin,
We’re migrating a production haproxy 1.4 install to 1.5.
The problem is that initially, it works fine, but then everything
starts locking up.
After a certain period of time or traffic? If yes, how much?
Essentially, one of our backends ends up VERY slow taking a long time
to
I could, but that could be a massive amount of data,
i.e. a few gigabytes. How long a capture would you like?
You can simply capture it on the client, when the issue appears.
Even a single session with a few KB is enough, as long as you
are able to capture the actual problem.
You said that
Hi Sergei,
Full configuration is in the attachment. Briefly, we have 1 haproxy and 2
backends (nginx); tests were run against static files (1kb and 5kb).
Haproxy listens on 443 in tcp mode (“listen tcp mode”), then the listener
figures out what frontend to use (“use-server frontend if condition”),
Hi,
could anyone explain what is wrong with following HTTP-request? Nginx
(btw) is working fine.
The content of objectKey (everything after objectKey=).
Lukas
OK, got it.
What should I do with HAProxy to handle with them? (option
accept-invalid-http-request is set already)
You can't. HAProxy will not support those URIs.
accept-invalid-http-request allows a limited set of forbidden
chars, but not the ones you are using (127/0x7f ASCII).
You
Hi Sergei,
What about if you run with 5 processes instead of 10? Are you still maxing out
at 200k sessions (which would increase the per-process sessions to 40k), or are
you maxing out at 100k (maintaining max 20k per process)?
I’ve tried to decrease number of processes - caused decrease in stot
Hi,
I can see the 408 lines showing up in the log; however, on the client
side I don't see them.
Meaning that everything works as expected, or is it not?
Lukas
Hi Jeff,
I have used the above directive in my configuration file under
defaults. And it has made not even the slightest difference in seeing
messages like the following getting printed from our http client:
HTTP/1.0 408 Request Time-out
Cache-Control: no-cache
Connection: close
Hi.
Is it possible, and if not, is it planned, to allow haproxy to be
configured to retry connections on receiving an RST and not just on timeout?
Linuxes drop the SYN if the listen socket is overflown, but can return
RST if configured so. IMHO, it's better to RST because then you know
that SYN is
Hi David,
I've just taken a look at the openssl code and your trace. The trace shows
that the process tries to open 'krb5.conf', and in openssl/ssl/kssl.c there
are some 'abort()' calls.
Do you use the kerberos5 auth feature of openssl?
Wait a minute, that sounds familiar ...
David,
are you on Centos 6.4?
Try setting your ciphers to one of the Mozilla recommendations [1];
that way, you won't be negotiating kerberos cipher suites anymore.
If that doesn't help, and you need chroot, then you will have to
upgrade to at least Centos 6.5.
Regards,
Lukas
[1]
We are running 1.5.9 on Centos 6.5. It crashes 10 seconds (give or
take a few seconds) after 1am, 5am, 9am, 1pm, 5pm and 9pm, like
clockwork; let's call that CRASHTIME. Previously we'd been using 1.5.3
on the same hardware for some months without crashes. Once the crashes
started
I will do so next time. And yes, I was planning to run strace.
Do I need to recompile to enable coredumps?
No, you just adjust ulimit before you start, and make
sure you didn't strip (as in the command strip) the
executable.
Then check the core with:
gdb path/to/the/binary path/to/the/core
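The two steps, sketched (the gdb paths are placeholders):

```shell
# 1) in the shell that will start haproxy, allow core files to be written
ulimit -c unlimited

# 2) after a crash, open the core together with the unstripped binary:
#      gdb /usr/sbin/haproxy /path/to/core
#    then e.g. "bt full" for a backtrace
ulimit -c   # verify the new limit took effect
```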
PFS depends on using the DH algorithm to exchange and create a secret for
the connection.
This is not entirely correct: *DHE* ciphers depend on it, but ECDHE ciphers
don't. Since he disabled all DHE ciphers manually in the configuration,
that's not it.
I didn't have DH parameters, added those,
Hi Daniel,
We have a situation where our app servers sometimes get into a bad
state, and hitting a working server is more important than enforcing
persistence. Generally the number of connections to a bad server
grows rapidly, so we’ve set a maxconn value on the server line which
Hi Sachin,
Hi,
We have SSL backends which are remote, so we want to
use http-keep-alive to pool connections
Connection pooling/multiplexing is simply not (yet) supported.
It is therefore expected behavior that 1 frontend connection
equals 1 backend connection.
Regards,
Lukas