Hi,
you should switch net.ipv4.tcp_tw_recycle off; you already have tcp_tw_reuse
on, which serves the same purpose (and is less dangerous with NATted clients).
http://www.serverphorums.com/read.php?10,182544
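For illustration, the corresponding sysctl settings might look like this (a sketch; apply with sysctl -p):

```
# /etc/sysctl.conf excerpt (suggested values)
# reuse sockets in TIME_WAIT for outgoing connections - generally safe
net.ipv4.tcp_tw_reuse = 1
# aggressive TIME_WAIT recycling - breaks clients behind NAT, keep it off
net.ipv4.tcp_tw_recycle = 0
```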
Lukas
From: erik.tor...@apicasystem.com
To: bed...@gmail.com
CC:
Does that mean Sandy/Ivy Bridge based Intel platforms can't be unreservedly
recommended for high-performance usage with haproxy, or is there a workaround?
What platform would you recommend for a new setup?
Hi Pasi,
Do you know if ubuntu 12.04 has these optimized drivers or not?
I think Canonical developers are going to add the drivers later
in some update to Ubuntu 12.04 packages. The drivers are not yet in 12.04.
I saw some discussion from Canonical guys on xen-devel about that.
I would suggest terminating SSL on the haproxy box (with stud in front of it),
thus switching haproxy from tcp to http mode. That way, long-running
keepalive-enabled HTTPS sessions terminate there and Apache only sees real
non-SSL requests without blocking any threads.
If you would like to avoid
I am trying to understand why option httpclose would be a problem?
With httpclose in your configuration, you need 2 TCP sessions per *request* on
your haproxy box. When you disable httpclose and enable only http-server-close,
you will use keepalive between the client and haproxy.
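As a sketch (section and option names per the haproxy docs; this is not the poster's actual configuration):

```
defaults
    mode http
    # keep the client-side connection alive, but close the
    # server-side connection after each response
    option http-server-close
    # do NOT combine this with option httpclose, which would
    # force two TCP sessions per request
```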
With
Willy, this is huge! Great, great work!
A few comments/questions:
- are you running latest and greatest openssl on demo.1wt.eu? I am asking
because Secure Renegotiation doesn't seem to be supported [1]. Older (1.0.0?)
releases seem to have a higher memory overhead as well, iirc.
- I see you
...@1wt.eu
To: luky...@hotmail.com
CC: haproxy@formilux.org
Subject: Re: HAProxy with native SSL support !
Hi Lukas,
On Tue, Sep 04, 2012 at 03:05:14PM +0200, Lukas Tribus wrote:
Willy, this is huge! Great, great work!
A few comments/questions:
- are you running latest and greatest openssl
Hi,
In fact when I say yassl, I really mean CyaSSL.
Ok, great.
A few more comments about (C)yassl:
- development of new features is obviously not as fast as in OpenSSL. For
example TLS SNI is not supported yet (ETA: next release) [1]. This feature
was introduced in 2007 (0.9.8f)
- (C)yassl doesn't support renegotiation, by design. They also don't
implement RFC 5746 (secure renegotiation), see [3]. While this is not
a security problem (from a server point of view), it will become an
interoperability problem sooner or later, once browser vendors make
Don't know if it helps without some knowledge of the nginx source code, but
here [1] you can find the patches applied to nginx to introduce ocsp support.
It doesn't seem to be trivial to implement though, because you also need to
run (at regular intervals) an OCSP query towards the CA's OCSP
I updated to your latest snapshot which reads HAProxy version
1.5-dev17, released 2012/12/28 (and not 12/30 as I would have
expected) but the problem is still there.
1.5-dev17 is _not_ the latest snapshot. You can find the latest snapshot from
the URL Willy already posted:
OK, I understand that but when downloading the latest
version you're writing about and compiling it, it says 20121228
I see what you mean.
Those date variables are updated only when Willy releases a new -dev release;
which is a manual process as you can see from git commit [1].
In fact, in
FYI:
Firefox only uses TLSv1.0 (see [1]), while Chrome can use up to TLSv1.1 (see
[2]).
If both Firefox and Chrome trigger the issue without the no-tlsv11/12 options,
then the issue can be triggered with TLSv1.0 for sure.
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=733647
[2]
Jan 8 18:30:59 srv11 kernel: [ 3878.272003] ------------[ cut here ]------------
Jan 8 18:30:59 srv11 kernel: [ 3878.295572] WARNING: at net/ipv4/tcp.c:1330
tcp_cleanup_rbuf+0x4d/0xfc()
Jan 8 18:30:59 srv11 kernel: [ 3878.319107] Hardware name: System x3690 X5
-[7148Z68]-
Jan 8
In the meantime I've downgraded to the old kernel, but the performance
issues persist. So this seems to be an issue in haproxy.
This is very strange. In your first mail you reported that your CPU is
spending 30% in userspace and 70% in system. How is your CPU usage now?
You are running the
You should be able to deal with this by adding more IPs to your haproxy box and
configuring 2 backends in haproxy pointing to the same F5 VIP, but with
different source IPs [1].
Remember to configure HAproxy for source persistence as well, if your
application needs it.
[1]
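A hedged sketch of such a setup (all addresses and backend names are invented for illustration):

```
# two backends towards the same F5 VIP, differing only in source IP
backend f5_via_ip1
    source 10.0.0.11
    server f5vip 10.0.0.100:80 check

backend f5_via_ip2
    source 10.0.0.12
    server f5vip 10.0.0.100:80 check
```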
Interesting. Are these implementations still in use? This seems more
like early experimentation than definitive releases to me. I don't
know if such versions were shipped in any LTS distro, so most likely
they'll quickly disappear. Am I wrong?
Looks like you are correct. In openssl 1.0.1
There has been a POST bug (among many others) which was fixed in -dev16. Please
upgrade to latest 1.5-dev17.
Date: Thu, 10 Jan 2013 17:44:07 -0800
From: rlpow...@cytobank.org
To: haproxy@formilux.org
Subject: Problems with sni and big connections.
Theoretically you are able to offload SSL at haproxy, and pass raw, unencrypted
SPDY to the backend.
I doubt however that this has been done before, and nginx most certainly
doesn't accept unencrypted SPDY traffic, so yes, tcp mode will be the only
option for now.
Regards,
Lukas
From:
I have the same issues on windows XP clients (IE8).
I don't have any issues at all with Win 7/IE9 and win2012/IE10.
FYI:
There are 2 browsers that use the Windows XP SSL/TLS stack, and thus don't
provide SNI on Windows XP:
- Internet Explorer
- Safari
(and obsolete Chrome releases prior to v6.0)
As a solution to this I'll try to get additional IPs and configure
HAproxy to serve different certificates for different IPs.
Do you foresee any reasons why such an approach won't work?
Don't worry, mapping the certificates to dedicated IPs will fix this
problem on IE/Safari with Windows
As a just-maybe-potentially-useful comment: I've gotten around some
SNI issues by having the client come to a non-SSL URL, at which I
set a cookie (in haproxy) for which site they asked for; then we can
redirect to the SSL URL and select the site in haproxy based on the
cookie.
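A rough sketch of that cookie trick (hostnames, cookie name and backend names are invented; redirect's set-cookie option is documented in haproxy):

```
frontend http_in
    bind :80
    acl is_site1 hdr(host) -i site1.example.com
    # remember the requested site in a cookie and send the
    # client to the shared SSL URL
    redirect prefix https://ssl.example.com set-cookie SITE=site1 if is_site1

frontend https_in
    bind :443 ssl crt /etc/ssl/ssl.example.com.pem
    # select the site based on the cookie set above
    acl cook_site1 hdr_sub(Cookie) SITE=site1
    use_backend bk_site1 if cook_site1
```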
The site I want to display is designed with Joomla. There are no errors
in Apache logs and Firebug shows several warnings for js files. The
site is well displayed when I connect directly to the Apache server.
What are those Firebug warnings exactly? Also, can you show
us your haproxy logs?
Notice that having the HAproxy box swapping is a huge performance killer and
you absolutely do not want that, so apart from tuning/configuring the OOM
killer you should track the issue down and avoid memory depletion and swapping
in the first place.
frontend nodes
maxconn 2400
bind :12340 interface eth1
default_backend nodes
but portscans from another node in the internal network show that 12340 is
sometimes open, but most of the time it is closed.
We believe this is a bug in haproxy.
Probably haproxy starts when the VIP
Ah okay, I expected bind :12340 interface eth1 to listen to traffic
coming to the interface, not to bind to all IPs which are bound to the
interface at the moment of starting haproxy. If that's really the case,
the documentation of bind interface could be improved.
I think you misunderstood
The current documentation of the bind option interface can be misleading
(as seen on the ML recently).
This patch tries to address misunderstandings by :
- avoiding the words listen or bind in the behavior description, using
restrict to interface instead
- using a different sentence
CC: haproxy@formilux.org; c...@itscope.de
Subject: Re: [PATCH] DOC: simplify bind option interface explanation
On Tue, Feb 12, 2013 at 10:13:19PM +0100, Lukas Tribus wrote:
The current documentation of the bind option interface can be misleading
(as seen on the ML recently).
(...)
Patch
Hi Willy,
I tried to enable tfo on the bind line today, however, it failed even on
recent kernels for me.
At first it was failing with:
[ALERT] 042/223418 (1141) : parsing [haproxy.cfg:14] : 'bind *:1234' unknown
keyword 'tfo'. Registered keywords :
Since this pointed to a trivial parser
Huh ? It is already present a few lines above :
Correct, I was just confused about it because the other keywords like
defer-accept or v4v6 are also present twice and tfo was missing below
the comment: /* the versions with the NULL parse function*/.
lukas@ubuntuvm:~/haproxy-ss-20130125/src$
Anyway, I'm used to not relying on libcs found in the field, because many
users upgrade their kernels on supported distros. That's why there are so
many defines in the makefile !
So do you think it would make sense to introduce a new build flag for TFO
and define TCP_FASTOPEN?
This would be
If you want I can send patches, but I will have to fix the mailer first.
Yes, please do !
The attached patch fixes the doc/code comments and should reach you
without being mangled.
Lukas
0001-DOC-tfo-bump-required-kernel-to-linux-3.7.patch
If you upgrade to a recent snapshot you can use the strict-sni feature [1].
This way, when the client doesn't provide SNI, the handshake is aborted.
I think this is important even when your clients are supposed to support SNI;
the client may be buggy or the SNI detection in haproxy -
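On the bind line this might look like the following (a sketch; the certificate path is invented):

```
# abort the SSL handshake for clients that do not send SNI
bind :443 ssl crt /etc/ssl/certs/ strict-sni
```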
Are you using keepalive on haproxy? Perhaps you are confronting nginx with
keepalive enabled and haproxy with client-side keepalive disabled?
Can you share the haproxy config?
Thomas is right, you should probably unload conntrack for best performance.
Also make sure you don't have any network
You probably don't want net.ipv4.tcp_tw_recycle = 1 when your clients are
behind NAT/CGN boxes.
If you have troubles with source port exhaustion, refer to the article Baptiste
wrote:
http://blog.exceliance.fr/2012/12/12/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/
Lukas
PM, Lukas Tribus wrote:
You aren't using HTTPS in the frontend when benchmarking haproxy and plain
HTTP when benchmarking the original server, are you? That could explain the
performance differences.
No, I only tested HTTP version :)
Anyway, you do want to enable keepalive and to do
I need it for my firewall it seems so I'll leave it for now.
In this case you may want to bypass conntrack for TCP port 80 traffic only.
Also consider matching your backend traffic with -j NOTRACK.
You can read more about bypassing conntrack and the NOTRACK target here:
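Hedged example rules (raw table; ports and direction need adapting to your setup):

```
# skip conntrack for frontend HTTP traffic in both directions
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT    -p tcp --sport 80 -j NOTRACK
```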
When I try with tw_recycle = 0 then I start to get a lot of TIME_WAIT
connections and performance degrades quite quickly so I cannot remove it
for now
This indicates you are running out of source ports and it is
probably why the latency increases with the number of simultaneous connections.
When I try with tw_recycle = 0 then I start to get a lot of TIME_WAIT
connections and performance degrades quite quickly so I cannot remove it
Having a lot of TIME_WAIT connections shouldn't be a problem; in fact it's
pretty normal.
With tcp_tw_reuse enabled (and tw_recycle disabled), you
Linux ip-x.x.x.x 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:34:28 EST 2008 x86_64
x86_64 x86_64 GNU/Linux
[...]
CentOS release 5.4 (Final)
This seems to be a weird config, you have CentOS 5.4 (released 2009 with a
2.6.18 kernel), but you are running a xenified Fedora kernel compiled in
2008.
You shouldn't see CLOSE_WAIT for that long on your haproxy box - they should
time out.
In 1.4.22 there was a bug fixed which may relate to your problem:
BUG/MINOR: checks: expire on timeout.check if smaller than timeout.connect
But we are wondering if it can scale up to thousands or more, say
100k SSL certificates. Has anyone tried it?
I believe Willy has seen configurations with 50k certificates running
fine.
I would be very interested in how this was configured, seeing as I
am failing to provide consistent, correct, certificates for every
request for as little as 20-30 sites.
Well, you had your thread "Default certificate wrongly delivered" about
that issue and it didn't seem to be a haproxy
I suspect that some clients fail to use SNI. I've already seen
this from time to time. It looks like after some errors, they
refrain from using SNI or even TLS at all and fall back to SSLv3.
This scared me a bit, so I've done some digging.
I've found 2 related bug reports with a lot of
Enable logging, then you will see the reason the connection was closed;
I guess there is a connection timeout somewhere. Logging will help you
find out where exactly.
A few comments about your configuration:
noepoll
nokqueue
nopoll
nosepoll
nosplice
Exactly what are you
Hi,
please CC the list, so everyone sees your responses.
Ubuntu 12.04 LTS, Kernel 3.5, Haproxy 1.4.22 from repo
please provide the output from haproxy -vv nonetheless. It
should have been compiled with the linux2628 target, as this
will enable features like tcp splicing.
No, I need
In fact, it's 1.4.18, not 1.4.22. The Ubuntu repo is not
up to date.
Consider compiling haproxy from source, to have the latest
bugfixes in it.
Anyway, the build has tcp splicing enabled, so with the
configuration changes I proposed, you should be able
to handle the load.
Remember to check your
With what options would you suggest to compile Haproxy?
Something like this should do it: make TARGET=linux2628 ARCH=native
Read the README file for details.
You were talking about CPU core assignment; how do I do that with HAproxy?
Use taskset [1]. Check [2] for interrupts.
Lukas
[1]
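Usage might look like this (core numbers and paths are examples):

```
# start haproxy pinned to CPU core 0
taskset -c 0 haproxy -f /etc/haproxy/haproxy.cfg
# or change the affinity of an already running process
taskset -cp 0 $(pidof haproxy)
```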
v1.4.20 is not in Ubuntu repositories, 1.4.18 is.
Did you compile haproxy yourself?
Lukas
conntrackd also permits sharing TCP states between boxes that will
also run iptables
With conntrackd syncing you just allow the packet to pass the iptables
barrier; but the session will still be dropped by the OS because the
TCP stack doesn't know the socket, and neither does the application.
It's a point in time dump and restore of the in flight packets.
Can't dump the details and in flight content of a TCP session if
the host is already dead.
So either this will work only for manual switchovers (but not for
sudden hardware/software failure; also at this point TCP connection
SSL is possible in the 1.5 development branch only.
You can find all the necessary information on the
website http://haproxy.1wt.eu/
If you need someone guiding you step by step through the configuration,
I would suggest you acquire commercial support:
Hi Nick,
in fact, it's not very fast.
You can try the formilux mirrors, they are kept up-to-date:
$ git clone http://master.formilux.org/git/people/willy/haproxy.git/
$ git clone http://master.formilux.org/git/people/willy/haproxy-1.4.git/
Lukas
Hi Nick,
For maxconn, in defaults. I read a lot of places that this value
depends on the ulimit value of the system, but it's not clear in what
way they are related.
suppose you set maxconn [1] to 2000. This means you can have up to 2000
concurrent sessions on the haproxy (!) process
So you are saying that by using the maxconn property in the global
config block HAProxy is automatically adjusting the systems ulimit
setting?
Yes, HAproxy automatically adjusts and configures the ulimit based
on your configuration (settings like maxconn and maxpipes).
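As a back-of-the-envelope illustration of why maxconn drives the fd limit (the overhead constant here is invented, not haproxy's exact formula):

```shell
# each proxied connection needs roughly two sockets
# (one client-side, one server-side), plus a small fixed
# overhead for listeners, checks, logs etc.
maxconn=2000
overhead=100   # hypothetical fudge value
echo $((maxconn * 2 + overhead))   # -> 4100
```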
So unless
What kernel are you running? You need at least 2.6.37 to do this
with non-local IPv6 binds.
Date: Wed, 27 Mar 2013 08:35:18 +0100
From: kolm...@zid.tuwien.ac.at
To: haproxy@formilux.org
Subject: IPv6 vrrp and bind transparent
Hi,
I am new to the
Hi Ashley,
please share the relevant part of your haproxy configuration and
the output of haproxy -vv and egrep "BEGIN|END" /etc/ssl/server.pem.
That looks like a configuration issue to me (those errors sound like
haproxy is configured to do plaintext HTTP on the port instead of SSL).
Cheers,
A few of our clients send us GET https://host.example.com/path/to/url;
style requests which some of our backends can't handle.
I'm not sure it's a good idea to rewrite those requests; clearly those clients
are misconfigured. These are requests for forwarding proxies. You should deny
service
that leaves 87,88,89,93,94,95,96,99,100 and 101, right?
git-bisect is your friend :)
That said, I see some hangs after commit a890d072, but that's
patch 104 and not in ss-20130402, so it's a separate issue
(I will post a new topic about this).
Willy, I would rather not issue dev-18 now ;)
Hi Willy,
since commit a890d072, the following frontend configuration
(in http mode) hangs haproxy, debug output above.
acl iscluster1-2 hdr_sub(host) -i testdom2.local
use_backend cluster1-2 if iscluster1-2
After process_switching_rules haproxy seems to hang (new
connections
Thank you, you did an amazing job here again!
TLS ALPN was implemented similarly to NPN. It is supposed
to replace NPN.
Jesus Christ, this was fast. It was only discussed at IETF 86 mid-March
(and in 2 drafts this year), and 15 days later it's already in
haproxy-1.5.
I think HAProxy
Hi Igor,
error detected while parsing ACL 'side2' : regex
'\b(?:\d{1,3}\.){3}\d{1,3}\b' is invalid.
The config works fine without JIT enabled.
Yes, I can reproduce this. In fact, it does not work _at all_ and
fails even if the expression just contains a few letters, like
literally
This thread is about a bug which is already fixed in dev18.
The only bug fixed after dev18 is the issue from the
haproxy-dev18 http-request thread.
If you need a .tar.gz with all bugfixes, I suggest you use a snapshot
from [1]. They are built every night if something changed in git,
so you
It is possible to do that, but only if really necessary. And I probably
only want to share that with direct HAProxy developers.
No problem with that; but we need to get more information somehow and I
have the feeling that this is hard to reproduce ...
Can you give us the output of the failed
Hi!
Since upgrading from dev17 to dev18 I'm getting a segfault:
I can reproduce this. Here a few details:
- a4312fa28e897ed7373785c49ddf3acbc8f9f264 is the first bad commit
- does not happen when built with USE_OPENSSL=1
- gdb backtrace (without compiler optimizations):
(gdb) bt
#0
Can you recompile with the debug option and without compiler optimizations,
create a coredump and backtrace it:
# compile (debug + no optimizations)
make DEBUG=-DDEBUG_FULL CFLAGS="-g -O0" TARGET=[...]
# raise ulimit
ulimit -c 75000
# enable coredumping
echo 2 > /proc/sys/fs/suid_dumpable
#
Hi Ahmed,
I wanted to find out if the features I'm using can be considered
complete for 1.5.
[...]
if these are feature complete (or stable)
The latest 1.5 snapshot should be fairly usable also in production,
but since there is no feature freeze, that will very likely change
with new
Hi Will!
stopped responding to most requests.
How did you recover from that condition? Restart HAproxy?
Reboot the whole VM?
When in that condition, did you see any other warnings or
errors (dmesg, etc)? How was the load and memory usage
(top, vmstat, free, uptime, etc)?
Regards,
Lukas
No warnings in syslog or dmesg
Can you issue a show errors [1] on the control socket?
To recover I switched away from the haproxy server to
an older Apache-based web server.
So haproxy is still in a broken condition on that server?
If that's the case, you could attach strace to the process
Unfortunately, I restarted haproxy, so show errors
returns little of use.
Alright, since we don't know what actually happened and
there are no similar reports, we will probably not find
the root cause this time.
Next time this happens, please collect the information
I asked using show errors,
Hi Hiroaki!
I made a patch to fix this problem.
After applying this patch, it seems to work correctly now.
Thank you, it does work with your patch applied, great!
Now you are declaring error and erroffset outside #ifdef USE_PCRE_JIT,
but actually use them only inside. This causes a compiler
Hi Henry,
that sounds like a very serious bug indeed.
I suggest you take a traffic capture with a ring buffer and
capture the exact frontend traffic.
You can do this with dumpcap for example, something like
should do it (if your frontend traffic is tcp port 80):
dumpcap -i eth0 -p -s0 -b
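The truncated command above might be completed along these lines (ring-buffer size, file count and capture filter are examples):

```
# 10-file ring buffer, 16 MB per file, HTTP frontend traffic only
dumpcap -i eth0 -p -s0 -b filesize:16384 -b files:10 \
        -f "tcp port 80" -w /var/tmp/frontend.pcap
```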
Hi!
Hi Henry,
I found the bug. It happens when hashing a parameter which is not found
(typically an absent URL parameter) and then the connection to the selected
server experiences connection retries, and the server goes down and up
before the redispatching, and is the last server in the
Hi Thomas,
I'm trying to follow this blog post:
http://blog.exceliance.fr/2012/10/03/ssl-client-certificate-management-at-application-level/,
but I can't get the client certificate to work with 1.5dev18.
Could you try a few older releases, specifically dev12, 13 and 14 (which is
around
Hi Hong!
Here is the full config and relevant section of the log:
https://gist.github.com/kiafaldorius/5379517
Could you try this exact configuration:
acl multipart hdr_beg(Content-Type) -i multipart/form-data
Regards,
Lukas
Hi!
Huh, that one seems to work! That's strange...I know I tried it with
hdr_beg(Content-Type) multipart and it didn't work.
Now I feel a little dumb, but at least it works =]
Which should work. But you probably tested this before you switched
everything to http mode and without
Hi,
Thank you. So check.txt is just an empty txt file?
Yes. You can test this with curl. Your configuration
expects the following request to return 200 OK:
curl -I http://192.168.88.97:80/check.txt
If this is not happening, either because the file
doesn't exist, HEAD requests are not
Hi Jon!
I have played around with the haproxy configuration using 'redirect
location https:// and redirect prefix https:// but without success.
And what exactly is the issue with that? Redirecting is a very basic
task haproxy can do without problems.
On speaking to the vendor, they are
Hi!
Cloud Firewall - Cloud SLB - DMZ Web Agent
Well, do they all forward TCP port 80 to your haproxy box? There is not
much haproxy can do if the http request doesn't even arrive.
Configuration would probably look like this (use redirect prefix, not
redirect location):
frontend unsecured
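The snippet is cut off here; a hedged sketch of how such a redirect frontend could continue (hostnames invented):

```
frontend unsecured
    bind :80
    # send every plain-HTTP request to the HTTPS site
    redirect prefix https://www.example.com code 301
```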
Hi Merton,
maxconn can be configured in multiple places:
- as a per process limit in the global section [1]
- as a per defaults/frontend/listen limit in that particular section [2]
- as a per server or server option, for that specific socket or server [3]
All 3 maxconn configurations have a
Hi,
you probably need to set the transparent keyword on the bind line, if
you want to bind to a possible non-existing ip address. See docs [1].
how will keepalived communicate with haproxy?
It doesn't communicate with haproxy afaik. Why should it?
Cheers,
Lukas
[1]
In that case what is haproxy for?
To load-balance traffic on different backends. Haproxy can't load-balance
its own load, that would be a catch-22.
I thought it is suppose to be a load balancer
HAproxy is a reverse proxy and a load balancer, yes.
keepalived is a heartbeat to detect the
So do I broadcast the IP for haproxy to the web? Or the virtual IP?
Which do i connect from the WAN?
Configuration would usually look like this:
HAproxy server 1 (active):
physical IP: 192.168.0.11/24
virtual IP: 192.168.0.20/24 (active)
HAproxy server 2 (standby):
physical IP: 192.168.0.12/24
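The matching keepalived configuration for the active box might look roughly like this (interface and router id are examples):

```
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101        # the standby box uses a lower priority
    virtual_ipaddress {
        192.168.0.20/24
    }
}
```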
Hi Ben!
Running haproxy 1.5 development snapshot from 12/30/2012
I suggest you upgrade to the latest snapshot available from [1].
It doesn't make sense to troubleshoot problems on old development
snapshots if you are 155 commits back (you are missing critical
bug fixes!):
#:~/haproxy$ git log
Hi Merton,
Please let me know if the above understanding is correct.
Yes, that's the concept.
if I have multiple backends (multiple server options), does the
sum of 'maxconn' of their server options have to be no more than
the 'maxconn' of their corresponding frontend?
Yes that does make
Hi Willy!
Done, thanks Thomas!
Can you also push it so that people see it in git and a nightly snapshot
is created?
The bug is pretty major; if it is in git and there are snapshots with this
fix included, at least we don't forget about it and people not following
the ML are aware of it (like
These connections tend to stay open for 30-60 mins
Ok.
Currently, I have only one process of this backend running with HAProxy
on the same machine (actually VPS). The machine has 8-core cpu and 1g
memory.
How much free memory do you have left and how much does your backend
software
Hi Dan,
Is there a feature in 1.4 to share a sticky table between two or more
instances of haproxy.
Not in 1.4. There is a feature available in 1.5 (-dev) to do this [1].
Regards,
Lukas
[1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.5
Hi Daniel,
kindly CC the mailing-list, there are a lot of other people who
may be able to help you better than myself.
Would it be possible to use balance source? Is the algorithm deterministic?
If the backend configuration with servers is exactly the same, balance source
may work in that
Hi Geoff!
I've told them to ensure the weight is '1' on all servers that
should be serving content, but as we all know something like
that can be missed pretty easily
Agreed.
I'm not sure if this is duplicating functionality that wasn't
functional or not, but here are the changes I've
Hi!
Please also note that the second SOAP call made that fails
the handshake also causes the HAProxy server to crash.
Could you:
- use latest snapshot from [1]
- provide the output of haproxy -vv
- can you tell us OS, kernel and openssl version?
- compile haproxy with debug and without
,
Zack
-Original Message-
From: Lukas Tribus [mailto:luky...@hotmail.com]
Sent: Thursday, April 25, 2013 4:19 PM
To: Connelly, Zachary (CGI Federal)
Subject: RE: Follow-up on thread 'SSL handshake failure' from 2/5/2013
Hi Zack,
in fact
Hi!
report the exact snapshot you used.
He is at current HEAD by using 20130425 with c621d36ba applied
manually on it (linux 2.6.18 without tproxy support).
He also saw the crashes in -dev18, but I had him update the code.
Thanks,
Lukas
Hi,
throwing in my two cents here, based on a few uneducated guesses reading
the Makefile, etc. Feel free to disagree/correct/shout at me :)
(actually I wrote this before Willy answered)
As for renaming CONFIG_HAP_LINUX_TPROXY to something different, that would
require everyone that on a
Hi Jinge!
I believe you are facing 2 different issues here.
Today, our haproxy CPU grew to 100% and the machine became terribly slow.
Please upgrade to recent 1.4 code, you are missing a few fixes, including
a security fix. I suggest the snapshot 20130427 which also includes a
loop
Hi James!
I always receive a HTTP 200 response to my browser
How do you know that? Do you have a browser extension or are you
using tcpdump/wireshark? Can you show this with a curl request, so
that we understand what's going on exactly? Something like this
should do the job:
curl -vv
Hi James!
(sorry for repost, first mail was accidentally html)
I always receive a HTTP 200 response to my browser
How do you know that? Do you have a browser extension or are you
using tcpdump/wireshark? Can you show this with a curl request, so
that we understand what's going on exactly?
Hi Smana!
haproxy crashes with the following error :
kernel: [334012.858141] haproxy[6914] general protection ip:46832d
sp:7fffe5e219e8 error:0 in haproxy[40+89000]
Please share the output of haproxy -vv.
This behavior appears only with ssl frontends.
Upgrade to latest snapshot
Yes, please reproduce with latest snapshot, and provide the output
of haproxy -vv. Also, setup haproxy so it can generate a core.
Make sure you CC the list haproxy@formilux.org when responding.
Thanks,
Lukas
Date: Fri, 3 May 2013 11:14:14 +0200
Subject:
Hi James,
I am packet capturing on a client (172.22.0.220, not in the monitor
subnet), browsing to the monitor uri (GET /oowahboh6eibooca) you can
see at 14:08:24 I get a response 200 OK. Then I refresh the page 2
seconds later at 14:08:26.215969 and at 14:08:26.217989 I get a 404
response.