Re: RDP Session Broker Redirect Token

2014-05-07 Thread Willem
Willy Tarreau w at 1wt.eu writes:

 
 Hi Mathew,
 
 On Thu, Aug 15, 2013 at 10:21:51AM +0100, Mathew Levett wrote:
  Hello Willy,
  
  I believe the client (mstsc.exe) connects to the Gateway server via RPC
  over HTTPS (443), the gateway then terminates this, and makes a new normal
  RDP connection to haproxy, and then onwards to the real servers, so in
  this case the Gateway is the client to haproxy.
  
  However, what seems to be happening is that the load balancer balances
  the connections as normal but does not honor the MSTS cookie at
  all. It is there in the packet capture and its encoded IP matches the
  correct server, but haproxy seems to ignore it.
 
 I suspect there is a very minor difference in the packets that makes haproxy
 not recognize it as the one supposed to contain the MSTS cookie. It could be
 either a horrible or a subtle bug. Could you please send me privately a copy
 of the packet capture for the faulty connection? I'd like to run the protocol
 parser by hand on it to understand what's wrong there.
 
 Thanks!
 Willy
 
 


Hi,
I just stumbled upon this post while googling. We had exactly the same
issue a couple of months ago in a very similar setting. From WAN to LAN,
if the customer hires multiple terminal servers, the user sessions pass
through these components:

0: Hardware firewall
1: Keepalived/LVS load balancer (layer 4, in Direct Return mode, running
   on CentOS 6.5)
2: Remote Desktop Gateway, redundant on 2x Windows 2012 (not R2) virtual
   machines
3: HAProxy version 1.5-dev19, single instance running on CentOS 6.5
4: 1 out of 3 terminal servers, running Windows 2012 (not R2)

Just like Mathew stated: persistence works great when connecting to the VIP
on HAProxy, but fails when bringing the Remote Desktop Gateway into the mix.
HAProxy just won't reconnect users to their existing session. The Keepalived
load balancer mentioned in bullet 1 does not seem to contribute to the
problem.
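For anyone else landing here, the RDP persistence being discussed is usually configured along these lines (a sketch only; the bind address, timeouts and server addresses are illustrative, not from Willem's setup):

```
listen rdp
    mode tcp
    bind 0.0.0.0:3389
    # wait for the client's connection request, which carries the cookie
    tcp-request inspect-delay 5s
    tcp-request content accept if RDP_COOKIE
    # stick to the server encoded in the MSTS/mstshash cookie when present
    persist rdp-cookie
    balance rdp-cookie
    server ts1 192.168.0.11:3389 check
    server ts2 192.168.0.12:3389 check
```

The failure described here would mean the cookie sent by the RD Gateway is not being matched by the `persist rdp-cookie` logic, even though it is visible in the capture.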

To work around the lack of persistence, we installed the Remote Desktop
Connection Broker role onto the RD Gateway servers. This works great, but it
kind of defeats the use of HAProxy. It also adds to the complexity of the
built solution, because we now need SQL to enable high availability on the
Connection Broker role, and in turn SQL would also have to be built
redundantly.

I guess the question of the day would be: were you able to figure out why
user persistence didn't work for Mathew? Is there any way I can contribute
to a solution (by providing certain logging, doing TCP dumps, or anything
else)?








HAProxy config updates through REST messages and Python

2014-05-07 Thread Naveen Chandra Sekhara
Hi All,
 Greetings.

Is there any GPL code for this implemented in Python? The code would receive
REST messages (probably through Flask or something), update the cfg file and
restart haproxy. Surprisingly, I cannot find anything online even though
HAProxy is so popular.

Thanks for your help.
Best Regards,
Naveen


Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread John-Paul Bader

Hey Willy,


this morning I was running another test without kqueue, but sadly with
the same result. Here is my test protocol:


It ran fine with nokqueue for about an hour at about 20% CPU per
process, then there was a sudden CPU spike on all processes up to 90%. I
started ktrace, but meanwhile the CPU went back to around 33% on each
process [2]. Then after 10 more minutes, 3 of the 8 haproxy processes
died with a segfault.


kernel: pid 3963 (haproxy), uid 0: exited on signal 11 (core dumped)

Unfortunately the coredump [1] is not that expressive, even with debug
symbols compiled in.


The remaining 5 processes survived another 10 minutes before they ramped 
up cpu again - this time up to 100%. I have created a ktrace in this 
state before reaching the 100%. [3]


When they were at 100% and not accepting any requests anymore, I took
another ktrace sample, but _nothing_ was written to the output anymore!
That means in this state no syscalls were happening anymore? I also
took a full ktrace sample with IO and everything - it was empty as well.


So it seems unrelated to kqueue as well. Later I will try to run the 
test with a fraction of the traffic without nbproc (all the traffic is 
too much for one process)


You can find the coredump and the traces here:

[1] http://smyck.org/haproxy/haproxy_coredump_freebsd_poll.txt
[2] http://smyck.org/haproxy/haproxy_ktrace_poll_01_30_percent.txt
[3] 
http://smyck.org/haproxy/haproxy_krace_poll_remaining_processes_ramping_up_cpu.txt


I hope that already helps to narrow it down a bit.

Kind regards,

John




Willy Tarreau wrote:

Hi John-Paul,

On Tue, May 06, 2014 at 11:57:08PM +0200, John-Paul Bader wrote:

Hey,

I will do more elaborate test runs in the next couple of days.


No problem.


I will
create traces with ktrace which is not as nice as strace but at least
will provide more context. Is there anything in particular you'd be
interested in like only syscalls?


I tend to think that syscalls should tell us what's happening. Indeed,
FreeBSD and Linux are both modern operating systems and quite close,
so in general, what works on one of them works on the other one without
any difficulty. The only differences here might be :
   - kqueue vs epoll
   - specific return value of a syscall that we don't handle properly
 (eg: we had a few ENOTCONN vs EAGAIN issues in the past)


Meanwhile I have built haproxy with debug symbols, but in the tests I ran
today haproxy did not coredump but only went for the 100% CPU way of
failing, where I had to kill it manually. This happened with httpclose
and with keep-alive, so I'd say the problem is not really related to that.


I'm not surprised. If the OS makes a difference, it's in the lower layers,
so what close vs keep-alive may do is only make the problem happen more
often.

What I'm thinking about is that it's possible that we don't always properly
consider an error on a file descriptor, then we don't remove it properly
from the list of polled FDs, and that it might be returned by the poller
as active when we think it's closed. At this point, everything can happen :
   - loop forever because we get an error when trying to access this fd
 and we can't remove it from the polled fd list ;
   - crash when we try to dereference the connection which is attached
 to this fd.


It's so sad, because before the CPU load suddenly rises and
requests/connections aren't handled anymore, haproxy performs so well and
effortlessly.

Also, if I can help by providing access to a FreeBSD machine, just let
me know. I have plenty :)


At some point it could be useful, especially if we manage to reproduce
the problem on a test platform.


If you have any other idea apart from ktrace, coredumps to make
troubleshooting more effective I'd be more than happy to help.


There's something you can try to see if it's related to what I suspect
above. If you apply this patch and it crashes earlier, it definitely
means that we're having a problem with an fd which is reported after
being closed :

diff --git a/src/connection.c b/src/connection.c
index 1483f18..27bb6c5 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -44,7 +44,7 @@ int conn_fd_handler(int fd)
unsigned int flags;

if (unlikely(!conn))
-   return 0;
+   abort();

conn_refresh_polling_flags(conn);
flags = conn->flags & ~CO_FL_ERROR; /* ensure to call the wake handler upon error */

If this happens, retry without kqueue, it will use poll and the issue
should not appear, or we have a bigger bug.

Regards,
Willy



--
John-Paul Bader | Software Development

www.wooga.com
wooga GmbH | Saarbruecker Str. 38 | D-10405 Berlin
Sitz der Gesellschaft: Berlin; HRB 117846 B
Registergericht Berlin-Charlottenburg
Geschaeftsfuehrung: Jens Begemann, Philipp Moeser



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Willy Tarreau
Hi John-Paul,

On Wed, May 07, 2014 at 09:22:32AM +0200, John-Paul Bader wrote:
 Hey Willy,
 
 
 this morning I was running another test without kqueue but sadly with 
 the same result.

OK so let's rule out any possible kqueue issue there for now.

 Here is my test protocol:
 
 Running fine with nokqueue for about an hour at about 20% CPU per 
 process, then sudden CPU spike on all processes up to 90%, I started 
 ktrace but meanwhile the CPU went back to around 33% on each process 
 [2]. Then after 10 more minutes 3 of the 8 haproxy processes died with a 
 segfault.
 
 kernel: pid 3963 (haproxy), uid 0: exited on signal 11 (core dumped)
 
 Unfortunately the coredump [1] is not that expressive even with compiled 
 debug symbols.

It's very interesting, it contains a call to ssl_update_cache(). I didn't
know you were using SSL, but in multi-process mode we have the shared context
model to share the SSL sessions between processes. On Linux, we almost only
use futexes. On other systems, we use mutexes. So that's a difference. It
might be possible that we have a bug in the mutex implementation causing
various effects.

You could try to rebuild with the private cache mode, but it will be a bit
more complicated, because if you have a high load, I guess you want to keep
your users' sessions. So you'll probably need to have one front shared process
running in TCP mode and distributing the load to the SSL processes according
to the SSL ID, in order to maintain stickiness between users and processes.

The fact that you have no symbols in your gdb output indicates that the
crash very likely happens inside libssl, maybe it retrieves some crap from
the session cache that it cannot reliably deal with.

 The remaining 5 processes survived another 10 minutes before they ramped 
 up cpu again - this time up to 100%. I have created a ktrace in this 
 state before reaching the 100%. [3]
 
 When they were at 100% and not accepting any requests anymore I took 
 another ktrace sample but _nothing_ was written to the output anymore! 

That could indicate an attempt to acquire a lock in a loop, or simply
that the code is looping in userspace due to a side effect of some memory
corruption resulting from the bug, for example.

 That means in this state no syscalls were happening anymore? I also 
 took a full ktrace sample with IO and everything - it was empty as well.

Oh and BTW, I can confirm that ktrace is really poor compared to strace :-)

 So it seems unrelated to kqueue as well. Later I will try to run the 
 test with a fraction of the traffic without nbproc (all the traffic is 
 too much for one process)

That would be great! You can try to build with USE_PRIVATE_CACHE=1 in
order to disable session sharing.

If you have plenty of clients, you can first try to spread the load between
processes using a simple source hash :

listen front
   bind-process 1
   bind pub_ip:443
   balance source
   server proc2 127.0.0.2:443 send-proxy
   server proc3 127.0.0.3:443 send-proxy
   server proc4 127.0.0.4:443 send-proxy
   server proc5 127.0.0.5:443 send-proxy
   server proc6 127.0.0.6:443 send-proxy
   server proc7 127.0.0.7:443 send-proxy
   server proc8 127.0.0.8:443 send-proxy

frontend proc2
   bind-process 2
   bind 127.0.0.2:443 ssl crt ... accept-proxy
   ... usual stuff

frontend proc3
   bind-process 3
   bind 127.0.0.3:443 ssl crt ... accept-proxy
   ... usual stuff

etc.. till process 8.

It's much easier than dealing with SSL ID and might be done with
less adjustments to your existing configuration. And that way you
don't need to share any SSL context between your processes.

Please tell me if you need some help to try to set up something like this.

Willy




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread John-Paul Bader

Willy Tarreau wrote:


It's very interesting, it contains a call to ssl_update_cache(). I didn't
know you were using SSL, but in multi-process mode we have the shared context
model to share the SSL sessions between processes.


Yes, sorry. In the initial email on this thread I posted our 
configuration which included the SSL setup.


We're using OpenSSL 1.0.1g 7 Apr 2014 to benefit from the AES-NI 
acceleration.



Oh and BTW, I can confirm that ktrace is really poor compared to strace :-)


haproxy does not include DTrace probes by any chance right? :)


So it seems unrelated to kqueue as well. Later I will try to run the
test with a fraction of the traffic without nbproc (all the traffic is
too much for one process)


That would be great! You can try to build with USE_PRIVATE_CACHE=1 in
order to disable session sharing.


Right now I'm running a test just with disabled nbproc. Next I will try 
to recompile with USE_PRIVATE_CACHE=1


Do I have to pass that option like this:

make CFLAGS="-g -O0" USE_PRIVATE_CACHE=1 ?

These are our current build options - for completeness:

haproxy -vv
HA-Proxy version 1.5-dev24-8860dcd 2014/04/26
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -g -O0 -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
Running on OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.34 2013-12-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Kind regards,

John




about pcre

2014-05-07 Thread k simon
Hi, list,
  I found I cannot share the same regex text file between haproxy and
squid. I noticed that haproxy uses the OS libc's regex by default, and that
this can be changed with the compile parameter REGEX=pcre.
  Should I recompile haproxy so both can share the same regex file?


Regards
Simon



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Willy Tarreau
On Wed, May 07, 2014 at 10:28:18AM +0200, John-Paul Bader wrote:
 Willy Tarreau wrote:
 
 It's very interesting, it contains a call to ssl_update_cache(). I didn't
 know you were using SSL, but in multi-process mode we have the shared 
 context
 model to share the SSL sessions between processes.
 
 Yes, sorry. In the initial email on this thread I posted our 
 configuration which included the SSL setup.

Yes I remember having seen your config, but unfortunately I'm having
a hard time remembering all the configs I see during a single day, I'm
sorry.

 We're using OpenSSL 1.0.1g 7 Apr 2014 to benefit from the AES-NI 
 acceleration.

OK.

 Oh and BTW, I can confirm that ktrace is really poor compared to strace :-)
 
 haproxy does not include DTrace probes by any chance right? :)

No, and I have no idea how this works either. But if you feel like it
can provide some value and be done without too much effort, feel free
to try :-)

 So it seems unrelated to kqueue as well. Later I will try to run the
 test with a fraction of the traffic without nbproc (all the traffic is
 too much for one process)
 
 That would be great! You can try to build with USE_PRIVATE_CACHE=1 in
 order to disable session sharing.
 
 Right now I'm running a test just with disabled nbproc. Next I will try 
 to recompile with USE_PRIVATE_CACHE=1

Great.

 Do I have to pass that option like this:
 
 make CFLAGS="-g -O0" USE_PRIVATE_CACHE=1 ?

Yes that's the principle. You can look at the makefile, all build options
are referenced at the top.
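Putting this together with the -vv output below, the full rebuild would presumably look something like this (an assumption on my part; the options mirror the reported build, and on FreeBSD the GNU make binary is typically gmake):

```
gmake TARGET=freebsd USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1 \
      USE_PRIVATE_CACHE=1 CFLAGS="-g -O0"
```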

 These are our current build options - for completeness:
 
 haproxy -vv
 HA-Proxy version 1.5-dev24-8860dcd 2014/04/26

BTW, be careful, a few bugs introduced in dev23 on ACLs were fixed after
dev24. So with this version, "acl foo xxx -i yyy" will not work, for example.
Balance url_param is broken as well. All of them are fixed in the latest
snapshot though.

 Copyright 2000-2014 Willy Tarreau w...@1wt.eu
 
 Build options :
   TARGET  = freebsd
   CPU = generic
   CC  = cc
   CFLAGS  = -g -O0 -DFREEBSD_PORTS
   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
 
 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
 
 Encrypted password support via crypt(3): yes
 Built with zlib version : 1.2.7
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
 Running on OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.34 2013-12-15
 PCRE library supports JIT : no (USE_PCRE_JIT not set)
 Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
 
 Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use kqueue.

OK, nothing unusual here. Thanks for the detailed output, it always
helps!

Best regards,
Willy




Re: please check

2014-05-07 Thread Willy Tarreau
On Wed, May 07, 2014 at 07:30:43AM +0200, Willy Tarreau wrote:
  The strange news: Contrary to your statement, the client connection is
  closed after the 1 second timeout. It even logs this. The only thing
  that doesn't happen properly is the absence of any response. Just
  immediate connection close.
  
  
  Before patch:
  haproxy[26318]: 127.0.0.1:51995 [06/May/2014:18:55:33.002] f1 b1/s1
  0/0/0/-1/2001 504 194 - - sH-- 0/0/0/0/0 0/0 GET / HTTP/1.1
  
  After patch:
  haproxy[27216]: 127.0.0.1:52027 [06/May/2014:18:56:34.165] f1 b1/s1
  0/0/0/-1/1002 -1 0 - - cD-- 0/0/0/0/0 0/0 GET / HTTP/1.1
 
 Interesting, for me it waited till the end. Or maybe you have
 option abortonclose ?

OK, I found it: it's because your server receives the abort and closes
immediately that the client timeout is enforced. In my test, the server was
waiting a predefined amount of time, thus the server timeout was enforced.

Willy




Re: HAProxy config updates through REST messages and Python

2014-05-07 Thread Steven Le Roux
You can take this Python config generator,
https://github.com/StevenLeRoux/webhub, as a start.

It uses an agnostic flat config format that lets you generate
config for another reverse proxy like httpd, nginx, etc., but to use
an API I would use a nested format instead.

You just need to add a Tornado frontend to have the REST API, and to
maintain the config in memory and persist it (to a file or document
store) to load it back at startup.

Not hard.

If you do not know how to start, I could sketch a basic REST Python API.
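As a concrete illustration of the config-generation half of this, here is a minimal sketch (my own, not webhub's actual format; the section layout and `check` option are just common conventions). The API layer would then write the rendered text to the cfg file and trigger a soft reload, typically by shelling out to something like haproxy -f CFG -sf OLD_PID.

```python
def render_backend(name, servers):
    """Render a minimal haproxy backend section from a dict mapping
    server name -> "ip:port". Sorted for deterministic output."""
    lines = [f"backend {name}", "    balance roundrobin"]
    for srv, addr in sorted(servers.items()):
        lines.append(f"    server {srv} {addr} check")
    return "\n".join(lines) + "\n"

print(render_backend("web", {"s1": "10.0.0.1:80", "s2": "10.0.0.2:80"}))
```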

On Wed, May 7, 2014 at 9:11 AM, Naveen Chandra Sekhara
cnave...@gmail.com wrote:
 Hi All,
  Greetings.

 Is there any GPL code for this implemented in Python? The code would
 receive REST messages (probably through Flask or something), update the
 cfg file and restart haproxy. Surprisingly, I cannot find anything online
 even though HAProxy is so popular.

 Thanks for your help.
 Best Regards,
 Naveen



-- 
Steven Le Roux
Jabber-ID : ste...@jabber.fr
0x39494CCB ste...@le-roux.info
2FF7 226B 552E 4709 03F0  6281 72D7 A010 3949 4CCB



Re: Feature Request: Extract IP from TCP Options Header

2014-05-07 Thread Willy Tarreau
Hi Jim,

On Fri, May 02, 2014 at 04:13:40PM +0100, Jim Rippon wrote:
 Hi all, 
 
 As mentioned on the IRC channel today, I have a
 requirement to extract an end user's IP address from the TCP Options
 header (in my case with key 34 or 0x22, but there are other similar
 implementations using 28 or 0x1C). This header is being added by some
 Application Delivery Optimisation solutions from providers such as Akamai
 (with their IPA product line) and CDNetworks (with their DNA product),
 though there are likely others out there hijacking the TCP headers this
 way.

Cool, I'm happy that some people start to use TCP options for this, it
could drive forward improved APIs in various operating systems to help
retrieve these options. We designed the PROXY protocol precisely as an
alternative for the lack of ability to access these.

 Because the options headers won't be forwarded by haproxy to the
 back-end servers, the most useful way to deal with this for our HTTP
 services would be to extract the encoded IP address and place it into
 either the X-Forwarded-For or X-Real-IP header, so that it can be
 understood and handled by the upstream servers.
 
 Sample implementations
 can be found in documentation from F5 [1] and Citrix [2] below. In the
 TCP SYN packet (and some later packets, but always in the initial SYN)
 we see the option at the end of the options header field like so in our
 packet capture: 
 
 22 06 ac 10 05 0a 
 
 Broken down, we have: 
 
 22 = TCP Options header key (34 in this case, with CDNetworks)
 
 06 = field size - this appears to include the key, this size field and
 the option value
 
 ac 10 05 0a = the IP address of the end user - faked in this example
 to private address 172.16.5.10
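The decoding described above can be sketched in a few lines of Python. This is only an illustration of the byte layout Jim describes (kind, length including itself, then the IPv4 payload), not anything haproxy does today; the option kind 0x22 is the CDNetworks case from the example.

```python
import socket

def parse_ip_option(data, kind=0x22):
    """Scan a TCP options blob for the given option kind and decode
    its 4-byte payload as a dotted-quad IPv4 address."""
    i = 0
    while i < len(data):
        k = data[i]
        if k == 0:          # End-of-option-list: stop scanning
            break
        if k == 1:          # NOP padding occupies a single byte
            i += 1
            continue
        length = data[i + 1]  # length covers kind + length + value
        if k == kind and length == 6:
            return socket.inet_ntoa(data[i + 2:i + 6])
        i += length
    return None

opts = bytes.fromhex("2206ac10050a")
print(parse_ip_option(opts))  # 172.16.5.10
```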
 
 This would be hugely useful
 functionality - it would allow us to avoid the expense of high-end load
 balancer devices and licenses to support testing of our CDN
 implementations before going into production. 

Sure it would be great, and even better if we could set them. The only
problem is that there is no way to retrieve this information from userland.

The option is present in the incoming SYN packet, is not recognized by the
kernel which skips it, and as soon as the system responds with the SYN/ACK,
the information is lost. Are you aware of kernel patches to retrieve these
options ? If at least one of them is widely deployed, we could consider
implementing support for it, just like we did in the past with the cttproxy
or tcpsplicing patches.

Best regards,
Willy




Feature Request: Reset down time on 'clear counters all'

2014-05-07 Thread Dimitris Baltas
Hello,

I frequently run the "show stat" command and use the CSV-formatted
output in a custom monitoring tool.
Running "clear counters all" resets all numbers except the down time of
servers and services.

I understand that down time is a critical element, but
given that down time does reset anyway when HAProxy reloads or restarts,
it would make sense to also reset it on "clear counters all".
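For reference, both commands are issued over the stats admin socket; assuming it is bound at /var/run/haproxy.sock (the path here is an assumption, it depends on the "stats socket" line in the config), they look like:

```
echo "show stat" | socat stdio /var/run/haproxy.sock
echo "clear counters all" | socat stdio /var/run/haproxy.sock
```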

Best,
Dimitris Baltas


Dimitris Baltas
R&D Manager
www.travelplanet24.com






Re: please check

2014-05-07 Thread Willy Tarreau
Hi Patrick, hi Rachel,

so with these two patches applied on top of the previous one, I get the
behaviour that we discussed here.

Specifically, we differentiate client-read timeout, server-write timeouts
and server read timeouts during the data forwarding phase. Also, we disable
server read timeout until the client has sent its whole request. That way
I'm seeing the following flags in the logs :

  - cH when client does not send everything before the server starts to
respond, which is OK. Status=408 there.

  - cD when client stops sending data after the server starts to respond,
or if the client stops reading data, which in both cases is a clear
client timeout. In both cases, the status is unaltered and nothing
is emitted since the beginning of the response was already transmitted ;

  - sH when the server does not respond, including if it stops reading the
message body (eg: process stuck). Then we have 504.

  - sD if the server stops reading or sending data during the data phase.

The changes were a bit tricky, so any confirmation from any of you would
make me more comfortable merging them into mainline. I'm attaching these
two extra patches, please give them a try.

Thanks,
Willy

From b9edf8fbecc9d1b5c82794735adcc367a80a4ae2 Mon Sep 17 00:00:00 2001
From: Willy Tarreau w...@1wt.eu
Date: Wed, 7 May 2014 14:24:16 +0200
Subject: BUG/MEDIUM: http: correctly report request body timeouts

This is the continuation of previous patch BUG/MEDIUM: http/session:
disable client-side expiration only after body.

This one takes care of properly reporting the client-side read timeout
when waiting for a body from the client. Since the timeout may happen
before or after the server starts to respond, we have to take care of
the situation in three different ways :
  - if the server does not read our data fast enough, we emit a 504
if we're waiting for headers, or we simply break the connection
if headers were already received. We report either sH or sD
depending on whether we've seen headers or not.

  - if the server has not yet started to respond, but has read all of
the client's data and we're still waiting for more data from the
client, we can safely emit a 408 and abort the request ;

  - if the server has already started to respond (thus it's a transfer
timeout during a bidirectional exchange), then we silently break
the connection, and only the session flags will indicate in the
logs that something went wrong with client or server side.

This bug is tagged MEDIUM because it touches very sensible areas, however
its impact is very low. It might be worth performing a careful backport
to 1.4 once it has been confirmed that everything is correct and that it
does not introduce any regression.
---
 src/proto_http.c | 76 ++--
 1 file changed, 74 insertions(+), 2 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index e473228..797b3b8 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -5175,6 +5175,13 @@ int http_request_forward_body(struct session *s, struct channel *req, int an_bit
 */
msg->msg_state = HTTP_MSG_ERROR;
http_resync_states(s);
+
+   if (req->flags & CF_READ_TIMEOUT)
+   goto cli_timeout;
+
+   if (req->flags & CF_WRITE_TIMEOUT)
+   goto srv_timeout;
+
return 1;
}
 
@@ -5455,6 +5462,68 @@ int http_request_forward_body(struct session *s, struct channel *req, int an_bit
s->flags |= SN_FINST_D;
}
return 0;
+
+ cli_timeout:
+   if (!(s->flags & SN_ERR_MASK))
+   s->flags |= SN_ERR_CLITO;
+
+   if (!(s->flags & SN_FINST_MASK)) {
+   if (txn->rsp.msg_state < HTTP_MSG_ERROR)
+   s->flags |= SN_FINST_H;
+   else
+   s->flags |= SN_FINST_D;
+   }
+
+   if (txn->status > 0) {
+   /* Don't send any error message if something was already sent */
+   stream_int_retnclose(req->prod, NULL);
+   }
+   else {
+   txn->status = 408;
+   stream_int_retnclose(req->prod, http_error_message(s, HTTP_ERR_408));
+   }
+
+   msg->msg_state = HTTP_MSG_ERROR;
+   req->analysers = 0;
+   s->rep->analysers = 0; /* we're in data phase, we want to abort both directions */
+
+   session_inc_http_err_ctr(s);
+   s->fe->fe_counters.failed_req++;
+   s->be->be_counters.failed_req++;
+   if (s->listener->counters)
+   s->listener->counters->failed_req++;
+   return 0;
+
+ srv_timeout:
+   if (!(s->flags & SN_ERR_MASK))
+   s->flags |= SN_ERR_SRVTO;
+
+   if (!(s->flags & SN_FINST_MASK)) {
+   if (txn->rsp.msg_state < HTTP_MSG_ERROR)
+   s->flags |= SN_FINST_H;
+   else
+   s->flags |= SN_FINST_D;
+   }
+
+ 

Re: please check

2014-05-07 Thread Patrick Hemmer

*From: *Willy Tarreau w...@1wt.eu
*Sent: * 2014-05-07 09:45:47 E
*To: *Patrick Hemmer hapr...@stormcloud9.net, Rachel Chavez
rachel.chave...@gmail.com
*CC: *haproxy@formilux.org
*Subject: *Re: please check

 Hi Patrick, hi Rachel,

 so with these two patches applied on top of the previous one, I get the
 behaviour that we discussed here.

 Specifically, we differentiate client-read timeout, server-write timeouts
 and server read timeouts during the data forwarding phase. Also, we disable
 server read timeout until the client has sent its whole request. That way
 I'm seeing the following flags in the logs :

   - cH when client does not send everything before the server starts to
 respond, which is OK. Status=408 there.

   - cD when client stops sending data after the server starts to respond,
 or if the client stops reading data, which in both cases is a clear
 client timeout. In both cases, the status is unaltered and nothing
 is emitted since the beginning of the response was already transmitted ;

   - sH when the server does not respond, including if it stops reading the
 message body (eg: process stuck). Then we have 504.

   - sD if the server stops reading or sending data during the data phase.

 The changes were a bit tricky, so any confirmation from any of you would
 make me more comfortable merging them into mainline. I'm attaching these
 two extra patches, please give them a try.

 Thanks,
 Willy

Works beautifully. I had created a little test suite to test a
bunch of conditions around this, and they all pass.
I will see about throwing this in our development environment in the next
few days if a release doesn't come out before then.

Thank you much :-)

-Patrick


Re: please check

2014-05-07 Thread Willy Tarreau
On Wed, May 07, 2014 at 09:55:35AM -0400, Patrick Hemmer wrote:
 Works beautifully. I had created a little test suite to test a
 bunch of conditions around this, and they all pass.

Wow, impressed with the speed of your test! Thanks!

 Will see about throwing this in our development environment in the next
 few days if a release doesn't come out before then.

Perfect, I'm merging them now. Emeric is currently investigating the
issue with SSL on FreeBSD, and I'd like to at least get rid of the
bind-process item before dev25, since it's supposed to be a quick
change when I don't spend my time working on other bugs :-)

Cheers,
Willy




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Emeric Brun

On 05/07/2014 11:15 AM, Willy Tarreau wrote:

On Wed, May 07, 2014 at 10:28:18AM +0200, John-Paul Bader wrote:

Willy Tarreau wrote:


It's very interesting, it contains a call to ssl_update_cache(). I didn't
know you were using SSL, but in multi-process mode we have the shared
context
model to share the SSL sessions between processes.


Yes, sorry. In the initial email on this thread I posted our
configuration which included the SSL setup.


Yes I remember having seen your config, but unfortunately I'm having
a hard time remembering all the configs I see during a single day, I'm
sorry.


We're using OpenSSL 1.0.1g 7 Apr 2014 to benefit from the AES-NI
acceleration.


OK.


Oh and BTW, I can confirm that ktrace is really poor compared to strace :-)


haproxy does not include DTrace probes by any chance right? :)


No, and I have no idea how this works either. But if you feel like it
can provide some value and be done without too much effort, feel free
to try :-)


So it seems unrelated to kqueue as well. Later I will try to run the
test with a fraction of the traffic without nbproc (all the traffic is
too much for one process)


That would be great! You can try to build with USE_PRIVATE_CACHE=1 in
order to disable session sharing.


Right now I'm running a test just with disabled nbproc. Next I will try
to recompile with USE_PRIVATE_CACHE=1


Great.


Do I have to pass that option like this:

make CFLAGS="-g -O0" USE_PRIVATE_CACHE=1 ?


Yes that's the principle. You can look at the makefile, all build options
are referenced at the top.


These are our current build options - for completeness:

haproxy -vv
HA-Proxy version 1.5-dev24-8860dcd 2014/04/26


BTW, be careful, a few bugs introduced in dev23 on ACLs were fixed after dev24.
So with this version, "acl foo xxx -i yyy" will not work, for example. "balance
url_param" is broken as well. All of them are fixed in the latest snapshot though.


Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
   TARGET  = freebsd
   CPU = generic
   CC  = cc
   CFLAGS  = -g -O0 -DFREEBSD_PORTS
   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
Running on OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.34 2013-12-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.


OK, nothing unusual here. Thanks for the detailed output, it always
helps!

Best regards,
Willy




Hi All,

I suspect FreeBSD does not support process-shared mutexes (which are 
supported on both Linux and Solaris).


I've just made a patch to add error checks on mutex init, and to 
fall back on the SSL private session cache in the error case.


Could you try this patch and give feedback to tell us whether this warning 
appears:

"Unable to init lock for the shared SSL session cache. fallback to 
private cache."


Regards,
Emeric

From 69c7942c782ca22180ddbf5679c9c1e045693ff3 Mon Sep 17 00:00:00 2001
From: Emeric Brun eb...@exceliance.fr
Date: Wed, 7 May 2014 16:10:18 +0200
Subject: [PATCH] BUG/MAJOR: ssl: Fallback to private session cache if current
 lock mode is not supported.

Process shared mutex seems not supported on some OSs (FreeBSD).

This patch checks errors on mutex lock init to fallback
on a private session cache (per process cache) in error cases.
---
 include/proto/shctx.h |  3 +++
 src/cfgparse.c| 18 ++
 src/shctx.c   | 24 
 3 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/include/proto/shctx.h b/include/proto/shctx.h
index a84e4a6..e0c695d 100644
--- a/include/proto/shctx.h
+++ b/include/proto/shctx.h
@@ -28,6 +28,9 @@
 #define SHCTX_APPNAME "haproxy"
 #endif
 
+#define SHCTX_E_ALLOC_CACHE -1
+#define SHCTX_E_INIT_LOCK   -2
+
 /* Allocate shared memory context.
  * size is the number of allocated blocks into cache (default 128 bytes)
  * A block is large enough to contain a classic session (without client cert)
diff --git a/src/cfgparse.c b/src/cfgparse.c
index c4f092f..f2f55ed 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -6744,6 +6744,8 @@ out_uri_auth_compat:
 		 * remains NULL so that listeners can later detach.
 		 */
 		list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+			int alloc_ctx;
+
 			if (!bind_conf->is_ssl) {
 				if (bind_conf->default_ctx) {
 					Warning("Proxy '%s': A certificate was specified but SSL was not 

Re: RDP Session Broker Redirect Token

2014-05-07 Thread Mathew Levett
Hi Willem,

That sounds very similar to the issue I was experiencing; however, our set-up
was a little different.

1. HA Proxy VIP 1
2. Gateway Servers (two of)
3. HA Proxy Vip 2
4. RDP Servers

We also had session broker mode enabled.  This worked fine for any
local connections, but if you went through the gateway server you lost
persistence: it seems that the gateway server, when converting the RDP over
SSL to pure RDP, did not send the correct token, so adding a couple of lines
to haproxy resolved this.

What you're experiencing sounds like the RDP cookie issue we have seen in the
past and have blogged about:
http://blog.loadbalancer.org/microsoft-drops-support-for-mstshash-cookies/

In this case Micro$oft's answer was simply to use session broker.  What we
were seeing is that not every packet sent by the RDP client would contain
the mstshash cookie, despite what the technet articles said.
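For reference, cookie-based RDP persistence in haproxy is normally configured along these lines. This is a sketch built from the standard 1.5 `persist rdp-cookie` / `balance rdp-cookie` directives; the listener name and server addresses are illustrative:

```
listen tse-farm
    bind :3389
    mode tcp
    # wait up to 5s for the first payload, which carries the mstshash cookie
    tcp-request inspect-delay 5s
    tcp-request content accept if RDP_COOKIE
    # stick each user to the server encoded in their RDP cookie
    persist rdp-cookie
    balance rdp-cookie
    server ts1 192.168.1.11:3389 check
    server ts2 192.168.1.12:3389 check
```

The `tcp-request inspect-delay` is important: without it, haproxy may route the connection before the client has sent the packet containing the cookie.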

Hope that helps.


On 6 May 2014 19:58, Willem wil...@wwgr.nl wrote:

 Willy Tarreau w at 1wt.eu writes:

 
  Hi Mathew,
 
  On Thu, Aug 15, 2013 at 10:21:51AM +0100, Mathew Levett wrote:
   Hello Willy,
  
   I believe the client (mstsc.exe) connects to the Gateway server via RPC
   over HTTPS (443), the gateway then terminates this, and makes a new
 normal
   RDP connection to haproxy, and then onwards to the Real servers, so in
 this
   case the Gateway is the client to haproxy.
  
   However what seams to be happening is that the loadbalancer then
 balances
   the connections as normal but does not seam to honor the MSTS cookie at
   all. its there in the packet capture and its encoded IP match the
 correct
   server but it seams haproxy ignores it.
 
  I suspect there is a very minor difference in the packets that make
 haproxy
  not recognize it as the one supposed to contain the MSTS cookie. It could
 be
  both a horrible or a subtle bug. Could you please send me privately a
 copy
  of the packet capture for the faulty connection ? I'd like to run the
 protocol
  parser by hand on it to understand what's wrong there.
 
  Thanks!
  Willy
 
 


 Hi,
 I just stumbled upon this post while googling. We had exactly the same
 issue a couple of months ago in a very similar setting. From WAN to LAN,
 if the customer hires multiple terminal servers, the user sessions pass
 these components:

 0: Hardware firewall
 1: Keepalived/LVS loadbalancer (Layer 4, in Direct Return mode, running on
 CentOS 6.5)
 2: Remote Desktop Gateway, redundant on 2x Windows 2012 (Not R2) virtual
 machines
 3: HAProxy version 1.5-dev19, single instance running on CentOS 6.5
 4: 1 out of 3 terminal servers, running Windows 2012 (Not R2)

 Just like Mathew stated: persistence works great when connecting to the VIP
 on HAProxy, but fails when taking the Remote Desktop Gateway into the mix.
 HAProxy just won't reconnect users to their existing session. The Keepalived
 loadbalancer mentioned in bullet 1 does not seem to contribute to the
 problem.

 To work around this issue, we decided to work around the lack of
 persistence by installing the Remote Desktop Connection Broker role onto
 the RD Gateway servers. This works great, but it kind of defeats the use of
 HAProxy. It also adds to the complexity of the built solution, because we
 now need SQL to enable high availability on the Connection Broker role.
 In turn, SQL would also have to be built redundantly.

 I guess the question of the day would be: were you able to figure out why
 user persistence didn't work for Mathew? Is there any way I can contribute
 to a solution? (By providing certain logging, doing TCP dumps, or anything
 else?)









Re: RDP Session Broker Redirect Token

2014-05-07 Thread Emeric Brun

On 05/07/2014 04:40 PM, Mathew Levett wrote:

Hi Willem,

That sounds very similar to the issue I was experiencing however our
set-up was a little different.

(...)









Hi Mathew,

It seems to be the same problem we discussed off-list from 22 to 29 
August 2013.


We solved it by adding a line to the config to force the gateway server to 
re-attempt a clear connection:


tcp-request content reject if { req_ssl_hello_type 1 }


Regards,

Emeric
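In context, that rule would sit at the top of the TCP listener in front of the terminal servers, so an SSL ClientHello (handshake type 1) from the gateway gets rejected and the gateway re-attempts a clear connection carrying the RDP cookie. A sketch only; the listener name and addresses are illustrative:

```
listen rdp-vip
    bind :3389
    mode tcp
    tcp-request inspect-delay 5s
    # the gateway first tries RDP over SSL; rejecting the ClientHello
    # makes it retry in the clear, where the mstshash cookie is visible
    tcp-request content reject if { req_ssl_hello_type 1 }
    tcp-request content accept if RDP_COOKIE
    persist rdp-cookie
    balance rdp-cookie
    server ts1 192.168.0.11:3389 check
    server ts2 192.168.0.12:3389 check
```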



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread John-Paul Bader

Hey Emeric,


I have just consulted the Readme of the haproxy source and it says in 
the OpenSSL section:


»The BSD and OSX makefiles do not support build options for OpenSSL nor 
zlib. Also, at least on OpenBSD, pthread_mutexattr_setpshared() does not 
exist so the SSL session cache cannot be shared between multiple 
processes. If you want to enable these options, you need to use GNU make 
with the default makefile as follows :«


I have just checked whether pthread_mutexattr_setpshared is available on 
FreeBSD, and it does not seem to be the case. So maybe we're on the right 
track here.


I will try to apply your patch and confirm this. Would that mean we 
have to use the solution proposed by Willy, i.e. using a source hash to 
balance across multiple SSL-enabled frontends?


Kind regards,

John



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Vincent Bernat
 ❦  7 mai 2014 11:15 +0200, Willy Tarreau w...@1wt.eu :

 haproxy does not include DTrace probes by any chance right? :)

 No, and I have no idea how this works either. But if you feel like it
 can provide some value and be done without too much effort, feel free
 to try :-)

Here is a proof of concept. To test, use `make TARGET=linux2628
USE_DTRACE=1`. On Linux, you need systemtap-sdt-dev or something like
that. Then, there is a quick example in example/haproxy.stp. You can try
it like this:

#+begin_src sh
sudo stap  ./examples/haproxy.stp
#+end_src

It is possible to convert the probes.d to a tapset (which is a recipe
for systemtap) to be able to name arguments and convert them in the
appropriate type. I am using this AWK script:
 
https://github.com/vincentbernat/lldpd/blob/master/src/daemon/dtrace2systemtap.awk

Only works with simple probes.

For dtrace, this would be something like that but I cannot test right
now:

#+begin_src dtrace
haproxy$target:::frontend_accept
{
   printf("Frontend %s accepted a connection", copyinstr(arg0));
}
#+end_src

The trick with those tracepoints is that they are just NOOP until you
enable them. So, even when someone compiles dtrace support, they will
not have any performance impact until trying to use the tracepoints.

While the probe arguments can be anything, it is simpler to only keep
simple types like null-terminated strings or int. Otherwise, they are
difficult to exploit. If you put struct, without the debug symbols, the
data is not exploitable.

Now, all the hard work is to put trace points everywhere. A good target
is where stuff is logged. But they can also be put in places where logs
would be too verbose. I currently don't have interest in doing that, but
if someone is willing to, it is only a matter of defining the probes in
probes.d and placing them in the C code. This is really nifty for debugging
stuff in production. However, I think that people interested in that can
also use debug symbols to place a probe at any place they want. GCC is
now better at providing debug symbols which work on optimized
executables. Ubuntu is providing debug symbols for almost
everything. Tracepoints are still interesting as they can be listed and
they are hand-picked.
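Concretely, the workflow sketched above looks roughly like this. The frontend_accept probe matches the one in the patch; the exact argument passed at the call site is an assumption for illustration:

```
/* src/probes.d -- probe declarations, compiled by dtrace -h (header)
 * and dtrace -G (object file) */
provider haproxy {
    probe frontend_accept(char *frontend_name);
};

/* In the C code, dtrace -h generates one uppercase macro per probe,
 * which expands to a NOP until the probe is enabled at runtime:
 *
 *     #include <common/probes.h>
 *     ...
 *     HAPROXY_FRONTEND_ACCEPT(frontend_name);
 */
```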

From 504504f2f8c13f077f09e0906cd7e7d3ca405acc Mon Sep 17 00:00:00 2001
From: Vincent Bernat vinc...@bernat.im
Date: Wed, 7 May 2014 18:18:07 +0200
Subject: [PATCH] MINOR: dtrace: add dtrace support (WIP)

Both dtrace and systemtap are supported. Currently, only one tracepoint
is defined.
---
 .gitignore |  1 +
 Makefile   | 18 +-
 examples/haproxy.stp   |  3 +++
 include/common/debug.h |  8 
 src/frontend.c |  2 ++
 src/probes.d   | 21 +
 6 files changed, 52 insertions(+), 1 deletion(-)
 create mode 100644 examples/haproxy.stp
 create mode 100644 src/probes.d

diff --git a/.gitignore b/.gitignore
index ec1545a7a3df..c13934c19835 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,3 +17,4 @@ make-*
 dlmalloc.c
 00*.patch
 *.service
+include/common/probes.h
diff --git a/Makefile b/Makefile
index f95ba03ac60f..617ab4447e69 100644
--- a/Makefile
+++ b/Makefile
@@ -33,6 +33,7 @@
 #   USE_ZLIB : enable zlib library support.
 #   USE_CPU_AFFINITY : enable pinning processes to CPU on Linux. Automatic.
 #   USE_TFO  : enable TCP fast open. Supported on Linux = 3.7.
+#   USE_DTRACE   : enable DTrace/systemtap support
 #
 # Options can be forced by specifying USE_xxx=1 or can be disabled by using
 # USE_xxx= (empty string).
@@ -582,6 +583,12 @@ OPTIONS_CFLAGS  += -DUSE_TFO
 BUILD_OPTIONS   += $(call ignore_implicit,USE_TFO)
 endif
 
+# DTrace
+ifneq ($(USE_DTRACE),)
+DTRACE = dtrace
+OPTIONS_CFLAGS  += -DUSE_DTRACE
+endif
+
 # This one can be changed to look for ebtree files in an external directory
 EBTREE_DIR := ebtree
 
@@ -655,6 +662,10 @@ EBTREE_OBJS = $(EBTREE_DIR)/ebtree.o \
 ifneq ($(TRACE),)
 OBJS += src/trace.o
 endif
+ifneq ($(USE_DTRACE),)
+OBJS += src/probes.o
+$(OBJS): | include/common/probes.h
+endif
 
 WRAPPER_OBJS = src/haproxy-systemd-wrapper.o
 
@@ -679,6 +690,11 @@ objsize: haproxy
 src/trace.o: src/trace.c
 	$(CC) $(TRACE_COPTS) -c -o $@ $<
 
+include/common/probes.h: src/probes.d
+	$(DTRACE) -C -h -s $< -o $@
+src/probes.o: src/probes.d
+	$(DTRACE) -C -G -s $< -o $@
+
 src/haproxy.o:	src/haproxy.c
 	$(CC) $(COPTS) \
 	  -DBUILD_TARGET='$(strip $(TARGET))' \
@@ -715,7 +731,7 @@ install-bin: haproxy haproxy-systemd-wrapper
 install: install-bin install-man install-doc
 
 clean:
-	rm -f *.[oas] src/*.[oas] ebtree/*.[oas] haproxy test
+	rm -f *.[oas] src/*.[oas] ebtree/*.[oas] haproxy test include/common/probes.h
 	for dir in . src include/* doc ebtree; do rm -f $$dir/*~ $$dir/*.rej $$dir/core; done
 	rm -f haproxy-$(VERSION).tar.gz haproxy-$(VERSION)$(SUBVERS).tar.gz
 	rm -f haproxy-$(VERSION) haproxy-$(VERSION)$(SUBVERS) nohup.out gmon.out
diff --git a/examples/haproxy.stp 

About distribution requests

2014-05-07 Thread Odalinda Morales Rojas
Hi! 
I have installed HAProxy 1.4.25 and install stunnel to receive requests https, 
but I failed to get the real IP of the client, therefore, in HAProxy receive 
all requests with the same IP and my setup has balance sorce, obviously sends 
me all requests to the same backend. How I can do to make haproxy distribute 
requests among all backends? 
thanks in advance 



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Willy Tarreau
Hi John,

On Wed, May 07, 2014 at 06:14:13PM +0200, John-Paul Bader wrote:
 Ok,
 
 
 I have just built haproxy with your patches like this:
 
 gmake TARGET=freebsd USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
 
 When trying to start haproxy it failed with:
 
 [ALERT] 126/160108 (25333) : Unable to allocate SSL session cache.
 [ALERT] 126/160108 (25333) : Fatal errors found in configuration.
 /usr/local/etc/rc.d/haproxy: WARNING: failed precmd routine for haproxy
 
 Without the patches, haproxy is starting.
(...)

That's *very* good news!

Emeric is currently working on an alternate locking mechanism which
would work only using spinlocks and no pthreads. The principle is that
accesses to the cache are so rare (once or twice per SSL connection)
that there's almost never any collision (confirmed by the fact that it
requires seconds to minutes for a crash to happen) and it's worthless
to rely on heavy non-portable mechanisms when a simple hand-crafted
spinlock will do the job fine with no overhead.

So... stay tuned, I think Emeric will soon have something to propose you
to test :-)

Cheers,
Willy




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Willy Tarreau
Hi Vincent,

On Wed, May 07, 2014 at 06:35:06PM +0200, Vincent Bernat wrote:
  ❦  7 mai 2014 11:15 +0200, Willy Tarreau w...@1wt.eu :
 
  haproxy does not include DTrace probes by any chance right? :)
 
  No, and I have no idea how this works either. But if you feel like it
  can provide some value and be done without too much effort, feel free
  to try :-)
 
 Here is a proof of concept. To test, use `make TARGET=linux2628
 USE_DTRACE=1`. On Linux, you need systemtap-sdt-dev or something like
 that. Then, there is a quick example in example/haproxy.stp.

Interesting, but just for my understanding, what does it provide beyond
building with TRACE=1 where the compiler dumps *all* function calls,
and not only those that were instrumented ? I'm asking because I never
used dtrace, so I'm totally ignorant here.

 You can try
 it like this:
 
 #+begin_src sh
 sudo stap  ./examples/haproxy.stp
 #+end_src
 
 It is possible to convert the probes.d to a tapset (which is a recipe
 for systemtap) to be able to name arguments and convert them in the
 appropriate type. I am using this AWK script:
  
 https://github.com/vincentbernat/lldpd/blob/master/src/daemon/dtrace2systemtap.awk
 
 Only works with simple probes.
 
 For dtrace, this would be something like that but I cannot test right
 now:
 
 #+begin_src dtrace
 haproxy$target:::frontend_accept
 {
printf("Frontend %s accepted a connection", copyinstr(arg0));
 }
 #+end_src
 
 The trick with those tracepoints is that they are just NOOP until you
 enable them. So, even when someone compiles dtrace support, they will
 not have any performance impact until trying to use the tracepoints.

Well, they will at least have the performance impact of the if which
disables them and the inflated/reordered functions I guess! So at least
we have to be reasonable not to put them everywhere (eg: not in the
polling loops nor in the scheduler).

 While the probe arguments can be anything, it is simpler to only keep
 simple types like null-terminated strings or int. Otherwise, they are
 difficult to exploit. If you put struct, without the debug symbols, the
 data is not exploitable.
 
 Now, all the hard work is to put trace points everywhere.

That's where gcc does the stuff free of charge in fact. I still tend to
be cautious about what the debugging code becomes over time, because we
had this twice, once with the DPRINTF() macro which was never up to date,
and once with the http_silent_debug() macro which became so unbalanced
over time that I recently totally removed it.

 A good target is where stuff are logged.

Yeah that's a good idea.

 But they can also be put in places where logs
 would be too verbose. I currently don't have interest in doing that but
 if someone is willing too, it is only a matter of defining the probes in
 probes.d and placing them in the C code. This is really nifty to debug
 stuff in production. However, I think that people interested in that can
 also use debug symbols to place probe at any place they want to. GCC is
 now better at providing debug symbols which work on optimized
 executables. Ubuntu is providing debug symbols for almost
 everything. Tracepoints are still interesting as they can be listed and
 they are hand-picked.

That was the principle of the http_silent_debug() in fact. Just to know
where we passed, in which order at a low cost. But I think I failed at it
by trying to maintain this code stable, while in practice we probably only
need something properly instrumented to easily add new tracepoints when
needed. Maybe your patch can be a nice step forward in that direction, I
have no idea. It's not intrusive, that's possibly something we can merge
and see if it is quickly adopted or not.

Regards,
Willy




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Emeric BRUN
 
My fix is broken; it should only show a warning and fall back on the private
cache. I've just pinpointed the issue.

I will try to send you a workaround patch soon.

Emeric

original message-
De: John-Paul Bader john-paul.ba...@wooga.net
A: Willy Tarreau w...@1wt.eu
Copie à: John-Paul Bader john-paul.ba...@wooga.net, Emeric Brun
eb...@exceliance.fr, haproxy@formilux.org
Date: Wed, 07 May 2014 22:09:28 +0200
-
 
 
 Woohoo - this sounds very good :)
 
 Thanks in advance for your efforts - much appreciated!
 
 Kind regards,
 
 John
 
 Willy Tarreau wrote:
  (...)
 
 -- 
 John-Paul Bader | Software Development
 
 www.wooga.com
 wooga GmbH | Saarbruecker Str. 38 | D-10405 Berlin
 Sitz der Gesellschaft: Berlin; HRB 117846 B
 Registergericht Berlin-Charlottenburg
 Geschaeftsfuehrung: Jens Begemann, Philipp Moeser
 
 





Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread John-Paul Bader
Hmm yeah, I noticed from what you wrote in the mail and by reading 
through the patch - but still, it confirmed that the shared pthread thing 
is not available on FreeBSD, right?


Would I also need to compile with USE_PRIVATE_CACHE=1, or would your patch 
take care of that?


When it uses the private cache, I would also have to change the 
configuration to allow SSL sessions over multiple HTTP requests, right?


Kind regards,

John

Emeric BRUN wrote:


My fix is broken; it should only show a warning and fall back on the private
cache. I've just pinpointed the issue.

I will try to send you a workaround patch soon.

Emeric

(...)





--
John-Paul Bader | Software Development

www.wooga.com
wooga GmbH | Saarbruecker Str. 38 | D-10405 Berlin
Sitz der Gesellschaft: Berlin; HRB 117846 B
Registergericht Berlin-Charlottenburg
Geschaeftsfuehrung: Jens Begemann, Philipp Moeser



Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Vincent Bernat
 ❦  7 mai 2014 22:19 +0200, Willy Tarreau w...@1wt.eu :

 Here is a proof of concept. To test, use `make TARGET=linux2628
 USE_DTRACE=1`. On Linux, you need systemtap-sdt-dev or something like
 that. Then, there is a quick example in example/haproxy.stp.

 Interesting, but just for my understanding, what does it provide beyond
 building with TRACE=1 where the compiler dumps *all* function calls,
 and not only those that were instrumented ? I'm asking because I never
 used dtrace, so I'm totally ignorant here.

See below.

 The trick with those tracepoints is that they are just NOOP until you
 enable them. So, even when someone compiles dtrace support, they will
 not have any performance impact until trying to use the tracepoints.

 Well, they will at least have the performance impact of the if which
 disables them and the inflated/reordered functions I guess! So at least
 we have to be reasonable not to put them everywhere (eg: not in the
 polling loops nor in the scheduler).

No, they are really just NOP. They are registered in some part of the
ELF executable and when the tracepoint is activated, the NOP is
replaced by a JMP.

When arguments are expensive to build, there is the possibility to test
if the probe is enabled, but in this case, even when the probe is not
enabled, there is a cost. So, better keep the arguments simple.

I cannot find a link which explains that clearly (I am pretty sure there
was an article on LWN for that). I can show you the result:

$ readelf -x .note.stapsdt ./haproxy

Hex dump of section '.note.stapsdt':
  0x 0800 3d00 0300 73746170 =...stap
  0x0010 73647400 af7c4200  6f634800 sdt..|B.ocH.
  0x0020  b09e6900  68617072 ..i.hapr
  0x0030 6f787900 66726f6e 74656e64 5f616363 oxy.frontend_acc
  0x0040 65707400 38403235 36382825 72617829 ept.8@2568(%rax)
  0x0050 

systemtap/dtrace are able to read this section:

$ stap -L 'process(./haproxy).mark(*)'
process(./haproxy).mark(frontend_accept) $arg1:long

(all arguments are seen as long/pointer because this is not something
encoded)

gdb is also able to use them:

(gdb) info probes
Provider  Name             Where               Semaphore           Object
haproxy   frontend_accept  0x00427caf          0x00699eb0          /home/bernat/code/dailymotion/haproxy/haproxy
(gdb) disassemble frontend_accept 
Dump of assembler code for function frontend_accept:
   0x00427c90 <+0>:	push   %r14
   0x00427c92 <+2>:	push   %r13
   0x00427c94 <+4>:	push   %r12
   0x00427c96 <+6>:	push   %rbp
   0x00427c97 <+7>:	push   %rbx
   0x00427c98 <+8>:	mov    %rdi,%rbx
   0x00427c9b <+11>:	add    $0xff80,%rsp
   0x00427c9f <+15>:	mov    0x270(%rdi),%r12
   0x00427ca6 <+22>:	mov    0x20(%rdi),%rax
   0x00427caa <+26>:	mov    0x34(%r12),%ebp
   0x00427caf <+31>:	nop
   0x00427cb0 <+32>:	mov    0x30(%rdi),%rax
   0x00427cb4 <+36>:	movq   $0x0,0x2f0(%rdi)
   0x00427cbf <+47>:	movq   $0x0,0x2e8(%rdi)
   [...]

See the nop at 427caf?

So the main interest of those probes is:

 * low overhead: they can be left in production so they are there when you
   really need them
 * discoverability: someone not tech-savvy enough to read the source can
   list them and decide which ones to enable, because someone more
   tech-savvy chose them

 While the probe arguments can be anything, it is simpler to only keep
 simple types like null-terminated strings or int. Otherwise, they are
 difficult to use: if you pass a struct, the data cannot be decoded
 without the debug symbols.
 
 Now, all the hard work is to put trace points everywhere.

 That's where gcc does the stuff free of charge in fact. I still tend to
 be cautious about what the debugging code becomes over time, because we
 had this twice, once with the DPRINTF() macro which was never up to date,
 and once with the http_silent_debug() macro which became so unbalanced
 over time that I recently totally removed it.

Yes, this is a big problem. In the kernel where a similar mechanism
exists, some maintainers are reluctant to provide tracepoints because
they would become part of the user/kernel interface and have to be
maintained which is a lot of work.

 But they can also be put in places where logs
 would be too verbose. I currently don't have interest in doing that but
 if someone is willing to, it is only a matter of defining the probes in
 probes.d and placing them in the C code. This is really nifty to debug
 stuff in production. However, I think that people interested in that can
 also use debug symbols to place probes at any place they want. GCC is
 now better at providing debug symbols which work on optimized
 executables. Ubuntu is providing debug symbols for almost
 everything. Tracepoints are still interesting as 
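For the record, the probes.d mentioned above could look like this (purely illustrative names; `dtrace -h -s probes.d` then generates a header with the matching C macros and the per-probe `*_ENABLED()` semaphore tests):

```d
/* probes.d — hypothetical provider definition, not haproxy's actual one. */
provider haproxy {
    probe frontend_accept(int fd);
    probe session_complete(int status, char *frontend);
};
```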

Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Vincent Bernat
 ❦  7 May 2014 22:56 +0200, Vincent Bernat ber...@luffy.cx :

 So the main interest of those probes are:

  * low overhead, they can be left in production to be here when you
really need them

And you enable/disable them while the program is running.
-- 
panic("No CPUs found.  System halted.\n");
        2.4.3 linux/arch/parisc/kernel/setup.c



Dtrace for haproxy (Was: haproxy 1.5-dev24: 100% CPU Load or Core Dumped)

2014-05-07 Thread Willy Tarreau
On Wed, May 07, 2014 at 10:59:43PM +0200, Vincent Bernat wrote:
  ❦  7 May 2014 22:56 +0200, Vincent Bernat ber...@luffy.cx :
 
  So the main interest of those probes are:
 
   * low overhead, they can be left in production to be here when you
 really need them
 
 And you enable/disable them while the program is running.

:-)

Thanks very much for the detailed explanation Vincent. So from what I
understand, dtrace is more for production use while TRACE=1 is more
for the developer. Neither fits both purposes, but if we agree that
neither of them should cross the frontier towards the other one, both
can be useful and very efficient at little cost for the purpose they aim
at serving (typically your "I was here but I won't dump my args").

So that makes a lot of sense indeed.

I renamed the thread to help people find it in mail archives when they
search for the feature. I think your explanation and patch will be a
nice starting point for whoever wants to devote some time to this.

Thanks!
Willy




Re: haproxy 1.5-dev24: 100% CPU Load or Core Dumped

2014-05-07 Thread Willy Tarreau
Hi John,

On Wed, May 07, 2014 at 10:54:33PM +0200, John-Paul Bader wrote:
 Hmm yeah I noticed from what you wrote in the mail and by reading 
 through the patch - but still it confirmed that the shared pthread thing 
 was not available on FreeBSD right?

Yes, that's it. Old FreeBSD code did not return an error for this, and
haproxy did not check for one. Newer FreeBSD code now does return
an error, but haproxy still didn't check it. Emeric's patch introduces
the test for the feature. Note that older FreeBSD versions will still
pretend to work while actually being broken, hence the proposal to disable
pthread by default since there's no reliable way of detecting its
support at runtime.
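A minimal sketch of that detection (illustrative code, not the actual patch): check the return value of the process-shared attribute call instead of assuming it succeeds, and let the caller fall back to a private cache on failure.

```c
#include <pthread.h>

/* Returns 0 if a process-shared mutex could be initialized, -1 otherwise
 * (e.g. on platforms without PTHREAD_PROCESS_SHARED support, where the
 * caller should fall back to a private, per-process cache). */
static int init_shared_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int ret;

    if (pthread_mutexattr_init(&attr) != 0)
        return -1;
    ret = pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    if (ret == 0)
        ret = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return ret ? -1 : 0;
}
```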

 Would I also need to compile with USE_PRIVATE_CACHE=1 or would your patch 
 take care of that?

No, you don't need it anymore.

 When it uses the private cache, I would also have to change the 
 configuration to allow ssl sessions over multiple http requests right?

No, you don't need to change anything anymore: what Emeric's patch does is
reimplement a hand-crafted spinlock mechanism. I just ran a few tests
here, and at 4.5k conn/s spread over 4 processes I see that the lock is
held only about 1% of the time, which is very low and does not justify
using a syscall to sleep.
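For the curious, a hand-crafted spinlock of that kind can be as small as this (a sketch with made-up names, not Emeric's actual code; C11 atomics, and an atomic_flag placed in MAP_SHARED memory works across processes without any pthread process-shared support):

```c
#include <stdatomic.h>

/* Minimal inter-process spinlock: one flag, test-and-set to lock. */
typedef struct { atomic_flag locked; } shared_spinlock;

static void shared_spin_lock(shared_spinlock *l)
{
    /* Busy-wait; acceptable when the lock is held ~1% of the time,
     * as measured above, so sleeping in a syscall would cost more. */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;
}

static void shared_spin_unlock(shared_spinlock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```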

I'm appending the two patches for you to test. They're to be applied on
top of latest master, but I think it will be OK on yours (provided you
don't already have previous patches from Emeric).

You don't need to pass any specific options to the Makefile, it defaults
to using this implementation.

Once you confirm that these ones fix your issue, I'll merge them.

Thanks!
Willy

From 13dea9e46ccb84655a5f945f076a0e03327515a5 Mon Sep 17 00:00:00 2001
From: Emeric Brun eb...@exceliance.fr
Date: Wed, 7 May 2014 16:10:18 +0200
Subject: BUG/MAJOR: ssl: Fallback to private session cache if current lock
 mode is not supported.

Process shared mutex seems not supported on some OSs (FreeBSD).

This patch checks errors on mutex lock init to fallback
on a private session cache (per process cache) in error cases.
---
 include/proto/shctx.h |  3 +++
 src/cfgparse.c        | 18 ++++++++++++++----
 src/shctx.c           | 29 +++++++++++++++++++++++------
 3 files changed, 40 insertions(+), 10 deletions(-)

diff --git a/include/proto/shctx.h b/include/proto/shctx.h
index a84e4a6..e0c695d 100644
--- a/include/proto/shctx.h
+++ b/include/proto/shctx.h
@@ -28,6 +28,9 @@
 #define SHCTX_APPNAME "haproxy"
 #endif
 
+#define SHCTX_E_ALLOC_CACHE -1
+#define SHCTX_E_INIT_LOCK   -2
+
 /* Allocate shared memory context.
  * size is the number of allocated blocks into cache (default 128 bytes)
  * A block is large enough to contain a classic session (without client cert)
diff --git a/src/cfgparse.c b/src/cfgparse.c
index c4f092f..7176b59 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -6744,6 +6744,8 @@ out_uri_auth_compat:
 	 * remains NULL so that listeners can later detach.
 	 */
 	list_for_each_entry(bind_conf, &curproxy->conf.bind, by_fe) {
+		int alloc_ctx;
+
 		if (!bind_conf->is_ssl) {
 			if (bind_conf->default_ctx) {
 				Warning("Proxy '%s': A certificate was specified but SSL was not enabled on bind '%s' at [%s:%d] (use 'ssl').\n",
@@ -6758,10 +6760,18 @@ out_uri_auth_compat:
 			continue;
 		}
 
-		if (shared_context_init(global.tune.sslcachesize, (global.nbproc > 1) ? 1 : 0) < 0) {
-			Alert("Unable to allocate SSL session cache.\n");
-			cfgerr++;
-			continue;
+		alloc_ctx = shared_context_init(global.tune.sslcachesize, (global.nbproc > 1) ? 1 : 0);
+		if (alloc_ctx < 0) {
+			if (alloc_ctx == SHCTX_E_INIT_LOCK) {
+				Warning("Unable to init lock for the shared SSL session cache. Falling back to private cache.\n");
+				alloc_ctx = shared_context_init(global.tune.sslcachesize, 0);
+			}
+
+			if (alloc_ctx < 0) {
+				Alert("Unable to allocate SSL session cache.\n");
+				cfgerr++;
+				continue;
+			}
 		}
 
 		/* initialize all certificate contexts */
diff --git a/src/shctx.c b/src/shctx.c
index f259b9c..86e6056 100644
--- a/src/shctx.c
+++ b/src/shctx.c
@@ -532,19 +532,36 @@ int shared_context_init(int size, int shared)
 	                     PROT_READ | PROT_WRITE, maptype | MAP_ANON, -1, 0);
 	if (!shctx || shctx == MAP_FAILED) {
 		shctx = NULL;
-		return -1;
+