Re: Detecting if the client connected using SSL

2014-07-07 Thread Baptiste
On Mon, Jul 7, 2014 at 3:48 AM, Dennis Jacobfeuerborn
denni...@conversis.de wrote:
 Hi,
 I'm experimenting with the SSL capabilities of HAProxy and I'm wondering
 if there is a way to detect whether the client connected using SSL.

 The background is that I have two frontends, one for SSL and one for
 regular HTTP. In the SSL frontend I forward the requests to the HTTP
 frontend via send-proxy. This part works well.
 The problem happens when I want to redirect non-SSL requests to SSL.
 The common way seems to be to put this in the http frontend:
 redirect scheme https if !{ ssl_fc }

 However, since ALL requests arriving there are regular HTTP requests
 (received either via port 80 or via accept-proxy), this obviously ends in
 a redirect loop: ssl_fc only checks whether the request received by the
 current frontend is an SSL one, not whether the original request was.

 What seems to work is this:
 redirect scheme https if { dst_port eq 80 }

 This works around the problem, but now I have to make sure that the port
 I check here matches the port in the bind statement.
 A cleaner way would be to check whether the original request was an SSL
 one or not. Is this possible somehow?

 Regards,
   Dennis



Hi Dennis,

You should not point your SSL frontend to your clear one.
Just use the clear one with a simple redirect rule to the SSL one, and make
the SSL one point to your backend.
And you're done.
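A minimal sketch of that layout (names, addresses, and the certificate path are hypothetical; adapt to your setup):

```
frontend front-http
    bind :80
    # everything arriving in clear gets redirected to the SSL frontend
    redirect scheme https code 301

frontend front-https
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend back1

backend back1
    server srv1 192.168.0.10:8080 check
```

With this shape, ssl_fc is never needed for the redirect: only clear traffic reaches front-http, so the redirect can be unconditional.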

Baptiste



Difference between Disable and soft stop

2014-07-07 Thread David
Hello,

I have installed HAProxy 1.5 in my RDS farm. But when I check the disable
option for one server, this server is still active in my farm and users can
still connect to it.

Should I use soft stop instead? What is the difference between these
two options?

Thank you in advance for your answer.

David.






Re: Detecting if the client connected using SSL

2014-07-07 Thread Dennis Jacobfeuerborn
On 07.07.2014 08:57, Baptiste wrote:
 On Mon, Jul 7, 2014 at 3:48 AM, Dennis Jacobfeuerborn
 denni...@conversis.de wrote:
 Hi,
 I'm experimenting with the SSL capabilities of haproxy and I'm wondering
 if there is a way to detect if the client connected using SSL?

 The background is that I have two frontends one for SSL and one for
 regular http. In the SSL frontend I forward the requests to the http
 frontend via send-proxy. This part works well.
 The problem I have happens when I want to redirect non-SSL requests to SSL.
 The common way seems to be to put this in the http frontend:
 redirect scheme https if !{ ssl_fc }

 However since ALL requests arriving there are regular http requests
 (either received via port 80 or accept-proxy) this obviously ends in a
 redirect loop since ssl_fc only checks if the request received by the
 current frontend is a SSL one and not if the original request is.

 What seems to work is this:
 redirect scheme https if { dst_port eq 80 }

 This works around the problem but now I have to make sure that the port
 I check here matches the port in the bind statement.
 A cleaner way would be if I could check if the original request is a SSL
 one or not. Is this possible somehow?

 Regards,
   Dennis

 
 
 Hi Dennis,
 
 You should not point your SSL frontend to your clear one.
 Just use the clear one with a simple redirect rule to SSL one and make
 the SSL one point to your backend.
 And you're done.

This makes sense, but what I forgot to mention is that I use a
configuration trick posted here a while ago: I bind the SSL frontend to
several cores to do the SSL offloading, and then proxy the requests to
the HTTP frontend, which is bound to a single core to do the
load-balancing/HA/stats. If I remember correctly, doing the latter on
multiple cores is not recommended.
This is the frontend config I use currently:

listen front-https
bind-process 2-4
bind 10.99.0.200:443 ssl crt /etc/pki/tls/certs/testcert.chain.pem
ciphers
ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM
no-sslv3
reqadd X-Forwarded-Proto:\ https
server clear abns@ssl-proxy send-proxy

frontend front1
bind-process 1
bind 10.99.0.200:80
bind abns@ssl-proxy accept-proxy
redirect scheme https if { dst_port eq 80 }
default_backend back1
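For completeness: since the SSL listener above adds X-Forwarded-Proto, the redirect could key off that header instead of the port. A sketch, untested, and only safe if the header cannot be injected from the clear side (e.g. strip it first with reqidel on port-80 traffic):

```
frontend front1
    bind-process 1
    bind 10.99.0.200:80
    bind abns@ssl-proxy accept-proxy
    # requests relayed from the SSL listener carry the header;
    # direct port-80 requests do not
    redirect scheme https if !{ hdr(X-Forwarded-Proto) -i https }
    default_backend back1
```

This avoids hard-coding the port number, at the cost of trusting the header.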

Regards.
  Dennis




RE: Strange crash of HAProxy 1.5.1

2014-07-07 Thread Lukas Tribus
Hi Merton,


 Hi Cyril, 
 
 Yes, I tried make clean first before compiling and still the same 
 problem on 10.04 LTS.

You are compiling on the same machine that is crashing, correct?

You cannot mix in executables built on a more recent box, because the
OpenSSL and PCRE headers will not match the running libraries.


Anyway, it doesn't seem so from your -vv output.


Can you recompile without compiler optimizations (pass CFLAGS="-g -O0" to
make) and trigger a coredump (set ulimit -c unlimited before starting
haproxy, also see [1])?

Then use gdb to create a backtrace and post the output:
gdb /path/to/application /path/to/corefile
(gdb) backtrace full
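Putting those steps together, a rough transcript might look like this (the build target and USE_* flags are guesses, match whatever your original build used):

```
make clean
make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1 CFLAGS="-g -O0"
ulimit -c unlimited
./haproxy -f /etc/haproxy/haproxy.cfg
# ... after the crash, load the core file:
gdb ./haproxy /path/to/corefile
(gdb) backtrace full
```

The -O0 build keeps variables and frames intact so the backtrace is readable.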


Also see [1].



Thanks,

Lukas


[1] http://www.cyberciti.biz/tips/linux-core-dumps.html

  


Filing bugs.. found a bug in 1.5.1 (http-send-name-header is broken)

2014-07-07 Thread Alexey Zilber
Hey guys,

  I couldn't find a bug tracker for HAProxy, and I found a serious bug in
1.5.1 that may be a harbinger of other broken things in header manipulation.

 The bug is:

  I added 'http-send-name-header sfdev1' under the defaults section of
haproxy.cfg.

When we do a POST with that option enabled, we get 'sf' injected into a
random field. When posting with a time field like '07/06/2014 23:43:01' we
get back '07/06/2014 23:43:sf' consistently.

Thanks,
-Alex



Re: Filing bugs.. found a bug in 1.5.1 (http-send-name-header is broken)

2014-07-07 Thread Jonathan Matthews
On 7 Jul 2014 14:44, Alexey Zilber alexeyzil...@gmail.com wrote:

 Hey guys,

   I couldn't find a bug tracker for HAProxy, and I found a serious bug in
1.5.1 that may be a harbinger of other broken things in header manipulation.

  The bug is:

   I added 'http-send-name-header sfdev1' under the defaults section of
haproxy.cfg.

 When we would do a POST with that option enabled, we would get 'sf'
injected into a random variable.   When posting with a time field like
'07/06/2014 23:43:01' we would get back '07/06/2014 23:43:sf' consistently.

Alex -

Would you be able to post a (redacted) config that causes haproxy to
exhibit this behaviour, along with a fuller example of exactly where this
unwanted data appears in context?

If you could post a packet capture of the data being inserted, that will
probably help people to home in on the cause of the problem. Don't forget
to redact anything from the capture as you feel necessary, such as auth
creds, public IPs and host headers. (Anything you're content /not/ to
redact could only help, however!)

Jonathan


Re: Using the socket interface to access ACLs

2014-07-07 Thread William Jimenez
On Thu, Jul 3, 2014 at 5:59 AM, Baptiste bed...@gmail.com wrote:

 On Thu, Jul 3, 2014 at 2:24 PM, Thierry FOURNIER tfourn...@haproxy.com
 wrote:
  On Tue, 1 Jul 2014 23:00:13 +0200
  Baptiste bed...@gmail.com wrote:
 
  On Tue, Jul 1, 2014 at 10:54 PM, William Jimenez
  william.jime...@itsoninc.com wrote:
   Hello
   I am trying to modify ACLs via the socket interface. When I try to do
   something like 'get acl', I get an error:
  
   Missing ACL identifier and/or key.
  
   How do I find the ACL identifier or key for a specific ACL? I see the
   list of ACLs when I do a 'show acl', but I'm unsure which of these
   values is the file or key:
  
   # id (file) description
   0 () acl 'always_true' file '/etc/haproxy/haproxy.cfg' line 19
   1 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 20
   2 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 21
   3 () acl 'src' file '/etc/haproxy/haproxy.cfg' line 22
  
   Thanks
 
  Hi William,
 
  In order to be able to update an ACL's content, it must be loaded
  from a file.
  The file name is the 'reference' you can point to when updating
  content.
  Don't forget to update the content of the flat file at the same time as
  the ACL, to make HAProxy reloads reliable :)
 
  Baptiste
 
 
  Hi
 
  You can modify an ACL without a file. The identifier is the number
  prefixed by the char '#', like this:
 
 add acl #1 127.0.0.1
 
  get acl is used to debug ACLs.
 
  Thierry
 
 

 Yes, but the ACL number is not reliable, since it can change over time.
 Furthermore, it's easier to update the content of a flat file than to
 update ACL values in HAProxy's configuration.

 Baptiste


Here is my config for reference:

global
   daemon
   maxconn 4096
   chroot /var/lib/haproxy
   pidfile /var/run/haproxy.pid
   uid 99
   gid 99
   stats socket /var/lib/haproxy/stats level admin
 defaults
   mode http
   timeout connect 5000ms
   timeout client 5ms
   timeout server 5ms
 frontend 01-fend-in
   bind localhost:80
   default_backend 01_bend
   acl myacl hdr(Host) -f /root/myacl
   #acl redir_true always_false
   redirect code 307 location http://example.com if redir_true
 backend ffd_bend
   option httpchk GET /
   option http-server-close
   server bend013 localhost:8180 check
   server bend012 localhost:8180 check


Thanks
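To illustrate the two addressing styles discussed above, a hypothetical session against the stats socket from this config might look like the following (requires socat; the '#0' identifier has to be read from the 'show acl' output first, since Baptiste notes the numbers can change over time):

```
echo "show acl" | socat stdio unix-connect:/var/lib/haproxy/stats
echo "add acl #0 example.org" | socat stdio unix-connect:/var/lib/haproxy/stats
echo "get acl #0 example.org" | socat stdio unix-connect:/var/lib/haproxy/stats
```

With a file-backed ACL (the -f form), the file path can be used as the reference instead of the '#' number.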


[no subject]

2014-07-07 Thread Christophe Rahier

-- Professional Virtual Office http://www.contactoffice.com

help

2014-07-07 Thread Christophe Rahier

-- Professional Virtual Office http://www.contactoffice.com

[PATCH] DOC: expand the docs for the provided stats.

2014-07-07 Thread James Westby
Indicate for each statistic which types may have a value for
that statistic.

Explain some of the provided statistics a little more deeply.
---
 doc/configuration.txt | 111 --
 1 file changed, 62 insertions(+), 49 deletions(-)


Hi,

I have spent some of the past weeks getting to know haproxy more deeply,
particularly looking in to the provided stats while running performance
tests on our services.

I found the documentation of the stats somewhat lacking, meaning it took
me a long time to fully understand the provided information.

I wanted to improve the docs so that others would be able to benefit
from what I have learned.

This is a patch to document some of those things. It's not exactly
complete as there are still some stats that I haven't really looked at.

I can spend some time improving the patch based on feedback, but I've
been asked to move on to other things, so won't be able to spend too
much time if you aren't happy with what is provided.

Thanks,

James


diff --git a/doc/configuration.txt b/doc/configuration.txt
index 670dfee..70f774a 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12926,46 +12926,59 @@ Unix socket.
 ---
 
 The statistics may be consulted either from the unix socket or from the HTTP
-page. Both means provide a CSV format whose fields follow.
+page. Both means provide a CSV format whose fields follow. In brackets after
+each field are the types for which the field may take a value. Types not
+in that list will always have a blank value for that field.
 
-  0. pxname: proxy name
+  0. pname: proxy name [FRONTEND, BACKEND, SERVER]
   1. svname: service name (FRONTEND for frontend, BACKEND for backend, any name
-for server)
-  2. qcur: current queued requests
-  3. qmax: max queued requests
-  4. scur: current sessions
-  5. smax: max sessions
-  6. slim: sessions limit
-  7. stot: total sessions
-  8. bin: bytes in
-  9. bout: bytes out
- 10. dreq: denied requests
- 11. dresp: denied responses
- 12. ereq: request errors
- 13. econ: connection errors
- 14. eresp: response errors (among which srv_abrt)
- 15. wretr: retries (warning)
- 16. wredis: redispatches (warning)
- 17. status: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
- 18. weight: server weight (server), total weight (backend)
- 19. act: server is active (server), number of active servers (backend)
- 20. bck: server is backup (server), number of backup servers (backend)
- 21. chkfail: number of failed checks
- 22. chkdown: number of UP-DOWN transitions
- 23. lastchg: last status change (in seconds)
- 24. downtime: total downtime (in seconds)
- 25. qlimit: queue limit
- 26. pid: process id (0 for first instance, 1 for second, ...)
- 27. iid: unique proxy id
- 28. sid: service id (unique inside a proxy)
- 29. throttle: warm up status
- 30. lbtot: total number of times a server was selected
- 31. tracked: id of proxy/server if tracking is enabled
- 32. type (0=frontend, 1=backend, 2=server, 3=socket)
- 33. rate: number of sessions per second over last elapsed second
- 34. rate_lim: limit on new sessions per second
- 35. rate_max: max number of new sessions per second
- 36. check_status: status of last health check, one of:
+for server) [FRONTEND, BACKEND, SERVER]
+  2. qcur: current queued requests. For the backend this reports the number 
queued without a server assigned. [BACKEND, SERVER]
+  3. qmax: max value of qcur [BACKEND, SERVER]
+  4. scur: current sessions [FRONTEND, BACKEND, SERVER]
+  5. smax: max sessions [FRONTEND, BACKEND, SERVER]
+  6. slim: configured session limit [FRONTEND, BACKEND, SERVER]
+  7. stot: cumulative number of connections [FRONTEND, BACKEND, SERVER]
+  8. bin: bytes in [FRONTED, BACKEND, SERVER]
+  9. bout: bytes out [FRONTEND, BACKEND, SERVER]
+ 10. dreq: requests denied because of security concerns. [FRONTEND, BACKEND]
+ - For tcp this is because of a matched tcp-request content rule.
+ - For http this is because of a matched http-request or tarpit rule.
+ 11. dresp: responses denied because of security concerns. [FRONTEND, BACKEND, 
SERVER]
+ - For http this is because of a matched http-request rule, or option 
checkcache.
+ 12. ereq: request errors. Some of the possible causes are: [FRONTEND]
+ - early termination from the client, before the request has been sent.
+ - read error from the client
+ - client timeout
+ - client closed connection
+ - various bad requests from the client.
+ - request was tarpitted.
+ 13. econ: number of requests that encountered an error trying to connect to a 
backend server. The backend stat is the sum of the stat for all servers of that 
backend, plus any connection errors not associated with a particular server 
(such as the backend having no active servers). [BACKEND, SERVER]
+ 14. eresp: response errors. srv_abrt will be counted here also. Some other 
errors are: [BACKEND, SERVER]
+ - write error on the client socket (won't be 

Re: [PATCH] bug: backend: Fix method declaration for map_get_server_hash

2014-07-07 Thread Willy Tarreau
Hello Dan,

On Fri, Jul 04, 2014 at 10:24:44PM -0700, Dan Dubovik wrote:
 Hello,
 
 Recently, we were trying to segment our account provisioning using HAProxy.
 We are having HAProxy port-NAT traffic to a backend, using the djb2 hash
 to select the backend based on the Host header. When attempting to predict
 the backend that HAProxy would select, we were unable to arrive at the
 same results as HAProxy did.
 
 We dove into the code that HAProxy used to implement the djb2 hash, and
 discovered a bug in the map_get_server_hash function declaration.  Where in
 the rest of the code, it uses an unsigned long for the hash value, the
 map_get_server_hash function uses an unsigned int.

You spotted an interesting thing, but in fact it's not that easy. Despite
the hash being declared as long in most of these functions (note I said
most, since it appears that get_server_sh() uses an int), it's mostly used
as a 32-bit quantity. chash_get_server_hash() uses an unsigned int as well.
And full_hash() reduces it to 32 bits. So in practice we should use unsigned
ints everywhere instead.

 The end result is that we have a consistent value chosen for a backend by
 HAProxy, but one that is unpredictable by a standard implementation of djb2.
 
 Attached is the patch we used that resolved this issue.

Unfortunately it will make things even worse, because not only will this
change all hashes for all deployed load balancers, which is hardly acceptable,
but additionally it will make the hash result dependent on the machine's word
size, meaning that people who are currently upgrading their old 32-bit systems
to 64-bit will have inconsistent hashing between the two.

Thus I'd rather fix all this by ensuring we're using unsigned ints everywhere
a hash result is used from backend.c. That will both maintain compatibility
with existing setups and ensure small and large systems provide the same hash
result. The easiest way to do this would be to modify gen_hash() to return
an unsigned int and to replace all unsigned long hash occurrences with
unsigned ints.

Is this something you'd be willing to do ? (it would save me an extra hour).

Additionally, since you're checking your hash results, would you be interested
in working on a utility to run from the stats socket which would give you the
selected server for a given pattern ? I've long wanted to do that but I'm not
sure how easy/complex it is now that we can hash many things. It's basically
the same as applying the LB algorithm but we want to bypass the data extraction
to always hash the same thing. Thus we could do :

get-server-hash backend1 10.20.30.40
   server1
get-server-hash backend2 /index.html
   server3

And ideally it would report the hash value, the server count (or farm's
weight) and the server's index. I've always thought it could be useful,
but never had the time to work on this. That seems pretty close to what
you're currently doing.

Best regards,
Willy




Re: Difference between Disable and soft stop

2014-07-07 Thread Pavlos Parissis
On 07/07/2014 11:49 AM, David wrote:
 Hello,
 
 I have installed HAProxy 1.5 in my RDS farm. But when I check the disable
 option for one server, this server is still active in my farm and users
 can still connect to it.
 

I assume you mean that it took a while for the server to stop receiving
traffic after it was disabled, am I right?

I have observed this only when I used TCP mode; in my case it took some
time (20 minutes) for a server to stop getting traffic. I switched (for
other reasons) to HTTP mode with keep-alive enabled and this particular
behavior doesn't occur. Have you tried enabling 'option forceclose'? I
have no clue if it will do the trick.
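As a side note on terminology: the runtime way to take a server out of rotation is the stats socket ('stats socket ... level admin' must be set in the global section). Backend and server names here are hypothetical:

```
echo "disable server back1/srv1" | socat stdio unix-connect:/var/lib/haproxy/stats
echo "enable server back1/srv1"  | socat stdio unix-connect:/var/lib/haproxy/stats
```

Disabling a server this way stops new connections from being dispatched to it, but connections already established (or kept alive) can linger, which may explain traffic continuing for a while.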


 Should I use soft stop instead? What is the difference between these
 two options?
 
 Thank you in advance for your answer.
 
 David.
 
 
 
 






Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-07 Thread Willy Tarreau
Hi James,

On Mon, Jul 07, 2014 at 03:43:38PM -0400, James Westby wrote:
 Indicate for each statistic which types may have a value for
 that statistic.
 
 Explain some of the provided statistics a little more deeply.

That's really kind, but I have some comments below :

  The statistics may be consulted either from the unix socket or from the HTTP
 -page. Both means provide a CSV format whose fields follow.
 +page. Both means provide a CSV format whose fields follow. In brackets after
 +each field are the types for which the field may take a value. Types not
 +in that list will always have a blank value for that field.
  
 -  0. pxname: proxy name
 +  0. pname: proxy name [FRONTEND, BACKEND, SERVER]
^
 here you mangled the field name, it's pxname

1. svname: service name (FRONTEND for frontend, BACKEND for backend, any 
 name
(...)
 +for server) [FRONTEND, BACKEND, SERVER]
 +  2. qcur: current queued requests. For the backend this reports the number 
 queued without a server assigned. [BACKEND, SERVER]
(...)

Please respect the 80-char limit in the doc, that's really important for
people reading it directly on servers. And as a person to whom it happens
from time to time, even over a serial port sometimes, I can tell you how
much of a pain it is to see lines wrapping. There are quite a number of
hyper-long lines starting at point 13!

However I really appreciate the amount of detail you've put there and am
really willing to get this merged. I have not verified whether there are
any other mangled field names, please double-check; it can avoid some
implementers' headaches!

Thanks!
Willy




Re: Abstract namespace sockets handling

2014-07-07 Thread hodor
Hello,

On 2014-07-02, 18:39:09, Willy Tarreau wrote:
  BTW, if abns@ bind() fails during a -sf reload, then /tmp/test1.sock of 
  the
  old HAProxy instance is unlinked/unreachable and thus cannot accept new
  connections. The new unix@ bind() succeeded, the .tmp got renamed, the 
  .bak
  was unlinked. So far so good. But when the abns@ bind() fails, then the 
  whole
  new HAProxy instance fails and leaves the old instance degraded.
  
  I do not know how to simply fix this.
 
 That's a good point. We have no link()/unlink() on abstract sockets, so
 I don't see how we'll be able to temporarily rename a socket. Since this
 is linux-only, we can consider any os-specific trick available and see if
 one is satisfying (eg: shutdown(RD) followed by listen(), etc).

I was unable to make it work that way. I used the attached code in one
console, and in the other: 

strace -e trace=bind socat abstract-listen:test,unix-tightsocklen=0 stdio

I tried a few combinations of shutdown() / listen(), the ordering, ...,
but no luck.

 So what socat does prevents us from building binary addresses (eg: random
 addresses for internal communications). But, in practice, we're making names
 out of human-readable strings, we're not building addresses out of binary
 data, so I'd be tempted to think that it would make more sense to adopt
 socat's naming method instead of padding the remaining bytes with zeroes.
 
 Still I feel concerned about future evolutions. I have a deep feeling that
 this is something we could regret sooner or later. Indeed, if we use strlen()
 over possibly binary contents to retrieve a socket address length, something
tells me that we're sacrificing the ability to later extend this scheme to
 other usages.

Indeed. I wasn't happy with using strlen() either :).

 
  The second attached patch (haproxy_correct_abns_length.diff) seems to fix 
  it.
  No idea whether this is the right way to do it.
 
 I've just found in socat's man that it supports both modes :
 
unix-tightsocklen=[0|1]
 On socket operations, pass a socket address length that does not
   include the whole  struct  sockaddr_un record  but  (besides
   other components)  only  the  relevant part of the filename or
   abstract string.
   Default is 1.
 
 and indeed it works here :
 
   $ strace -s 200 -e trace=connect socat readline 
 abstract-connect:foo,unix-tightsocklen=0
   connect(3, {sa_family=AF_FILE, path=@foo}, 110) = 0
 
 So given that, I'd prefer to stay on the current scheme and document this
 way to interface with socat instead. It seems safer and more durable to me.
 What do you think ?

Oh, I didn't know socat had such an option until you mentioned it :).
Neat. Keeping the current scheme makes sense now.

 
  I can work around all of those problems by using plain old unix sockets
  instead, but I like the idea of abstract namespace sockets :).
 
 I like it as well, but you see, another concern I have is the fact that
 any other process -not just haproxy- could be bound to an abstract socket.
 While there are well-known ports for TCP/UDP services, there's no official
 registry of well-known abstract socket addresses and this is a problematic
 point. I suspect that we'll quickly come up with a complement which is a
 random address : we would generate a random address and try to bind. Then
 we would map each abstract name to that random address. It would solve all
 the communications issues within a group of processes belonging to the same
 config space.

Also with abstract sockets, we lack any access control (apart from the
apparent possibility of collisions). Any process can bind() / connect()
to any name.

 My gut feeling is that we should use file-system for anything system-wide
 (ie: socat to haproxy), as it's the only way to safely connect two ends. But
 we could use abstract sockets inside the same process or group of processes.

 An intermediate mechanism could be to have haproxy automatically prefix
 abstract sockets with the starting process' pid or with a short random.

But that would kill the possibility of two processes inside different
chroots to communicate efficiently (without some mount --bind tricks).

(I don't have any practical example of such a setup, though :))

 So I have applied patch #1 and not #2. Instead I think we'd rather continue
 to discuss the design choices and options here.

Thanks.

 Best regards,
 Willy

Best regards,

-- 
hodor

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <assert.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int s, err, one = 1;
    struct sockaddr_un sun;


    s = socket(AF_UNIX, SOCK_STREAM, 0);
    assert(s != -1);

    memset(&sun, 0, sizeof(sun));
    sun.sun_family = AF_UNIX;
    strcpy(&sun.sun_path[1], "test");

    err = setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    assert(err != -1);

    err = bind(s, 

Re: [PATCH] DOC: expand the docs for the provided stats.

2014-07-07 Thread Willy Tarreau
Hi James,

On Mon, Jul 07, 2014 at 05:02:33PM -0400, James Westby wrote:
 1. svname: service name (FRONTEND for frontend, BACKEND for backend, 
  any name
  (...)
  +for server) [FRONTEND, BACKEND, SERVER]
  +  2. qcur: current queued requests. For the backend this reports the 
  number queued without a server assigned. [BACKEND, SERVER]
  (...)
 
  Please respect the 80-char limit on the doc, that's really important for
  people reading in directly on servers. And as a person whom it happens from
  time to time and even over a serial port sometimes, I can tell you how much
  a pain it is to see lines wrapping. There are quite a number hyper-long 
  lines
  starting at point 13!
 
Yeah, the formatting is bad, but I'm not sure what would be preferred.
Would formatting like the Flags/Reason part of the 'session state at
disconnection' section be good for this section?

Let's simply do it like this, with the continuation line below the beginning
of the text after the number :

   2. qcur: current queued requests. For the backend this reports the number
  queued without a server assigned. [BACKEND, SERVER]

In general what matters is that it's easily readable on screen. Cyril's dconv
utility already does an excellent job at converting something looking fine to
similarly-looking HTML output.

Thanks,
Willy




[PATCH] DOC: expand the docs for the provided stats.

2014-07-07 Thread James Westby
Indicate for each statistic which types may have a value for
that statistic.

Explain some of the provided statistics a little more deeply.
---
 doc/configuration.txt | 129 ++
 1 file changed, 88 insertions(+), 41 deletions(-)

Hi,

This time with improved formatting, and pxname fixed.

Thanks,

James

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 670dfee..5f1e300 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12926,45 +12926,89 @@ Unix socket.
 ---
 
 The statistics may be consulted either from the unix socket or from the HTTP
-page. Both means provide a CSV format whose fields follow.
-
-  0. pxname: proxy name
-  1. svname: service name (FRONTEND for frontend, BACKEND for backend, any name
-for server)
-  2. qcur: current queued requests
-  3. qmax: max queued requests
-  4. scur: current sessions
-  5. smax: max sessions
-  6. slim: sessions limit
-  7. stot: total sessions
-  8. bin: bytes in
-  9. bout: bytes out
- 10. dreq: denied requests
- 11. dresp: denied responses
- 12. ereq: request errors
- 13. econ: connection errors
- 14. eresp: response errors (among which srv_abrt)
- 15. wretr: retries (warning)
- 16. wredis: redispatches (warning)
+page. Both means provide a CSV format whose fields follow. In brackets after
+each field are the types for which the field may take a value. Types not
+in that list will always have a blank value for that field.
+
+  0. pxname: proxy name [FRONTEND, BACKEND, SERVER]
+  1. svname: service name (FRONTEND for frontend, BACKEND for backend, any
+ name for server) [FRONTEND, BACKEND, SERVER]
+  2. qcur: current queued requests. For the backend this reports the number
+ queued without a server assigned. [BACKEND, SERVER]
+  3. qmax: max value of qcur [BACKEND, SERVER]
+  4. scur: current sessions [FRONTEND, BACKEND, SERVER]
+  5. smax: max sessions [FRONTEND, BACKEND, SERVER]
+  6. slim: configured session limit [FRONTEND, BACKEND, SERVER]
+  7. stot: cumulative number of connections [FRONTEND, BACKEND, SERVER]
+  8. bin: bytes in [FRONTED, BACKEND, SERVER]
+  9. bout: bytes out [FRONTEND, BACKEND, SERVER]
+ 10. dreq: requests denied because of security concerns.
+ - For tcp this is because of a matched tcp-request content rule.
+ - For http this is because of a matched http-request or tarpit rule.
+ [FRONTEND, BACKEND]
+ 11. dresp: responses denied because of security concerns.
+ - For http this is because of a matched http-request rule, or
+   option checkcache.
+ [FRONTEND, BACKEND, SERVER]
+ 12. ereq: request errors. Some of the possible causes are:
+ - early termination from the client, before the request has been sent.
+ - read error from the client
+ - client timeout
+ - client closed connection
+ - various bad requests from the client.
+ - request was tarpitted.
+ [FRONTEND]
+ 13. econ: number of requests that encountered an error trying to connect to
+ a backend server. The backend stat is the sum of the stat for all servers
+ of that backend, plus any connection errors not associated with a
+ particular server (such as the backend having no active servers).
+ [BACKEND, SERVER]
+ 14. eresp: response errors. srv_abrt will be counted here also. Some other
+ errors are:
+ - write error on the client socket (won't be counted for the server stat)
+ - failure applying filters to the response.
+ [BACKEND, SERVER]
+ 15. wretr: number of times a connection to a server was retried.
+ [BACKEND, SERVER]
+ 16. wredis: number of times a request was redispatched to another server.
+ The server value counts the number of times that server was switched
+ away from. [BACKEND, SERVER]
  17. status: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
- 18. weight: server weight (server), total weight (backend)
+ [FRONTEND, BACKEND, SERVER]
+ 18. weight: server weight (server), total weight (backend) [BACKEND, SERVER]
  19. act: server is active (server), number of active servers (backend)
+ [BACKEND, SERVER]
  20. bck: server is backup (server), number of backup servers (backend)
- 21. chkfail: number of failed checks
- 22. chkdown: number of UP-DOWN transitions
- 23. lastchg: last status change (in seconds)
- 24. downtime: total downtime (in seconds)
- 25. qlimit: queue limit
+ [BACKEND, SERVER]
+ 21. chkfail: number of failed checks. (Only counts checks failed when the
+ server is up.) [SERVER]
+ 22. chkdown: number of UP-DOWN transitions. The backend counter counts
+ transitions to the whole backend being down, rather than the sum of the
+ counters for each server. [BACKEND, SERVER]
+ 23. lastchg: number of seconds since the last UP-DOWN transition
+ [BACKEND, SERVER]
+ 24. downtime: total downtime (in seconds). The value for the backend is the
+ downtime for the whole backend, not the sum of the server downtime.
+ [BACKEND, SERVER]
+ 25. 

How do you tell if a URL has a path

2014-07-07 Thread Jeffrey Scott Flesher Gmail
I want to check the URL to see if any path is passed:
http://domain.tdl
or
http://domain.tdl/
Both of these are considered not to have a path.
My problem is that I only want to rewrite the path if either of the two
matches, meaning the URL has no path. This fails:
acl has_path_uri path_beg -i /
If the URL has no path I want to add a /ww to it, as such:
http://domain.tdl/ww
so that my wthttp app will work. But if I use:
acl has_ww_uri path_beg -i /ww
reqirep ^([^\ :]*)\ /(.*) \1\ /ww/\2 if !has_ww_uri
it rewrites every URL that does not have ww in it, which is not what I
want, because it rewrites resources like CSS and images.
So how do I determine if the URL has no path?

Thanks for any help.
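One way to express "no path" directly, as a sketch (untested; the ACL name is made up): the path fetch matches the exact path, and browsers send at least "/" in the request line, so a bare "/" can be caught without path_beg matching everything.

```
frontend front-http
    bind :80
    # matches only requests whose entire path is "/"
    acl no_path path /
    redirect location /ww/ code 302 if no_path
    default_backend back1
```

A redirect avoids the reqirep pitfall above: requests for CSS and images have a real path, so no_path never matches them.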