Re: Confused by the performance result in tcp mode of haproxy

2012-09-23 Thread Godbach
Any advice on using TCP splicing with HAProxy would be greatly appreciated.

Best Regards,
Godbach



Re: Confused by the performance result in tcp mode of haproxy

2012-09-26 Thread Godbach
Hi, Willy.

Thank you very much. I will update this issue and the result of
performance test if I get any improvement later.

Best Regards,
Godbach



Failed to persist on SSL session ID

2013-01-18 Thread Godbach
Hi, all
I have tested persistence on SSL session ID with haproxy-1.5-dev16. The
main configuration of the frontend and backend is as below:
frontend fe
bind 172.22.16.112:443 ssl crt /home/fortiadc/ca-user-key.pem
ca-file /home/fortiadc/ca-root.cer #verify required crl-file
/home/fortiadc/ca-root-crl.cer
maxconn 1048576
use_backend be unless

backend be
balance roundrobin

# maximum SSL session ID length is 32 bytes.
stick-table type binary len 32 size 30k expire 30m

acl clienthello req_ssl_hello_type 1
acl serverhello rep_ssl_hello_type 2
# use tcp content accepts to detect ssl client and server hello.
tcp-request inspect-delay 5s
tcp-request content accept if clienthello

# no timeout on response inspect delay by default.
tcp-response content accept if serverhello
# SSL session ID (SSLID) may be present on a client or server hello.
# Its length is coded on 1 byte at offset 43 and its value starts
# at offset 44.

# Match and learn on request if client hello.
# should be (44, 32) if sticking to SSL session ID ?
stick on payload_lv(43,1) if clienthello

# Learn on response if server hello.
# should be (44, 32) if storing SSL session ID ?
stick store-response payload_lv(43,1) if serverhello

server 1 10.128.7.1:80 cookie cookie weight 1 maxconn 0 slowstart 0s
server 2 127.0.0.1:80 cookie cookie weight 1 maxconn 0 slowstart 0s

Both the acl and stick configurations are copied from the description of
'stick store-response' in the manual.

After testing, I found that haproxy did not store the SSL session ID
because the acl 'clienthello' never matched.

From the SSL support code, SSL_do_handshake() supplied by the OpenSSL
library is called to perform the whole SSL handshake. This means haproxy
never gets a chance to copy the TCP payload exchanged during the SSL
handshake into the session buffer, so it cannot inspect the handshake
data. Maybe we should just check the SSL hello type in the SSL record
data instead.

In addition, I want to know whether the main configuration should be
like this if I want to stick on the SSL session ID:

stick on payload_lv(44,32) if clienthello
stick store-response payload_lv(44,32) if serverhello

I am also wondering whether an incorrect configuration led to the
failure, or whether it is indeed a bug.

Best Regards,
Godbach



Re: Failed to persist on SSL session ID

2013-01-20 Thread Godbach
Hi, Emeric

Thank you for your suggestion.

There is indeed some code implementing an SSL session cache in haproxy.
But I cannot find the implementation of the option
stick on ssl_session_id
in either dev16 or dev17.

And haproxy failed to start with 'stick on ssl_session_id' set in the
configuration file. The error message is as below:
[ALERT] 020/114500 (29257) : parsing [h-ssl.cfg:33] : 'stick':
unknown fetch method 'ssl_session_id'.


Best Regards,
Godbach

2013/1/19 Emeric BRUN eb...@exceliance.fr:
  Hi,

 If you want to handle ssl offloading on haproxy and ssl session id
 persistence you should use:
 stick on ssl_session_id

 The payload_lv method is used when haproxy doesn't perform ssl offloading
 and load balances in tcp mode.


 Note: in addition, there is a lot of fixes between dev16 and dev17.

 Regards,

 Emeric


 original message-
 De: Godbach nylzhao...@gmail.com
 A: haproxy@formilux.org
 Date: Sat, 19 Jan 2013 01:47:02 +0800
 -


 Hi, all
 I have tested persistence on SSL session ID of haproxy-1.5-dev16. The
 main configuration of frontend and backend as below:
 frontend fe
   bind 172.22.16.112:443 ssl crt /home/fortiadc/ca-user-key.pem
 ca-file /home/fortiadc/ca-root.cer #verify required crl-file
 /home/fortiadc/ca-root-crl.cer
   maxconn 1048576
   use_backend be unless

 backend be
   balance roundrobin

 # maximum SSL session ID length is 32 bytes.
 stick-table type binary len 32 size 30k expire 30m

 acl clienthello req_ssl_hello_type 1
 acl serverhello rep_ssl_hello_type 2
 # use tcp content accepts to detect ssl client and server hello.
 tcp-request inspect-delay 5s
 tcp-request content accept if clienthello

 # no timeout on response inspect delay by default.
 tcp-response content accept if serverhello
   # SSL session ID (SSLID) may be present on a client or server hello.
 # Its length is coded on 1 byte at offset 43 and its value starts
 # at offset 44.

 # Match and learn on request if client hello.
 # should be (44, 32) if sticking to SSL session ID ?
 stick on payload_lv(43,1) if clienthello

 # Learn on response if server hello.
 # should be (44, 32) if storing SSL session ID ?
 stick store-response payload_lv(43,1) if serverhello

   server 1 10.128.7.1:80 cookie cookie weight 1 maxconn 0 slowstart 0s
   server 2 127.0.0.1:80 cookie cookie weight 1 maxconn 0 slowstart 0s

 Both the acl and stick configurations are copied from introduction of
 'stick store-response' in manual.

 After testing, I found that haproxy did not store the SSL session ID
 because the acl 'clienthello' never matched.

 From the SSL support code, SSL_do_handshake() supplied by the OpenSSL
 library is called to perform the whole SSL handshake. This means haproxy
 never gets a chance to copy the TCP payload exchanged during the SSL
 handshake into the session buffer, so it cannot inspect the handshake
 data. Maybe we should just check the SSL hello type in the SSL record
 data instead.

 In addition, I want to know that the main configuration should be like
 this if I want to stick on SSL session ID:

 stick on payload_lv(44,32) if clienthello
 stick store-response payload_lv(44,32) if serverhello

 I am also wondering whether an incorrect configuration led to the
 failure, or whether it is indeed a bug.

 Best Regards,
 Godbach







Re: Failed to persist on SSL session ID

2013-01-21 Thread Godbach
Hi, Emeric

Thank you very much.

I tested again just now with the following two lines in backend section:
stick-table type string len 32 size 30k expire 30m
stick on ssl_fc_session_id

The persistence on ssl sessions id worked well.

Best Regards,
Godbach



Re: Failed to persist on SSL session ID

2013-02-26 Thread Godbach
Hi Bobby1
 Maybe you forgot to set USE_OPENSSL=1 in the Makefile to enable ssl support.

Godbach

2013/2/26 bobby1 b...@yahoo.com:

 Emeric BRUN ebrun@... writes:



  Hi,

 original message-
 De: Godbach nylzhaowei@...
 A: Emeric BRUN ebrun@...
 Copie à: haproxy@...
 Date: Mon, 21 Jan 2013 11:46:46 +0800
 -


  Hi, Emeric
 
  Thank you for your suggestion.
 
  There are some codes implemented for ssl session cache in haproxy
  indeed. But I cannot find the implementation of the option
  stick on ssl_session_id
  either in dev16 or dev17.
 
  And haproxy failed to start with 'stick on ssl_session_id' set in
  configuration file. The error information as below:
  [ALERT] 020/114500 (29257) : parsing [h-ssl.cfg:33] : 'stick':
  unknown fetch method 'ssl_session_id'.

 Sorry,

 stick on ssl_fc_session_id

 
 
  Best Regards,
  Godbach

 Hi Godbach,

 Am trying ssl_fc_session_id using 1.5dev17, and I am getting this error also.

 [ALERT] 055/105232 (25450) : parsing [haproxy.cfg.sslfcsessionid:37] : 
 'stick':
 unknown fetch method 'ssl_fc_session_id'.

 My config is:


 backend appSrv1
 mode tcp
 balance roundrobin

 stick-table type string len 32 size 30k expire 15m
 stick on ssl_fc_session_id

 server s1 *:443
 server s2 *:443

 Do I need to add any parameters in the compile to get the method working?

 Thanks,
 Bobby1






Is there any plan to support OCSP to verify cert

2013-03-05 Thread Godbach
Hi, all
   OCSP (Online Certificate Status Protocol) is also used to verify
certificates. I am wondering whether there is any plan to support OCSP
in haproxy in the future.

Best Regards,
Godbach



Re: Is there any plan to support OCSP to verify cert

2013-03-05 Thread Godbach
Hi, JohnF
Thanks for your reply.

OCSP, which is supported by the openssl library and by stunnel, is
another way to validate client certificates besides CRLs. And a CRL has
the shortcoming that it must be kept up to date. So I am wondering
whether haproxy will support OCSP to verify client certificates in the
future.

Best Regards,
Godbach

2013/3/6 John Marrett jo...@zioncluster.ca:
 Godbach,

 I'm interested to better understand what you want to do with OCSP.
 Ordinarily if you present a certificate using haproxy, clients will validate
 it using methods specified in the certificate itself. If these include OCSP
 then it could potentially be used.

 In this context your question doesn't make that much sense to me, unless you
 want to validate client certificates used for authentication, or you want
 haproxy to prevalidate its certificate(s) before starting?

 What are you trying to do with OCSP that haproxy doesn't support?

 -JohnF

 On Mar 5, 2013 9:51 PM, Godbach nylzhao...@gmail.com wrote:

 Hi, all
OCSP(Online Certificate Status Protocol) is also used to verify
 certificates. I am wondering that if there is any plan to support OCSP
 in haproxy in the future.

 Best Regards,
 Godbach






Re: Failed to use the source address for outgoing connections

2013-04-22 Thread Godbach

Hi,

Yes. I have tried that and it works as expected.

The original information from my last mail is below:

 Then I put source configuration 'source 0.0.0.0 usesrc clientip' in
 backend section, the source function can work well.

Best Regards,
Godbach



Hi,

Have you tried moving the source statement directly in the backend?
(just to confirm it works as expected).

Baptiste

On Tue, Apr 23, 2013 at 6:27 AM, Godbach nylzhao...@gmail.com wrote:

Hi, all

I have tested the 'source' config in haproxy-1.5-dev18, but it didn't
work with the following line in the defaults section:
 source 0.0.0.0 usesrc clientip

Other related settings by iptables and ip rule have been set correctly.

There is some code in cfg_parse_listen() (lines 1812-1815 in the latest
commit) that initializes the source function for a new backend proxy:

 if (defproxy.conn_src.iface_name)
 curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
 curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;

The last line copies the opts value, which represents settings such as
'client', 'clientip' and so on, from defproxy to the current backend
proxy. But I was confused about why it clears the low three bits.
CO_SRC_TPROXY_MASK is defined in include/types/connection.h as below:

 /* source address settings for outgoing connections */
 enum {
 /* Tproxy exclusive values from 0 to 7 */
 CO_SRC_TPROXY_ADDR = 0x0001, /* bind to this non-local address when connecting */
 CO_SRC_TPROXY_CIP  = 0x0002, /* bind to the client's IP address when connecting */
 CO_SRC_TPROXY_CLI  = 0x0003, /* bind to the client's IP+port when connecting */
 CO_SRC_TPROXY_DYN  = 0x0004, /* bind to a dynamically computed non-local address */
 CO_SRC_TPROXY_MASK = 0x0007, /* bind to a non-local address when connecting */

 CO_SRC_BIND        = 0x0008, /* bind to a specific source address when connecting */
 };

The low three bits store the usesrc configuration; they should be
copied to the backend proxy without modification. But they were
actually cleared in the backend proxy.

Then I put source configuration 'source 0.0.0.0 usesrc clientip' in
backend section, the source function can work well.

So in my opinion, the code
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
should be modified as below:
 curproxy->conn_src.opts = defproxy.conn_src.opts;

Best Regards,
Godbach









Re: Failed to use the source address for outgoing connections

2013-04-23 Thread Godbach



Hi,

(it's strange I didn't get the original e-mail).


On Tue, Apr 23, 2013 at 6:27 AM, Godbach nylzhao...@gmail.com wrote:

Hi, all

I have tested 'source' config in haproxy-1.5-dev18, but it didn't work
with the following line in default section:
 source 0.0.0.0 usesrc clientip

Other related settings by iptables and ip rule have been set correctly.

There is some code in cfg_parse_listen() (lines 1812-1815 in the latest
commit) that initializes the source function for a new backend proxy:

 if (defproxy.conn_src.iface_name)
 curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
 curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;

The last line copies the opts value, which represents settings such as
'client', 'clientip' and so on, from defproxy to the current backend
proxy. But I was confused about why it clears the low three bits.
CO_SRC_TPROXY_MASK is defined in include/types/connection.h as below:

 /* source address settings for outgoing connections */
 enum {
 /* Tproxy exclusive values from 0 to 7 */
 CO_SRC_TPROXY_ADDR = 0x0001, /* bind to this non-local address when connecting */
 CO_SRC_TPROXY_CIP  = 0x0002, /* bind to the client's IP address when connecting */
 CO_SRC_TPROXY_CLI  = 0x0003, /* bind to the client's IP+port when connecting */
 CO_SRC_TPROXY_DYN  = 0x0004, /* bind to a dynamically computed non-local address */
 CO_SRC_TPROXY_MASK = 0x0007, /* bind to a non-local address when connecting */

 CO_SRC_BIND        = 0x0008, /* bind to a specific source address when connecting */
 };

The low three bits store the usesrc configuration; they should be
copied to the backend proxy without modification. But they were
actually cleared in the backend proxy.

Then I put source configuration 'source 0.0.0.0 usesrc clientip' in
backend section, the source function can work well.

So in my opinion, the code
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
should be modified as below:
 curproxy->conn_src.opts = defproxy.conn_src.opts;


Godbach, thanks for your analysis, you're perfectly right. I've looked
at the code and before dev15, the bind options were set in the proxy's
options, covered by PR_O_TPXY_MASK, and these bits were not cleared when
creating a new backend from the default section. Since commit ef9a3605,
we have dedicated options for this and indeed the bits are cleared.

Please confirm the modification you suggest works (specifically binding
to an explicit sources address such as source 0.0.0.0 usesrc 1.2.3.4),
and if that's OK, please send a patch which I'll happily apply. You can
reuse your analysis above as the commit message, it's perfectly clear!

Best regards,
Willy




Hi, Willy

I have tested the configuration you suggested with the following patch:

diff --git a/src/cfgparse.c b/src/cfgparse.c
index cc515a2..3514e83 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -1812,7 +1812,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)

 if (defproxy.conn_src.iface_name)
 curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
 curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
-   curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
+   curproxy->conn_src.opts = defproxy.conn_src.opts;
 }

if (curproxy->cap & PR_CAP_FE) {


With an explicit source address set, it still does not work in the
defaults section, but it works well in the backend section.

With the 'usesrc client' configuration, it works well in both the
defaults and backend sections after applying the above patch.

I will go on debugging this problem.

Best Regards,
Godbach



Re: Failed to use the source address for outgoing connections

2013-04-23 Thread Godbach



On Tue, Apr 23, 2013 at 02:44:17PM +0800, Godbach wrote:



Hi,

(it's strange I didn't get the original e-mail).


On Tue, Apr 23, 2013 at 6:27 AM, Godbach nylzhao...@gmail.com wrote:

Hi, all

I have tested the 'source' config in haproxy-1.5-dev18, but it didn't
work with the following line in the defaults section:
 source 0.0.0.0 usesrc clientip

Other related settings by iptables and ip rule have been set correctly.

There is some code in cfg_parse_listen() (lines 1812-1815 in the latest
commit) that initializes the source function for a new backend proxy:

 if (defproxy.conn_src.iface_name)
 curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
 curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;

The last line copies the opts value, which represents settings such as
'client', 'clientip' and so on, from defproxy to the current backend
proxy. But I was confused about why it clears the low three bits.
CO_SRC_TPROXY_MASK is defined in include/types/connection.h as below:

 /* source address settings for outgoing connections */
 enum {
 /* Tproxy exclusive values from 0 to 7 */
 CO_SRC_TPROXY_ADDR = 0x0001, /* bind to this non-local address when connecting */
 CO_SRC_TPROXY_CIP  = 0x0002, /* bind to the client's IP address when connecting */
 CO_SRC_TPROXY_CLI  = 0x0003, /* bind to the client's IP+port when connecting */
 CO_SRC_TPROXY_DYN  = 0x0004, /* bind to a dynamically computed non-local address */
 CO_SRC_TPROXY_MASK = 0x0007, /* bind to a non-local address when connecting */

 CO_SRC_BIND        = 0x0008, /* bind to a specific source address when connecting */
 };

The low three bits store the usesrc configuration; they should be
copied to the backend proxy without modification. But they were
actually cleared in the backend proxy.

Then I put source configuration 'source 0.0.0.0 usesrc clientip' in
backend section, the source function can work well.

So in my opinion, the code
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
should be modified as below:
 curproxy->conn_src.opts = defproxy.conn_src.opts;


Godbach, thanks for your analysis, you're perfectly right. I've looked
at the code and before dev15, the bind options were set in the proxy's
options, covered by PR_O_TPXY_MASK, and these bits were not cleared when
creating a new backend from the default section. Since commit ef9a3605,
we have dedicated options for this and indeed the bits are cleared.

Please confirm the modification you suggest works (specifically binding
to an explicit sources address such as source 0.0.0.0 usesrc 1.2.3.4),
and if that's OK, please send a patch which I'll happily apply. You can
reuse your analysis above as the commit message, it's perfectly clear!

Best regards,
Willy




Hi, Willy

I have tested the configuration you suggested with the following patch:

diff --git a/src/cfgparse.c b/src/cfgparse.c
index cc515a2..3514e83 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -1812,7 +1812,7 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 if (defproxy.conn_src.iface_name)
 curproxy->conn_src.iface_name = strdup(defproxy.conn_src.iface_name);
 curproxy->conn_src.iface_len = defproxy.conn_src.iface_len;
-   curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;
+   curproxy->conn_src.opts = defproxy.conn_src.opts;

 if (curproxy->cap & PR_CAP_FE) {


With an explicit source address set, it still does not work in the
defaults section, but it works well in the backend section.

With the 'usesrc client' configuration, it works well in both the
defaults and backend sections after applying the above patch.

I will go on debugging this problem.


Thanks, so maybe that was the reason for clearing the flags, which was
still a stupid solution in my opinion :-/

I *believe* we don't keep the address in the default proxy, this is something
to be confirmed anyway, as it would be possible that we simply don't copy it
in fact.

Best regards,
Willy





Hi, Willy

Here is the patch fixing the source function bug, attached for your
information.


Commit log as below:

 Bugfix: copy conn_src.opts and conn_src.tproxy_addr from
 defproxy to backend proxy correctly

The source function will not work with the following line in the
defaults section:
 source 0.0.0.0 usesrc clientip
even though the related iptables and ip rule settings are correct.
But it works well in the backend section.

The reason is that the operation at line 1815 in cfgparse.c:
 curproxy->conn_src.opts = defproxy.conn_src.opts & ~CO_SRC_TPROXY_MASK;


clears the three low bits of conn_src.opts, which store the configuration

CRL verification problem

2013-04-26 Thread Godbach
Hi, all

I have tested CRL verification for master of haproxy git repository
under such conditions:
* two CAs (CA1 and CA2) used for verification
* CRL file specified, but contains CRL only issued by CA1

When I send a request with a certificate issued by CA2, the
verification fails with the reason 'Unknown CA', while certificates
issued by CA1 are verified successfully. Then I appended the CRL issued
by CA2 to the CRL file, so there are now two CRLs in the file. Client
certificates issued by either CA1 or CA2 are then verified OK.

It means that if more than one CA is used and CRL verification is
enabled, the CRL issued by every CA should be included in the single
CRL file, otherwise client certificates issued by a CA that did not
supply a CRL may fail verification.
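In other words, the workaround is to keep one concatenated PEM file with a CRL for every CA listed in ca-file. A sketch of the expected layout (contents elided):

```
# ca-root-crl-both.cer: one CRL per trusted CA, concatenated in PEM form
-----BEGIN X509 CRL-----
(CRL issued by CA1)
-----END X509 CRL-----
-----BEGIN X509 CRL-----
(CRL issued by CA2)
-----END X509 CRL-----
```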

Since haproxy calls the SSL library to do CRL verification with the
following code:
X509_STORE_set_flags(store,
X509_V_FLAG_CRL_CHECK|X509_V_FLAG_CRL_CHECK_ALL);
I guess it may be the behavior of the SSL library, but I found nothing
documented about this problem.

Any help will be much appreciated.

Best Regards,
Godbach



Re: CRL verification problem

2013-04-27 Thread Godbach

 Hi, all
 
 I have tested CRL verification for master of haproxy git repository
 under such conditions:
 * two CAs (CA1 and CA2) used for verification
 * CRL file specified, but contains CRL only issued by CA1
 
 When I send request with certificate issued by CA2, the verification
 will fail with the reason of 'Unknown CA', certificates issued by CA1
 will be verified successfully. Then I append CRL issued by CA2 into CRL
 file. There are two CRLs in CRL file now. Client certificates issued by
 CA1 or CA2 can be verified OK.
 
 It means that if more than one CA is used and CRL verification is enabled,
 the CRL issued by every CA should be included in the single CRL file,
 otherwise client certificates issued by a CA that did not supply a CRL
 may fail verification.
 
 Since haproxy called SSL library to do CRL verification with the
 following code:
  X509_STORE_set_flags(store,
 X509_V_FLAG_CRL_CHECK|X509_V_FLAG_CRL_CHECK_ALL);
 I guess that it may be the behavior of SSL library but found nothing
 related with this problem.
 
 Any help will be much appreciated.
 
 Best Regards,
 Godbach
 

The main configuration for the frontend is as below:

frontend fe
bind ip:443 ssl crt 1.cer verify required ca-file ca-root-both.cer
crl-file ca-root-crl-both.cer

There are two CAs in file ca-root-both.cer in PEM format, and two CRLs
in ca-root-crl-both.cer in PEM format.

If the crl-file contains only the CRL issued by one of the two CAs,
certificates issued by the other CA fail to verify.

Has anyone else seen this problem?

Best Regards,
Godbach



Haproxy crash while initializing compression

2013-04-27 Thread Godbach
Hi, all

Haproxy built from the latest snapshot crashes while initializing
compression with the following configuration:

global
...
tune.zlib.memlevel 6
tune.zlib.windowsize 8
frontend
...
compression algo gzip deflate
...

The coredump information as below:

Core was generated by `./haproxy -f h.cfg -d'.
Program terminated with signal 11, Segmentation fault.
#0  0x00360e4066d3 in deflateReset () from /lib64/libz.so.1
Missing separate debuginfos, use: debuginfo-install
glibc-2.12-1.47.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64
krb5-libs-1.9-33.el6_3.3.x86_64 libcom_err-1.41.12-12.el6.x86_64
libselinux-2.0.94-5.3.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64
openssl-1.0.0-25.el6_3.1.x86_64 zlib-1.2.3-27.el6.x86_64
(gdb) bt
#0  0x00360e4066d3 in deflateReset () from /lib64/libz.so.1
#1  0x00360e406aa4 in deflateInit_ () from /lib64/libz.so.1
#2  0x00460d63 in deflate_init (comp_ctx=0x7fffe3b7b728,
level=9) at src/compression.c:524
#3  0x0041c8d8 in cfg_parse_listen (file=0x7fffe3b7d35d h.cfg,
linenum=27, args=0x7fffe3b7bfd0, kwm=value optimized out)
at src/cfgparse.c:5560
#4  0x0041f418 in readcfgfile (file=0x7fffe3b7d35d h.cfg) at
src/cfgparse.c:5987
#5  0x00405736 in init (argc=value optimized out, argv=value
optimized out) at src/haproxy.c:646
#6  0x00406c49 in main (argc=value optimized out,
argv=0x7fffe3b7c4b8) at src/haproxy.c:1287
(gdb) quit


Line 27 of the configuration file is the compression algo setting:
   compression algo gzip deflate

There is no crash with the following configuration:
   compression algo deflate gzip
The only difference is the order of gzip and deflate.

There is no crash either if tune.zlib.windowsize is set to its default
value of 15 while keeping the algo order 'gzip deflate'.

Zlib version is zlib-1.2.3.

Best Regards,
Godbach





Re: Haproxy crash while initializing compression

2013-04-28 Thread Godbach



Hi Godbach,

On Sun, Apr 28, 2013 at 12:16:17PM +0800, Godbach wrote:

Hi, all

Haproxy of latest snapshot will crash while initializing compression
under such configuration:

global
 ...
 tune.zlib.memlevel 6
 tune.zlib.windowsize 8
frontend
 ...
 compression algo gzip deflate
 ...

The coredump information as below:

Core was generated by `./haproxy -f h.cfg -d'.
Program terminated with signal 11, Segmentation fault.
#0  0x00360e4066d3 in deflateReset () from /lib64/libz.so.1
Missing separate debuginfos, use: debuginfo-install
glibc-2.12-1.47.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64
krb5-libs-1.9-33.el6_3.3.x86_64 libcom_err-1.41.12-12.el6.x86_64
libselinux-2.0.94-5.3.el6.x86_64 nss-softokn-freebl-3.12.9-11.el6.x86_64
openssl-1.0.0-25.el6_3.1.x86_64 zlib-1.2.3-27.el6.x86_64
(gdb) bt
#0  0x00360e4066d3 in deflateReset () from /lib64/libz.so.1
#1  0x00360e406aa4 in deflateInit_ () from /lib64/libz.so.1
#2  0x00460d63 in deflate_init (comp_ctx=0x7fffe3b7b728,
level=9) at src/compression.c:524
#3  0x0041c8d8 in cfg_parse_listen (file=0x7fffe3b7d35d h.cfg,
linenum=27, args=0x7fffe3b7bfd0, kwm=value optimized out)
 at src/cfgparse.c:5560
#4  0x0041f418 in readcfgfile (file=0x7fffe3b7d35d h.cfg) at
src/cfgparse.c:5987
#5  0x00405736 in init (argc=value optimized out, argv=value
optimized out) at src/haproxy.c:646
#6  0x00406c49 in main (argc=value optimized out,
argv=0x7fffe3b7c4b8) at src/haproxy.c:1287
(gdb) quit


Line 27 of the configuration file is the compression algo setting:
compression algo gzip deflate

There is no crash with the following configuration:
compression algo deflate gzip
The only difference is the order of gzip and deflate.

There is no crash either if tune.zlib.windowsize is set to its default
value of 15 while keeping the algo order 'gzip deflate'.

Zlib version is zlib-1.2.3.


Thank you for reporting this bug. I just found that deflate() still
uses default memory settings, but with the same allocator as gzip
which used the smaller settings you specified. The net result is
that the first call to deflate() overflows the small memory area.
Fortunately, since the compression algo is tested while parsing the
config, the error is detected very early.

I've fixed the bug, tested it and pushed it. You can use the attached
patch.

Thanks,
Willy




Hi, Willy

Thank you for replying so quickly.

Yeah, I have tested the patch just now and it works well. I viewed the
code that tests the compression algo while parsing the config yesterday.
It is indeed a wonderful design. Perfect work!


Best Regards,
Godbach



Re: SMTP load balancer.

2013-05-01 Thread Godbach



On 5/1/2013 8:39 AM, Bryan Talbot wrote:

Looks like you've listed the same IP address twice.

-Bryan


Typo, Thanks.
How have I missed that??

Eliezer



Hi, Eliezer

You can use roundrobin instead of leastconn as below:

listen tcp 0.0.0.0:225
mode tcp
option tcplog
balance roundrobin
...

Then have a try. :-)

Godbach



Re: SMTP load balancer.

2013-05-01 Thread Godbach

On 5/2/2013 5:46 AM, Godbach wrote:



On 5/1/2013 8:39 AM, Bryan Talbot wrote:

Looks like you've listed the same IP address twice.

-Bryan


Typo, Thanks.
How have I missed that??

Eliezer



Hi, Eliezer

You can use roundrobin instead of leastconn as below:

listen tcp 0.0.0.0:225
 mode tcp
 option tcplog
 balance roundrobin
 ...

Then have a try. :-)

Godbach

Thanks,
I have used the roundrobin.
There is another factor in the picture:
##start
listen tcp 0.0.0.0:225
 mode tcp
 option tcplog
 balance leastconn
 maxconn 1
 server smtp1 192.168.25.1:25 maxconn 10
 server smtp2 192.168.25.1:25 maxconn 10
##end
the maxconn causes the first server to carry a higher load and the
other one a lower load, which is what we want to achieve.

Eliezer




Hi, Eliezer

It seems that smtp1 and smtp2 have the same IP:Port:

  server smtp1 192.168.25.1:25 maxconn 10
  server smtp2 192.168.25.1:25 maxconn 10
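For completeness, a corrected sketch of that listen section with two distinct server addresses (the second address is assumed, since only one appeared in the original):

```
listen tcp 0.0.0.0:225
    mode tcp
    option tcplog
    balance leastconn
    server smtp1 192.168.25.1:25 maxconn 10
    server smtp2 192.168.25.2:25 maxconn 10  # assumed second host
```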



Re: Limit frontend bandwidth rate?

2013-05-04 Thread Godbach

On Thu, May 02, 2013 at 03:22:33PM +0800, Delta Yeh wrote:

Is server side keepalive  planned in 1.6?

it's planned for 1.5 and is the condition to release 1.5.

Willy


.


Hi, Willy

I am wondering whether the server-side keepalive you mentioned is HTTP
keepalive between haproxy and the server. After testing the latest
snapshot, HTTP keepalive already takes effect between haproxy and the
server under these conditions:
* configuration without 'option http-server-close' set
* server side has HTTP keepalive enabled

Best Regards,
Godbach



Re: Limit frontend bandwidth rate?

2013-05-04 Thread Godbach

Hi Lukas & Baptiste

Thank you very much.

It means that tunnel mode is already implemented in current haproxy.
But it cannot balance by HTTP transaction, since it only processes the
first request and response of each tcp connection.

Full keepalive support on the server side means something else.
For example we could have long-lived backend tcp connection to
serve more than one frontend users (theoretically).

If so, haproxy will try to reuse the connection between the backend and
the server. As a result, there will be fewer new backend connections
than in http-server-close mode, and the throughput of haproxy will
improve. That is what I have understood, am I right?

Best Regards,
Godbach


Hi Godbach!


I am wondering whether the server-side keepalive you mentioned is
HTTP keepalive between haproxy and the server.

That's it exactly.




After testing the latest snapshot, HTTP keepalive already takes
effect between haproxy and the server under such conditions:

What you are seeing is not http keepalive between haproxy
and the server, but is actually between the client and the
server, because the way you configured it, haproxy is in tunnel
mode.

This is from the docs:


By default HAProxy operates in a tunnel-like mode with regards
to persistent connections: for each connection it processes
the first request and forwards everything else (including
additional requests) to selected server.


Full keepalive support on the server side means something else.
For example we could have long-lived backend tcp connection to
serve more than one frontend users (theoretically).



Regards,
Lukas   





Re: Limit frontend bandwidth rate?

2013-05-04 Thread Godbach

If so, haproxy will try to reuse the connection between backend
and server. As a result, there will be less new backend
connections than in http-server-close mode and the throughput
of haproxy will be promoted.

This is my understanding, yes.

I'm not sure this will significantly enhance throughput, but it
will use CPU and Memory more efficiently.


Lukas   

Hi Lukas

Thank you.

I have tested tunnel mode and http-server-close mode under the same
conditions; the throughput in tunnel mode is higher than in
http-server-close mode.

Since there will be fewer new backend connections and more data processed
by userland in full keepalive mode, CPU should be used more efficiently,
as you said.


Best Regards,
Godbach



HTTP Request still gets response from the server with weight 0 and src persistence

2013-05-28 Thread Godbach
Hi, all

It is expected that a new HTTP request will not get a response from a
server whose weight has been changed to 0. In the latest snapshot this
works well without persistence but fails with persistence on src.

There are two servers in my backend, and persistence on src ip has been
enabled in backend. The configuration in backend as below:

backend pool
balance roundrobin
stick-table type ip size 200k expire 600s
stick on src
server 1 10.128.7.1:80 id 1 cookie srv1 weight 1 maxconn 0
slowstart 0s
server 2 10.128.7.2:80 id 2 cookie srv2 weight 1 maxconn 0
slowstart 0s

During continuous HTTP requests from the same client, the stick table is
as below:
# table: pool, type: ip, size:204800, used:1
0x17d2284: key=172.22.16.250 use=0 exp=599095 server_id=1
Then I set the weight of server 1 to 0 using the command below:
set weight pool/1 0
And I get the weight of server 1 with command:
get weight pool/1
The result is
0 (initial 1)
So I think I have set the weight of server 1 to 0 successfully. But the
response still comes from server 1 while server 2 is expected, and the
stick table stays the same.

I reviewed the code of process_sticking_rules() in session.c. The code
when a server is found is as below:

1403				ptr = stktable_data_ptr(rule->table.t, ts, STKTABLE_DT_SERVER_ID);
1404				node = eb32_lookup(&px->conf.used_server_id, stktable_data_cast(ptr, server_id));
1405				if (node) {
1406					struct server *srv;
1407
1408					srv = container_of(node, struct server, conf.id);
1409					if ((srv->state & SRV_RUNNING) ||
1410					    (px->options & PR_O_PERSIST) ||
1411					    (s->flags & SN_FORCE_PRST)) {
1412						s->flags |= SN_DIRECT | SN_ASSIGNED;
1413						s->target = &srv->obj_type;
1414					}
1415				}
1416			}
1417			stktable_touch(rule->table.t, ts, 1);

Line 1409 uses (srv->state & SRV_RUNNING) to check the server status.
If I use srv_is_usable() instead, as below:
-if ((srv->state & SRV_RUNNING) ||
+if (srv_is_usable(srv->state, srv->eweight) ||

the new request will get a response from server 2 once the weight of
server 1 is set to 0. But it seems to be just a workaround.

Since the manual of haproxy about 'set weight' says that:
A typical usage of this command is to disable a server during an update
by setting its weight to zero.

I am wondering whether the SRV_RUNNING flag of the server should be
cleared when its weight is set to 0.

-- 
Best Regards,
Godbach



Re: HTTP Request still gets response from the server with weight 0 and src persistence

2013-05-29 Thread Godbach

Hi Baptiste

Thanks for your reply.

I am using the balance roundrobin algorithm and sticking on src, not the
balance source algorithm. The configuration was presented in my
first mail as below:


  backend pool
  balance roundrobin
  stick-table type ip size 200k expire 600s
  stick on src
  server 1 10.128.7.1:80 id 1 cookie srv1 weight 1 maxconn 0
 slowstart 0s
  server 2 10.128.7.2:80 id 2 cookie srv2 weight 1 maxconn 0
 slowstart 0s


Best Regards,
Godbach

On 2013/5/29 13:35, Baptiste wrote:

Hi Godbach,

Before reading the HAProxy source code, it is worth reading the
configuration guide for the options you use.
E.g., the balance source algorithm section would tell you that:

  This algorithm is static by default, which means that changing a
server's weight on the fly will have no effect, but this can be
changed using hash-type.

Please update your configuration following the recommendation above
and let us know your feedback.

Baptiste









--
Best Regards,
Godbach



Re: HTTP Request still gets response from the server with weight 0 and src persistence

2013-05-29 Thread Godbach

Hi Baptiste

It doesn't matter. :-)

When the weight of a server is set to 0 with the balance roundrobin
algorithm, srv->eweight is updated to 0 and fwrr_update_server_weight()
(lb_fwrr.c) will be called, as below:

static void fwrr_update_server_weight(struct server *srv)
{
	...
	old_state = srv_is_usable(srv->prev_state, srv->prev_eweight);
	new_state = srv_is_usable(srv->state, srv->eweight);

	if (!old_state && !new_state) {
		srv->prev_state = srv->state;
		srv->prev_eweight = srv->eweight;
		return;
	}
	else if (!old_state && new_state) {
		fwrr_set_server_status_up(srv);
		return;
	}
	else if (old_state && !new_state) {
		fwrr_set_server_status_down(srv);
		return;
	}
	...
}

Since srv->eweight is 0, new_state will also be 0, so
fwrr_set_server_status_down() will be called. In the end the server will
be removed from the weight tree by fwrr_dequeue_srv() and from the
lb_tree by fwrr_remove_from_tree(). But srv->state has not been updated
and still has SRV_RUNNING set. As a result, when the same server is
selected by the sticking rules, it will be used again.



AH, sorry, my mistake.
I read your mail too quickly.

Baptiste

Re: HTTP Request still gets response from the server with weight 0 and src persistence

2013-05-29 Thread Godbach

Hi Baptiste

Yeah, I got it. Thank you very much for your explanation.

Best Regards,
Godbach


Actually, this is the purpose of dropping a weight to 0: being able to
maintain sticky sessions.
If you want to shut down your server abruptly, preventing everybody
from accessing it, use the disable keyword.

Baptiste





[patch] CLEANUP: session: remove event_accept() which was not used anymore

2013-06-19 Thread Godbach
Hi, Willy

Since event_accept() is not used any more in the latest snapshot, here is
a patch in the attachment that removes this function, for your information.

The commit log is as follows:

CLEANUP: session: remove event_accept() which was not used anymore

Remove event_accept() in include/proto/proto_http.h and use correct
function name in other two files instead of event_accept().

-- 
Best Regards,
Godbach
From 949b6e22353260b8d36178cb704bb88d4564894a Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Thu, 20 Jun 2013 13:28:38 +0800
Subject: [PATCH] CLEANUP: session: remove event_accept() which was not used
 anymore

Remove event_accept() in include/proto/proto_http.h and use correct function
name in other two files instead of event_accept().

Signed-off-by: Godbach nylzhao...@gmail.com
---
 include/proto/proto_http.h |    1 -
 include/types/session.h    |    2 +-
 src/peers.c                |    4 ++--
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/proto/proto_http.h b/include/proto/proto_http.h
index 24e3581..979219e 100644
--- a/include/proto/proto_http.h
+++ b/include/proto/proto_http.h
@@ -63,7 +63,6 @@ extern char *get_http_auth_buff;
 #define HTTP_IS_TOKEN(x) (http_is_token[(unsigned char)(x)])
 #define HTTP_IS_VER_TOKEN(x) (http_is_ver_token[(unsigned char)(x)])
 
-int event_accept(int fd);
 int process_cli(struct session *t);
 int process_srv_data(struct session *t);
 int process_srv_conn(struct session *t);
diff --git a/include/types/session.h b/include/types/session.h
index 00ed4cb..42d37db 100644
--- a/include/types/session.h
+++ b/include/types/session.h
@@ -97,7 +97,7 @@
 #define SN_BE_TRACK_ANY 0x00E0  /* union of all SN_BE_TRACK_* above */
 
 
-/* WARNING: if new fields are added, they must be initialized in event_accept()
+/* WARNING: if new fields are added, they must be initialized in session_accept()
  * and freed in session_free() !
  */
 
diff --git a/src/peers.c b/src/peers.c
index 998e61d..83781ba 100644
--- a/src/peers.c
+++ b/src/peers.c
@@ -1094,7 +1094,7 @@ static struct session *peer_session_create(struct peer *peer, struct peer_sessio
 	struct task *t;
 
 	if ((s = pool_alloc2(pool2_session)) == NULL) { /* disable this proxy for a while */
-		Alert("out of memory in event_accept().\n");
+		Alert("out of memory in peer_session_create().\n");
 		goto out_close;
 	}
 
@@ -1113,7 +1113,7 @@ static struct session *peer_session_create(struct peer *peer, struct peer_sessio
 	 * it as soon as possible, which means closing it immediately for TCP.
 	 */
 	if ((t = task_new()) == NULL) { /* disable this proxy for a while */
-		Alert("out of memory in event_accept().\n");
+		Alert("out of memory in peer_session_create().\n");
 		goto out_free_session;
 	}
 
-- 
1.7.7



Re: 'SSL handshake failure' errors

2013-06-20 Thread Godbach

Hi Merton,

A good way to find out what error happens is to capture the packets
during the SSL handshake with tcpdump or wireshark on your client. I have
used this method when debugging the SSL feature in haproxy.


FYI.

Best Regards,
Godbach

On 2013/6/20 1:46, Merton Lister wrote:

Thank you Lukas. We will see whether SSLv3 improves things.

Best,

Merton


On Wed, Jun 19, 2013 at 1:15 AM, Lukas Tribus luky...@hotmail.com
mailto:luky...@hotmail.com wrote:

Hi Merton!


don't forget to CC the mailing-list :)


  Out of the 5 possible causes you listed, we probably can't do much
  about the other ones. But we can control the above two from our
end. I
  suppose that most 'modern' browsers nowadays should be able to do TLS
  v1.0, and SSLv3 is considered as a weaker choice (But I wonder if it
  will make more compatible for clients to support both TLSv1.0 and
  SSLv3?). The specific ciphers we've chosen is to speed up the SSL but
  it might 'screen out' some clients.

The issue is probably with embedded, mobile and outdated browsers.
If you have a 5 year old Windows CE phone, chances are it will connect
in SSLv3 only (for example).



  We also see in the log that some clients connected/handshake
  successfully initially on some page, but failed the handshake in
  subsequent requests to other parts of the site.

Keep in mind that a browsers may open a connection to accelerate a
_possible_ future HTTP transaction - and if the users doesn't request
another page, the connection will just be dropped.

Those optimizations in browsers can trigger warnings on the server-side.



  Any suggestion on making SSL as much compatible as possible while
  keeping it fast?

You may enable SSLv3 again and monitor the log.



Regards,

Lukas







About half close and reset connection by client

2013-06-20 Thread Godbach
Hi, Willy

I have noticed that the half-closed timeout appears in the ROADMAP file.
That's the issue I tested several days ago.

I have done a test under such conditions:
1) block the response from server by iptables rules.
2) client closes connection after sending request to haproxy ASAP.

and enabled following options in haproxy:
option http-server-close
option http-pretend-keepalive

I want to test whether haproxy will close both frontend and backend
connections under the above conditions. No matter I closed connection
from client with FIN or RST packet, the backend connection will keep
established and not be closed until the related timeout expired. The
only difference is that the connection will disappear when closed with
RST but enter into CLOSE_WAIT status when closed with FIN. The latter
should be so called half closed.

After reviewing the code and testing, it seems that haproxy does not
care about events occurring on the frontend connection until it gets a
response from the related backend connection. As a result, both FIN and
RST packets from the client will be ignored. It can be harmful for servers
to keep a lot of established connections in this test condition.

I am wondering whether my analysis is correct. If so, it should be
necessary to watch for events on the frontend connection even when haproxy
has not yet got a response from the backend server.

-- 
Best Regards,
Godbach



Re: About half close and reset connection by client

2013-06-20 Thread Godbach
On 2013/6/20 21:31, Godbach wrote:
 On 2013/6/20 21:18, Willy Tarreau wrote:
 Hi Godbach,


 Your analysis is correct, but you missed one case which is designed to
 solve your issue : if you enable option abortonclose, then the close
 from the client will be propagated to the other side.

 Best regards,
 Willy


 
 Hi, Willy
 
 Thank you for your time.
 
 Yes, it works as expected with option abortonclose enabled. I should
 read the configuration manual more carefully.
 
 Best Regards,
 Godbach
 

Hi, Willy

There is a further test on option abortonclose and nolinger that I
have done under the condition that the response from the server is blocked
with an iptables rule, as follows:

1. only enable abortonclose
Client aborts the request (without reading) with FIN: haproxy will not
care about the event on the frontend connection, which will be closed when
the related timeout expires.

Client aborts the request with RST: haproxy will close the frontend
connection ASAP, and the backend connection in haproxy enters
FIN_WAIT1 status. Haproxy has sent a FIN to the server but has not
received a FIN from the server.

2. Enable both abortonclose and nolinger
No matter whether FIN or RST is sent by the client to abort the request,
both frontend and backend connections will be reset ASAP.

So in my scenario, I should enable both abortonclose and nolinger. There
is advice on using nolinger in the manual:

For this reason, it is not recommended to use this option when not
absolutely needed. You know that you need it when you have thousands of
FIN_WAIT1 sessions on your system (TIME_WAIT ones do not count).

It means that I can enable option nolinger if there are a lot of FIN_WAIT1
sessions, in other words, if the servers are slow.

-- 
Best Regards,
Godbach



Re: About half close and reset connection by client

2013-06-21 Thread Godbach

Hi Willy,
Thanks for your patch.

On 2013/6/21 14:23, Willy Tarreau wrote:


Warning, this is not an abort from the client! A FIN is just a
half close of the write channel, it just means that the client
has nothing else to write and still seems interested by the
response. A number of script-based monitoring systems will
behave exactly like this. For example, you can have it using
this simple script :

printf "GET /test HTTP/1.0\r\n\r\n" | nc 127.0.0.1 80


Yes, it is just a half close indeed; we'd better keep half close
behaving as it should.



This one could be improved in my opinion. We should be able to
detect that we're closing because of an error and forward an RST
instead. OK I've done it in the attached patch.
...

No really you should not use nolinger. In fact you *could* use it in
the backend, but you must never use it in the frontend, otherwise
your clients will receive truncated objects, because it's what it's
made for : destroy unacked socket buffers on close. This is how we
can manage to send an RST instead of a FIN.

Best regards,
Willy

I have tested this patch and it works as expected without option
nolinger enabled. The RST can be forwarded to the server if haproxy gets
an RST from the client. Setting SI_FL_NOLINGER on s->req->cons->flags is
a smart way to achieve the same result as nolinger does.

In a word, with abortonclose enabled haproxy will forward to the server
the same kind of packet, FIN or RST, that it received from the client. So
it's not necessary for me to enable option nolinger now.

--
Best Regards,
Godbach



[PATCH] BUG/MINOR: deinit: free fdinfo while doing cleanup

2013-06-26 Thread Godbach
Hi Willy,

The pointer fdinfo is allocated memory when haproxy starts, but it is
not freed when haproxy exits. It should be a minor bug, and there is a
patch in the attachment for your information.

The commit log is as follows:

Both fdinfo and fdtab are allocated memory in init() while haproxy is
starting, but only fdtab was freed in deinit(), fdinfo should also be freed.

Best Regards,
Godbach
From b0c51207cb74ec01b3cdb0f79b3b60f359912a4a Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 26 Jun 2013 16:49:51 +0800
Subject: [PATCH] BUG/MINOR: deinit: free fdinfo while doing cleanup

Both fdinfo and fdtab are allocated memory in init() while haproxy is starting,
but only fdtab is freed in deinit(), fdinfo should also be freed.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/haproxy.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index ac9fba1..ec9f513 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1198,6 +1198,7 @@ void deinit(void)
free(global.pidfile); global.pidfile = NULL;
free(global.node);global.node = NULL;
free(global.desc);global.desc = NULL;
+   free(fdinfo); fdinfo  = NULL;
free(fdtab);  fdtab   = NULL;
free(oldpids);oldpids = NULL;
free(global_listener_queue_task); global_listener_queue_task = NULL;
-- 
1.7.7



[patch] Two minor patches about buffer and typo in DOC

2013-07-01 Thread Godbach
Hi Willy,
There are two patches for your information; patch 0002 is just a typo
fix.

For patch 0001, the original code confuses me: why is bitwise OR used
to check whether the buffer is empty, as below:
static inline int buffer_not_empty(const struct buffer *buf)
{
	return buf->i | buf->o;
}

In my opinion, buffer_not_empty() returns a bool type, and logical OR is
a better choice, as follows:
static inline int buffer_not_empty(const struct buffer *buf)
{
	return buf->i || buf->o;
}

Best Regards,
Godbach


From ecfcee76b3ac4db843dc3c85f168a2c789a4c653 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Tue, 2 Jul 2013 01:03:15 +0800
Subject: [PATCH 1/2] OPTIM: buffer: use logical OR to check if the buffer is
 empty or not

Logical OR should be used instead of bitwise OR in buffer_not_empty() since the
function returns a bool type indeed.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 include/common/buffer.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/common/buffer.h b/include/common/buffer.h
index 18ced91..1041ddf 100644
--- a/include/common/buffer.h
+++ b/include/common/buffer.h
@@ -154,7 +154,7 @@ static inline int buffer_len(const struct buffer *buf)
 /* Return non-zero only if the buffer is not empty */
 static inline int buffer_not_empty(const struct buffer *buf)
 {
-	return buf->i | buf->o;
+	return buf->i || buf->o;
 }
 
 /* Return non-zero only if the buffer is empty */
-- 
1.8.3.1

From 4ba54922f46021f18421633c521bbc9e5c5d4d31 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Tue, 2 Jul 2013 01:19:15 +0800
Subject: [PATCH 2/2] DOC: minor typo fix in documentation

http-reqsponse = http-response

Signed-off-by: Godbach nylzhao...@gmail.com
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8feee6e..5140ff1 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2875,7 +2875,7 @@ http-response { allow | deny | add-header <name> <fmt> | set-nice <nice> |
 
   There is no limit to the number of http-response statements per instance.
 
-  It is important to know that http-reqsponse rules are processed very early in
+  It is important to know that http-response rules are processed very early in
   the HTTP processing, before reqdel or reqrep rules. That way, headers
   added by add-header/set-header are visible by almost all further ACL
   rules.
-- 
1.8.3.1



Re: [patch] Two minor patches about buffer and typo in DOC

2013-07-01 Thread Godbach



No it's not an error, it's intentional. The result is the same
(e.g.: only i==0 && o==0 will return 0), except that the bitwise
OR produces more efficient code than the logical one, which
depending on the compiler and code will produce conditional jumps
and explicit setting to 1 or 0. Since a boolean true is anything
non-zero, we don't care about the output value and we can safely
use a bitwise OR.

Here's a typical example :

 int bitwise_or(int a, int b)
 {
 return a | b;
 }

 int logical_or(int a, int b)
 {
 return a || b;
 }

The ASM code gives this with gcc 4.2 in x86 mode :

 bitwise_or:
    0:   8b 44 24 08             mov    0x8(%esp),%eax
    4:   0b 44 24 04             or     0x4(%esp),%eax
    8:   c3                      ret
    9:   8d 76 00                lea    0x0(%esi),%esi

 000c <logical_or>:
    c:   8b 44 24 04             mov    0x4(%esp),%eax
   10:   85 c0                   test   %eax,%eax
   12:   74 08                   je     1c <logical_or+0x10>
   14:   b8 01 00 00 00          mov    $0x1,%eax
   19:   c3                      ret
   1a:   89 f6                   mov    %esi,%esi
   1c:   31 c0                   xor    %eax,%eax
   1e:   83 7c 24 08 00          cmpl   $0x0,0x8(%esp)
   23:   0f 95 c0                setne  %al
   26:   c3                      ret

Two instructions for the first one, 8 for the second. Some compiler
versions produce better code (especially the older ones), it really
depends in fact. Anyway the first one is always better since it does
not have to build new values.

Best regards,
Willy



Hi, Willy

Thanks for your detailed analysis. Cool!

I have observed some optimizations in HAProxy before; this is a new one to me.

Best Regards,
Godbach

--
Best Regards,
Godbach




Performance of concurrent connections decreases from 1.5-dev7 to 1.5-dev17

2013-07-05 Thread Godbach
Hi Willy,

I have done a concurrency test for both haproxy 1.5-dev7 and 1.5-dev17.
The number of connections decreases in haproxy-1.5-dev17 compared to
haproxy-1.5-dev7 under the same conditions and the same configuration
file.

I have gathered the memory usage with the free command during the test,
and the statistics are as follows:

Concurrent   Mem/dev7   Mem/dev17
25,000       0.55GB     0.7GB
50,000       1.1GB      1.4GB
100,000      2.2GB      2.8GB

Yes, the memory usage in dev17 is more than that in dev7. But I have
compared memory usage for each session allocated by memory pool between
these two versions, and dev17 is slightly less than dev7.

It seems that dev17 may use some additional memory allocated and freed
outside the pools. Any suggestions would be much appreciated. Although
there are new features such as SSL and HTTP compression in dev17, I have
not compiled them into haproxy.

Here is some key information about my system:
kernel: 2.6.38.8 x86_64
CPU: Intel(R) Celeron(R) CPU G540 @ 2.50GHz, two cores
Mem: 4G, 3.4G free

Two instances of haproxy run with the same configuration but with
different bind IPs:
global
node http-vlan601.0
pidfile xxx
stats socket xxx
maxconn 20
tune.bufsize 8030
tune.maxrewrite 1024
daemon
quiet

defaults
mode http
option http-server-close
option abortonclose
option http-pretend-keepalive
timeout http-keep-alive 500s
timeout http-request 500s
option splice-auto
timeout client 500s
timeout server 500s
timeout connect 500s
timeout queue 500s
retries 0

frontend fe
bind 10.61.106.1:80
maxconn 20
use_backend be unless

backend be
balance roundrobin
hash-type consistent
srv1...
srv2...
[40 servers]


Best Regards,
Godbach



Re: Performance of concurrent connections decreases from 1.5-dev7 to 1.5-dev17

2013-07-06 Thread Godbach
On 2013/7/6 16:52, Willy Tarreau wrote:
 On Sat, Jul 06, 2013 at 04:31:49PM +0800, Godbach wrote:
 I've got what you meant. Once allocated from the system, haproxy will not
 release the memory until it receives SIGQUIT or does a soft stop. If less
 memory usage at runtime is expected, just decrease the maxconn.

 In addition, can I consider using dlmalloc instead of CALLOC/MALLOC, as
 you suggested, to get better memory management in production?
 
 Yes, and for this you just need to put dlmalloc.c to a location pointed to
 by DLMALLOC_SRC (default: src/dlmalloc.c) and build using USE_DLMALLOC=1 :
 
cp ~/dlmalloc.c src/
make TARGET=linux2628 USE_DLMALLOC=1
 
 That said, the only benefit (which is important in my usage) is that the
 calls to free() upon SIGQUIT or SIGUSR1 will really release the unused
 memory, while with traditional malloc(), it will only happen once the
 top of the memory is released.
 
 Regards,
 Willy
 
 

I got it. Thank you.

-- 
Best Regards,
Godbach



Re: SSL Handshake errors

2013-07-08 Thread Godbach

On 2013/7/8 14:16, Andrei Marinescu wrote:

Hello Willy,

Thank you for your answer! I've attached a dump with two requests from
the same ip. First one failed with Connection closed during SSL
handshake, the second one failed with Timeout during SSL handshake.

I've translated the .cap file with tcpdump -qns 0 -X -r file.cap >
translated.cap in order to make the dump readable and extract the two
requests. If the original dump is needed, let me know and I'll attach it
a.s.a.p.



Hi, Andrei

You'd better supply the original pcap file instead, not the readable format.

--
Best Regards,
Godbach



Re: Confused by the behaviour of calloc during init() in haproxy.c

2013-07-18 Thread Godbach

Hi Willy,

On 2013/7/19 13:32, Willy Tarreau wrote:

Hi Godbach,

On Fri, Jul 19, 2013 at 12:21:23PM +0800, Godbach wrote:

You should strace it. I'm sure you'll see an mmap() call that matches
the size of your allocation, meaning that the libc has decided to use
this instead of sbrk() to allocate the area. Then by definition all
the area will return zeroes, so the libc does not need to perform a
memset on it. The issue however is that we believe the memory is granted
but we can fail to use it on low memory condition.

If this is what happens, I'd prefer that we explicitly use memset on these
critical elements such as fdtab etc... to guarantee the memory is allocated.

Best regards,
Willy




Here is part of the strace output during haproxy startup:

open("h.cfg", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1985, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
= 0x7fadfa0e

read(3, "global\n\tnode vs_1\n\tpidfile /tmp/"..., 4096) = 1985
read(3, "", 4096)   = 0
close(3)= 0
munmap(0x7fadfa0e, 4096)= 0
brk(0)  = 0x164d000
brk(0x166f000)  = 0x166f000
mmap(NULL, 41947136, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf78c
mmap(NULL, 62918656, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf3cbf000
mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf32be000
mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf28bd000

epoll_create(2621472)   = 3
mmap(NULL, 31461376, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf0abc000


The allocation of memory related to 40Mbytes and 60Mbytes should be in 
these two lines:


mmap(NULL, 41947136, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf78c
mmap(NULL, 62918656, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fadf3cbf000


Yes, as you said, mmap is used to allocate the areas. Is that what you
meant by libc not having called memset yet?


--
Best Regards,
Godbach



Re: Confused by the behaviour of calloc during init() in haproxy.c

2013-07-19 Thread Godbach

On 2013/7/19 14:34, Willy Tarreau wrote:

On Fri, Jul 19, 2013 at 01:55:43PM +0800, Godbach wrote:

Hi Willy,

Here is part of the strace output during haproxy startup:

open("h.cfg", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1985, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x7fadfa0e
read(3, "global\n\tnode vs_1\n\tpidfile /tmp/"..., 4096) = 1985
read(3, "", 4096)   = 0
close(3)= 0
munmap(0x7fadfa0e, 4096)= 0
brk(0)  = 0x164d000
brk(0x166f000)  = 0x166f000
mmap(NULL, 41947136, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf78c
mmap(NULL, 62918656, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf3cbf000
mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf32be000
mmap(NULL, 10489856, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf28bd000
epoll_create(2621472)   = 3
mmap(NULL, 31461376, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf0abc000

The allocation of memory related to 40Mbytes and 60Mbytes should be in
these two lines:

mmap(NULL, 41947136, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf78c
mmap(NULL, 62918656, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
-1, 0) = 0x7fadf3cbf000

Yes, as you said, mmap is used to allocate the areas. Is that what you
meant by libc not having called memset yet?


Yes exactly. So maybe we should change this. I had a draft a long time
ago involving a safe_malloc function that would allocate and clear
an area. Maybe it would be time to switch to using such a thing for all
the critical parts of the initialization code.

Regards,
Willy




Oh, one thing may have been left out of my first mail:


Then I ran a simple program which also called calloc to allocate 300 Mbytes
of memory, and the result was as below:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17797 root  20   0  304m 300m  332 S  0.0  7.7   0:00.15 tmem
The RES is 300M, just as it was supposed to be.


The simple test program 'tmem' I wrote just allocates 300 Mbytes of
memory with calloc; mmap is also used to do the allocation. The strace
result is as below:


execve("./tmem", ["./tmem"], [/* 33 vars */]) = 0
brk(0)  = 0xd22000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
= 0x7fcb67263000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or
directory)

open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=114390, ...}) = 0
mmap(NULL, 114390, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fcb67247000
close(3)= 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3,
"\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\260\27B\25?\0\0\0"...,
832) = 832

fstat(3, {st_mode=S_IFREG|0755, st_size=2076800, ...}) = 0
mmap(0x3f1540, 3896632, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x3f1540

mprotect(0x3f155ad000, 2097152, PROT_NONE) = 0
mmap(0x3f157ad000, 24576, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1ad000) = 0x3f157ad000
mmap(0x3f157b3000, 17720, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x3f157b3000

close(3)= 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
= 0x7fcb67246000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
= 0x7fcb67245000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) 
= 0x7fcb67244000

arch_prctl(ARCH_SET_FS, 0x7fcb67245700) = 0
mprotect(0x3f157ad000, 16384, PROT_READ) = 0
mprotect(0x3f14e21000, 4096, PROT_READ) = 0
munmap(0x7fcb67247000, 114390)  = 0
mmap(NULL, 314576896, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, 
-1, 0) = 0x7fcb54643000

pause(

The last line shows that mmap is used. Since this program gets the full
resident size it is supposed to, this result conflicts with the
conclusion we reached before.


The test code is as below:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MEMSIZE (300 * 1024 * 1024)

int main(int argc, char *argv[])
{
char *p = NULL;

p = calloc(1, MEMSIZE);
pause();
free(p);
return 0;
}




--
Best Regards,
Godbach



Roundrobin can not work well when a server's weight is 256

2013-07-20 Thread Godbach
Hi Willy,

I have done a test of haproxy on the latest snapshot, with two servers
and the following key configuration:
lb algo is roundrobin
server A: weight 128
server B: weight 256

I sent 9 requests, and the expected order of responses from the servers
was:
BBABBABBA
But I got this unexpected order instead:
ABBAA

Indeed, I had noticed a problem in the implementation of roundrobin
yesterday, before this test, and the test verified the problem. The
description is as below:

srv->eweight will exceed SRV_EWGHT_MAX according to the formula:
srv->eweight = srv->uweight * BE_WEIGHT_SCALE;

Since uweight can reach 256 and BE_WEIGHT_SCALE equals 16, the max
value of eweight should be 256*16. But there is a macro named
SRV_EWGHT_MAX which equals 255*16 (SRV_UWGHT_MAX * BE_WEIGHT_SCALE). And
when a server is inserted into the round robin tree during initialization,
fwrr_queue_by_weight() is called:

static inline void fwrr_queue_by_weight(struct eb_root *root, struct
server *s)
{
	s->lb_node.key = SRV_EWGHT_MAX - s->eweight;
	eb32_insert(root, &s->lb_node);
	s->lb_tree = root;
}

When the server's weight is 256, eweight is 256*16, which is larger than
SRV_EWGHT_MAX. As a result, the key value in the lb tree wraps around
close to UINT_MAX as an unsigned value, so the server will not be elected
as the first server to process requests. That is why roundrobin could not
work well in my test.

There is a macro SRV_UWGHT_MAX which equals 255, as below:
#define SRV_UWGHT_RANGE 256
#define SRV_UWGHT_MAX   (SRV_UWGHT_RANGE - 1)

I want to know whether the server's weight should be no larger than
SRV_UWGHT_MAX (255); however, the max value allowed is currently 256 in
cfgparse.c:
else if (!strcmp(args[cur_arg], "weight")) {
	int w;
	w = atol(args[cur_arg + 1]);
	if (w < 0 || w > 256) {

If so, I will send a patch to fix it; otherwise it should be fixed in another way.

-- 
Best Regards,
Godbach



Re: Roundrobin can not work well when a server's weight is 256

2013-07-21 Thread Godbach

On 2013-7-21 16:53, Willy Tarreau wrote:

Hi Godbach,

That's quite a good analysis. I'm realizing that during the development
of the algorithm, the maximum weight was 255, and it was later changed to
256. The test code that was used for this is still in tests/filltab25.c if
you're curious. It displays in which order servers are picked, and applies
random weight changes on the fly.

I carefully checked and for me we can set SRV_UWGHT_MAX to SRV_UWGHT_RANGE.
It will have the effect of limiting the maximum number of servers a full
weight to 4095 instead of 4128 (so please update the doc for this). It is
also used in the leastconn algorithm, and similarly, the effective max
number of connections per server without overflowing will be limited to
1048575 instead of 1052688 (this is not a big issue).

So yeah, please go ahead and send a fix for this. Please also replace
4128 with 4095 in the documentation!

Best regards,
Willy



Hi Willy,

Do you mean that we set SRV_UWGHT_MAX to SRV_UWGHT_RANGE in the function
fwrr_queue_by_weight for roundrobin and the similar function for leastconn:

static inline void fwrr_queue_by_weight(struct eb_root *root, struct
server *s)
{
	s->lb_node.key = SRV_EWGHT_MAX - s->eweight;
	eb32_insert(root, &s->lb_node);
	s->lb_tree = root;
}

I had already tried this before I sent the first mail, but the result
was still not as expected, so I want to know whether the check you did
was a real test or only a code review. If it was a test, I will
do the test again later and send the patch if it works well.

--
Best Regards,
Godbach




Re: Roundrobin can not work well when a server's weight is 256

2013-07-21 Thread Godbach
On 2013-7-21 18:16, Godbach wrote:
 On 2013-7-21 16:53, Willy Tarreau wrote:
 Hi Godbach,

 That's quite a good analysis. I'm realizing that during the development
 of the algorithm, the maximum weight was 255, and it was later
 changed to
 256. The test code that was used for this is still in
 tests/filltab25.c if
 you're curious. It displays in which order servers are picked, and
 applies
 random weight changes on the fly.

 I carefully checked and for me we can set SRV_UWGHT_MAX to
 SRV_UWGHT_RANGE.
 It will have the effect of limiting the maximum number of servers a full
 weight to 4095 instead of 4128 (so please update the doc for this).
 It is
 also used in the leastconn algorithm, and similarly, the effective max
 number of connections per server without overflowing will be limited to
 1048575 instead of 1052688 (this is not a big issue).

 So yeah, please go ahead and send a fix for this. Please also replace
 4128 with 4095 in the documentation!

 Best regards,
 Willy


 Hi Willy,

 Do you mean that we set SRV_UWEIGHT_MAX to SRV_UWEIGHT_RANGE in function
 fwrr_queue_by_weight for roundrobin and similar function for leastconn:

 static inline void fwrr_queue_by_weight(struct eb_root *root, struct
 server *s)
 {
 s->lb_node.key = SRV_EWGHT_MAX - s->eweight;
 eb32_insert(root, &s->lb_node);
 s->lb_tree = root;
 }

 I had already tried this before I sent the first mail, but the result
 was still not as expected, so I want to know whether the check you did
 was a real test or only a code review. If it was a test, I will
 do the test again later and send the patch if it works well.

Oh, I guess you meant setting SRV_EWGHT_MAX to SRV_EWGHT_RANGE,
not SRV_UWGHT_*.

-- 
Best Regards,
Godbach




Re: Roundrobin can not work well when a server's weight is 256

2013-07-21 Thread Godbach

On 2013-7-21 21:17, Willy Tarreau wrote:

No, I really mean UWGHT_MAX set to UWGHT_RANGE, since it is used to
set the highest user weight that can be configured.

Willy




Yeah, I get what you meant now. With the following fix:

diff --git a/include/types/server.h b/include/types/server.h
index b58a062..062b06e 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -72,7 +72,7 @@

 /* various constants */
 #define SRV_UWGHT_RANGE 256
-#define SRV_UWGHT_MAX   (SRV_UWGHT_RANGE - 1)
+#define SRV_UWGHT_MAX   SRV_UWGHT_RANGE
 #define SRV_EWGHT_RANGE (SRV_UWGHT_RANGE * BE_WEIGHT_SCALE)
 #define SRV_EWGHT_MAX   (SRV_UWGHT_MAX   * BE_WEIGHT_SCALE)

Roundrobin can work well with server's weight 256 after testing again.

So I should post two patches: one for the source code and the other for
the documentation.
There are also related comments on roundrobin and/or leastconn to be
fixed together.


I will finish this job carefully and post patches ASAP.

--
Best Regards,
Godbach




Re: Roundrobin can not work well when a server's weight is 256

2013-07-21 Thread Godbach

On 2013-7-21 16:53, Willy Tarreau wrote:

Hi Godbach,

I carefully checked and for me we can set SRV_UWGHT_MAX to SRV_UWGHT_RANGE.
It will have the effect of limiting the maximum number of servers a full
weight to 4095 instead of 4128 (so please update the doc for this). It is
also used in the leastconn algorithm, and similarly, the effective max
number of connections per server without overflowing will be limited to
1048575 instead of 1052688 (this is not a big issue).

So yeah, please go ahead and send a fix for this. Please also replace
4128 with 4095 in the documentation!




Hi Willy,

For the roundrobin algo, the max number of servers is calculated by the formula
2^32/255^2/scale ~= 66051/scale

With scale 16 and SRV_UWGHT_MAX 255, the result is 4128.

If SRV_UWGHT_MAX is set to 256, the max number should be:
2^32/256^2/16 = 65536/16 = 4096

But you said this value should be 4095. I am confused by it.

--
Best Regards,
Godbach




Re: Roundrobin can not work well when a server's weight is 256

2013-07-23 Thread Godbach
2013/7/21 Godbach nylzhao...@gmail.com

 On 2013-7-21 21:17, Willy Tarreau wrote:

  No, I really mean UWGHT_MAX set to UWGHT_RANGE, since it is used to
  set the highest user weight that can be configured.

 Willy



 Yeah, I get what you meant now. With the following fix:

 diff --git a/include/types/server.h b/include/types/server.h
 index b58a062..062b06e 100644
 --- a/include/types/server.h
 +++ b/include/types/server.h
 @@ -72,7 +72,7 @@

  /* various constants */
  #define SRV_UWGHT_RANGE 256
 -#define SRV_UWGHT_MAX   (SRV_UWGHT_RANGE - 1)
 +#define SRV_UWGHT_MAX   SRV_UWGHT_RANGE
  #define SRV_EWGHT_RANGE (SRV_UWGHT_RANGE * BE_WEIGHT_SCALE)
  #define SRV_EWGHT_MAX   (SRV_UWGHT_MAX   * BE_WEIGHT_SCALE)

 Roundrobin can work well with server's weight 256 after testing again.

 So I should post two patches, one for source code and the other one for
 documentation.
 There are also related comments of roundrobin and/or leastconn to be fixed
 together.

 I will finish this job carefully and post patches ASAP.

 --
 Best Regards,
 Godbach


Hi Willy,

There are four patches for your information. Some additional information
about these four
patches is as below:

Patch 0001
The main patch fixes the bug we have talked about. Since FWRR cannot work
well with a server weight of 256, the level of this bug should be MEDIUM
in my opinion. You can modify the level if any other level is better.

Patch 0002
Changes the max number of servers for roundrobin. The value is 4096 in my
opinion, with the formula:
2^32/256^2/16 = 65536/16 = 4096
If my understanding is not correct, I would appreciate your telling me the
reason.

Patch 0003
Uses SRV_UWGHT_MAX to check the weight of a server instead of the magic number 256.

Patch 0004
Fixes a typo in lb_fwrr.c; maybe non-ASCII chars were introduced.

Best Regards,
Godbach


0001-BUG-MEDIUM-server-set-the-macro-for-server-s-max-wei.patch
Description: Binary data


0002-DOC-modify-the-max-number-of-servers-per-backend-for.patch
Description: Binary data


0003-OPTIM-server-use-SRV_UWGHT_MAX-to-check-the-weight-o.patch
Description: Binary data


0004-BUG-MINOR-fwrr-fix-typo-in-comment.patch
Description: Binary data


Re: Roundrobin can not work well when a server's weight is 256

2013-07-23 Thread Godbach
2013/7/22 Willy Tarreau w...@1wt.eu

 On Sun, Jul 21, 2013 at 10:56:26PM +0800, Godbach wrote:
  On 2013-7-21 16:53, Willy Tarreau wrote:
  Hi Godbach,
  
  I carefully checked and for me we can set SRV_UWGHT_MAX to
 SRV_UWGHT_RANGE.
  It will have the effect of limiting the maximum number of servers a full
  weight to 4095 instead of 4128 (so please update the doc for this). It
 is
  also used in the leastconn algorithm, and similarly, the effective max
  number of connections per server without overflowing will be limited to
  1048575 instead of 1052688 (this is not a big issue).
  
  So yeah, please go ahead and send a fix for this. Please also replace
  4128 with 4095 in the documentation!
  
  
 
  Hi Willy,
 
   For the roundrobin algo, the max number of servers is calculated by the
  formula
   2^32/255^2/scale ~= 66051/scale
 
   With scale 16 and SRV_UWGHT_MAX 255, the result is 4128.
 
   If SRV_UWGHT_MAX is set to 256, the max number should be:
  2^32/256^2/16 = 65536/16 = 4096
 
  But you said this value should be 4095. I am confused by it.

 Yes, because the formula is wrong as well; in fact it was slightly
 simplified in a way that didn't make any difference, but now it does.
 2^32 is not allowed, it overflows, so the max product for computing
 the position is ((2^32)-1)/(256^2)/16 = 4095.99...

 Don't worry, I'll update this in your patch. I'll merge them all
 since it's the same functional change (with its propagation to the
 doc), and we'd rather just pick one patch for the backports than
 try to pick several related ones.

 Thank you very much for doing this work. I don't know how the weather
 is on your side, but here it's been too hot to do anything productive
 during the day :-)

 Cheers,
 Willy


Hi Willy,
I have merged all four patches into a single one according to your
suggestion. If you have not merged them yet, the new patch may be useful;
please make any changes you find necessary.
It is a bit over 30 degrees centigrade (86F) here at noon. I have to turn
on the fan or the air conditioner to calm down. :-)

Best Regards,
Godbach


0001-BUG-MEDIUM-server-set-the-macro-for-server-s-max-wei.patch
Description: Binary data


Re: Confused by the behaviour of calloc during init() in haproxy.c

2013-07-24 Thread Godbach

On 2013/7/24 13:42, Willy Tarreau wrote:



I agree and can confirm your results here. Also, if I replace the
calloc with a malloc, I still see mmap and we don't have the 300M
RSS anymore :

willy@pcw:~$ ps auxw|grep a.out
willy31956  0.0  0.0   2880   632 pts/4S+   07:36   0:00 strace ./a.out
willy31957  0.0  0.0 308764   320 pts/4S+   07:36   0:00 ./a.out

So... it's becoming increasingly likely that we have a bug in haproxy
or at least that we're overlooking something. That said I don't see
what the issue could be considering that we're initializing fdinfo and
fdtab with a calloc() just at one place. There's not even a realloc().

Best regards,
Willy




Yes, it is strange behaviour of haproxy, as calloc has not been replaced yet.
There is also another test I have done with haproxy-1.5-dev7:


#ps axuw|grep haproxy
root  6142  0.3  7.5 206196 115820 pts/4   S+   15:48   0:00 
./haproxy -f h.cfg -d


The RSS is 113 Mbytes with maxconn at 1,048,576. If I decrease the
maxconn to half, RSS is 55 Mbytes.


It seems that dev7 somehow differs from the latest snapshot under the
same conditions.



--
Best Regards,
Godbach



Re: Confused by the behaviour of calloc during init() in haproxy.c

2013-07-24 Thread Godbach

On 2013/7/24 16:18, Willy Tarreau wrote:

On Wed, Jul 24, 2013 at 03:55:23PM +0800, Godbach wrote:

On 2013/7/24 13:42, Willy Tarreau wrote:



I agree and can confirm your results here. Also, if I replace the
calloc with a malloc, I still see mmap and we don't have the 300M
RSS anymore :

willy@pcw:~$ ps auxw|grep a.out
willy31956  0.0  0.0   2880   632 pts/4S+   07:36   0:00 strace
./a.out
willy31957  0.0  0.0 308764   320 pts/4S+   07:36   0:00 ./a.out

So... it's becoming increasingly likely that we have a bug in haproxy
or at least that we're overlooking something. That said I don't see
what the issue could be considering that we're initializing fdinfo and
fdtab with a calloc() just at one place. There's not even a realloc().

Best regards,
Willy




Yes, it is a strange behavior of haproxy as calloc is not replaced yet.
There is also another test I have done in haproxy-1.5-dev7:

#ps axuw|grep haproxy
root  6142  0.3  7.5 206196 115820 pts/4   S+   15:48   0:00
./haproxy -f h.cfg -d

The RSS is 113Mbytes with maxconn is 1,048,576. If I decrease the
maxconn to be half, RSS is 55Mbytes.

It seems that dev7 is somehow different from the latest snapshot in the
same condition.


dev7 did not have the poisoning/memset code, so that can make a
difference on other structures.

Willy




The poisoning/memset mechanism is used for the memory pool. The memory
pool uses MALLOC in dev7 and CALLOC in master, and most chunks of memory
are allocated from the memory pool while processing sessions. But what we
have been talking about is memory allocated directly by calloc() during
startup, with almost no sessions.


So it seems that the memory usage we are talking about here should not be
related to this mechanism, in my opinion.


--
Best Regards,
Godbach



Re: Confused by the behaviour of calloc during init() in haproxy.c

2013-07-24 Thread Godbach

On 2013/7/24 17:07, Willy Tarreau wrote:

On Wed, Jul 24, 2013 at 04:58:31PM +0800, Godbach wrote:

The poisoning/memset mechanism is used for the memory pool. The memory pool
uses MALLOC in dev7 and CALLOC in master, and most chunks of memory are
allocated from the memory pool while processing sessions. But what we have
been talking about is memory allocated directly by calloc()
during startup, with almost no sessions.


I agree, but I thought that the difference could lie in *some* of
the pools being used. That said if this is just upon startup, I
agree that the pools should be empty and thus cannot explain the
difference.


So it seems that the memory usage we talk about here should not be
related to this mechanism in my opinion.


Indeed. I have no idea why we're observing these differences, and I
don't know if the libc uses heuristics to decide to memset() the
area or not.

I think we'd better define a zalloc() function aimed at replacing
calloc() and which would always clear the area, than rely on some
random behaviour we have no control over.

Willy




Yes, it is a good choice to clear the memory area explicitly. I will keep
reporting if I learn anything new about this issue.


--
Best Regards,
Godbach



Different check conditions for server selected from lb trees

2013-08-01 Thread Godbach
Hi Willy,

Haproxy checks the number of concurrent sessions assigned to a server
after it is selected from the lb trees. Both leastconn and roundrobin use
the same condition, as below:

fwlc_get_next_server():
if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s))) {
	if (s != srvtoavoid) {
		srv = s;
		break;
	}
	avoided = s;
}

fwrr_get_next_server():
if (!srv->maxconn || (!srv->nbpend && srv->served <
    srv_dynamic_maxconn(srv))) {
	/* make sure it is not the server we are trying to exclude... */
	if (srv != srvtoavoid || avoided)
		break;

	avoided = srv; /* ...but remember that is was selected yet avoided */
}

It means that the server will not be used if there are sessions in the
pending queue or srv->served exceeds the dynamic maxconn of a server with
srv->maxconn set.

But static-rr ignores the sessions in the pending queue and just uses
srv->cur_sess to compare with the dynamic maxconn, as below:

map_get_server_rr():
if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv)) {
	/* make sure it is not the server we are try to exclude... */
	if (srv != srvtoavoid) {
		px->lbprm.map.rr_idx = newidx;
		return srv;
	}

	avoided = srv;  /* ...but remember that is was selected yet avoided */
	avoididx = newidx;
}

So I am wondering whether the different check condition is intentional
for some reason I have not caught.

-- 
Best Regards,
Godbach



Re: Different check conditions for server selected from lb trees

2013-08-06 Thread Godbach
On 2013/8/1 17:05, Godbach wrote:
 Hi Willy,
 
 Haproxy will check the number of concurrent sessions assigned to the
 server after being selected from lb trees. Both leastconn and roundrobin
 use the same condition as below:
 
 fwlc_get_next_server():
 if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s))) {
   if (s != srvtoavoid) {
   srv = s;
   break;
   }
   avoided = s;
 }
 
 fwrr_get_next_server():
 if (!srv->maxconn || (!srv->nbpend && srv->served <
 srv_dynamic_maxconn(srv))) {
   /* make sure it is not the server we are trying to exclude... */
   if (srv != srvtoavoid || avoided)
   break;
 
   avoided = srv; /* ...but remember that is was selected yet avoided */
 }
 
 It means that the server will not be used if there are sessions in
 pending queue or srv->served exceeds the dynamic maxconn of server with
 srv->maxconn set.
 
 But static-rr ignores the sessions in pending queue and just uses
 srv->cur_sess to compare with dynamic maxconn as below:
 
 map_get_server_rr():
 if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv)) {
   /* make sure it is not the server we are try to exclude... */
   if (srv != srvtoavoid) {
   px->lbprm.map.rr_idx = newidx;
   return srv;
   }
 
   avoided = srv;  /* ...but remember that is was selected yet avoided */
   avoididx = newidx;
 }
 
 So I am wondering whether the different check condition is intentional
 for some reason I have not caught.
 

Hi Willy,

It seems that static-rr should also use the same check condition for the
server after it is selected, as roundrobin and leastconn do:

if (!srv->maxconn || (!srv->nbpend && srv->served <
srv_dynamic_maxconn(srv)))

not the following:

if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv)) {


-- 
Best Regards,
Godbach



Re: Different check conditions for server selected from lb trees

2013-08-06 Thread Godbach
On 2013/8/6 20:36, Willy Tarreau wrote:
 Hi Godbach,
 
 Sorry for replying late, I don't have a regular internet access at the
 moment.
 
 On Tue, Aug 06, 2013 at 07:48:46PM +0800, Godbach wrote:
 It seems that static-rr should also use the same check condition for the
 server after being selected as roundrobin and leastconn as below:

 if (!srv-maxconn || (!srv-nbpend  srv-served 
 srv_dynamic_maxconn(srv)))

 not the following:

 if (!srv-maxconn || srv-cur_sess  srv_dynamic_maxconn(srv)) {
 
 Yes I agree with you, I think the reason is that the code was
 extracted from the hash codes (which also fall back to RR if
 the content to hash is not here), and which initially did not
 need to handle this situation.
 
 I don't remember if the hash algorithms now fall back to the
 generic RR code or if they use their own, but it is possible
 that more code needs fixing in the end.
 
 Care to send a patch ? :-)
 
 Best regards,
 Willy
 
 

Hi Willy,

Thank you for your replying.

I have checked all the hash algorithms: if the hash type is consistent,
they call chash_get_server_hash(), and map_get_server_hash() is called
when the hash type is map-based. Both functions return the server
directly without checking the server limit. The check is executed in
assign_server_and_queue(), as below:

err = assign_server(s);
...
switch (err) {
...
case SRV_STATUS_OK:

	if (srv->maxconn &&
	    (srv->nbpend || srv->served >= srv_dynamic_maxconn(srv))) {

		if (srv->maxqueue > 0 && srv->nbpend >= srv->maxqueue)
			return SRV_STATUS_FULL;

		p = pendconn_add(s);
		if (p)
			return SRV_STATUS_QUEUED;
		else
			return SRV_STATUS_INTERNAL;
	}
...
...

So in my opinion, it is not necessary to worry about the hash algorithms.

The attachment is the patch which fixes the check condition for static-rr
in map_get_server_rr(), for your information. This patch will only affect
static-rr. Though all hash algorithms with type map-based use the same
server map as static-rr, they call another function,
map_get_server_hash(), to get a server.

By the way, it seems that the avalanche hash type is not used anymore, since
there are only the definition and the parsing for it in the latest snapshot:

src/cfgparse.c
4101:   curproxy-lbprm.algo |= BE_LB_HASH_AVAL;

include/types/backend.h
108:#define BE_LB_HASH_AVAL   0x20 /* run an avalanche hash before a
map */

Is there anything I missed?

-- 
Best Regards,
Godbach
From 370a74e89af0153a96ed8b7ebd4648258c89109e Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 7 Aug 2013 09:48:23 +0800
Subject: [PATCH] BUG/MINOR: use the same check condition for server as other
 algorithms

Such load balance algorithms as roundrobin, leastconn and first check the
server after it is selected with the following condition:
if (!s->maxconn || (!s->nbpend && s->served < srv_dynamic_maxconn(s)))

But static-rr uses a different one in map_get_server_rr(), as below:
if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv))
After seeing this difference, it is a better choice for static-rr to use
the same check condition as the other algorithms.

This change will only affect static-rr. Though all hash algorithms with type
map-based will use the same server map as static-rr, they call another function
map_get_server_hash() to get server.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/lb_map.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/src/lb_map.c b/src/lb_map.c
index 49805ad..9858249 100644
--- a/src/lb_map.c
+++ b/src/lb_map.c
@@ -229,7 +229,7 @@ struct server *map_get_server_rr(struct proxy *px, struct server *srvtoavoid)
 	avoididx = 0; /* shut a gcc warning */
 	do {
 		srv = px->lbprm.map.srv[newidx++];
-		if (!srv->maxconn || srv->cur_sess < srv_dynamic_maxconn(srv)) {
+		if (!srv->maxconn || (!srv->nbpend && srv->served < srv_dynamic_maxconn(srv))) {
 			/* make sure it is not the server we are try to exclude... */
 			if (srv != srvtoavoid) {
 				px->lbprm.map.rr_idx = newidx;
-- 
1.7.7



Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-08 Thread Godbach

On 2013/8/8 18:50, Mark Brooks wrote:

The issue I am seeing is that using the dev version of HAProxy
1.5-dev19 git commit id 00f0084752eab236af80e61291d672e835790cff

I have a source IP stick table and I'm trying to drop specific entries
from it, but it results in the whole table being dropped each time.

My table looks like this -

0x24a8294: key=192.168.136.10 use=0 exp=1761492 server_id=3
0x24a8344: key=192.168.136.11 use=0 exp=1761506 server_id=2
0x24a83f4: key=192.168.136.12 use=0 exp=1761520 server_id=3
0x24a84a4: key=192.168.136.13 use=0 exp=1761534 server_id=2
0x24a8554: key=192.168.136.14 use=0 exp=1761548 server_id=3
0x24a8604: key=192.168.136.15 use=0 exp=1761563 server_id=2
0x24a86b4: key=192.168.136.16 use=0 exp=1761580 server_id=3
0x24a8764: key=192.168.136.17 use=0 exp=1761592 server_id=2
0x24a8814: key=192.168.136.18 use=0 exp=1761607 server_id=3
0x24a88c4: key=192.168.136.19 use=0 exp=1761622 server_id=2
0x24a8974: key=192.168.136.20 use=0 exp=1761636 server_id=3
0x24a8a24: key=192.168.136.21 use=0 exp=1761649 server_id=2

I'm running the command -

socat unix-connect:/var/run/haproxy.stat stdio <<< 'clear table
VIP_Name-2 data.server_id eq 2'

I'd assume that the entries with server_id = 2 would be removed, but it's
removing everything each time.


My Configuration file is below -

listen VIP_Name-2
bind 192.168.138.2:80 transparent
mode http
balance leastconn
stick-table type ip size 10240k expire 30m
stick on src
server backup 127.0.0.1:9081 backup  non-stick
option httpclose
option forwardfor
option redispatch
option abortonclose
maxconn 4
log global
option httplog
server RIP_Name 192.168.66.50  weight 100  check port 80  inter 2000
rise 2  fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions
server RIP_Name-1 192.168.66.51  weight 100  check port 80  inter 2000
  rise 2  fall 3 minconn 0  maxconn 0  on-marked-down shutdown-sessions

Thanks

Mark




Hi Mark,

I have hit the same problem as you. It may be a bug which can also be
reproduced when showing a specified data type, with a command such as:

show table <table> data.server_id <id>

stats_table_request() in dumpstat.c is called to delete stick entries.
The main code is as follows:


 if (show && !skip_entry &&
     !stats_dump_table_entry_to_buffer(&trash, si,
                                       si->applet.ctx.table.proxy,
                                       si->applet.ctx.table.entry))
         return 0;

 si->applet.ctx.table.entry->ref_cnt--;

 eb = ebmb_next(&si->applet.ctx.table.entry->key);
 if (eb) {
         struct stksess *old = si->applet.ctx.table.entry;
         si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);
         if (show)
                 stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
         else
                 stksess_kill(&si->applet.ctx.table.proxy->table, old);
         si->applet.ctx.table.entry->ref_cnt++;
         break;
 }

It seems that skip_entry should also be checked before calling
stksess_kill(), since an entry marked as skipped should not be deleted.


I have tried to fix this problem, but unfortunately I cannot fix it
completely for now. :-(


--
Best Regards,
Godbach



Re: keep-alive connections to the backend

2013-08-08 Thread Godbach

On 2013/8/9 5:40, Eric Tang wrote:

Hi

I'm looking for a solution that allows the backend to maintain keep-alive
connections. From the documentation and the mailing list it seems that
the keep alive can only be maintained in the frontend.

I found a similar request back in 2010 and I was wondering if there was
any action taken?
http://www.formilux.org/archives/haproxy/1009/3830.html

My configuration:

haproxy - load balancing 12 servers in the backend.
Each backend server hosts an application running Jetty.

From the client side we have dozens of clients (nginx/php) making http

requests to haproxy which then forwards the http request to one of the
12 servers.The server will send an http response (and some data) back to
the client.

Under very high load, we found that the huge amount of open/closed
connections in the backend servers causes the application to run at
almost 100% cpu load.

Regards
Eric




Hi Eric,

Do you mean the feature for TCP connection reuse on the backend side? If
so, this feature is already in the ROADMAP as below:

1.5 (ETA 2013/12/31) :
  - server-side HTTP keepalive
    => maybe with limitation to only reuse connections that don't depend
       on layer7 in a first time (just check the target).

HAProxy 1.5 stable will be released with this feature.
--
Best Regards,
Godbach



Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-13 Thread Godbach

On 2013/8/13 22:54, Willy Tarreau wrote:

Hi guys,

On Thu, Aug 08, 2013 at 08:48:53PM +0800, Godbach wrote:

On 2013/8/8 18:50, Mark Brooks wrote:

The issue I am seeing is that using the dev version of HAProxy
1.5-dev19 git commit id 00f0084752eab236af80e61291d672e835790cff

I have a source IP stick table and I'm trying to drop specific entries
from it, but it's resulting in the whole table being dropped each time.


(...)

I got it, it's a stupid fix for a previous bug that was killing a bit
too much this time.

Here's the fix.

Best regards,
Willy



Hi Willy,

I have done a new test with this patch, it works well now.

Yes, just apply the same test as for the last node to the other nodes to
be removed. I had tried to fix it just by testing skip_entry but forgot
to test si->applet.ctx.table.entry->ref_cnt. And the test condition
should also have been applied to the last node.


There is another issue I want to confirm. Expired nodes are deleted even
in the 'show table' action, as below:


eb = ebmb_next(&si->applet.ctx.table.entry->key);
if (eb) {
        struct stksess *old = si->applet.ctx.table.entry;
        si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);

        if (show)
                stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
        else if (!skip_entry && !si->applet.ctx.table.entry->ref_cnt)
                stksess_kill(&si->applet.ctx.table.proxy->table, old);
        si->applet.ctx.table.entry->ref_cnt++;
        break;
}

If the expired nodes are not removed here, they can still be removed by
the expiration task when it calls process_table_expire(). So removing
expired nodes in the 'show table' action simply makes
process_table_expire() do less work.


--
Best Regards,
Godbach



Re: Attempting to clear entries in stick table based on server id, results in all entries being dropped.

2013-08-14 Thread Godbach

On 2013/8/14 13:10, Willy Tarreau wrote:

Hi Godbach,

On Wed, Aug 14, 2013 at 10:20:10AM +0800, Godbach wrote:

I have done a new test with this patch, it works well now.


Thanks for testing.


Yes, just apply the same test as for the last node to the other nodes to
be removed. I had tried to fix it just by testing skip_entry but forgot
to test si->applet.ctx.table.entry->ref_cnt. And the test condition
should also have been applied to the last node.


It was not obvious for me either, I found it only by single stepping in gdb !


There is another issue I want to make sure. There are nodes to be
deleted even in 'show table' action if expired as below:

eb = ebmb_next(&si->applet.ctx.table.entry->key);
if (eb) {
        struct stksess *old = si->applet.ctx.table.entry;
        si->applet.ctx.table.entry = ebmb_entry(eb, struct stksess, key);

        if (show)
                stksess_kill_if_expired(&si->applet.ctx.table.proxy->table, old);
        else if (!skip_entry && !si->applet.ctx.table.entry->ref_cnt)
                stksess_kill(&si->applet.ctx.table.proxy->table, old);
        si->applet.ctx.table.entry->ref_cnt++;
        break;
}

If the expired nodes are not removed here, they can still be removed by
the expiration task when it calls process_table_expire(). So removing
expired nodes in the 'show table' action simply makes
process_table_expire() do less work.


I've seen this as well and had a hard time remembering why it was done
this way. I was sure it was needed but the cause was not obvious to me.
IIRC, the reason was that we want show table to report valid entry
counts, so if we don't kill the entries ourselves and there is low
activity, nothing else will kill them fast enough to have valid counts.
And as you say, it also reduces the work of process_table_expire(),
even though this is a very minor benefit since we're not supposed to
be dumping the stats all day long :-)

Best regards,
Willy




Hi Willy,

Thank you for your reply.

--
Best Regards,
Godbach



Re: Does Haproxy can deal with any request for one long conntect?

2013-08-15 Thread Godbach

On 2013/8/15 7:27, perlbox wrote:

Because the next request has something to do with the previous
request in the long connection, I want to replace the string, keep
the connection alive, and let the client close the connection when it
wants. Is there any option or method to do it?



Hi GuoXiang,

Without option http-server-close enabled, haproxy will be in tunnel mode,
which only processes the first request of a connection from the client;
the other requests on the same connection will be forwarded to the
same server directly without being touched.


So if you want to modify each request on the same connection, you
should enable option http-server-close. Otherwise, as far as I know,
haproxy may not be able to fulfill your requirement.



--
Best Regards,
Godbach



[PATCH] DOC: fix typo in comments

2013-09-29 Thread Godbach
Hi Willy,

There is a patch to fix typo in comments, please check the attachment
for you information.

The commit log is as below:

commit 9824d1b3740ac2746894f1aa611c795366c84210
Author: Godbach nylzhao...@gmail.com
Date:   Mon Sep 30 11:05:42 2013 +0800

DOC: fix typo in comments

  0x2000 -> 0x4000
  vuf -> buf
  ethod -> Method

Signed-off-by: Godbach nylzhao...@gmail.com


-- 
Best Regards,
Godbach
From 9824d1b3740ac2746894f1aa611c795366c84210 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Mon, 30 Sep 2013 11:05:42 +0800
Subject: [PATCH] DOC: fix typo in comments

  0x2000 -> 0x4000
  vuf -> buf
  ethod -> Method

Signed-off-by: Godbach nylzhao...@gmail.com
---
 include/types/channel.h|4 ++--
 include/types/proto_http.h |2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/types/channel.h b/include/types/channel.h
index ee5f1b8..42160af 100644
--- a/include/types/channel.h
+++ b/include/types/channel.h
@@ -118,7 +118,7 @@
 #define CF_NEVER_WAIT 0x0800  /* never wait for sending data 
(permanent) */
 
 #define CF_WAKE_ONCE  0x1000  /* pretend there is activity on this 
channel (one-shoot) */
-/* unused: 0x2000, 0x2000, 0x8000 */
+/* unused: 0x2000, 0x4000, 0x8000 */
 
 /* Use these masks to clear the flags before going back to lower layers */
 #define CF_CLEAR_READ (~(CF_READ_NULL|CF_READ_PARTIAL|CF_READ_ERROR|CF_READ_ATTACHED))
@@ -266,7 +266,7 @@ struct channel {
eventually leave the buffer. So as long as ->to_forward is larger than
global.maxrewrite, we can fill the buffer. If ->to_forward is smaller than
global.maxrewrite, then we don't want to fill the buffer with more than
-   vuf->size - global.maxrewrite + ->to_forward.
+   buf->size - global.maxrewrite + ->to_forward.
 
A buffer may contain up to 5 areas :
 - the data waiting to be sent. These data are located between buf->p-o and
diff --git a/include/types/proto_http.h b/include/types/proto_http.h
index 1d7c92f..d0fa766 100644
--- a/include/types/proto_http.h
+++ b/include/types/proto_http.h
@@ -121,7 +121,7 @@
 
 /* these ones define a request start line */
 #define HTTP_MSG_RQMETH        2 // parsing the Method
-#define HTTP_MSG_RQMETH_SP     3 // space(s) after the ethod
+#define HTTP_MSG_RQMETH_SP     3 // space(s) after the Method
 #define HTTP_MSG_RQURI         4 // parsing the Request URI
 #define HTTP_MSG_RQURI_SP      5 // space(s) after the Request URI
 #define HTTP_MSG_RQVER         6 // parsing the Request Version
 #define HTTP_MSG_RQVER 6 // parsing the Request Version
-- 
1.7.7



Re: [PATCH] DOC: fix typo in comments

2013-10-01 Thread Godbach

On 2013-10-1 15:49, Willy Tarreau wrote:

Hi Godbach,

On Mon, Sep 30, 2013 at 11:23:10AM +0800, Godbach wrote:

Hi Willy,

There is a patch to fix typo in comments, please check the attachment
for you information.

The commit log is as below:

commit 9824d1b3740ac2746894f1aa611c795366c84210
Author: Godbach nylzhao...@gmail.com
Date:   Mon Sep 30 11:05:42 2013 +0800

 DOC: fix typo in comments

   0x2000 -> 0x4000
   vuf -> buf
   ethod -> Method

 Signed-off-by: Godbach nylzhao...@gmail.com

Patch applied, thanks!

Willy




Hi Willy,

It seems that you submitted the whole text of the mail as the commit log
to the git repository.


--
Best Regards,
Godbach




Re: [PATCH] DOC: fix typo in comments

2013-10-01 Thread Godbach

On 2013-10-2 1:12, Willy Tarreau wrote:

On Wed, Oct 02, 2013 at 01:08:50AM +0800, Godbach wrote:

 It seems that you submit the whole text of mail as commit log to
the git repository.

Ah yes, you're right :-)

At the moment I applied it, I was fooled by the subject into thinking
it was just the patch itself. I'll be more careful next time. That's
no big deal anyway, just looks ugly :-)

Cheers,
Willy



I got it. It does not matter. :-)

--
Best Regards,
Godbach




[PATCH] BUG/MINOR: deinit: free server map which is allocated in init_server_map()

2013-10-02 Thread Godbach
Hi Willy,

There is a patch for freeing the server map while doing cleanup.

Please check the attachment for your information.

The commit log is as below:
Both static-rr and hash with type map-based call init_server_map() to
allocate the server map, so the server map should be freed while doing
cleanup if one of the above load balance algorithms is used.

-- 
Best Regards,
Godbach

From 9366d553d9415c3d3d7f6889903eca2e7382f597 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 2 Oct 2013 17:10:11 +0800
Subject: [PATCH] BUG/MINOR: deinit: free server map which is allocated in
 init_server_map()

Both static-rr and hash with type map-based call init_server_map() to allocate
server map, so the server map should be freed while doing cleanup if one of
the above load balance algorithms is used.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/haproxy.c |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index ec9f513..265c39c 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -975,6 +975,8 @@ void deinit(void)
 		free(p->conf.lfs_file);
 		free(p->conf.uniqueid_format_string);
 		free(p->conf.uif_file);
+		if (p->lbprm.map.srv)
+			free(p->lbprm.map.srv);
 
 		for (i = 0; i < HTTP_ERR_SIZE; i++)
 			chunk_destroy(&p->errmsg[i]);
-- 
1.7.7



Re: [PATCH] BUG/MINOR: deinit: free server map which is allocated in init_server_map()

2013-10-07 Thread Godbach

Hi Willy,

On 2013-10-6 19:32, Willy Tarreau wrote:

Hi Godbach,


OK. Just for your info, the if before the free() is not needed
because free() already does it, so I'll remove it.


diff --git a/src/haproxy.c b/src/haproxy.c
index ec9f513..265c39c 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -975,6 +975,8 @@ void deinit(void)
 		free(p->conf.lfs_file);
 		free(p->conf.uniqueid_format_string);
 		free(p->conf.uif_file);
+		if (p->lbprm.map.srv)
+			free(p->lbprm.map.srv);
 
 		for (i = 0; i < HTTP_ERR_SIZE; i++)
 			chunk_destroy(&p->errmsg[i]);
--
1.7.7

Thanks,
Willy


Yes, I have checked the manual of free(); it says that "If ptr is NULL,
no operation is performed."

Please remove it directly.

--
Best Regards,
Godbach




Re: [PATCH] BUG/MINOR: deinit: free server map which is allocated in init_server_map()

2013-10-07 Thread Godbach

On 2013-10-7 15:47, Willy Tarreau wrote:

Hi Godbach,

On Mon, Oct 07, 2013 at 03:37:19PM +0800, Godbach wrote:

Yes, I have checked the manual of free(); it says that "If ptr is NULL,
no operation is performed."

In fact it's true on *most* platforms, and all those supported by haproxy.
So we removed all of these if a few years ago to make the code more
readable and more maintainable. I remember that we even discovered some
copy-paste mistakes such as :

if (a)
   free(a);
if (b)
   free(b);
if (b)
   free(c);


Please remove it directly.

Done :-)

Willy



Hi Willy,

Thanks for your detailed explanation. The patch has been updated
according to our discussion; please check the attachment for your
information.

--
Best Regards,
Godbach

From 75a874183092ef4f7a59a67b66d68edadf4727e3 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Mon, 7 Oct 2013 15:55:16 +0800
Subject: [PATCH] BUG/MINOR: deinit: free server map which is allocated in
 init_server_map()

Both static-rr and hash with type map-based call init_server_map() to allocate
server map, so the server map should be freed while doing cleanup.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/haproxy.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index ec9f513..bc03a73 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -975,6 +975,7 @@ void deinit(void)
 		free(p->conf.lfs_file);
 		free(p->conf.uniqueid_format_string);
 		free(p->conf.uif_file);
+		free(p->lbprm.map.srv);
 
 		for (i = 0; i < HTTP_ERR_SIZE; i++)
 			chunk_destroy(&p->errmsg[i]);
-- 
1.7.7



About si_conn_send_loop()

2013-10-09 Thread Godbach
Hi Willy,

It seems that the loop will only be executed once.

The related code is as below (src/stream_interface.c):
 static int si_conn_send_loop(struct connection *conn)
 {
         ...
         while (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {
                 ...
                 ret = conn->xprt->snd_buf(conn, chn->buf, send_flag);
                 if (ret <= 0)
                         break;

                 chn->flags |= CF_WRITE_PARTIAL;

                 if (!chn->buf->o) {
                         ...
                 }

                 /* if some data remain in the buffer, it's only because the
                  * system buffers are full, so we don't want to loop again.
                  */
                 break;
         } /* while */

         if (conn->flags & CO_FL_ERROR)
                 return -1;

         return 0;
 }
Since there is a 'break' with no condition in the last line of the while
loop, the loop will only be executed once. It just looks like an 'if' as
below:
 - while (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {
 + if (!(conn->flags & (CO_FL_ERROR | CO_FL_SOCK_WR_SH | ...))) {

-- 
Best Regards,
Godbach




Re: About si_conn_send_loop()

2013-10-11 Thread Godbach

On 2013/10/11 14:10, Willy Tarreau wrote:

Hi Godbach,

Yes I remember about this change, it happened one year ago in the
following commit :

   ed7f836 BUG/MINOR: stream_interface: don't loop over -snd_buf()

In 1.4 and 1.5 before the connection rework, the loop was used to
call send() over each buffer's half (when data was wrapped around
at the end). Now the transport protocol's send() function does the
job and we don't need to loop anymore.

However, I left the code as is because not having to replace the
breaks by gotos etc... reduced the amount of changes at the time
(we were at a stage where the new connections were not completely
stabilized yet).

Now it's OK. Feel free to send a cleanup patch if you want (and
change the comment at the top of the loop).

Best regards,
Willy




Hi Willy,

Thank you.

Yes, if 'while' is replaced by 'if', the breaks should be handled
carefully.


I will send a cleanup patch later. The function name could also be
changed from si_conn_send_loop() to si_conn_send() in my opinion, and
the two lines of code which call this function will also be modified.


--
Best Regards,
Godbach



Compile Failed with DEBUG option -DDEBUG_FSM

2013-10-14 Thread Godbach
Hi Willy,

Since http_silent_debug() (src/proto_http.c) uses some members which only
existed in the snapshot before struct channel was introduced, compile
errors occur once DEBUG_FSM is enabled.

The code of http_silent_debug() is as below, and only the req-related
code is taken as an example:

#if defined(DEBUG_FSM)
static void http_silent_debug(int line, struct session *s)
{
	chunk_printf(&trash,
	             "[%04d] req: p=%d(%d) s=%d bf=%08x an=%08x data=%p size=%d"
	             " l=%d w=%p r=%p o=%p sm=%d fw=%ld tf=%08x\n",
	             line,
	             s->si[0].state, s->si[0].fd, s->txn.req.msg_state,
	             s->req->flags, s->req->analysers,
	             s->req->buf->data, s->req->buf->size, s->req->l,
	             s->req->w, s->req->r, s->req->buf->p, s->req->buf->o,
	             s->req->to_forward, s->txn.flags);
	write(-1, trash.str, trash.len);

	...
}

The following members are obsolete:
* s->si[0].fd
* s->req->l
* s->req->w
* s->req->r

According to the latest implementation, the above members should be
replaced as below in my opinion:
* s->si[0].fd  =>  s->si[0].conn->t.sock.fd
* s->req->l    =>  buffer_len(s->req->buf)
* s->req->w    =>  bo_ptr(s->req->buf)
* s->req->r    =>  bi_ptr(s->req->buf)

In addition, I was confused by "o=%p"; what should it be? If it should
be s->req->buf->p, it is actually the same as bi_ptr(s->req->buf).

If the replacements are correct and the use of "o=%p" can be confirmed,
I will send a patch later.

-- 
Best Regards,
Godbach



Re: Compile Failed with DEBUG option -DDEBUG_FSM

2013-10-15 Thread Godbach

On 2013/10/15 1:15, Willy Tarreau wrote:

Hi godbach,

It would make more sense to completely redefine the output based on the
information we have now in my opinion. So better dump the elements of
the channel one by one instead of trying to map them to the old ones.
Indeed, the only reason for printing this info is to be able to quickly
use "print *<some-address>" in GDB.

You may be interested in reading doc/internals/buffer-ops.fig to see
how the buffers changed and what they contain right now.

Cheers,
Willy




Hi Willy,

Thank you.

I will read the doc later.

--
Best Regards,
Godbach



[PATCH] OPTIM: buffer: align the last output line of buffer_dump()

2013-11-13 Thread Godbach
Hi Willy,

buffer_dump() in src/buffer.c is a useful function for dumping a buffer
while debugging. However, the last output line should be aligned in a
more readable way.

Please check the attachment for the patch.
-- 
Best Regards,
Godbach
From 7d19e9f3d35be3fb253ab586041b67312a03232f Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Thu, 14 Nov 2013 10:15:20 +0800
Subject: [PATCH] OPTIM: buffer: align the last output line of buffer_dump()

If the dumped length of the buffer is not a multiple of 16, the last output
line can be seen as below:

Dumping contents from byte 0 to byte 125
        0  1  2  3  4  5  6  7    8  9  a  b  c  d  e  f
  0000: 47 45 54 20 2f 69 6e 64 - 65 78 2e 68 74 6d 20 48   GET /index.htm H
  0010: 54 54 50 2f 31 2e 30 0d - 0a 55 73 65 72 2d 41 67   TTP/1.0..User-Ag
  ...
  0060: 30 0d 0a 43 6f 6e 6e 65 - 63 74 69 6f 6e 3a 20 4b   0..Connection: K
  0070: 65 65 70 2d 41 6c 69 76 - 65 0d 0a 0d 0a   eep-Alive

Yes, the text column intrudes into the hex column's area. Both the hex and
the text column should stay aligned in their own areas, as below:

Dumping contents from byte 0 to byte 125
        0  1  2  3  4  5  6  7    8  9  a  b  c  d  e  f
  0000: 47 45 54 20 2f 69 6e 64 - 65 78 2e 68 74 6d 20 48   GET /index.htm H
  0010: 54 54 50 2f 31 2e 30 0d - 0a 55 73 65 72 2d 41 67   TTP/1.0..User-Ag
  ...
  0060: 30 0d 0a 43 6f 6e 6e 65 - 63 74 69 6f 6e 3a 20 4b   0..Connection: K
  0070: 65 65 70 2d 41 6c 69 76 - 65 0d 0a 0d 0a            eep-Alive

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/buffer.c |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/src/buffer.c b/src/buffer.c
index 60fb3fc..c3e9d51 100644
--- a/src/buffer.c
+++ b/src/buffer.c
@@ -215,6 +215,11 @@ void buffer_dump(FILE *o, struct buffer *b, int from, int to)
 		if (((from + i) & 15) == 7)
 			fprintf(o, "- ");
 	}
+	if (to - from < 16) {
+		int j;
+		for (j = 0; j < from + 16 - to; j++)
+			fprintf(o, "   ");
+	}
 	fprintf(o, "  ");
 	for (i = 0; (from + i < to) && (i < 16) ; i++) {
 		fprintf(o, "%c", isprint((int)b->data[from + i]) ?
 			b->data[from + i] : '.') ;
-- 
1.7.7



Re: How do sticky tables work?

2013-11-14 Thread Godbach

On 2013/11/13 22:27, Jeff Zellner wrote:

Hi there fellow HAProxy users :)

Please excuse me if this is in the docs, I looked through but didn't
find anything definitive.

How do sticky tables work with respect to backend servers changing? If
we balance connections on source, and sticky on same -- what happens
if we lose a backend server? Are entries for that backend server
invalidated? Are all entries invalidated? Something else entirely?
Just trying to understand the definitive process.

Thanks for your help!

-Jeff




Hi Jeff,

HAProxy will look up the stick table first to get a server for the
current session. If no server is found, the load balance algorithm will
be executed subsequently.


When HAProxy gets a server from the stick table, it will check whether
the server can serve the current session or not. HAProxy will use the
server if one of the following conditions is fulfilled:

1) The server is running.
2) The server is not running anymore, but "option persist" or
"force-persist ..." has been enabled. It means that the server can be
used even if it is down.


There is also a parameter named expire in the stick table definition. If
an entry is not referenced anymore, it will be deleted once expired.


You can clear the table with the following UNIX socket command:
clear table <table> [ data.<type> <operator> <value> ] | [ key <key> ]

I still recommend that you read the manual of HAProxy to get more
information about what you care about:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html

--
Best Regards,
Godbach



[PATCH] two patches about manual and stream_interface

2013-12-03 Thread Godbach
Hi Willy,

Please check the attachments for the two patches.

Patch 0001: DOC: stick-table can be declared in such sections as
frontend, listen and backend

Patch 0002:  OPTIM: stream_interface: return directly if the connection
flag CO_FL_ERROR has been set

Actually, for patch 0002, the out_error branch in si_conn_send_cb() can
also be deleted, since only the following code uses it:

static void si_conn_send_cb(struct connection *conn)
{
	...
	/* OK there are data waiting to be sent */
	if (si_conn_send_loop(conn) < 0)
		goto out_error;

	/* OK all done */
	return;

 out_error:
	/* Write error on the connection, report the error and stop I/O */
	conn->flags |= CO_FL_ERROR;
}

And si_conn_send_loop() will return -1 when the connection flag
CO_FL_ERROR is set. As a result, we do not even need to check the return
value of si_conn_send_loop(), since we just want to set CO_FL_ERROR on
the connection flags and it is already set. The fix should be as below:
 @@ -1138,15 +1138,10 @@ static void si_conn_send_cb(struct connection *conn)
 		return;
 
 	/* OK there are data waiting to be sent */
 -	if (si_conn_send(conn) < 0)
 -		goto out_error;
 +	si_conn_send(conn);
 
 	/* OK all done */
 	return;
 -
 - out_error:
 -	/* Write error on the connection, report the error and stop I/O */
 -	conn->flags |= CO_FL_ERROR;
  }

If you agree with me, I will merge this fix into patch0002 and send the
patch again.

-- 
Best Regards,
Godbach
From 1d9b14eadbfe88f99831e091ee2be6836b44a004 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 4 Dec 2013 10:14:50 +0800
Subject: [PATCH 1/2] DOC: stick-table can be declared in such sections as
 frontend, listen and backend

The original manual only mentioned backend.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 doc/configuration.txt |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 803d42e..38da5bc 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6133,7 +6133,8 @@ stick store-request <pattern> [table <table>] [{if | unless} <condition>]
 stick-table type {ip | integer | string [len <length>] | binary [len <length>]}
             size <size> [expire <expire>] [nopurge] [peers <peersect>]
             [store <data_type>]*
-  Configure the stickiness table for the current backend
+  Configure the stickiness table for the current section including frontend,
+  listen and backend.
   May be used in sections :   defaults | frontend | listen | backend
                                  no    |    yes   |   yes  |   yes
 
-- 
1.7.7

From 2ae8e1c73850addf1aa0ae570eb09a32e6feaa22 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 4 Dec 2013 11:10:19 +0800
Subject: [PATCH 2/2] OPTIM: stream_interface: return directly if the
 connection flag CO_FL_ERROR has been set

The connection flag CO_FL_ERROR will be tested in both si_conn_recv_cb()
and si_conn_send_cb(). If CO_FL_ERROR has been set, the out_error branch
will be executed. But the only job of the out_error branch is to set
CO_FL_ERROR on the connection flags, so it is better to return directly
than to jump to the out_error branch under such test conditions.

In addition, the out_error branch is needless and has been deleted from
si_conn_recv_cb().

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/stream_interface.c |   12 
 1 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/src/stream_interface.c b/src/stream_interface.c
index 9f0c26a..505fa71 100644
--- a/src/stream_interface.c
+++ b/src/stream_interface.c
@@ -915,7 +915,7 @@ static void si_conn_recv_cb(struct connection *conn)
 	 * which rejects it before reading it all.
 	 */
 	if (conn->flags & CO_FL_ERROR)
-		goto out_error;
+		return;
 
 	/* stop here if we reached the end of data */
 	if (conn_data_read0_pending(conn))
@@ -968,7 +968,7 @@ static void si_conn_recv_cb(struct connection *conn)
 		goto out_shutdown_r;
 
 	if (conn->flags & CO_FL_ERROR)
-		goto out_error;
+		return;
 
 	if (conn->flags & CO_FL_WAIT_ROOM) {
 		/* the pipe is full or we have read enough data that it
@@ -1098,7 +1098,7 @@ static void si_conn_recv_cb(struct connection *conn)
 	} /* while !flags */
 
 	if (conn->flags & CO_FL_ERROR)
-		goto out_error;
+		return;
 
 	if (conn_data_read0_pending(conn))
 		/* connection closed */
@@ -1114,10 +1114,6 @@ static void si_conn_recv_cb(struct connection *conn)
 	stream_sock_read0(si);
 	conn_data_read0(conn);
 	return;
-
- out_error:
-	/* Read error on the connection, report the error and stop I/O */
-	conn->flags |= CO_FL_ERROR;
 }
 
 /*
@@ -1131,7 +1127,7 @@ static void si_conn_send_cb

Re: [PATCH] two patches about manual and stream_interface

2013-12-04 Thread Godbach

Hi Willy,

On 2013/12/4 15:02, Willy Tarreau wrote:

Hi Godbach,

On Wed, Dec 04, 2013 at 11:50:36AM +0800, Godbach wrote:

Hi Willy,

Please check the attachments for the two patches.

Patch 0001: DOC: stick-table can be declared in such sections as
frontend, listen and backend


OK in principle but we could make it even simpler :

   stick-table type {ip | integer | string [len <length>] | binary [len <length>]}
               size <size> [expire <expire>] [nopurge] [peers <peersect>]
               [store <data_type>]*
  -  Configure the stickiness table for the current backend
  +  Configure the stickiness table for the current section
     May be used in sections :   defaults | frontend | listen | backend
                                    no    |    yes   |   yes  |   yes




Yes, just section should be OK. :-)
Please check the attachment for patch 0001.



OK in principle, but you'll see that there is another call in
stream_int_chk_snd_conn() :

if (si_conn_send(si->conn) < 0) {
	/* Write error on the file descriptor */
	fd_stop_both(si->conn->t.sock.fd);
	__conn_data_stop_both(si->conn);
	si->flags |= SI_FL_ERR;
	si->conn->flags |= CO_FL_ERROR;
	goto out_wakeup;
}

So it should also be simplified this way :

si_conn_send(si->conn);
if (si->conn->flags & CO_FL_ERROR) {
	/* Write error on the file descriptor */
	fd_stop_both(si->conn->t.sock.fd);
	__conn_data_stop_both(si->conn);
	si->flags |= SI_FL_ERR;
	goto out_wakeup;
}



Thank you for pointing this out. Yes, both stream_int_chk_snd_conn() and
si_conn_send_cb() call si_conn_send(), and the line

si->conn->flags |= CO_FL_ERROR;

in stream_int_chk_snd_conn() as you listed above can be removed too.


I accept the change provided that you change the comments on top
of si_conn_send() to clearly state that the caller should check
conn->flags for errors.


It seems that the return value of si_conn_send() will not be used
anymore. Furthermore, si_conn_send_cb() does not need to check
conn->flags for errors, since it will return directly after calling
si_conn_send().

Please check the attachment for patch 0002.

--
Best Regards,
Godbach
From f36cee9f55c248bbbc4a47472904de32166ef5af Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 4 Dec 2013 16:08:22 +0800
Subject: [PATCH 1/2] DOC: stick-table: modify the description

The stickiness table can be declared in such sections as frontend, listen
and backend, but the original manual only mentioned backend. Modify the
description simply as below:
current backend -> current section

Signed-off-by: Godbach nylzhao...@gmail.com
---
 doc/configuration.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 803d42e..81b4295 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6133,7 +6133,7 @@ stick store-request <pattern> [table <table>] [{if | unless} <condition>]
 stick-table type {ip | integer | string [len <length>] | binary [len <length>]}
             size <size> [expire <expire>] [nopurge] [peers <peersect>]
             [store <data_type>]*
-  Configure the stickiness table for the current backend
+  Configure the stickiness table for the current section
   May be used in sections :   defaults | frontend | listen | backend
                                  no    |    yes   |   yes  |   yes
 
-- 
1.7.7

From 7ad2aeb849bc95b4734a941b010b1582fe6e9e35 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 4 Dec 2013 16:24:06 +0800
Subject: [PATCH 2/2] OPTIM: stream_interface: return directly if the
 connection flag CO_FL_ERROR has been set

The connection flag CO_FL_ERROR will be tested in both si_conn_recv_cb()
and si_conn_send_cb(). If CO_FL_ERROR has been set, the out_error branch
will be executed. But the only job of the out_error branch is to set
CO_FL_ERROR on the connection flags, so it is better to return directly
than to jump to the out_error branch under such conditions. As a result,
the out_error branch becomes needless and can be removed.

In addition, the caller of si_conn_send_loop() should check conn->flags for
errors instead of the returned value.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/stream_interface.c |   26 +-
 1 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/src/stream_interface.c b/src/stream_interface.c
index 9f0c26a..8bec27f 100644
--- a/src/stream_interface.c
+++ b/src/stream_interface.c
@@ -610,7 +610,8 @@ static int si_conn_wake_cb(struct connection *conn)
  * This function is called to send buffer data to a stream socket.
  * It returns -1 in case of unrecoverable error, otherwise zero.
  * It calls the transport layer's snd_buf function. It relies on the
- * caller to commit polling changes.
+ * caller to commit polling changes. The caller should check

Re: [PATCH] two patches about manual and stream_interface

2013-12-04 Thread Godbach

Hi Willy,

On 2013/12/4 17:03, Willy Tarreau wrote:


It seems that the return value of si_conn_send() will not be used
anymore. Furthermore, si_conn_send_cb() does not need to check
conn->flags for errors since it will return directly after calling
si_conn_send().


Sorry for not being clear about this in the first mail, but I mean we
should have si_conn_send() return void so that the caller only checks
the flags, hence the removal of the comment above.

Willy




Got it. :-)

Please check the attachment for the latest patch.

--
Best Regards,
Godbach
From 7376d96cde7899b2ecfcff11a2c49f058157d8be Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 4 Dec 2013 17:24:06 +0800
Subject: [PATCH] OPTIM: stream_interface: return directly if the connection
 flag CO_FL_ERROR has been set

The connection flag CO_FL_ERROR is tested in both si_conn_recv_cb() and
si_conn_send_cb(). If CO_FL_ERROR has been set, the out_error branch is
executed. But the only job of the out_error branch is to set CO_FL_ERROR on
the connection flags. So it is better to return directly than to jump to the
out_error branch in such conditions. As a result, the out_error branch becomes
needless and can be removed.

In addition, the return type of si_conn_send_loop() is also changed to void.
The caller should check conn->flags for errors just like
stream_int_chk_snd_conn()
does as below:

static void stream_int_chk_snd_conn(struct stream_interface *si)
{
...
	conn_refresh_polling_flags(si->conn);

-	if (si_conn_send(si->conn) < 0) {
+	si_conn_send(si->conn);
+	if (si->conn->flags & CO_FL_ERROR) {
...
}

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/stream_interface.c |   38 +-
 1 files changed, 13 insertions(+), 25 deletions(-)

diff --git a/src/stream_interface.c b/src/stream_interface.c
index 9f0c26a..6fdaff3 100644
--- a/src/stream_interface.c
+++ b/src/stream_interface.c
@@ -608,11 +608,11 @@ static int si_conn_wake_cb(struct connection *conn)
 
 /*
  * This function is called to send buffer data to a stream socket.
- * It returns -1 in case of unrecoverable error, otherwise zero.
  * It calls the transport layer's snd_buf function. It relies on the
- * caller to commit polling changes.
+ * caller to commit polling changes. The caller should check conn->flags
+ * for errors.
  */
-static int si_conn_send(struct connection *conn)
+static void si_conn_send(struct connection *conn)
 {
	struct stream_interface *si = conn->owner;
	struct channel *chn = si->ob;
@@ -629,14 +629,14 @@ static int si_conn_send(struct connection *conn)
}
 
		if (conn->flags & CO_FL_ERROR)
-   return -1;
+   return;
}
 
/* At this point, the pipe is empty, but we may still have data pending
 * in the normal buffer.
 */
	if (!chn->buf->o)
-   return 0;
+   return;
 
/* when we're here, we already know that there is no spliced
 * data left, and that there are sendable buffered data.
@@ -675,10 +675,7 @@ static int si_conn_send(struct connection *conn)
}
}
 
-	if (conn->flags & CO_FL_ERROR)
-   return -1;
-
-   return 0;
+   return;
 }
 
 
@@ -821,12 +818,12 @@ static void stream_int_chk_snd_conn(struct 
stream_interface *si)
 
	conn_refresh_polling_flags(si->conn);

-	if (si_conn_send(si->conn) < 0) {
+	si_conn_send(si->conn);
+	if (si->conn->flags & CO_FL_ERROR) {
		/* Write error on the file descriptor */
		fd_stop_both(si->conn->t.sock.fd);
		__conn_data_stop_both(si->conn);
		si->flags |= SI_FL_ERR;
-		si->conn->flags |= CO_FL_ERROR;
goto out_wakeup;
}
}
@@ -915,7 +912,7 @@ static void si_conn_recv_cb(struct connection *conn)
 * which rejects it before reading it all.
 */
	if (conn->flags & CO_FL_ERROR)
-   goto out_error;
+   return;
 
/* stop here if we reached the end of data */
if (conn_data_read0_pending(conn))
@@ -968,7 +965,7 @@ static void si_conn_recv_cb(struct connection *conn)
goto out_shutdown_r;
 
	if (conn->flags & CO_FL_ERROR)
-   goto out_error;
+   return;
 
	if (conn->flags & CO_FL_WAIT_ROOM) {
/* the pipe is full or we have read enough data that it
@@ -1098,7 +1095,7 @@ static void si_conn_recv_cb(struct connection *conn)
} /* while !flags */
 
	if (conn->flags & CO_FL_ERROR)
-   goto out_error;
+   return;
 
if (conn_data_read0_pending(conn))
/* connection closed */
@@ -1114,10 +,6 @@ static void si_conn_recv_cb(struct

[PATCH] two patches about code style and manual

2013-12-11 Thread Godbach
Hi Willy,

Please check the attachments for two patches:

Patch0001: MINOR: code style: use tabs to indent codes
Patch0002: DOC: checkcache: block responses with cacheable cookies


There is also another problem I want to discuss about the stick table. The
following code is from the latest snapshot:

cfgparse.c/check_config_validity()

for (curproxy = proxy; curproxy; curproxy = curproxy->next)
	stktable_init(&curproxy->table);

stktable_init() is called to initialize the stick table in
check_config_validity(), but the return value is ignored. The code of
stktable_init() is as below:

int stktable_init(struct stktable *t)
{
	if (t->size) {
		memset(&t->keys, 0, sizeof(t->keys));
		memset(&t->exps, 0, sizeof(t->exps));

		t->pool = create_pool("sticktables", sizeof(struct stksess) +
		                      t->data_size + t->key_size, MEM_F_SHARED);

		...

		return t->pool != NULL;
	}
	return 1;
}

This function will return false if create_pool() returns NULL. Since the
return value of this function is ignored, HAProxy will crash if t->pool is
NULL and stksess_new() is called to allocate a new stick session.
Of course the probability is small. However, in my opinion, the return
value should be checked as other code in check_config_validity() does;
one possible fix is as follows:
for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
	if (!stktable_init(&curproxy->table))
		cfgerr++;
}
If stktable_init() fails, HAProxy will exit eventually.

I can submit a new patch for this issue if you agree with me about this fix.

-- 
Best Regards,
Godbach
From 35942e286901a50dad02a3cf355f0d943a137eec Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 11 Dec 2013 19:48:57 +0800
Subject: [PATCH 1/2] MINOR: code style: use tabs to indent codes

The original code is indented with spaces and not aligned with the preceding
line. The convention in HAProxy is to indent with tabs.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/cfgparse.c |   12 ++--
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 73a08a4..b05bfe9 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2932,12 +2932,12 @@ int cfg_parse_listen(const char *file, int linenum, 
char **args, int kwm)
}
 		else if (strcmp(args[myidx], "peers") == 0) {
 			myidx++;
-                       if (!*(args[myidx])) {
-                               Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
-                                     file, linenum, args[myidx-1]);
-                               err_code |= ERR_ALERT | ERR_FATAL;
-                               goto out;
-                       }
+			if (!*(args[myidx])) {
+				Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
+				      file, linenum, args[myidx-1]);
+				err_code |= ERR_ALERT | ERR_FATAL;
+				goto out;
+			}
 			curproxy->table.peers.name = strdup(args[myidx++]);
 		}
 		else if (strcmp(args[myidx], "expire") == 0) {
-- 
1.7.7

From 57d74098da0ab7f5037a220d989b08ca94099db7 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 11 Dec 2013 20:01:07 +0800
Subject: [PATCH 2/2] DOC: checkcache: block responses with cacheable cookies

requests -> responses

Signed-off-by: Godbach nylzhao...@gmail.com
---
 doc/configuration.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1d13c1e..dda20e3 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3518,7 +3518,7 @@ no option allbackups
 
 option checkcache
 no option checkcache
-  Analyze all server responses and block requests with cacheable cookies
+  Analyze all server responses and block responses with cacheable cookies
   May be used in sections :   defaults | frontend | listen | backend
  yes   | no   |   yes  |   yes
   Arguments : none
-- 
1.7.7



Re: [PATCH] two patches about code style and manual

2013-12-11 Thread Godbach

Hi Willy,

On 2013/12/11 20:43, Willy Tarreau wrote:

Hi Godbach,



There is also another problem I want to discuss about the stick table. The
following code is from the latest snapshot:

(...)

If stktable_init() fails, HAProxy will exit eventually.

I can submit a new patch for this issue if you agree with me about this fix.


yes, feel free to do so!

thanks,
Willy




Please check the attachment for the patch:

Patch0001: BUG/MINOR: check_config_validity: check the returned value of 
stktable_init()


--
Best Regards,
Godbach
From 2665f44aa70e313c7e20a0e00cfff6aa5cfb296d Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 11 Dec 2013 21:11:41 +0800
Subject: [PATCH] BUG/MINOR: check_config_validity: check the returned value
 of stktable_init()

The function stktable_init() will return 0 if create_pool() returns NULL. Since
the return value of this function is ignored, HAProxy will crash if the pool
of the stick table is NULL and stksess_new() is called to allocate a new stick
session. It is a better choice to check the return value and make HAProxy exit
with an alert message if any error is caught.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/cfgparse.c |8 ++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index b05bfe9..f9e5191 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -7535,8 +7535,12 @@ out_uri_auth_compat:
 * be done earlier because the data size may be discovered while parsing
 * other proxies.
 */
-	for (curproxy = proxy; curproxy; curproxy = curproxy->next)
-		stktable_init(&curproxy->table);
+	for (curproxy = proxy; curproxy; curproxy = curproxy->next) {
+		if (!stktable_init(&curproxy->table)) {
+			Alert("Proxy '%s': failed to initialize stick-table.\n", curproxy->id);
+			cfgerr++;
+		}
+	}
 
/*
 * Recount currently required checks.
-- 
1.7.7



Re: [ANNOUNCE] haproxy-1.5-dev21

2013-12-16 Thread Godbach

On 2013/12/17 7:49, Willy Tarreau wrote:

Hi all,

I've released 1.5-dev21, which fixes all the issues reported on dev20:

 - MINOR: stats: don't use a monospace font to report numbers
 - MINOR: session: remove debugging code
 - BUG/MAJOR: patterns: fix double free caused by loading strings from files
 - MEDIUM: http: make option http_proxy automatically rewrite the URL
 - BUG/MEDIUM: http: cook_cnt() forgets to set its output type
 - BUG/MINOR: stats: correctly report throttle rate of low weight servers
 - BUG/MEDIUM: checks: servers must not start in slowstart mode
 - BUG/MINOR: acl: parser must also stop at comma on ACL-only keywords
 - MEDIUM: stream-int: implement a very simplistic idle connection manager
 - DOC: update the ROADMAP file

I'd suggest using this one instead of dev20 considering the numerous
annoying bugs that were present there.

Usual links below :

  Site index   : http://haproxy.1wt.eu/
  Sources  : http://haproxy.1wt.eu/download/1.5/src/devel/
  Changelog: http://haproxy.1wt.eu/download/1.5/src/CHANGELOG
  Cyril's HTML doc : 
http://cbonte.github.com/haproxy-dconv/configuration-1.5.html

Thanks to all the early testers who reported issues, configs etc.

Willy





Great news!

1.5-dev21 may be the fastest one of all releases. :-)

--
Best Regards,
Godbach



Re: RES: Bytes In Per Second

2014-03-02 Thread Godbach

On 2014/3/3 9:07, Fred Pedrisa wrote:

Hi !

It doesn't have this information (packets per second based on source
address/port),
and even if it does, how can this be used in a backend to perform such
limiting?

Sincerely,

Fred

-Mensagem original-
De: Baptiste [mailto:bed...@gmail.com]
Enviada em: sábado, 1 de março de 2014 12:24
Para: Fred Pedrisa
Cc: HAProxy
Assunto: Re: Bytes In Per Second

Hi Fred,

HAProxy already report this on its stats page: http://demo.1wt.eu/

Baptiste

On Sat, Mar 1, 2014 at 4:44 AM, Fred Pedrisa fredhp...@hotmail.com wrote:

Hello, Guys !



I would like to know if there is a possibility to add a counter for the
number of packets/requests/streams per second, like we do with bytes
per second in/out, to haproxy!



If so it would be very cool !



Fred






Hi Fred,

As far as I know, HAProxy provides a rate limit on sessions for frontends. The
directive from the latest documentation is as below:


rate-limit sessions <rate>
Set a limit on the number of new sessions accepted per second on a frontend

It seems that rate limit for sessions has not been implemented in backend.
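For reference, a minimal frontend using this directive might look as follows (the names and the rate value are illustrative):

```
frontend fe
    bind :80
    # accept at most 100 new sessions per second on this frontend;
    # excess connections simply wait in the kernel's accept queue
    rate-limit sessions 100
    default_backend be
```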

--
Best Regards,
Godbach



Re: Changing server weights without stopping the service

2014-03-24 Thread Godbach

On 2014/3/24 14:53, Yasaman Amannejad wrote:

Hi,

Is there a way to change the server weights in runtime, without
restarting Haproxy? I want to assign weights to servers and change them
dynamically but I do not want to stop Haproxy.
Any help is appreciated.

Thanks


Hi,

You can change the server's weight dynamically with the Unix socket command:
set weight <backend>/<server> <weight>[%]

And I suggest that you read Section 9.2 from the configuration manual of 
HAProxy to get more information about your issue.


Some load balancing algorithms, such as static-rr and uri, are static, which
means that changing a server's weight on the fly will have no effect.
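To sketch the full workflow: the admin socket must first be enabled in the global section, after which the command can be sent with a tool such as socat. The socket path and the backend/server names below are illustrative:

```
# haproxy.cfg
global
    stats socket /var/run/haproxy.sock level admin

# then, at runtime:
#   echo "set weight be/srv1 10" | socat stdio /var/run/haproxy.sock
```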



--
Best Regards,
Godbach



[PATCH] two minor patches about typo and code style

2014-04-21 Thread Godbach
Hi Willy,

Please check the attachments for two minor patches about typo and code
style fixes.

-- 
Best Regards,
Godbach
From 3013eada648ee4134c27e0a38d75e4cc4e5f97d9 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Mon, 21 Apr 2014 21:42:41 +0800
Subject: [PATCH 1/2] DOC: fix typo

a intermediate CA => an intermediate CA

Signed-off-by: Godbach nylzhao...@gmail.com
---
 doc/configuration.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 5ffcb4f..6d6f45e 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -7925,7 +7925,7 @@ crt cert
 
   Some CAs (such as Godaddy) offer a drop down list of server types that do not
   include HAProxy when obtaining a certificate. If this happens be sure to
-  choose a webserver that the CA believes requires a intermediate CA (for
+  choose a webserver that the CA believes requires an intermediate CA (for
   Godaddy, selection "Apache Tomcat" will get the correct bundle, but many
   others, e.g. nginx, result in a wrong bundle that will not work for some
   clients).
-- 
1.7.7

From 0084fbcbae0657b1f2f2974778f3694b5c45ba39 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Mon, 21 Apr 2014 21:52:23 +0800
Subject: [PATCH 2/2] CLEANUP: code style: use tabs to indent codes instead of
 spaces

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/cfgparse.c |8 
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 124ad24..d4893a1 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2998,10 +2998,10 @@ int cfg_parse_listen(const char *file, int linenum, 
char **args, int kwm)
 		else if (strcmp(args[myidx], "peers") == 0) {
 			myidx++;
 			if (!*(args[myidx])) {
-                               Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
-                                     file, linenum, args[myidx-1]);
-                               err_code |= ERR_ALERT | ERR_FATAL;
-                               goto out;
+				Alert("parsing [%s:%d] : stick-table: missing argument after '%s'.\n",
+				      file, linenum, args[myidx-1]);
+				err_code |= ERR_ALERT | ERR_FATAL;
+				goto out;
 			}
 			curproxy->table.peers.name = strdup(args[myidx++]);
 		}
-- 
1.7.7



Re: [ANNOUNCE] haproxy-1.5-dev23

2014-04-23 Thread Godbach
 misplaced http-request rules
 - MEDIUM: config: report misplaced use-server rules
 - DOC: update roadmap with what was done.
---





Hi Willy,

Great job. Thank you.

--
Best Regards,
Godbach



How to rewrite both URI and Host

2014-05-13 Thread Godbach
Hi Willy,

If I want to rewrite both Host and URI in request, and they are also in
the match conditions, can HAProxy implement this requirement?

The ACLs are as below:
acl match_host_uri  hdr_reg(Host) www.example.com
acl match_host_uri  /example.html

But reqrep would have to be used twice to rewrite both Host and URI:
reqrep <expression for rewriting Host>  if  match_host_uri
reqrep <expression for rewriting URI>   if  match_host_uri

Since Host has been rewritten by the first reqrep directive, as far as I
know, the second reqrep will not be executed as the match will fail.

-- 
Best Regards,
Godbach



Re: How to rewrite both URI and Host

2014-05-13 Thread Godbach

On 2014/5/13 15:05, Willy Tarreau wrote:

Hi Godbach,

On Tue, May 13, 2014 at 03:01:58PM +0800, Godbach wrote:

Hi Willy,

If I want to rewrite both Host and URI in request, and they are also in
the match conditions, can HAProxy implement this requirement?

The ACLs are as below:
acl match_host_uri  hdr_reg(Host) www.example.com
acl match_host_uri  /example.html

But reqrep would have to be used twice to rewrite both Host and URI:
reqrep <expression for rewriting Host>  if  match_host_uri
reqrep <expression for rewriting URI>   if  match_host_uri

Since Host has been rewritten by the first reqrep directive, as far as I
know, the second reqrep will not be executed as the match will fail.


Indeed. The trick consists in capturing it using "capture request header",
then applying your rule based on capture.req.hdr(), since this one will not
change during the request.

Regards,
Willy



Hi Willy,

Thank you.

Yes, more capture and fetch configurations for requests and responses
have been introduced in recent releases. I can capture any header and
URI I need, then do the match based on the captured content. The
requirement can be fulfilled with these new features. Great job.
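A configuration sketch of the trick Willy describes might look like the following. All names and especially the reqrep/reqirep regexes are illustrative outlines, not a tested rewrite:

```
frontend fe
    bind :80
    # capture Host once; capture.req.hdr(0) keeps returning the original
    # value even after reqirep has rewritten the header itself
    capture request header Host len 64
    acl orig_host capture.req.hdr(0) -m str www.example.com
    acl orig_uri  capture.req.uri    -m str /example.html
    reqirep ^Host:\ .*                    Host:\ www.newsite.com  if orig_host orig_uri
    reqrep  ^([^\ ]*\ )/example.html(.*)  \1/new.html\2           if orig_host orig_uri
    default_backend be
```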


--
Best Regards,
Godbach



Re: How to rewrite both URI and Host

2014-05-13 Thread Godbach

On 2014/5/13 16:05, Godbach wrote:

On 2014/5/13 15:05, Willy Tarreau wrote:

Hi Godbach,

On Tue, May 13, 2014 at 03:01:58PM +0800, Godbach wrote:

Hi Willy,

If I want to rewrite both Host and URI in request, and they are also in
the match conditions, can HAProxy implement this requirement?

The ACLs are as below:
acl match_host_uri  hdr_reg(Host) www.example.com
acl match_host_uri  /example.html

But reqrep would have to be used twice to rewrite both Host and URI:
reqrep <expression for rewriting Host>  if  match_host_uri
reqrep <expression for rewriting URI>   if  match_host_uri

Since Host has been rewritten by the first reqrep directive, as far as I
know, the second reqrep will not be executed as the match will fail.


Indeed. The trick consists in capturing it using "capture request header",
then applying your rule based on capture.req.hdr(), since this one will not
change during the request.

Regards,
Willy



Hi Willy,

Thank you.

Yes, more capture and fetch configurations for requests and responses
have been introduced in recent releases. I can capture any header and
URI I need, then do the match based on the captured content. The
requirement can be fulfilled with these new features. Great job.



Hi Willy,

There is still an issue: the ACL will be evaluated more than once. Is it
possible or reasonable to rewrite more than once while evaluating the
ACL only once?


A similar scenario is how iptables does all matches once and then performs
multiple actions in a user-defined chain:


iptables -N NEW_CHAIN
iptables -A INPUT -p tcp -s x.x.x.x -j NEW_CHAIN
iptables -A NEW_CHAIN -j LOG
iptables -A NEW_CHAIN -j ACCEPT

otherwise we need two rules which both have to do the match, as below:
iptables -A INPUT -p tcp -s x.x.x.x -j LOG
iptables -A INPUT -p tcp -s x.x.x.x -j ACCEPT

--
Best Regards,
Godbach



[PATCH] BUG/MINOR: server: move the directive #endif to the end of file

2014-07-28 Thread Godbach
Hi Willy,

Please check the attached for a minor bug fix.

The commit log is as below:

[PATCH] BUG/MINOR: server: move the directive #endif to the end of
 file

If a source file includes proto/server.h twice or more, redefinition
errors will be triggered for such inline functions as
server_throttle_rate(), server_is_draining(), srv_adm_set_maint() and so
on. Just move the #endif directive to the end of the file to solve this issue.

-- 
Best Regards,
Godbach
From 4639eb27ae2dbe4cd047dcf30c51f9190195b1b5 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Mon, 28 Jul 2014 17:31:57 +0800
Subject: [PATCH] BUG/MINOR: server: move the directive #endif to the end of
 file

If a source file includes proto/server.h twice or more, redefinition errors will
be triggered for such inline functions as server_throttle_rate(),
server_is_draining(), srv_adm_set_maint() and so on. Just move the #endif
directive to the end of the file to solve this issue.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 include/proto/server.h |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/proto/server.h b/include/proto/server.h
index 9893266..71c8b13 100644
--- a/include/proto/server.h
+++ b/include/proto/server.h
@@ -54,8 +54,6 @@ static void inline srv_set_sess_last(struct server *s)
	s->counters.last_sess = now.tv_sec;
 }
 
-#endif /* _PROTO_SERVER_H */
-
 /*
  * Registers the server keyword list kwl as a list of valid keywords for next
  * parsing sessions.
@@ -200,6 +198,8 @@ static inline void srv_adm_set_ready(struct server *s)
srv_clr_admin_flag(s, SRV_ADMF_FMAINT);
 }
 
+#endif /* _PROTO_SERVER_H */
+
 /*
  * Local variables:
  *  c-indent-level: 8
-- 
1.7.7



Re: Question about srv_conn and beconn

2014-10-19 Thread Godbach



On 2014/10/17 16:49, 王继红 wrote:

Hi haproxy team:
 I see the document about these:
beconn is the total number of concurrent connections handled by the
 backend when the session was logged.
srv_conn is the total number of concurrent connections still active on
 the server when the session was logged.

I feel confused about these two log values; what is the difference between
them? And are these two values taken at the beginning of processing, or at
the end when the log is written?


Regards



Hi,

The backend may include more than one server. Furthermore, it is also
possible that there is an active connection on the backend that has not
been assigned to any server yet.


--
Best Regards,
Godbach



Re: Question about srv_conn and beconn

2014-10-23 Thread Godbach

Hi,

On 2014/10/23 13:24, 王继红 wrote:


Hi Godbach
Another question I want to ask: I see that when the backend web server's
response time is more than 50s, HAProxy returns HTTP 504 to the client.
If the web server responds to the request but only part of the response
content reaches the HAProxy server (due to async processing inside the
web server), will HAProxy forward the partial content to the client so
that the user does not get a 504 error?

Regards



HAProxy will only emit HTTP 504 to the client if it cannot get the whole
header of the HTTP response.



So if HAProxy can receive the whole HTTP response header, it can
successfully forward all the data it has received to the client, possibly
including a partial body, and HTTP 504 will not be emitted anymore.
Meanwhile, HAProxy will reset the connection if it cannot receive more
data of the HTTP body for a period exceeding the server timeout.


For your situation, you can enlarge the server timeout value with the
following directive to avoid the 504:

timeout server <timeout>
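As an illustrative fragment (the 120s value is an assumption; pick one matching the backend's worst-case response time):

```
defaults
    mode http
    # allow the server up to 120s of inactivity before giving up on it
    timeout server 120s
```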


--
Best Regards,
Godbach



[BUG report] haproxy will crash with some rewrite operation

2014-10-30 Thread Godbach

Hi Willy,

I have tested both haproxy-1.5 and the latest snapshot. HAProxy will crash
with the following configuration:


global
...
tune.bufsize 1024
tune.maxrewrite 0
frontend xxx
...
backend  yyy
   ...
   cookie cookie insert maxidle 300s

If a client sends a request whose object size is more than tune.bufsize
(1024 bytes), HAProxy will crash.


After doing some debugging, the crash was caused by
http_header_add_tail2() -> buffer_insert_line2() while inserting a cookie
at the end of the response header. Part of the code of
buffer_insert_line2() is as below:


int buffer_insert_line2(struct buffer *b, char *pos, const char *str, int len)
{
	int delta;

	delta = len + 2;

	if (bi_end(b) + delta >= b->data + b->size)
		return 0;  /* no space left */

	/* first, protect the end of the buffer */
	memmove(pos + delta, pos, bi_end(b) - pos);
	...
}

If HAProxy receives 1024 bytes, which equals the full buffer size since
maxrewrite is 0, the buffer is full and bi_end(b) will be wrapped
to the start of the buffer, which points to b->data. As a result, though
there is no space left in the buffer, the check condition

if (bi_end(b) + delta >= b->data + b->size)
will not be satisfied, then memmove() is called, and pos + delta will
exceed b->data + b->size, and HAProxy crashes.


In my opinion, this bug affects all the rewrite actions which add a new
header to the end of the current HTTP headers.


Taking buffer_replace2() as a reference, another check should be
added into buffer_insert_line2() as below:


diff --git a/src/buffer.c b/src/buffer.c
index 91bee63..9037dd3 100644
--- a/src/buffer.c
+++ b/src/buffer.c
@@ -88,6 +88,11 @@ int buffer_insert_line2(struct buffer *b, char *pos, 
const char *str, int len)

	if (bi_end(b) + delta >= b->data + b->size)
		return 0;  /* no space left */

+	if (buffer_not_empty(b) &&
+	    bi_end(b) + delta > bo_ptr(b) &&
+	    bo_ptr(b) >= bi_end(b))
+		return 0;  /* no space left before wrapping data */
+
/* first, protect the end of the buffer */
memmove(pos + delta, pos, bi_end(b) - pos);

This patch works well for both the 1.5 release and 1.6-dev. Please
help review it, and I will send a formal patch if it is OK.


--
Best Regards,
Godbach



Re: [BUG report] haproxy will crash with some rewrite operation

2014-10-30 Thread Godbach

Hi Willy,

On 2014/10/30 21:48, Willy Tarreau wrote:

Hi Godbach,

[ and first, sorry for not having yet responded to your
   other mail about caching ]



Take it easy. :)



Yes, it looks fine to me, feel free to send a patch and tag it as BUG/MAJOR.
In the future we'll probably need to rework all these functions, they have
lived a lot since version 1.1...

Thanks!
Willy



Please check the attached for the patch.

--
Best Regards,
Godbach
From 553f81769046f39a1047b70ac5505176bec50dec Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Fri, 31 Oct 2014 13:16:37 +0800
Subject: [PATCH] BUG/MAJOR: buffer: check the space left is enough or not
 when input data in a buffer is wrapped

HAProxy will crash with the following configuration:

global
...
tune.bufsize 1024
tune.maxrewrite 0
frontend xxx
...
backend  yyy
   ...
   cookie cookie insert maxidle 300s

If a client sends a request whose object size is more than tune.bufsize (1024
bytes), HAProxy will crash.

After doing some debugging, the crash was caused by http_header_add_tail2() ->
buffer_insert_line2() while inserting a cookie at the end of the response header.
Part of the code of buffer_insert_line2() is as below:

int buffer_insert_line2(struct buffer *b, char *pos, const char *str, int len)
{
	int delta;

	delta = len + 2;

	if (bi_end(b) + delta >= b->data + b->size)
		return 0;  /* no space left */

	/* first, protect the end of the buffer */
	memmove(pos + delta, pos, bi_end(b) - pos);
	...
}

Since tune.maxrewrite is 0, HAProxy can receive 1024 bytes at once, which equals
the full buffer size. Under such a condition, the buffer is full and bi_end(b)
will be wrapped to the start of the buffer, which points to b->data. As a result,
though there is no space left in the buffer, the check condition
if (bi_end(b) + delta >= b->data + b->size)
will not be satisfied, then memmove() is called, and (pos + delta) will exceed
the end of the buffer (b->data + b->size), and HAProxy crashes.

Taking buffer_replace2() as a reference, the same check for the case when input
data in the buffer is wrapped should also be added into buffer_insert_line2().

This fix must be backported to 1.5.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/buffer.c |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/src/buffer.c b/src/buffer.c
index 91bee63..9037dd3 100644
--- a/src/buffer.c
+++ b/src/buffer.c
@@ -88,6 +88,11 @@ int buffer_insert_line2(struct buffer *b, char *pos, const 
char *str, int len)
	if (bi_end(b) + delta >= b->data + b->size)
		return 0;  /* no space left */
 
+	if (buffer_not_empty(b) &&
+	    bi_end(b) + delta > bo_ptr(b) &&
+	    bo_ptr(b) >= bi_end(b))
+		return 0;  /* no space left before wrapping data */
+
/* first, protect the end of the buffer */
memmove(pos + delta, pos, bi_end(b) - pos);
 
-- 
1.7.7



Re: How to ignore backend busy server

2014-11-10 Thread Godbach

Hi,

On 2014/11/10 16:15, Lukas Tribus wrote:

Hi,



Hi haproxy team,
I met a problem with haproxy: I found that sometimes some of my backend
servers get very busy, with many more connections in srv_conn, maybe
because the backend database responds slowly to the backend web
server.
So my question is: is there any way to configure haproxy to send
fewer new requests to a server when its srv_conn value is much
higher than the other servers', so that the busy server can give a better
response to the client?


This is exactly what the maxconn keyword on the server line is for:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-maxconn


server abc 1.2.3.4:80 maxconn 50




Regards,

Lukas





Besides configuring maxconn for the backend servers as Lukas suggested, you
can also use the leastconn method instead, in which the server with the
lowest number of connections receives the new connection:


http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#method
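Combining both suggestions, a hedged configuration sketch (server names, addresses and limits are illustrative):

```
backend be
    balance leastconn
    # each server handles at most 50 concurrent connections;
    # excess requests wait in the backend queue instead
    server web1 10.0.0.1:80 maxconn 50
    server web2 10.0.0.2:80 maxconn 50
```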


--
Best Regards,
Godbach



Re: haproxy 1.5.8 segfault

2014-11-20 Thread Godbach

Hi Willy,

On 2014/11/19 2:31, Willy Tarreau wrote:

On Tue, Nov 18, 2014 at 08:23:57PM +0200, Denys Fedoryshchenko wrote:

Thanks! Seems working for me :) Will test more tomorrow.


There's no reason it would not, otherwise we'd have a different bug.
When I'm unsure I ask for testing before committing, but here there
was no doubt once the issue was understood :-)

Willy




A so quick fix. Cool! :-)

In fact, I have also experienced this kind of issue before. Of course it was
not caused by the original HAProxy code but by my own code added to HAProxy.
However, the real reason was the same as in this issue: the memory allocated
from a pool was not reset properly.


So I have an idea for this kind of issue: how about having HAProxy reset the
memory allocated from a pool directly in pool_alloc2()?


If we worry that performance may be decreased by calling
memset() in each pool_alloc2(), a new option which allows the user to enable
or disable memset() in pool_alloc2() can be added to HAProxy.


Since it is not an urgent issue, just take your time. :-)
--
Best Regards,
Godbach



Re: haproxy 1.5.8 segfault

2014-11-21 Thread Godbach

Hi Willy,

On 2014/11/21 14:35, Willy Tarreau wrote:

Hi Godbach!

On Fri, Nov 21, 2014 at 11:02:52AM +0800, Godbach wrote:

Hi Willy,

On 2014/11/19 2:31, Willy Tarreau wrote:

On Tue, Nov 18, 2014 at 08:23:57PM +0200, Denys Fedoryshchenko wrote:

Thanks! Seems working for me :) Will test more tomorrow.


There's no reason it would not, otherwise we'd have a different bug.
When I'm unsure I ask for testing before committing, but here there
was no doubt once the issue was understood :-)

Willy




A so quick fix. Cool! :-)

In fact, I have also experienced this kind of issue before. Of course it was
not caused by the original HAProxy code but by my own code added to HAProxy.
However, the real reason was the same as in this issue: the memory allocated
from a pool was not reset properly.


And that's intended. pool_alloc2() works exactly like malloc(): the caller
is responsible for initializing it if needed.


So I have an idea for this kind of issue: how about having HAProxy reset the
memory allocated from a pool directly in pool_alloc2()?

If we worry that performance may be decreased by calling
memset() in each pool_alloc2(), a new option which allows the user to enable
or disable memset() in pool_alloc2() can be added to HAProxy.


We only do that (partially) when using memory poisoning/debugging (to
reproduce issues more easily). Yes, performance suffers a lot when doing so,
especially when using large buffers, and people using large buffers are
the ones who care the most about performance.

I'd agree to slightly change pool_alloc2() to *always* memset the area
when memory poisoning is in place, so that developers can more easily
detect if they missed something. But I don't want to use memset all the
time as a brown paper bag so that developers don't have to be careful.
We're missing some doc of course, and people can get trapped from time
to time (and I do as well), so this is what we must improve, and not
have the code hide the bugs instead.

What is really needed is that each field of session/transaction is
documented : who uses it, when, and who's responsible for initializing
it. Here with the capture, I missed the fact that the captures are part
of a transaction, thus were initialized by the HTTP code, so when using
tcp without http, there's an issue... A simple comment like
/* initialized by http_init_txn() */
in front of the capture field in the struct would have been enough to
avoid this. This is what must be improved. We also need to write
developer guidelines to remind people to update the doc/comments when
modifying the API. I know it's not easy, I miss a number of them as
well.

Cheers,
Willy



Thank you. Got it.

I do agree with you. The developer should be clear about when and where 
to initialize the memory properly.


--
Best Regards,
Godbach



[PATCH] DOC: fix typo

2014-12-09 Thread Godbach

Hi Willy,

There are some typos in latest snapshot in 1.6-dev0. Attached is the patch.

The following files are modified:

include/types/proto_http.h: hwen -> when
include/types/server.h: SRV_ST_DOWN -> SRV_ST_STOPPED
src/backend.c: prefer-current-server -> prefer-last-server

Please see the patch for detailed information. In addition, this patch 
should be backported to 1.5 if necessary.


--
Best Regards,
Godbach
From 73f3617908e1da775f10a2116394a1654d150d0f Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 10 Dec 2014 10:21:30 +0800
Subject: [PATCH] DOC: fix typo

include/types/proto_http.h: hwen -> when
include/types/server.h: SRV_ST_DOWN -> SRV_ST_STOPPED
src/backend.c: prefer-current-server -> prefer-last-server

Signed-off-by: Godbach nylzhao...@gmail.com
---
 include/types/proto_http.h | 2 +-
 include/types/server.h | 2 +-
 src/backend.c  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/types/proto_http.h b/include/types/proto_http.h
index 95bf59d..2654b78 100644
--- a/include/types/proto_http.h
+++ b/include/types/proto_http.h
@@ -71,7 +71,7 @@
 
 /* indicate how we *want* the connection to behave, regardless of what is in
  * the headers. We have 4 possible values right now :
- * - WANT_KAL : try to maintain keep-alive (default hwen nothing configured)
+ * - WANT_KAL : try to maintain keep-alive (default when nothing configured)
  * - WANT_TUN : will be a tunnel (CONNECT).
  * - WANT_SCL : enforce close on the server side
  * - WANT_CLO : enforce close on both sides
diff --git a/include/types/server.h b/include/types/server.h
index 4847def..1cabb83 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -43,7 +43,7 @@
 #include <types/checks.h>
 
 
-/* server states. Only SRV_ST_DOWN indicates a down server. */
+/* server states. Only SRV_ST_STOPPED indicates a down server. */
 enum srv_state {
 	SRV_ST_STOPPED = 0,	/* the server is down. Please keep set to zero. */
 	SRV_ST_STARTING,	/* the server is warming up (up but throttled) */
diff --git a/src/backend.c b/src/backend.c
index e222160..70ddaa7 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -553,7 +553,7 @@ int assign_server(struct session *s)
 	      (__objt_server(conn->target)->nbpend + 1) < s->be->max_ka_queue))) &&
 	    srv_is_usable(__objt_server(conn->target))) {
 		/* This session was relying on a server in a previous request
-		 * and the proxy has option prefer-current-server set, so
+		 * and the proxy has option prefer-last-server set, so
 		 * let's try to reuse the same server.
 		 */
 		srv = __objt_server(conn->target);
-- 
1.7.11.7



[BUG Report] Maybe a bug for epoll

2014-12-16 Thread Godbach

Hi Willy,

I have noticed that epoll_wait() is called with the following parameters 
both in the latest snapshot and in 1.5:


src/ev_epoll.c
REGPRM2 static void _do_poll(struct poller *p, int exp)
{
...
status = epoll_wait(epoll_fd, epoll_events, global.tune.maxpollevents, wait_time);

...
}

That is to say, no more than global.tune.maxpollevents events will be 
returned by a single call to epoll_wait(). But epoll_events is allocated 
according to the value of absmaxevents in _do_init() (src/ev_epoll.c) 
as below:


REGPRM1 static int _do_init(struct poller *p)
{
...
/* See comments at the top of the file about this formula. */
absmaxevents = MAX(global.tune.maxpollevents, global.maxsock);
epoll_events = (struct epoll_event*)
calloc(1, sizeof(struct epoll_event) * absmaxevents);
...
}

Since global.maxsock may be greater than global.tune.maxpollevents, 
absmaxevents must be greater than global.tune.maxpollevents under such a 
condition. In my opinion, absmaxevents should be passed to epoll_wait() 
instead of global.tune.maxpollevents. The fix is as below:


diff --git a/src/ev_epoll.c b/src/ev_epoll.c
index 755c6fa..be33565 100644
--- a/src/ev_epoll.c
+++ b/src/ev_epoll.c
@@ -137,7 +137,7 @@ REGPRM2 static void _do_poll(struct poller *p, int exp)
 
 	/* now let's wait for polled events */
 
 	gettimeofday(&before_poll, NULL);
-	status = epoll_wait(epoll_fd, epoll_events, global.tune.maxpollevents, wait_time);
+	status = epoll_wait(epoll_fd, epoll_events, absmaxevents, wait_time);
 
 	tv_update_date(wait_time, status);
 	measure_idle();

If the fix is right, I will send a patch later.
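For context, the maxevents argument only caps how many events a single epoll_wait() call returns; any remaining ready fds are simply reported by the next call. A small Linux-only sketch (illustrative names, not HAProxy code; error handling omitted) demonstrates this:

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Count how many epoll_wait() calls are needed to collect 'nready'
 * ready fds when each call may return at most 'maxevents' events.
 * nready and maxevents must both be <= 16 in this sketch. */
static int calls_to_drain(int nready, int maxevents)
{
	struct epoll_event evs[16];
	int pipes[16][2];
	int ep = epoll_create1(0);
	int calls = 0, seen = 0;

	for (int i = 0; i < nready; i++) {
		pipe(pipes[i]);
		ssize_t w = write(pipes[i][1], "x", 1);	/* make the read end ready */
		(void)w;
		struct epoll_event ev = { .events = EPOLLIN, .data.fd = pipes[i][0] };
		epoll_ctl(ep, EPOLL_CTL_ADD, pipes[i][0], &ev);
	}

	while (seen < nready) {
		int n = epoll_wait(ep, evs, maxevents, 0);
		if (n <= 0)
			break;
		for (int i = 0; i < n; i++) {
			char c;
			ssize_t r = read(evs[i].data.fd, &c, 1);	/* drain so it is not re-reported */
			(void)r;
		}
		seen += n;
		calls++;
	}

	for (int i = 0; i < nready; i++) {
		close(pipes[i][0]);
		close(pipes[i][1]);
	}
	close(ep);
	return calls;
}
```

With maxevents = 1 and four ready fds, four calls are needed; with a large maxevents, one call suffices. So the events array only ever needs maxevents entries, which is the point debated in this thread.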

--
Best Regards,
Godbach



Re: [BUG Report] Maybe a bug for epoll

2014-12-17 Thread Godbach


On 2014/12/17 15:15, Willy Tarreau wrote:



In my opinion, absmaxevents should be passed to epoll_wait()
instead of global.tune.maxpollevents.


No, that would defeat the principle of maxpollevents. In fact, I found
what happened : commit f2e8ee2b introduced an optimization in the old
speculative epoll code, which implemented its own event cache. It was
needed to store that many events (it was bound to maxsock/4 btw). Now
the event cache lives on its own and we don't need this anymore. And
since events are allocated on the kernel side, we only need to allocate
the events we want to return. Thus I'd completely remove absmaxevents,
the comment on top of it suggesting to read the removed comment for
the formula, and would replace it everywhere with maxpollevents and
that will be cleaner (and use slightly less memory for large amounts
of sockets).


Got it. I was also confused by the comment on absmaxevents before. And now I 
know how to fix this issue.




Do you want to work on such a patch ?



Yes, I can send the patch later as you suggested:
1. remove the definition of absmaxevents
2. replace absmaxevents with global.tune.maxpollevents everywhere
3. remove the comment of absmaxevents

In the end, it seems that marking the fix as BUG/MINOR or OPTIM/MINOR would 
be OK. Since some memory will be saved by the fix, I'd prefer 
OPTIM/MINOR if you don't mind.


--
Best Regards,
Godbach



Re: [BUG Report] Maybe a bug for epoll

2014-12-17 Thread Godbach

Hi Willy,

On 2014/12/17 16:03, Godbach wrote:


On 2014/12/17 15:15, Willy Tarreau wrote:



In my opinion, absmaxevents should be passed to epoll_wait()
instead of global.tune.maxpollevents.


No, that would defeat the principle of maxpollevents. In fact, I found
what happened : commit f2e8ee2b introduced an optimization in the old
speculative epoll code, which implemented its own event cache. It was
needed to store that many events (it was bound to maxsock/4 btw). Now
the event cache lives on its own and we don't need this anymore. And
since events are allocated on the kernel side, we only need to allocate
the events we want to return. Thus I'd completely remove absmaxevents,
the comment on top of it suggesting to read the removed comment for
the formula, and would replace it everywhere with maxpollevents and
that will be cleaner (and use slightly less memory for large amounts
of sockets).


Got it. I was also confused by the comment on absmaxevents before. And now I
know how to fix this issue.



Do you want to work on such a patch ?



Yes, I can send the patch later as you suggested:
1. remove the definition of absmaxevents
2. replace absmaxevents with global.tune.maxpollevents everywhere
3. remove the comment of absmaxevents

In the end, it seems that marking the fix as BUG/MINOR or OPTIM/MINOR would be
OK. Since some memory will be saved by the fix, I'd prefer
OPTIM/MINOR if you don't mind.



Attached are two patches:

0001 is the patch for epoll to remove absmaxevents

0002 is just a typo fix. Since I think it is irrelevant to the first 
patch, I didn't merge them into a single one.


Both the patches should be also backported to 1.5.
--
Best Regards,
Godbach
From 798d50b1922e9680ea69cbc39d54f20665c782da Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 17 Dec 2014 16:14:26 +0800
Subject: [PATCH 1/2] OPTIM/MINOR: epoll: epoll_events should be allocated
 according to global.tune.maxpollevents

Willy: commit f2e8ee2b introduced an optimization in the old speculative epoll
code, which implemented its own event cache. It was needed to store that many
events (it was bound to maxsock/4 btw). Now the event cache lives on its own
and we don't need this anymore. And since events are allocated on the kernel
side, we only need to allocate the events we want to return.

As a result, absmaxevents is not used anymore. Just remove its definition and
its comment, and replace it with global.tune.maxpollevents everywhere. This also
reduces memory usage for large amounts of sockets.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/ev_epoll.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/ev_epoll.c b/src/ev_epoll.c
index 755c6fa..c52d075 100644
--- a/src/ev_epoll.c
+++ b/src/ev_epoll.c
@@ -29,7 +29,6 @@
 #include <proto/task.h>
 
 
-static int absmaxevents = 0;	// absolute maximum amounts of polled events
 
 /* private data */
 static struct epoll_event *epoll_events;
@@ -195,10 +194,8 @@ REGPRM1 static int _do_init(struct poller *p)
if (epoll_fd  0)
goto fail_fd;
 
-	/* See comments at the top of the file about this formula. */
-	absmaxevents = MAX(global.tune.maxpollevents, global.maxsock);
 	epoll_events = (struct epoll_event*)
-		calloc(1, sizeof(struct epoll_event) * absmaxevents);
+		calloc(1, sizeof(struct epoll_event) * global.tune.maxpollevents);
 
if (epoll_events == NULL)
goto fail_ee;
-- 
1.7.11.7

From 7bda83893cc0bede6e5eb0865e805003aed792f4 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 17 Dec 2014 16:32:05 +0800
Subject: [PATCH 2/2] DOC: fix typo: 401 Unauthorized => 407 Unauthorized

401 Unauthorized => 407 Unauthorized

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/proto_http.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index f19a69b..0a94785 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -124,7 +124,7 @@ const char *HTTP_407_fmt =
 	"Content-Type: text/html\r\n"
 	"Proxy-Authenticate: Basic realm=\"%s\"\r\n"
 	"\r\n"
-	"<html><body><h1>401 Unauthorized</h1>\nYou need a valid user and password to access this content.\n</body></html>\n";
+	"<html><body><h1>407 Unauthorized</h1>\nYou need a valid user and password to access this content.\n</body></html>\n";
 
 
 const int http_err_codes[HTTP_ERR_SIZE] = {
-- 
1.7.11.7



Re: [BUG Report] Maybe a bug for epoll

2014-12-17 Thread Godbach

Hi Willy,

On 2014/12/17 16:40, Willy Tarreau wrote:



In my opinion it's a cleanup since we're removing a leftover from an
ancient piece of code :-)

Willy



OK, I agree with you and please modify the patch directly.

--
Best Regards,
Godbach



Re: [BUG Report] Maybe a bug for epoll

2014-12-17 Thread Godbach

Hi Willy,

On 2014/12/18 0:07, Willy Tarreau wrote:

Hi Godbach,

On Wed, Dec 17, 2014 at 04:41:33PM +0800, Godbach wrote:

 From 798d50b1922e9680ea69cbc39d54f20665c782da Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 17 Dec 2014 16:14:26 +0800
Subject: [PATCH 1/2] OPTIM/MINOR: epoll: epoll_events should be allocated
  according to global.tune.maxpollevents


(...)

OK so I've applied this one and marked it CLEANUP.


 From 7bda83893cc0bede6e5eb0865e805003aed792f4 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 17 Dec 2014 16:32:05 +0800
Subject: [PATCH 2/2] DOC: fix typo: 401 Unauthorized => 407 Unauthorized


And this one as well but retagged it BUG/MINOR, as it's not a DOC issue,
it's clearly in the code and is reported that way, so it definitely
impacts users who see this message if we return it.

Thanks!
Willy



Got it. Thank you.

--
Best Regards,
Godbach



[BUG Report] BUG/MINOR: parse: refer curproxy instead of proxy

2014-12-18 Thread Godbach

Hi Willy,

Attached is a patch fixing a bug where the code should refer to curproxy but 
refers to proxy by mistake.


In fact, I found this issue while trying to change the proxy list to a 
doubly linked list for more convenient and unified operations. By the way, 
is it necessary for HAProxy to make this change in the latest snapshot in 
your opinion? I can send a patch once I get this work done if you will 
accept the change.


--
Best Regards,
Godbach
From c0f268d2638c0053fb470cdeda5cd70ed5cc0b96 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Thu, 18 Dec 2014 15:44:58 +0800
Subject: [PATCH] BUG/MINOR: parse: refer curproxy instead of proxy

During the parsing stage, curproxy always represents the proxy being operated
on, so referring to proxy must be a mistake.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/cfgparse.c   | 2 +-
 src/proto_http.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index c8b1546..3e345e4 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -6779,7 +6779,7 @@ out_uri_auth_compat:
curproxy-conf.args.file = curproxy-conf.uif_file;
curproxy-conf.args.line = curproxy-conf.uif_line;

parse_logformat_string(curproxy-conf.uniqueid_format_string, curproxy, 
curproxy-format_unique_id, LOG_OPT_HTTP,
-  (proxy-cap  PR_CAP_FE) ? 
SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
+  (curproxy-cap  PR_CAP_FE) ? 
SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
   curproxy-conf.uif_file, 
curproxy-conf.uif_line);
curproxy-conf.args.file = NULL;
curproxy-conf.args.line = 0;
diff --git a/src/proto_http.c b/src/proto_http.c
index 0a94785..3fffc5b 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -9691,7 +9691,7 @@ struct redirect_rule *http_parse_redirect_rule(const char *file, int linenum, st
 	 * if prefix == "/", we don't want to add anything, otherwise it
 	 * makes it hard for the user to configure a self-redirection.
 	 */
-	proxy->conf.args.ctx = ARGC_RDR;
+	curproxy->conf.args.ctx = ARGC_RDR;
 	if (!(type == REDIRECT_TYPE_PREFIX && destination[0] == '/' && destination[1] == '\0')) {
 		parse_logformat_string(destination, curproxy, &rule->rdr_fmt, LOG_OPT_HTTP,
 				       (curproxy->cap & PR_CAP_FE) ? SMP_VAL_FE_HRQ_HDR : SMP_VAL_BE_HRQ_HDR,
1.7.11.7



[PATCH] BUG/MINOR: parse: check the validity of size string in a more strict way

2015-01-28 Thread Godbach

Hi Willy,

Attached is a patch for parse_size_err().

If a stick table is defined as below:
stick-table type ip size 50ka expire 300s

HAProxy will stop parsing the size after passing through "50k" and return 
the value directly. But such a size string should not be valid in my 
opinion. So a further check is needed, which is what this patch does.


With this patch, we will get the following error message when starting 
HAProxy with the above stick-table configuration:

[ALERT] 027/175100 (22532) : parsing [h.cfg:53] : stick-table: 
unexpected character 'a' in argument of 'size'.


If you think it is necessary to apply this patch, both 1.6 and 1.5 need it.

--
Best Regards,
Godbach
From 174943fb20fb3b45f186a6536b53151bdf00fee7 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Wed, 28 Jan 2015 17:36:16 +0800
Subject: [PATCH] BUG/MINOR: parse: check the validity of size string in a
 more strict way

If a stick table is defined as below:
stick-table type ip size 50ka expire 300s

HAProxy will stop parsing the size after passing through "50k" and return the
value directly. But such a size string should not be valid. The patch checks
the next character and reports an error if any.

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/standard.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/standard.c b/src/standard.c
index 93c44bb..f28825f 100644
--- a/src/standard.c
+++ b/src/standard.c
@@ -1656,6 +1656,9 @@ const char *parse_size_err(const char *text, unsigned *ret) {
 		return text;
 	}
 
+	if (*text != '\0' && *++text != '\0')
+		return text;
+
*ret = value;
return NULL;
 }
-- 
1.7.11.7



Re: [PATCH] BUG/MINOR: parse: check the validity of size string in a more strict way

2015-01-28 Thread Godbach

Hi Willy,

On 2015/1/28 18:28, Willy Tarreau wrote:

Hi Godbach,

On Wed, Jan 28, 2015 at 05:57:13PM +0800, Godbach wrote:

Hi Willy,

Attached is a patch for parse_size_err().

If a stick table is defined as below:
stick-table type ip size 50ka expire 300s

HAProxy will stop parsing size after passing through 50k and return
the value directly. But such format string of size should not be valid
in my opinion. So a further check is needed, that is this patch does.


Yes, good point. We have the same issue in many statements in the config
parser, as well as with extra arguments that are silently ignored and
that tend to confuse people. That's inline with what we want to change
in the 1.6 parser.

I've applied it to 1.6, do you want it into 1.5 as well ?

Willy



Thanks.

Since it's not an important issue and will not bring much side effect, 
applying it to 1.6 only is OK.


--
Best Regards,
Godbach



[PATCH] remove code for cleaning p->block_rules

2015-06-08 Thread Godbach

Hi Willy,

Since all block rules have been moved to the beginning of the http-request 
rules in check_config_validity() by the following code:


		/* move any "block" rules at the beginning of the http-request rules */
		if (!LIST_ISEMPTY(&curproxy->block_rules)) {
			/* insert block_rules into http_req_rules at the beginning */
			curproxy->block_rules.p->n    = curproxy->http_req_rules.n;
			curproxy->http_req_rules.n->p = curproxy->block_rules.p;
			curproxy->block_rules.n->p    = &curproxy->http_req_rules;
			curproxy->http_req_rules.n    = curproxy->block_rules.n;
			LIST_INIT(&curproxy->block_rules);
		}
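The four pointer assignments in check_config_validity() are a splice of one circular doubly linked list into the head of another. A self-contained sketch of the same operation, using a minimal list type in the spirit of HAProxy's struct list (the mini_* names are illustrative):

```c
/* Minimal circular doubly linked list; the head is a sentinel node. */
struct mini_list {
	struct mini_list *n, *p;
};

static void mini_init(struct mini_list *l)
{
	l->n = l->p = l;
}

static void mini_add_tail(struct mini_list *head, struct mini_list *el)
{
	el->p = head->p;
	el->n = head;
	head->p->n = el;
	head->p = el;
}

/* Move all elements of <src> to the beginning of <dst>, like the
 * block_rules/http_req_rules splice, then reset <src> to empty. */
static void mini_splice_at_head(struct mini_list *dst, struct mini_list *src)
{
	if (src->n == src)
		return;			/* nothing to move */
	src->p->n = dst->n;		/* last of src precedes old first of dst */
	dst->n->p = src->p;
	src->n->p = dst;		/* first of src now follows dst's head */
	dst->n    = src->n;
	mini_init(src);
}
```

After the splice, src is a valid empty list again, which is why the original list head can safely be reinitialized with LIST_INIT() and why nothing is left to free from it in deinit().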

As a result, there is no need to clean the block rules in deinit() as below:

	list_for_each_entry_safe(cond, condb, &p->block_rules, list) {
		LIST_DEL(&cond->list);
		prune_acl_cond(cond);
		free(cond);
	}

In addition, there is another issue: the type of the members listed in 
block_rules has become struct http_req_rule, not struct acl_cond as in 
earlier versions, so there may also be a potential risk in cleaning 
block_rules in deinit().


So in my opinion, just removing the code as below will be OK:

diff --git a/src/haproxy.c b/src/haproxy.c
index 053..eac6f44 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1020,12 +1020,6 @@ void deinit(void)
 		free(cwl);
 	}
 
-	list_for_each_entry_safe(cond, condb, &p->block_rules, list) {
-		LIST_DEL(&cond->list);
-		prune_acl_cond(cond);
-		free(cond);
-	}
-
 	list_for_each_entry_safe(cond, condb, &p->mon_fail_cond, list) {
 		LIST_DEL(&cond->list);
 		prune_acl_cond(cond);


I can send a patch later if there is no problem.

BTW, I only checked this issue in 1.5 branch.

--
Best Regards,
Godbach



Re: [PATCH] remove code for cleaning p->block_rules

2015-06-09 Thread Godbach

Hi Willy,

On 2015/6/9 15:51, Willy Tarreau wrote:


Yes, please feel free to do so, we'll backport it into 1.5 as well.


BTW, I only checked this issue in 1.5 branch.


It must affect 1.6 as well in my opinion.



Attached is the patch.

The patch is generated on the 1.5 branch because I failed to update my 1.6 
branch (maybe due to my local network). As you said, both the 1.6 and 1.5 
branches should take this patch.


--
Best Regards,
Godbach
From f8fa9c908b5b817e1a5804584bc8433ab91f4767 Mon Sep 17 00:00:00 2001
From: Godbach nylzhao...@gmail.com
Date: Tue, 9 Jun 2015 19:41:52 +0800
Subject: [PATCH] CLEANUP: deinit: remove codes for cleaning p->block_rules

Since all rules listed in p->block_rules have been moved to the beginning of
the http-request rules in check_config_validity(), there is no need to clean
p->block_rules in deinit().

Signed-off-by: Godbach nylzhao...@gmail.com
---
 src/haproxy.c |6 --
 1 files changed, 0 insertions(+), 6 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 053..eac6f44 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1020,12 +1020,6 @@ void deinit(void)
 		free(cwl);
 	}
 
-	list_for_each_entry_safe(cond, condb, &p->block_rules, list) {
-		LIST_DEL(&cond->list);
-		prune_acl_cond(cond);
-		free(cond);
-	}
-
 	list_for_each_entry_safe(cond, condb, &p->mon_fail_cond, list) {
 		LIST_DEL(&cond->list);
 		prune_acl_cond(cond);
-- 
1.7.7



  1   2   >