A simple question - is the data received from the back-end server
stored on haproxy until "all" of it has been received, before that
data is then sent on to the client?  I didn't think so, but now I'm
not sure.

I'm now seeing a few "cD" termination states in the logs:
Jan 14 14:51:15 lbtest1 haproxy[18994]: BBBB:42553
[14/Jan/2010:14:48:15.329] LDAPFarm LDAPFarm/dp2 0/0/180008 196 cD
0/10/10/3/0 0/0

My config
defaults
        mode tcp
        log global
        option tcplog
        option dontlognull
#       option dontlog-normal
        option redispatch
        option tcpka
        retries 3
        maxconn 4096

listen LDAPFarm VIP:389
        mode tcp
        option tcplog
        option httpchk
        balance roundrobin
        timeout connect 5s
        timeout client 180s
        timeout server 180s
        timeout check 900ms
        server dp1 AAAA:389 check addr 127.0.0.1 port 9101 inter 5s
fastinter 1s downinter 3s fall 2 rise 2
        server dp2 BBBB:389 check addr 127.0.0.1 port 9102 inter 5s
fastinter 1s downinter 3s fall 2 rise 2
        server dp3 CCCC:389 check addr 127.0.0.1 port 9103 inter 5s
fastinter 1s downinter 3s fall 2 rise 2
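(As an aside, the checks above go to an external agent on
127.0.0.1:910x via "option httpchk" rather than to LDAP itself.  I
gather some builds also have a built-in LDAPv3 bind check; a rough
sketch of what that variant might look like, assuming your version
supports "option ldap-check":

listen LDAPFarm VIP:389
        mode tcp
        option tcplog
        # hypothetical: check the LDAP port directly with an LDAPv3 bind,
        # instead of an HTTP check against a sidecar agent
        option ldap-check
        balance roundrobin
        server dp1 AAAA:389 check inter 5s fall 2 rise 2

I haven't tried that myself, so treat it as a sketch, not a tested
config.)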


These are LDAP queries.  If I read this correctly, there were 196
bytes sent from haproxy to the client.  The docs define bytes_read as
"the total number of bytes transmitted from the server to the client
when the log is emitted."  Is that necessarily both how much the
back-end server sent to haproxy and how much haproxy sent on to the
client?  Could those two numbers differ?

Is there a log entry (or option) that records how many bytes were sent
from the client to haproxy and forwarded from haproxy to the server?
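(I gather that newer releases can add that with a custom log-format -
%B being bytes from server to client and %U bytes from client to
server.  A rough sketch, assuming a version with log-format support:

listen LDAPFarm VIP:389
        mode tcp
        # hypothetical: the usual tcplog fields plus %U, the
        # client-to-server byte count
        log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %U %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"

If someone can confirm whether that's available, I'd appreciate it.)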

The total connection time between client and haproxy was 180,000ms,
which I guess comes from the "timeout client 180s" setting.  The docs
for "cD" say:
==
    cD   The client did not send nor acknowledge any data for as long as the
          "timeout client" delay. This is often caused by network failures on
          the client side, or the client simply leaving the net uncleanly.
==

I suppose the client could have made one small query at the beginning,
gotten its response (all of 196 bytes, I guess), and then stayed
connected planning to make another query on the same connection - but
didn't make that next query within the 180s timeframe.  What I wanted
to make certain of was that the 180s client timeout wasn't triggered by
some delay in the (re)transmission of data between server->haproxy or
haproxy->client for whatever reason - barring actual network problems,
etc., of course.

Finally, if anyone is using haproxy for LDAP/S, and would be willing
to share any best-practices, configurations, etc for such a setup, I'd
be most appreciative.  Ours has been working for several days now, and
seems to be happy.  But it's still only a tiny handful of various
LDAP/S clients using the haproxy VIP.  We're slowly ramping up, adding
a few more here and there at a time.  But sooner or later, we'll
change the DNS for our LDAP IP to the VIP of haproxy, and then it'll
be "everyone" using it all at once.

Thank you,
PH
