Re: [1.6.1] Utilizing http-reuse
Great! Initial tests show that only one connection was established and closed
once. The behavior is as follows:

- telnet and a manual GET: connection to haproxy and a connection to the
  server (port 2004).
- Run ab: new connection to haproxy; the same connection to the server
  (port 2004) is reused.
- 'ab' finishes, which results in the client->haproxy connection getting
  closed. This results in an immediate drop of the haproxy->server
  connection (port 2004) too.
- Do another GET in the telnet: a new connection is established from
  HAProxy -> server (port 2005).
- Kill telnet: the connection to haproxy is killed. HAProxy kills the
  port 2005 connection.

This behavior works for us, thanks a lot for the quick fix. The above
behavior validates the second point you mentioned in your earlier mail:

"I'll see). If the client closes an idle connection while there are still
other connections left, the server connection is not moved back to the
server's idle list and is closed. It's not dramatic, but is a waste of
resources since we could maintain that connection open. I'll see if we can
do something simple regarding this case."

Thanks,
Krishna

On Tue, Dec 8, 2015 at 12:32 PM, Willy Tarreau wrote:
> On Tue, Dec 08, 2015 at 07:44:45AM +0530, Krishna Kumar (Engineering) wrote:
> > Great, will be glad to test and report on the finding. Thanks!
>
> Sorry I forgot to post the patch after committing it. Here it comes.
> Regarding the second point, in the end it's not a bug, it's simply
> because we don't have connection pools yet, and I forgot that keeping
> an orphan backend connection was only possible with connection pools :-)
>
> Willy
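The test sequence above can be sketched as a small script. The address, object
path, and headers are taken from earlier messages in this thread; using nc as a
stand-in for an interactive telnet session is an assumption, so treat this as an
illustration rather than the exact commands used:

```shell
# Sketch of the two-client reuse test (addresses/paths are assumptions
# taken from this thread; adjust to your setup).

# Build the manual keep-alive GET used in the telnet session. HTTP/1.1
# defaults to keep-alive, so no Connection header is required.
make_get() {
    printf 'GET /128 HTTP/1.1\r\n'
    printf 'Host: www.example.com\r\n'
    printf 'User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)\r\n'
    printf '\r\n'
}

# Against a live proxy (not run here):
#   { make_get; sleep 60; } | nc 10.34.73.174 80 &   # idle keep-alive client
#   ab -k -n 10 -c 1 http://10.34.73.174/128         # should reuse the
#                                                    # backend connection
```

Watching tcpdump while the second client runs shows whether a new backend port
is opened or the existing one is reused.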
Re: [1.6.1] Utilizing http-reuse
On Tue, Dec 08, 2015 at 07:44:45AM +0530, Krishna Kumar (Engineering) wrote:
> Great, will be glad to test and report on the finding. Thanks!

Sorry I forgot to post the patch after committing it. Here it comes.
Regarding the second point, in the end it's not a bug, it's simply
because we don't have connection pools yet, and I forgot that keeping
an orphan backend connection was only possible with connection pools :-)

Willy

commit 58b318c613b6209d6fe3c9ad38cd11f6814bf7ab
Author: Willy Tarreau
Date:   Mon Dec 7 17:04:59 2015 +0100

    BUG/MEDIUM: http: fix http-reuse when frontend and backend differ

    Krishna Kumar reported that the following configuration doesn't permit
    HTTP reuse between two clients :

        frontend private-frontend
            mode http
            bind :8001
            default_backend private-backend

        backend private-backend
            mode http
            http-reuse always
            server bck 127.0.0.1:

    The reason for this is that in http_end_txn_clean_session() we check
    the stream's backend's http-reuse option before deciding whether the
    backend connection should be moved back to the server's pool or not.
    But since we're doing this after the call to http_reset_txn(), the
    backend is reset to match the frontend, which doesn't have the option.
    However it will work fine in a setup involving a "listen" section.

    We just need to keep a pointer to the current backend before calling
    http_reset_txn(). The code does that and replaces the few remaining
    references to s->be inside the same function so that if any part of
    code were to be moved later, this trap doesn't happen again.

    This fix must be backported to 1.6.
(cherry picked from commit 858b103631db41c608660210eb37a9e09ee9f086)

diff --git a/src/proto_http.c b/src/proto_http.c
index 1d00071..5fea6c4 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -5120,6 +5120,7 @@ void http_end_txn_clean_session(struct stream *s)
 {
 	int prev_status = s->txn->status;
 	struct proxy *fe = strm_fe(s);
+	struct proxy *be = s->be;
 	struct connection *srv_conn;
 	struct server *srv;
 	unsigned int prev_flags = s->txn->flags;
@@ -5142,7 +5143,7 @@ void http_end_txn_clean_session(struct stream *s)
 	}
 
 	if (s->flags & SF_BE_ASSIGNED) {
-		s->be->beconn--;
+		be->beconn--;
 		if (unlikely(s->srv_conn))
 			sess_change_server(s, NULL);
 	}
@@ -5163,11 +5164,11 @@ void http_end_txn_clean_session(struct stream *s)
 			fe->fe_counters.p.http.comp_rsp++;
 		}
 		if ((s->flags & SF_BE_ASSIGNED) &&
-		    (s->be->mode == PR_MODE_HTTP)) {
-			s->be->be_counters.p.http.rsp[n]++;
-			s->be->be_counters.p.http.cum_req++;
+		    (be->mode == PR_MODE_HTTP)) {
+			be->be_counters.p.http.rsp[n]++;
+			be->be_counters.p.http.cum_req++;
 			if (s->comp_algo && (s->flags & SF_COMP_READY))
-				s->be->be_counters.p.http.comp_rsp++;
+				be->be_counters.p.http.comp_rsp++;
 		}
 	}
@@ -5207,7 +5208,7 @@ void http_end_txn_clean_session(struct stream *s)
 		s->flags &= ~SF_CURR_SESS;
 		objt_server(s->target)->cur_sess--;
 	}
-	if (may_dequeue_tasks(objt_server(s->target), s->be))
+	if (may_dequeue_tasks(objt_server(s->target), be))
 		process_srv_queue(objt_server(s->target));
 }
@@ -5286,7 +5287,7 @@ void http_end_txn_clean_session(struct stream *s)
 	if (!srv)
 		si_idle_conn(&s->si[1], NULL);
 	else if ((srv_conn->flags & CO_FL_PRIVATE) ||
-		 ((s->be->options & PR_O_REUSE_MASK) == PR_O_REUSE_NEVR))
+		 ((be->options & PR_O_REUSE_MASK) == PR_O_REUSE_NEVR))
 		si_idle_conn(&s->si[1], &srv->priv_conns);
 	else if (prev_flags & TX_NOT_FIRST)
 		/* note: we check the request, not the connection, but
Re: [1.6.1] Utilizing http-reuse
Great, will be glad to test and report on the finding. Thanks!

Regards,
- Krishna

On Mon, Dec 7, 2015 at 9:07 PM, Willy Tarreau wrote:
> Hi Krishna,
>
> I found a bug explaining your observations and noticed a second one I
> have not yet troubleshot.
>
> The bug causing your issue is that before moving the idle connection back
> to the server's pool, we check the backend's http-reuse mode. But we're
> doing this after calling http_reset_txn() which prepares the transaction
> to accept a new request and sets the backend to the frontend. So we're in
> fact checking the frontend's option. That's why it doesn't work in your
> case. That's a stupid bug that I managed to fix.
>
> While testing this I discovered another issue (probably less easy to fix,
> I'll see). If the client closes an idle connection while there are still
> other connections left, the server connection is not moved back to the
> server's idle list and is closed. It's not dramatic, but is a waste of
> resources since we could maintain that connection open. I'll see if we
> can do something simple regarding this case.
>
> I'll send a patch soon for the first case.
>
> Thanks,
> Willy
Re: [1.6.1] Utilizing http-reuse
Hi Krishna,

I found a bug explaining your observations and noticed a second one I have
not yet troubleshot.

The bug causing your issue is that before moving the idle connection back
to the server's pool, we check the backend's http-reuse mode. But we're
doing this after calling http_reset_txn() which prepares the transaction
to accept a new request and sets the backend to the frontend. So we're in
fact checking the frontend's option. That's why it doesn't work in your
case. That's a stupid bug that I managed to fix.

While testing this I discovered another issue (probably less easy to fix,
I'll see). If the client closes an idle connection while there are still
other connections left, the server connection is not moved back to the
server's idle list and is closed. It's not dramatic, but is a waste of
resources since we could maintain that connection open. I'll see if we can
do something simple regarding this case.

I'll send a patch soon for the first case.

Thanks,
Willy
Re: [1.6.1] Utilizing http-reuse
Thanks a lot, Willy.

Regards,
- Krishna

On Mon, Dec 7, 2015 at 11:59 AM, Willy Tarreau wrote:
> Hi Krishna,
>
> On Mon, Dec 07, 2015 at 08:31:19AM +0530, Krishna Kumar (Engineering) wrote:
> > Hi Willy, Baptiste,
> >
> > Apologies for the delay in reproducing this issue and in responding.
> >
> > I am using HAProxy 1.6.2 and am still finding that connection reuse is
> > not happening in my setup. Attaching the configuration file, command
> > line arguments, and the tcpdump (80 packets in all), in case it helps.
> > HAProxy is configured with a single backend. The same client makes two
> > requests, one a telnet with a GET request for a 128 byte file, and the
> > second an 'ab -k' command to retrieve the same file.
> (...)
> > Can you please take a look and suggest what needs to be done to get
> > reuse working?
>
> Thank you for this detailed report. I agree that your config shows that
> it should work and the pcap shows that it doesn't. I've taken a quick
> look at the code and have no idea why it does this. I'm going to
> investigate and will keep you informed.
>
> Thanks!
> Willy
Re: [1.6.1] Utilizing http-reuse
Hi Krishna,

On Mon, Dec 07, 2015 at 08:31:19AM +0530, Krishna Kumar (Engineering) wrote:
> Hi Willy, Baptiste,
>
> Apologies for the delay in reproducing this issue and in responding.
>
> I am using HAProxy 1.6.2 and am still finding that connection reuse is not
> happening in my setup. Attaching the configuration file, command line
> arguments, and the tcpdump (80 packets in all), in case it helps. HAProxy
> is configured with a single backend. The same client makes two requests,
> one a telnet with a GET request for a 128 byte file, and the second an
> 'ab -k' command to retrieve the same file.
(...)
> Can you please take a look and suggest what needs to be done to get reuse
> working?

Thank you for this detailed report. I agree that your config shows that
it should work and the pcap shows that it doesn't. I've taken a quick
look at the code and have no idea why it does this. I'm going to
investigate and will keep you informed.

Thanks!
Willy
Re: [1.6.1] Utilizing http-reuse
Hi Willy, Baptiste,

Apologies for the delay in reproducing this issue and in responding.

I am using HAProxy 1.6.2 and am still finding that connection reuse is not
happening in my setup. Attaching the configuration file, command line
arguments, and the tcpdump (80 packets in all), in case it helps. HAProxy
is configured with a single backend. The same client makes two requests,
one a telnet with a GET request for a 128 byte file, and the second an
'ab -k' command to retrieve the same file.

172.20.97.36: Client
10.34.73.174: HAProxy
10.32.121.94: Server

Telnet from client with GET:

    GET /128 HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

Followed immediately with:

    ab -k -n 10 -c 1 http://10.34.73.174/1K

Packets #1-7:   Telnet to haproxy, and a GET request made
Packets #8-9:   HAProxy opens connection to single backend
Packets #10-15: Response from server, relays data back to the client;
                the Client->HAProxy and HAProxy->server connections are
                kept open.
Packets #16-19 (5 seconds later): Same client, run 'ab -k'
Packets #20-72: New connection to same backend, and data transfer.
Packet #73:     'ab' closes connection to HAProxy
Packet #74:     HAProxy closes connection to 'ab'.
Packet #75:     HAProxy closes connection to backend.
Packets #77-81: Telnet closes connection

Configuration file:
--
global
    daemon
    maxconn 1
defaults
    mode http
    option http-keep-alive
    balance leastconn
    option splice-response
    option clitcpka
    option srvtcpka
    option tcp-smart-accept
    option tcp-smart-connect
    option contstats
    timeout http-keep-alive 1800s
    timeout http-request 1800s
    timeout connect 60s
    timeout client 1800s
    timeout server 1800s
frontend private-frontend
    mode http
    maxconn 1
    bind 10.34.73.174:80
    default_backend private-backend
backend private-backend
    http-reuse always
    server 10.32.121.94 10.32.121.94:80 maxconn 1

From the above, it is seen that HAProxy opens a second connection to the
server on the same GET request from the client.
Can you please take a look and suggest what needs to be done to get reuse
working?

$ haproxy -vv
HA-Proxy version 1.6.2 2015/11/03
Copyright 2000-2015 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O3 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Thanks,
- Krishna Kumar

On Thu, Nov 12, 2015 at 12:50 PM, Willy Tarreau wrote:
> Hi Krishna,
>
> On Wed, Nov 11, 2015 at 03:22:54PM +0530, Krishna Kumar (Engineering) wrote:
> > I just tested with 128K byte file (run 4 wgets in parallel each
> > retrieving 128K). Here, I see 4 connections being opened, and lots of
> > data packets in the middle, finally followed by 4 connections being
> > closed. I can test with "ab -k" option to simulate a browser, I guess,
> > will try that.
>
> In my tests, ab -k definitely works.
>
> > > Is wget advertising HTTP/1.1 in the request ? If not that could
> >
> > Yes, tcpdump shows HTTP/1.1 in the GET request.
>
> OK.
> > > - proxy protocol used to the server
> > > - SNI sent to the server
> > > - source IP binding to client's IP address
> > > - source IP binding to anything dynamic (eg: header)
> > > - 401/407 received on a server connection.
> >
> > I am not doing any of these specifically. It's a very simple setup
> > where the client@ip1 connects to haproxy@ip2 and requests a 128 byte
> > file, which is handled by server@ip3.
>
> OK. I don't see any reason for this not to work then.
>
> > I was doing this in telnet:
> >
> > GET /128 HTTP/1.1
> > Host: www.example.com
> > User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
>
> Looks fine as well. Very strange. I have no idea what would not work at
> the moment, I suspect this is something stupid and obvious but am failing
> to spot it :-/
>
> Willy

packets.pcap
Description: application/vnd.tcpdump.pcap
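For anyone reproducing this from the attached capture, one quick way to tell
whether reuse happened is to count the distinct source ports HAProxy used
toward the server. The helper below is a sketch: the addresses are the ones
reported above, and the exact `tcpdump -n` text layout it parses is an
assumption about your tcpdump version:

```shell
# Count distinct HAProxy->server connections in `tcpdump -n` output.
# Reads lines like:
#   12:00:00.0 IP 10.34.73.174.2004 > 10.32.121.94.80: Flags [S], ...
# Field 3 is the source (HAProxy) endpoint, field 5 the destination.
count_backend_conns() {
    awk '$3 ~ /^10\.34\.73\.174\./ && $5 ~ /^10\.32\.121\.94\.80/ {
             split($3, a, ".");   # a[5] is the HAProxy source port
             print a[5]
         }' | sort -u | wc -l
}

# Usage (not run here): count only SYN packets toward the server
#   tcpdump -nr packets.pcap 'tcp[tcpflags] & tcp-syn != 0' | count_backend_conns
```

With reuse working, both client requests should share one backend source port,
so the count stays at 1; the capture described above would show 2.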
Re: [1.6.1] Utilizing http-reuse
Hi Krishna,

On Wed, Nov 11, 2015 at 03:22:54PM +0530, Krishna Kumar (Engineering) wrote:
> I just tested with 128K byte file (run 4 wgets in parallel each
> retrieving 128K). Here, I see 4 connections being opened, and lots of
> data packets in the middle, finally followed by 4 connections being
> closed. I can test with "ab -k" option to simulate a browser, I guess,
> will try that.

In my tests, ab -k definitely works.

> > Is wget advertising HTTP/1.1 in the request ? If not that could
>
> Yes, tcpdump shows HTTP/1.1 in the GET request.

OK.

> > - proxy protocol used to the server
> > - SNI sent to the server
> > - source IP binding to client's IP address
> > - source IP binding to anything dynamic (eg: header)
> > - 401/407 received on a server connection.
>
> I am not doing any of these specifically. It's a very simple setup where
> the client@ip1 connects to haproxy@ip2 and requests a 128 byte file,
> which is handled by server@ip3.

OK. I don't see any reason for this not to work then.

> I was doing this in telnet:
>
> GET /128 HTTP/1.1
> Host: www.example.com
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

Looks fine as well. Very strange. I have no idea what would not work at
the moment, I suspect this is something stupid and obvious but am failing
to spot it :-/

Willy
Re: [1.6.1] Utilizing http-reuse
Hi Willy,

>> B. Run 8 wgets in parallel. Each opens a new connection to get a 128
>>    byte file. Again, 8 separate connections are opened to the backend
>>    server.
>
> But are they *really* processed in parallel ? If the file is only 128
> bytes, I can easily imagine that the connections are opened and closed
> immediately. Please keep in mind that wget doesn't work like a browser
> *at all*. A browser keeps connections alive. Wget fetches one object and
> closes. That's a huge difference because the browser connection remains
> reusable while wget's not.

Yes, they were not really in parallel. I just tested with a 128K byte file
(ran 4 wgets in parallel, each retrieving 128K). Here, I see 4 connections
being opened, lots of data packets in the middle, finally followed by 4
connections being closed. I can test with the "ab -k" option to simulate a
browser, I guess; will try that.

>> D. Run 5 "wget -i " in parallel. 5 connections are opened by the 5
>>    wgets, and 5 connections are opened by haproxy to the single server,
>>    finally all are closed by RST's.
>
> Is wget advertising HTTP/1.1 in the request ? If not that could

Yes, tcpdump shows HTTP/1.1 in the GET request.

> explain why they're not merged, we only merge connections from
> HTTP/1.1 compliant clients. Also we keep private any connection
> which sees a 401 or 407 status code so that authentication doesn't
> mix up with other clients and we remain compatible with broken
> auth schemes like NTLM which violates HTTP. There are other criteria
> to mark a connection private :
> - proxy protocol used to the server
> - SNI sent to the server
> - source IP binding to client's IP address
> - source IP binding to anything dynamic (eg: header)
> - 401/407 received on a server connection.

I am not doing any of these specifically. It's a very simple setup where
the client@ip1 connects to haproxy@ip2 and requests a 128 byte file, which
is handled by server@ip3.
>> I also modified step #1 above to do a telnet, followed by a GET in
>> telnet to actually open a server connection, and then run the other
>> tests. I still don't see reusing connections having any effect.
>
> How did you make your test, what exact request did you type ?

I was doing this in telnet:

GET /128 HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)

Thanks for your response & help,

Regards,
- Krishna Kumar
Re: [1.6.1] Utilizing http-reuse
Hi Krishna,

On Wed, Nov 11, 2015 at 12:31:42PM +0530, Krishna Kumar (Engineering) wrote:
> Thanks Baptiste. My configuration file is very basic:
>
> global
>     maxconn 100
> defaults
>     mode http
>     option http-keep-alive
>     option splice-response
>     option clitcpka
>     option srvtcpka
>     option tcp-smart-accept
>     option tcp-smart-connect
>     timeout connect 60s
>     timeout client 1800s
>     timeout server 1800s
>     timeout http-request 1800s
>     timeout http-keep-alive 1800s
> frontend private-frontend
>     maxconn 100
>     mode http
>     bind IP1:80
>     default_backend private-backend
> backend private-backend
>     http-reuse always
>     server IP2 IP2:80 maxconn 10
>
> As described by you, I did the following tests:
>
> 1. Telnet to the HAProxy IP, and then run each of the following tests:
>
> A. Serial: Run wget, sleep 0.5; wget, sleep 0.5; (8 times). tcpdump
>    shows that when each wget finishes, the client closes the connection
>    and haproxy does RST to the single backend. The next wget opens a new
>    connection to haproxy, and in turn to the server upon request.

That's expected. To be clear about one point so that there is no doubt
about this, we don't have connection pools for now, we can only share
*existing* connections. So once your last connection closes, you don't
have server connections anymore and you create new ones.

> B. Run 8 wgets in parallel. Each opens a new connection to get a 128
>    byte file. Again, 8 separate connections are opened to the backend
>    server.

But are they *really* processed in parallel ? If the file is only 128
bytes, I can easily imagine that the connections are opened and closed
immediately. Please keep in mind that wget doesn't work like a browser
*at all*. A browser keeps connections alive. Wget fetches one object and
closes. That's a huge difference because the browser connection remains
reusable while wget's not.

> C. Run "wget -i ". wget uses keepalive to not close the connection.
>    Here, wget opens only 1 connection to haproxy, and haproxy opens 1
>    connection to the backend, over which wget transfers the 5 files one
>    after the other. Behavior is identical to 1.5.12 (same config file,
>    except without the reuse directive).

OK. That's a better test.

> D. Run 5 "wget -i " in parallel. 5 connections are opened by the 5
>    wgets, and 5 connections are opened by haproxy to the single server,
>    finally all are closed by RST's.

Is wget advertising HTTP/1.1 in the request ? If not that could
explain why they're not merged, we only merge connections from
HTTP/1.1 compliant clients. Also we keep private any connection
which sees a 401 or 407 status code so that authentication doesn't
mix up with other clients and we remain compatible with broken
auth schemes like NTLM which violates HTTP. There are other criteria
to mark a connection private :
  - proxy protocol used to the server
  - SNI sent to the server
  - source IP binding to client's IP address
  - source IP binding to anything dynamic (eg: header)
  - 401/407 received on a server connection.

> I also modified step #1 above to do a telnet, followed by a GET in
> telnet to actually open a server connection, and then run the other
> tests. I still don't see reusing connections having any effect.

How did you make your test, what exact request did you type ?

Willy
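To make the criteria above concrete, here is a hedged sketch of a backend where
each commented-out line would mark server-side connections private and so
defeat http-reuse. The directives are illustrative only (they are standard
HAProxy 1.6 keywords, but none of them appear in Krishna's reported config):

```
backend private-backend
    http-reuse always
    # Any of the following would mark server-side connections private:
    #   source 0.0.0.0 usesrc clientip            # bind to the client's IP
    #   server IP2 IP2:80 send-proxy              # proxy protocol to server
    #   server IP2 IP2:80 ssl sni req.hdr(host)   # SNI sent to the server
    server IP2 IP2:80 maxconn 10
```

Since the reported config uses none of these, the private-connection criteria
do not explain the observed behavior, which is consistent with Willy's later
finding that the bug was elsewhere.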
Re: [1.6.1] Utilizing http-reuse
Thanks Baptiste. My configuration file is very basic:

global
    maxconn 100
defaults
    mode http
    option http-keep-alive
    option splice-response
    option clitcpka
    option srvtcpka
    option tcp-smart-accept
    option tcp-smart-connect
    timeout connect 60s
    timeout client 1800s
    timeout server 1800s
    timeout http-request 1800s
    timeout http-keep-alive 1800s
frontend private-frontend
    maxconn 100
    mode http
    bind IP1:80
    default_backend private-backend
backend private-backend
    http-reuse always
    server IP2 IP2:80 maxconn 10

As described by you, I did the following tests:

1. Telnet to the HAProxy IP, and then run each of the following tests:

A. Serial: Run wget, sleep 0.5; wget, sleep 0.5; (8 times). tcpdump shows
   that when each wget finishes, the client closes the connection and
   haproxy does RST to the single backend. The next wget opens a new
   connection to haproxy, and in turn to the server upon request.

B. Run 8 wgets in parallel. Each opens a new connection to get a 128 byte
   file. Again, 8 separate connections are opened to the backend server.

C. Run "wget -i ". wget uses keepalive to not close the connection.
   Here, wget opens only 1 connection to haproxy, and haproxy opens 1
   connection to the backend, over which wget transfers the 5 files one
   after the other. Behavior is identical to 1.5.12 (same config file,
   except without the reuse directive).

D. Run 5 "wget -i " in parallel. 5 connections are opened by the 5 wgets,
   and 5 connections are opened by haproxy to the single server, finally
   all are closed by RST's.

I also modified step #1 above to do a telnet, followed by a GET in telnet
to actually open a server connection, and then run the other tests. I
still don't see reusing connections having any effect.

Is this test scenario different from what you had suggested? Thanks once
again.

Regards,
- Krishna Kumar

On Tue, Nov 10, 2015 at 6:19 PM, Baptiste wrote:
> On Tue, Nov 10, 2015 at 11:44 AM, Krishna Kumar (Engineering) wrote:
>> Dear all,
>>
>> I am comparing 1.6.1 with 1.5.12.
>> Following are the relevant snippets from the configuration file:
>>
>> global
>>     maxconn 100
>> defaults
>>     option http-keep-alive
>>     option clitcpka
>>     option srvtcpka
>> frontend private-frontend
>>     maxconn 100
>>     mode http
>>     bind IP1:80
>>     default_backend private-backend
>> backend private-backend
>>     http-reuse always    (only in the 1.6.1 configuration)
>>     server IP2 IP2:80 maxconn 10
>>
>> Client runs a single command to retrieve a file of 128 bytes:
>>
>>     ab -k -n 20 -c 12 http:///128
>>
>> Tcpdump shows that 12 connections were established to the frontend, 10
>> connections were then made to the server, and after the 10 were
>> serviced once (GET), two new connections were opened to the server and
>> serviced once (GET), and finally 8 requests were done on the first set
>> of server connections. Finally all 12 connections were closed together.
>> There is no difference in #packets between 1.5.12 and 1.6.1, or in the
>> sequence of packets.
>>
>> How do I actually re-use idle connections? Do I need to run ab's in
>> parallel with some delay, etc, to see old connections being reused? I
>> also ran separately the following script to get a file of 4K, to
>> introduce parallel connections with delays, etc:
>>
>>     for i in {1..20}
>>     do
>>         ab -k -n 100 -c 50 http://10.34.73.174/4K &
>>         sleep 0.4
>>     done
>>     wait
>>
>> But the total #packets for 1.5.12 and 1.6.1 were similar (no drops in
>> tcpdump, no connection drops in the client, with 24.6K packets for
>> 1.5.12 and 24.8K packets for 1.6.1). Could someone please let me know
>> what I should change in the configuration or the client to see the
>> effect of http-reuse?
>>
>> Thanks,
>> - Krishna Kumar
>
> Hi Krishna,
>
> Actually, your timeouts are very important as well.
> I would also enable "option prefer-last-server", furthermore if you
> have many servers in the farm.
>
> Now, to test the reuse, simply try opening a session using telnet, and
> fake a keepalive session.
> Then do a few wget and confirm all the traffic uses the session
> previously established.
>
> Baptiste
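Tests C and D above can be scripted. The sketch below builds the URL list that
`wget -i` needs to fetch several objects over one keep-alive connection; the
proxy address and object path are the ones used elsewhere in this thread, and
the file name is hypothetical:

```shell
# Build a URL list so `wget -i` fetches several objects over one
# keep-alive connection (all URLs point at the same host).
# The proxy address is an assumption taken from this thread.
HAPROXY=10.34.73.174

make_url_list() {
    # $1: output file -- five fetches of the same 128-byte object
    : > "$1"
    for i in 1 2 3 4 5; do
        echo "http://$HAPROXY/128" >> "$1"
    done
}

# Test D, roughly (not run here): five keep-alive clients in parallel
#   make_url_list /tmp/urls.txt
#   for j in 1 2 3 4 5; do
#       wget -q -O /dev/null -i /tmp/urls.txt &
#   done
#   wait
```

Each wget keeps its own connection alive across its five fetches, but as the
thread discusses, cross-client reuse only shows up once the first bug is fixed.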
Re: [1.6.1] Utilizing http-reuse
On Tue, Nov 10, 2015 at 11:44 AM, Krishna Kumar (Engineering) wrote:
> Dear all,
>
> I am comparing 1.6.1 with 1.5.12. Following are the relevant snippets
> from the configuration file:
>
> global
>     maxconn 100
> defaults
>     option http-keep-alive
>     option clitcpka
>     option srvtcpka
> frontend private-frontend
>     maxconn 100
>     mode http
>     bind IP1:80
>     default_backend private-backend
> backend private-backend
>     http-reuse always    (only in the 1.6.1 configuration)
>     server IP2 IP2:80 maxconn 10
>
> Client runs a single command to retrieve a file of 128 bytes:
>
>     ab -k -n 20 -c 12 http:///128
>
> Tcpdump shows that 12 connections were established to the frontend, 10
> connections were then made to the server, and after the 10 were serviced
> once (GET), two new connections were opened to the server and serviced
> once (GET), and finally 8 requests were done on the first set of server
> connections. Finally all 12 connections were closed together. There is
> no difference in #packets between 1.5.12 and 1.6.1, or in the sequence
> of packets.
>
> How do I actually re-use idle connections? Do I need to run ab's in
> parallel with some delay, etc, to see old connections being reused? I
> also ran separately the following script to get a file of 4K, to
> introduce parallel connections with delays, etc:
>
>     for i in {1..20}
>     do
>         ab -k -n 100 -c 50 http://10.34.73.174/4K &
>         sleep 0.4
>     done
>     wait
>
> But the total #packets for 1.5.12 and 1.6.1 were similar (no drops in
> tcpdump, no connection drops in the client, with 24.6K packets for
> 1.5.12 and 24.8K packets for 1.6.1). Could someone please let me know
> what I should change in the configuration or the client to see the
> effect of http-reuse?
>
> Thanks,
> - Krishna Kumar

Hi Krishna,

Actually, your timeouts are very important as well.
I would also enable "option prefer-last-server", furthermore if you
have many servers in the farm.
Now, to test the reuse, simply try opening a session using telnet, and
fake a keepalive session. Then do a few wget and confirm all the traffic
uses the session previously established.

Baptiste
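One way to confirm that "all the traffic uses the session previously
established" is to watch established connections toward the server while the
wgets run. This helper is a sketch: the server address is assumed from the
later messages in this thread, and the `netstat -tn` column layout it parses
is an assumption about your netstat:

```shell
# Count ESTABLISHED connections toward the backend server in the output
# of `netstat -tn` (fields: proto recvq sendq local foreign state).
# The server address 10.32.121.94:80 is an assumption from this thread.
count_established() {
    awk '$6 == "ESTABLISHED" && $5 == "10.32.121.94:80" { n++ }
         END { print n + 0 }'
}

# Usage on the HAProxy host (not run here): the count should stay at 1
# while several keep-alive clients share the backend connection:
#   netstat -tn | count_established
```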