Build failed in Jenkins: 3.HEAD-ppc-MacOS-Leopard #39

2012-09-10 Thread noc
See 

--
[...truncated 3798 lines...]
sed \
-e "s%[@]DEFAULT_HTTP_PORT[@]%3128%g" \
-e "s%[@]DEFAULT_ICP_PORT[@]%3130%g" \
-e "s%[@]DEFAULT_CACHE_EFFECTIVE_USER[@]%nobody%g" \
-e "s%[@]DEFAULT_MIME_TABLE[@]% \
-e "s%[@]DEFAULT_DNSSERVER[@]% dnsserver | sed 's,x,x,;s/$//'`%g" \
-e "s%[@]DEFAULT_SSL_CRTD[@]% ssl_crtd | sed 's,x,x,;s/$//'`%g" \
-e "s%[@]DEFAULT_UNLINKD[@]% unlinkd | sed 's,x,x,;s/$//'`%g" \
-e "s%[@]DEFAULT_PINGER[@]% pinger | sed 's,x,x,;s/$//'`%g" \
-e "s%[@]DEFAULT_DISKD[@]% diskd | sed 's,x,x,;s/$//'`%g" \
-e "s%[@]DEFAULT_LOGFILED[@]% log_file_daemon | sed 's,x,x,;s/$//'`%g;" \
-e "s%[@]DEFAULT_CACHE_LOG[@]% \
-e "s%[@]DEFAULT_ACCESS_LOG[@]% \
-e "s%[@]DEFAULT_STORE_LOG[@]% \
-e "s%[@]DEFAULT_PID_FILE[@]% \
-e "s%[@]DEFAULT_NETDB_FILE[@]% \
-e "s%[@]DEFAULT_SWAP_DIR[@]% \
-e "s%[@]DEFAULT_SSL_DB_DIR[@]% \
-e "s%[@]DEFAULT_ICON_DIR[@]% \
-e "s%[@]DEFAULT_CONFIG_DIR[@]% \
-e "s%[@]DEFAULT_ERROR_DIR[@]% \
-e "s%[@]DEFAULT_PREFIX[@]% \
-e "s%[@]DEFAULT_HOSTS[@]%/etc/hosts%g" \
-e "s%[@]SQUID[@]%SQUID\ 3.HEAD-BZR%g" \
< ../../src/cf.data.pre >cf.data
g++ -o cf_gen ../../src/cf_gen.cc -I../../src -I../include/ -I../src
./cf_gen cf.data ../../src/cf.data.depend
awk -f ../../src/mk-string-arrays.awk < ../../src/err_type.h > err_type.cc || 
(/bin/rm -f -f err_type.cc && exit 1)
awk -f ../../src/mk-string-arrays.awk < ../../src/err_detail_type.h | sed 
's/ERR_DETAIL_//' > err_detail_type.cc || (/bin/rm -f -f err_detail_type.cc && 
exit 1)
awk -f ../../src/mk-globals-c.awk < ../../src/globals.h > globals.cc || 
(/bin/rm -f -f globals.cc && exit 1)
awk -f ../../src/mk-string-arrays.awk < ../../src/hier_code.h > hier_code.cc || 
(/bin/rm -f -f hier_code.cc && exit 1)
awk -f ../../src/mk-string-arrays.awk < ../../src/icp_opcode.h > icp_opcode.cc 
|| (/bin/rm -f -f icp_opcode.cc && exit 1)
awk -f ../../src/mk-string-arrays.awk < ../../src/lookup_t.h > lookup_t.cc || 
(/bin/rm -f -f lookup_t.cc && exit 1)
/bin/sh ../../src/repl_modules.sh lru > repl_modules.cc
make  all-recursive
Making all in base
/bin/sh ../../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H  
-I../../.. -I../../../include -I../../../lib -I../../../src -I../../include 
-I/opt/local/include  -I../../../libltdl   -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -MT AsyncCall.lo 
-MD -MP -MF .deps/AsyncCall.Tpo -c -o AsyncCall.lo 
../../../src/base/AsyncCall.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../../.. -I../../../include 
-I../../.

Build failed in Jenkins: 3.HEAD-amd64-opensuse #196

2012-09-10 Thread noc
See 

--
[...truncated 7101 lines...]
Testing ../../src/StoreMetaMD5.h ...Ok.
Testing ../../src/gopher.h ...Ok.
Testing ../../src/StoreIOState.h ...Ok.
Testing ../../src/TimeOrTag.h ...Ok.
Testing ../../src/DelayConfig.h ...Ok.
Testing ../../src/hier_code.h ...Ok.
Testing ../../src/fqdncache.h ...Ok.
Testing ../../src/DelayUser.h ...Ok.
Testing ../../src/HttpParser.h ...Ok.
Testing ../../src/MemBlob.h ...Ok.
Testing ../../src/icp_opcode.h ...Ok.
Testing ../../src/HttpHdrContRange.h ...Ok.
Testing ../../src/SquidDns.h ...Ok.
Testing ../../src/cache_cf.h ...Ok.
Testing ../../src/StoreMetaObjSize.h ...Ok.
Testing ../../src/DelayVector.h ...Ok.
Testing ../../src/http.h ...Ok.
Testing ../../src/CompletionDispatcher.h ...Ok.
Testing ../../src/HttpHdrScTarget.h ...Ok.
Testing ../../src/StatCounters.h ...Ok.
Testing ../../src/wccp2.h ...Ok.
Testing ../../src/PingData.h ...Ok.
Testing ../../src/Parsing.h ...Ok.
Testing ../../src/HttpBody.h ...Ok.
Testing ../../src/StoreMetaSTD.h ...Ok.
Testing ../../src/event.h ...Ok.
Testing ../../src/MemStore.h ...Ok.
Testing ../../src/DescriptorSet.h ...Ok.
Testing ../../src/store_key_md5.h ...Ok.
Testing ../../src/comm.h ...Ok.
Testing ../../src/ftp.h ...Ok.
Testing ../../src/LoadableModules.h ...Ok.
Testing ../../src/stmem.h ...Ok.
Testing ../../src/CommRead.h ...Ok.
Testing ../../src/CompositePoolNode.h ...Ok.
Testing ../../src/Server.h ...Ok.
Testing ../../src/wccp.h ...Ok.
Testing ../../src/DelayTagged.h ...Ok.
Testing ../../src/swap_log_op.h ...Ok.
Testing ../../src/HierarchyLogEntry.h ...Ok.
Testing ../../src/StoreClient.h ...Ok.
Testing ../../src/structs.h ...Ok.
Testing ../../src/client_side_reply.h ...Ok.
Testing ../../src/StoreMetaSTDLFS.h ...Ok.
Testing ../../src/defines.h ...Ok.
Testing ../../src/HttpHeaderTools.h ...Ok.
Testing ../../src/RemovalPolicy.h ...Ok.
Testing ../../src/Packer.h ...Ok.
Testing ../../src/DelaySpec.h ...Ok.
Testing ../../src/AsyncEngine.h ...Ok.
Testing ../../src/StoreSwapLogData.h ...Ok.
Testing ../../src/HttpMsg.h ...Ok.
Testing ../../src/ICP.h ...Ok.
Testing ../../src/DnsLookupDetails.h ...Ok.
Testing ../../src/peer_userhash.h ...Ok.
Testing ../../src/Debug.h ...Ok.
Testing ../../src/ExternalACLEntry.h ...Ok.
Testing ../../src/DiskIO/ReadRequest.h ...Ok.
Testing ../../src/DiskIO/DiskIOModule.h ...Ok.
Testing ../../src/DiskIO/DiskIOStrategy.h ...Ok.
Testing ../../src/DiskIO/DiskFile.h ...Ok.
Testing ../../src/DiskIO/WriteRequest.h ...Ok.
Testing ../../src/DiskIO/IORequestor.h ...Ok.
Testing ../../src/DiskIO/Mmapped/MmappedFile.h ...Ok.
Testing ../../src/DiskIO/Mmapped/MmappedDiskIOModule.h ...Ok.
Testing ../../src/DiskIO/Mmapped/MmappedIOStrategy.h ...Ok.
Testing ../../src/DiskIO/AIO/AIODiskFile.h ...Ok.
Testing ../../src/DiskIO/AIO/async_io.h ...Ok.
Testing ../../src/DiskIO/AIO/aio_win32.h ...Ok.
Testing ../../src/DiskIO/AIO/AIODiskIOModule.h ...Ok.
Testing ../../src/DiskIO/AIO/AIODiskIOStrategy.h ...Ok.
Testing ../../src/DiskIO/DiskThreads/DiskThreadsDiskFile.h ...Ok.
Testing ../../src/DiskIO/DiskThreads/CommIO.h ...Ok.
Testing ../../src/DiskIO/DiskThreads/DiskThreads.h ...Ok.
Testing ../../src/DiskIO/DiskThreads/DiskThreadsIOStrategy.h ...Ok.
Testing ../../src/DiskIO/DiskThreads/DiskThreadsDiskIOModule.h ...Ok.
Testing ../../src/DiskIO/IpcIo/IpcIoFile.h ...Ok.
Testing ../../src/DiskIO/IpcIo/IpcIoDiskIOModule.h ...Ok.
Testing ../../src/DiskIO/IpcIo/IpcIoIOStrategy.h ...Ok.
Testing ../../src/DiskIO/DiskDaemon/DiskdFile.h ...Ok.
Testing ../../src/DiskIO/DiskDaemon/diomsg.h ...Ok.
Testing ../../src/DiskIO/DiskDaemon/DiskdIOStrategy.h ...Ok.
Testing ../../src/DiskIO/DiskDaemon/DiskdAction.h ...Ok.
Testing ../../src/DiskIO/DiskDaemon/DiskDaemonDiskIOModule.h ...Ok.
Testing ../../src/DiskIO/Blocking/BlockingIOStrategy.h ...Ok.
Testing ../../src/DiskIO/Blocking/BlockingFile.h ...Ok.
Testing ../../src/DiskIO/Blocking/BlockingDiskIOModule.h ...Ok.
..
OK (2)
PASS: tests/testACLMaxUserIP
.
OK (1)
PASS: tests/testBoilerplate
..
OK (2)
PASS: tests/testCacheManager
.
OK (1)
PASS: tests/testDiskIO
..Actual Text:
Last event to run: last event

Operation\tNext Execution \tWeight\tCallback Valid?
test event   \t0.000 sec\t0\t N/A
test event2  \t0.000 sec\t0\t N/A


OK (6)
PASS: tests/testEvent
...
OK (3)
PASS: tests/testEventLoop
PASS: tests/test_http_range
.
OK (5)
PASS: tests/testHttpParser
.SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHist.cc enumInit (not implemented).
SKIP: StatHi

Jenkins build is back to normal : 3.HEAD-amd64-FreeBSD-7.2 #1561

2012-09-10 Thread noc
See 



Build failed in Jenkins: 3.HEAD-ppc-MacOS-Leopard #38

2012-09-10 Thread noc
See 

--
[...truncated 3798 lines...]

Build failed in Jenkins: 3.HEAD-amd64-opensuse #195

2012-09-10 Thread noc
See 

Changes:

[Alex Rousskov] Do not reuse persistent connections for PUTs to avoid 
ERR_ZERO_SIZE_OBJECT.

A compliant proxy may retry PUTs, but Squid lacks the [rather complicated]
code required to protect the PUT request body from being nibbled during the
first try or [also tricky] code to send 100-continue expectation required to
delay body sending. Thus, Squid cannot safely retry some PUTs today, and
FwdState::checkRetriable() must return false for all PUTs, to avoid
bogus ERR_ZERO_SIZE_OBJECT errors (especially for clients that did not
reuse a pconn and, hence, may not be ready to handle/retry an error response).

In theory, requests with safe or idempotent methods other than PUT might have
bodies, so we apply the same logic to them as well.

This reopens Squid bug #3398, undoing trunk r11859 commit which attempted
to close that bug.

[Alex Rousskov] Do not chunk responses carrying a Content-Range header.

When Squid forwards a response with a Content-Range header,
ClientSocketContext::socketState() detects the end of the response range(s)
and returns STREAM_*COMPLETE to ClientSocketContext::writeComplete().
The latter thinks that the writing of the response to the client must be
over and calls keepaliveNextRequest() instead of writing the last-chunk
(if any). If the to-client response was chunked, the client gets stuck
waiting for that missing last-chunk.

The multipart Range request case was already excluded from chunking (or it
would probably suffer from the same problem). With this change, no
Content-Range responses will be chunked.

N.B. Some servers send Content-Range responses to basic GET requests
without a Range header, so the problem affects more than just Range requests.

TODO: A proper fix would be to rewrite ClientSocketContext::writeComplete()
and other code so that it does not mix internal ClientStream completion with
[possibly chunk-encoded] writing completion. This should probably be done
along with fixing ClientSocketContext::socketState() and other state-checking
code to ignore to-client persistence (flags.proxy_keepalive), which is not
related to the internal ClientStream state.

--
[...truncated 7104 lines...]

Re: [RFC] CONNECT and use_ssl cache_peer

2012-09-10 Thread Amos Jeffries

On 11.09.2012 03:35, Alex Rousskov wrote:

On 09/09/2012 08:12 PM, Amos Jeffries wrote:

Sun 2012-09-09 at 21:34 +1200, Amos Jeffries wrote:

Henrik and I seem to disagree on this being a good idea being plugged
into BodyPipe. But as I see it BodyPipe is where Upgrade, WebSockets and
SSL-Bump should be happening in a node which internally "receives" the
tunnel connection and acts like a ConnStateData reading out of the
tunnel.



I'm imagining a setup of:

 client TCP -> ConnStateData(BodyPipeProducer) -> Bump(BodyPipeReader,
 'fake' ConnStateData(BodyPipeProducer) ) -> BodyPipe -> server-side

so that:
 * the bumped CONNECT tunnel can be the source of a whole pipeline of
requests (HTTPS, etc)
 * the original ConnStateData->Bump section of code retains all the
CONNECT semantics, down to and including terminating the 'server'
connection (aka the Bump node destructs) on end of CONNECT, independent
of what Bump -> server-side does with the server-side connection.
 * when receiving a CONNECT we detect the right type of streams reader
to handle the data and allocate it:
   ** the Bump node could be Upgrade:TLS, Upgrade:WebSockets,
Upgrade:HTTP/2, SSL-bump, pass-thru tunnel or some future protocol


FWIW, a "processing chain" design like that was one of the two primary
alternatives discussed when Squid3 was born. I agree that this design is
attractive for many reasons. I am guessing that ClientStreams were an
attempt to implement it (but we need to do much better than that!).

With ClientStream and adaptation chains lessons in mind, I would be
strongly opposed to making BodyPipe a _central_ piece of a processing
chain coordination. BodyPipe should be used for low-level shoveling of
opaque bytes within the chain, but there is more high-level semantics
that a message processing chain must handle and BodyPipe should not. New
"message processing stream" and "message processing stream node" classes
are needed if we want to build such chains, with the existing
ClientStream, StoreClient, and Adaptation code upgraded to use that new
API. This is Squid v4.0 stuff IMO...

Personally, I would welcome efforts in that direction, and I am glad you
have a very interested client pushing you there. However, this
monumental work should not become a precondition for fixing this
isolated bug, IMO.

Isolating [secure] peer connection establishment code into a class
shared by tunnel.cc and forward.cc code will already make any future
chaining improvements easier. Let's view this as a single step in the
right direction. I hope you do not insist on us running the whole
marathon within this bug scope :-).



Fine by me.

Amos




Re: ICAP connections under heavy loads

2012-09-10 Thread Amos Jeffries

On 11.09.2012 03:06, Alex Rousskov wrote:

On 09/09/2012 02:30 AM, Amos Jeffries wrote:


IIRC the race is made worse by our internal code going timeout event
(async) scheduling a call (async), so doubling the async queue delay.

Which shows up worst on heavily loaded proxies.


True. Unfortunately, the alternative is to deal with ConnOpener's sync
call reentering ConnOpener object. We all know what that eventually
leads to. I would rather have double async calls (and optimize the
general queue handling) than deal with long chains of reentrant calls,
especially when it comes to dealing with an exceptional event such as a
timeout.

I am sure this can be optimized further by writing reentrant code in
performance-critical places, but we have much bigger fish to fry at the
moment.



The patch would be this:

=== modified file 'src/comm/ConnOpener.cc'
--- src/comm/ConnOpener.cc  2012-08-28 19:12:13 +
+++ src/comm/ConnOpener.cc  2012-09-09 08:20:09 +
@@ -139,6 +139,10 @@
     // it never reached fully open, so cleanup the FD handlers
     // Note that comm_close() sequence does not happen for partially open FD
     Comm::SetSelect(temporaryFd_, COMM_SELECT_WRITE, NULL, NULL, 0);

+    if (calls_.earlyAbort_ != NULL) {
+        calls_.earlyAbort_->cancel("Comm::ConnOpener::doneConnecting");
+        calls_.earlyAbort_ = NULL;

The above line is extra because we do that immediately below:

+    }
     calls_.earlyAbort_ = NULL;




I think from all this discussion the ConnOpener::DelayedConnectRetry
handler needs to be investigated as well.


Agreed. Why is that handler reusing the same failed socket? Should not
we close the failed socket descriptor upon detecting a failure and then
open a new one just before trying again?


Hmm. Not sure. This is a straight copy of the older code.

If you want a new socket FD, most of what start() does needs to be
re-done for it. Except that the start timer needs to be left unchanged,


Exactly.

and the async calls need to be switched to the new FD (would that be a
reassignment, or a cancel + replace?) -- cancelled before replacing on
the new FD.


Ideally, when we fix this, we should avoid attaching any callbacks that
may outlive a temporary FD to that temporary FD. For example, the
timeout callback should be converted into an event, which is not a
trivial change. AFAICT we do not have any other such callbacks (the
earlyAbort one is specific to the temporary FD and can stay), but I have
not checked closely.


Agreed. Using eventAdd() for the timeout is a great idea IMO. 
Unfortunately eventAdd() does not accept AsyncCalls (yet) and would 
require another wrapper doing async re-schedule. On the plus side that 
means both sides of the race face the same lag handicap.




Here is the sketch for connect retries on failures, as I see it:

 default:
     ++failRetries_; // TODO: rename to failedTries

     // we failed and have to start from scratch
     ... close the temporary socket ...

     // if we exhausted all retries, bail with an error
     if (failRetries_ > Config.connect_retries) {
         ... send COMM_ERR_CONNECT to the caller ...
         return;
     }

     // we can still retry after a small pause
     // we might timeout before that pause ends, but that is OK
     eventAdd("Comm::ConnOpener::DelayedConnectRetry", ...)

More changes would be needed to make this work correctly:

* the timeout handler should not be associated with the socket
descriptor (since it may change). We can use eventAdd instead.

* the Comm::ConnOpener::DelayedConnectRetry() wrapper should call a
method that will open a new socket.


Sounds okay to me.


I'm not strongly viewed on this, but is it better to display a hard
ERR_CONNECT or a soft(er) TIMEOUT in these cases? I.e., is it actually a
good idea to remove the timeout if-case?


Amos


Re: [PATCH] Do not reuse persistent connections for PUTs

2012-09-10 Thread Alex Rousskov
> Wed 2012-08-29 at 12:42 -0600, Alex Rousskov wrote:
>> Hello,
>>
>> I saw bogus ERR_ZERO_SIZE_OBJECT responses while testing Squid v3.1,
>> but the same problem ought to be present in v3.2 as well.
>>
>> A compliant proxy may retry PUTs, but Squid lacks the [rather
>> complicated] code required to protect the PUT request body from being
>> nibbled during the first try, when pconn races are possible.
> 
> +1 on the patch from me.

Committed to trunk as r12319.


>> Today, we have a choice:
>>
>>   A. Fail some PUTs. Close fewer pconns (current code).
>>   B. Handle all PUTs. Open more connections (patched code).
>>
>> If this patch is accepted, performance bug 3398 will resurface. Henrik,
>> do you think committing the patch is the right decision here even though
>> it will reopen bug 3398?
> 
> Yes, but with a plan to fix it correctly.
...

>> The bug title is wrong. There is a long discussion in the bug report
>> about what the bug is really about. I think a better bug title would be:
>> "persistent server connection closed before PUT".
> 
> yes.
> 
> Fix the bug title, reopen it with comment above and restore the check.

Done. Quality patches preserving PUT bodies (via 100-continue and/or
buffering) to optimize Squid pconn handling are welcomed.


Thank you,

Alex.



Re: [PATCH] proposal for a fake_store_url helper wrapping.

2012-09-10 Thread Amos Jeffries

On 11.09.2012 04:16, Eliezer Croitoru wrote:

On 09/10/2012 05:39 PM, Alex Rousskov wrote:

On 09/09/2012 04:08 AM, Eliezer Croitoru wrote:
I took the url_rewrite helper and used whatever seemed needed to create
a url_store_rewrite helper.
I also added the needed variables for the next step, freeing them
where and if needed.
This is more of a basic sketch.
Please do not mark sketch patches not meant for commit with a [PATCH]
prefix. [RFC] or no prefix would be better.

OK

When you think these changes are ready for commit, please include a
proposed commit message. It is a good idea to do that even before the
last stage, of course -- it tells us what the patch is supposed to do.

"what ever seems needed to create a url_store_rewrite helper" is too
vague. If the code is committed with such a message, most of the
knowledge gained during recent discussions will be lost and somebody
would have to start from scratch when they want to understand how the
new feature works and, more importantly, why it should work that way.

It's not that vague if you do ask me; the process is pretty simple
(after reading XK++ lines).
I don't want to maintain a branch but to push it into squid as a
feature.

I proposed from the beginning to make it in stages:
1. embed a fake store url rewrite helper (naming is not really
important) as grounds for the next steps.


If by this you mean the helper/ section fake helper, then
helper/url_rewrite/fake should be sufficient. url-rewrite, store-url,
and location-rewrite (also not ported) all have the same API limitations
and format, so the helpers can all be in helper/url_rewrite/ together.




This part is pretty important by itself because no one has even done
that in the 3.x series.
2. embed the code that actually handles the data from the external
program when storing it in the squid process.
3. use it at every point needed in squid while handling the requests.






Please do not forget to document all new members (using doxygen
comments) -- this is one of the mandatory requirements for patches. And,
as Amos pointed out, remove "for storeurl" marks -- everything you
change in this patch is for the storeurl feature (or at least it should
be) so the diff blobs themselves can be used as such a mark.

Since I have never used doxygen and friends in my code before, I will
try to learn them on the way.


Basically C/C++ comments: just add an extra / or * ("///..." or
"/** ... */") to start the comment, and ensure clear paragraph
separation.


For functions/methods with return values, or where special mention of
parameters is required, use \param and \return or \retval. You can see
examples of these around the code, or we can assist with corrections
during audit.






If this code is not sufficient to support the storeurl feature, I do not
think it should be committed to trunk -- it has no stand-alone value.

Commit it to your own branch so that you can maintain the progress
history and post incremental patches for discussion. When the feature is
completed, your branch can then be merged into trunk.

I do have a plan, since from all the data I gathered and the code review
I did until now it's pretty straightforward.

I will look at the guidelines of the squid3 code (started already).


You do remind me that I didn't propose *yet* the basic idea, which is
critical. What I wanted is to get all the patches ready and then commit
them. I must state I'm testing my code and not waiting until it breaks
something.


If you do want to know exactly what is being done, or what I propose:
get the rewritten URL from the external software, and
put it in a char * variable (NULLed at initialization).


I've been wondering if it has to be a char* for any particular reason
(likely deep in store code). If not, please use the String type provided
by Squid, which will manage all the allocation/deallocation complexity
for you.



The access to it will be using
mem_obj->(name: store_uri, storeurl... whatever).
The only really important place I have seen where any code should be
committed, other than redirect and friends, is at getpublickey and
friends. The above is since squid only finds a HIT by hash (hash was
discussed before).


So the way to do it is pretty simple, like it was before:
anywhere code accesses mem_obj->url (needed for hash and friends),
it will transform into:

if (mem_obj->store_uri) {
    use store_uri in the operation
} else {
    use url in the operation
}


If you or someone needs me to explain anything, I'm here for it.

Doing it the *right* way?
We are deciding if it's the right way, so I will propose a working way
that seems efficient enough and that will work.


And a question:
since the http object has uri, log_uri, we used store_uri to match the
others.

But at the mem_obj there are url methods.
So do we want to stick with anything on the way? Naming the method
mem_obj->storeurl?
2.7 used storeurl for everything, and I mean everything, till the
last column.
So what do you

Re: [PATCH] Do not chunk Content-Range responses

2012-09-10 Thread Alex Rousskov
On 08/24/2012 04:28 AM, Amos Jeffries wrote:
> On 24/08/2012 4:30 p.m., Alex Rousskov wrote:
>> Hello,
>>
>>  When Squid forwards a response with a Content-Range header,
>> ClientSocketContext::socketState() detects the end of the response
>> range(s) and returns STREAM_*COMPLETE to
>> ClientSocketContext::writeComplete(). The latter thinks that the writing
>> of the response to the client must be over and calls
>> keepaliveNextRequest() instead of writing the last-chunk (if any). If
>> the to-client response was chunked, the client gets stuck waiting for
>> that missing last-chunk.
>>
>> The multipart Range request case was already excluded from chunking (or
>> it would probably suffer from the same problem). With this change, no
>> Content-Range responses will be chunked.
>>
>> N.B. Some servers send Content-Range responses to basic GET requests
>> without a Range header, so the problem affects more than just Range
>> requests.
>>
>> A proper fix would be to rewrite ClientSocketContext::writeComplete()
>> and other code so that it does not mix internal ClientStream completion
>> with [possibly chunk-encoded] writing completion. This should probably
>> be done along with fixing ClientSocketContext::socketState() and other
>> state-checking code to ignore to-client persistence
>> (flags.proxy_keepalive), which is not related to the internal
>> ClientStream state. Those changes are too big and too potentially
>> disruptive to be included in this fix though. Patches welcome.
>>
>>
>> Thank you,
>>
>> Alex.
> 
> 
> +1.
> 
> Can you mark that comment in the code with a FIXME or similar please so
> we don't lose track of it for later.

Committed to trunk as r12318 after adding an XXX mark. The same change
should be ported to v3.2 IMO.


Thank you,

Alex.



Re: [PATCH] proposal for a fake_store_url helper wrapping.

2012-09-10 Thread Eliezer Croitoru

On 9/10/2012 7:44 PM, Alex Rousskov wrote:

To be clear, I think we should not push dummy code (e.g., code after
step #1 or #2) into Squid trunk. Others may overrule me, of course, but
I do not recommend counting on that or fighting over it.

I'm with you on it.

I just noticed that I didn't get you right, so sorry for the
misunderstanding before.


> Sorry, I am not sure I understand the question, but I think
> MemoryObject::url (and similar ambiguous cases) should be renamed, at
> least temporary, to make sure we catch all cases of not-rewritten URL
> used for Store purposes.

Ok.

I will work on it more, since I'm on my way and it seems like it would
be ready somewhere in the near future (I don't know how much I will have
to test it, so I will not post any code if I don't test it).


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [PATCH] proposal for a fake_store_url helper wrapping.

2012-09-10 Thread Alex Rousskov
On 09/10/2012 10:16 AM, Eliezer Croitoru wrote:
> On 09/10/2012 05:39 PM, Alex Rousskov wrote:

>> When you think these changes are ready for commit, please include a
>> proposed commit message. It is a good idea to do that even before the
>> last stage, of course -- it tells us what the patch is supposed to do.
>> "what ever seems needed to create a url_store_rewrite helper" is too
>> vague. If the code is committed with such a message, most of the
>> knowledge gained during recent discussions will be lost and somebody
>> would have to start from scratch when they want to understand how the
>> new feature works and, more importantly, why it should work that way.

> it's not that vague if you do ask me.

Please assume that a reader knows a lot about Squid3 but does not know
what a "url_store_rewrite helper" is. Target _that_ reader in your
proposed commit message rather than writing a note to yourself.


> I don't want to maintain a branch, but to push it into Squid as a feature.

Those two options are not mutually exclusive. And I am not saying you
have to maintain a branch, especially a public one. Using bzr or another
VCS makes it easier to manage and post changes for some, but that is
your personal decision. Do whatever you feel comfortable with.


> I proposed from the beginning to make it in stages:
> 1. Embed a fake store URL rewrite helper (naming is not really important)
> as groundwork for the next steps.
> This part is pretty important by itself, because no one has even done that
> in 3.x.
> 2. Embed the code that actually handles the data from the external
> program when storing it in the Squid process.
> 3. Use it at every point needed in Squid while handling the requests.

Sounds like a good plan for your project. I think we should commit the
results of step #3 once they are ready.

To be clear, I think we should not push dummy code (e.g., code after
step #1 or #2) into Squid trunk. Others may overrule me, of course, but
I do not recommend counting on that or fighting over it.



>> Please do not forget to document all new members (using doxygen
>> comments) -- this is one of the mandatory requirements for patches. And,
>> as Amos pointed out, remove "for storeurl" marks -- everything you
>> change in this patch is for the storeurl feature (or at least it should
>> be) so the diff blobs themselves can be used as such a mark.

> Since I have never used doxygen and friends in my code before, I will try
> to learn them on the way.


We do not need anything fancy here. For new member/function
descriptions, the basic "///", "/**", and "///<" comment openers are all it
takes. The important part is the content of that comment, especially for
members with vague names such as "data", "state", "object", etc.
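
[Editor's note: a minimal illustration of the three doxygen comment openers
mentioned above. The StoreUrlState struct and its members are invented for
the example; only the comment syntax matters here.]

```cpp
#include <cassert>

/// Brief one-line description: tracks the result of a store-URL rewrite.
struct StoreUrlState {
    /** Detailed description block: the URL returned by the helper,
     *  or nullptr when the helper did not rewrite anything. */
    const char *storeUrl = nullptr;

    int helperRetries = 0; ///< trailing form: failed helper lookups so far
};
```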


> and a question:
> since the HTTP object has uri and log_uri, we used store_uri to match the
> others, but the mem-obj has a url method.
> So do we want to stick with anything on the way? Naming the method
> mem_obj->storeurl?
> Squid 2.7 used storeurl for everything, and I mean everything down to the
> last column.
> So what do you think?

Sorry, I am not sure I understand the question, but I think
MemoryObject::url (and similar ambiguous cases) should be renamed, at
least temporarily, to make sure we catch all cases of a not-rewritten URL
used for Store purposes.


Cheers,

Alex.



Re: [PATCH]proposal for a fake_strore_url helper wrapping.

2012-09-10 Thread Eliezer Croitoru

On 09/10/2012 05:39 PM, Alex Rousskov wrote:

On 09/09/2012 04:08 AM, Eliezer Croitoru wrote:

I took the url_rewrite helper and used what ever seems needed to create
a url_store_rewrite helper.
I also added the needed variables for the next step and freeing them
where and if needed.
this is more of a basic sketch.

Please do not mark sketch patches not meant for commit with a [PATCH]
prefix. [RFC] or no prefix would be better.

OK

When you think these changes are ready for commit, please include a
proposed commit message. It is a good idea to do that even before the
last stage, of course -- it tells us what the patch is supposed to do.
"what ever seems needed to create a url_store_rewrite helper" is too
vague. If the code is committed with such a message, most of the
knowledge gained during recent discussions will be lost and somebody
would have to start from scratch when they want to understand how the
new feature works and, more importantly, why it should work that way.

it's not that vague if you ask me.
The process is pretty simple (after reading XK++ lines).
I don't want to maintain a branch, but to push it into Squid as a feature.
I proposed from the beginning to make it in stages:
1. Embed a fake store URL rewrite helper (naming is not really important)
as groundwork for the next steps.
This part is pretty important by itself, because no one has even done that
in 3.x.
2. Embed the code that actually handles the data from the external
program when storing it in the Squid process.

3. Use it at every point needed in Squid while handling the requests.






Please do not forget to document all new members (using doxygen
comments) -- this is one of the mandatory requirements for patches. And,
as Amos pointed out, remove "for storeurl" marks -- everything you
change in this patch is for the storeurl feature (or at least it should
be) so the diff blobs themselves can be used as such a mark.
Since I have never used doxygen and friends in my code before, I will try
to learn them on the way.




If this code is not sufficient to support the storeurl feature, I do not
think it should be committed to trunk -- it has no stand-alone value.
Commit it to your own branch so that you can maintain the progress
history and post incremental patches for discussion. When the feature is
completed, your branch can then be merged into trunk.
I do have a plan, since from all the data I gathered and the code review I
did until now, it's pretty straightforward.


I will look at the Squid3 code guidelines (I have started already).


So you remind me that I didn't *yet* propose the basic idea, which is
critical.

What I wanted was to get all the patches ready and then commit them.
I must state I'm testing my code, not waiting until it breaks something.

If you want to know exactly what is being done, or what I propose:
get the rewritten URL from the external software.
Put it in a char * variable (NULL at initialization).
Access to it will go through mem_obj->(name: store_uri, storeurl..
whatever).
The only really important place I have seen where any code should be
committed, in parts other than redirect and friends, is at
getpublickey and friends.

The above is because Squid only finds a HIT by hash (hash was discussed before).

So the way to do it is pretty simple, like it was before:
every place that accesses mem_obj->url (needed for hash and friends) will
transform into:

if (mem_obj->store_url) {
    // use store_url in the operation
} else {
    // use url in the operation
}
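
[Editor's note: a hedged, self-contained sketch of the fallback described
above: prefer the rewritten store URL when the helper set one, otherwise
fall back to the request URL. MemObjectSketch and storeKeyUrl() are
invented names; Squid's real MemObject API differs.]

```cpp
#include <cassert>
#include <string>

struct MemObjectSketch {
    std::string url;       // original request URL
    std::string store_url; // helper-rewritten URL; empty when not rewritten

    // the URL that hashing ("getpublickey and friends") should use
    const std::string &storeKeyUrl() const {
        return store_url.empty() ? url : store_url;
    }
};
```

With such an accessor, each hash-computation site could call storeKeyUrl()
instead of open-coding the if/else above.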


If you or someone else needs me to explain anything, I'm here for it.

Doing it the *right* way?
We are deciding if it's the right way, so I will propose a working way
that seems efficient enough and that will work.



And a question:
since the HTTP object has uri and log_uri, we used store_uri to match the
others, but the mem-obj has a url method.
So do we want to stick with anything on the way? Naming the method
mem_obj->storeurl?
Squid 2.7 used storeurl for everything, and I mean everything down to the
last column.

So what do you think?

It seems to me that giving it the public name storeurl is the more
convenient way for public naming (config etc.).






Thank you,

Alex.


NP
Eliezer


Re: [RFC] CONNECT and use_ssl cache_peer

2012-09-10 Thread Alex Rousskov
On 09/09/2012 08:12 PM, Amos Jeffries wrote:
>> On Sun, 2012-09-09 at 21:34 +1200, Amos Jeffries wrote:
>>
>>> Henrik and I seem to disagree on this being a good idea being plugged
>>> into BodyPipe. But as I see it BodyPipe is where Upgrade, WebSockets and
>>> SSL-Bump should be happening in a node which internally "receives" the
>>> tunnel connection and acts like a ConnStateData reading out of the
>>> tunnel.

> I'm imagining a setup of:
> 
>  client TCP -> ConnStateData(BodyPipeProducer) -> Bump(BodyPipeReader,
> 'fake' ConnStateData(BodyPipeProducer) ) -> BodyPipe -> server-side
> 
> so that:
>  * the bumped CONNECT tunnel can be the source of a whole pipeline of
> requests (HTTPS, etc)
>  * the original ConnStateData->Bump section of code retains all the
> CONNECT semantics down to and including terminate 'server' connection
> (aka Bump Node destructs) on end of CONNECT independent of what Bump ->
> server-side does with the server-side connection.
>  * when receiving a CONNECT we detect the right type of streams reader
> to handle the data and allocate it:
>** the Bump node could be Upgrade:TLS, Upgrade:WebSockets,
> Upgrade:HTTP/2, SSL-bump, pass-thru tunnel or some future protocol

FWIW, a "processing chain" design like that was one of the two primary
alternatives discussed when Squid3 was born. I agree that this design is
attractive for many reasons. I am guessing that ClientStreams were an
attempt to implement it (but we need to do much better than that!).


With ClientStream and adaptation chains lessons in mind, I would be
strongly opposed to making BodyPipe a _central_ piece of a processing
chain coordination. BodyPipe should be used for low-level shoveling of
opaque bytes within the chain, but there is more high-level semantics
that a message processing chain must handle and BodyPipe should not. New
"message processing stream" and "message processing stream node" classes
are needed if we want to build such chains, with the existing
ClientStream, StoreClient, and Adaptation code upgraded to use that new
API. This is Squid v4.0 stuff IMO...
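
[Editor's note: a rough sketch of the "message processing stream node" idea
discussed above — opaque bytes travel through BodyPipe-like links while
each node may inspect or transform them. All names here are invented; this
is not Squid's actual API.]

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A chain node: forwards bytes to the next node by default (a plain tunnel).
struct StreamNode {
    std::shared_ptr<StreamNode> next;
    virtual ~StreamNode() = default;
    virtual void receive(const std::string &bytes) {
        if (next)
            next->receive(bytes); // pass-through
    }
};

// A node that records what flows through before forwarding it, standing in
// for SSL-Bump/Upgrade-style nodes that peek at the tunneled stream.
struct RecordingNode : StreamNode {
    std::vector<std::string> seen;
    void receive(const std::string &bytes) override {
        seen.push_back(bytes);
        StreamNode::receive(bytes);
    }
};
```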

Personally, I would welcome efforts in that direction, and I am glad you
have a very interested client pushing you there. However, this
monumental work should not become a precondition for fixing this
isolated bug, IMO.

Isolating [secure] peer connection establishment code into a class
shared by tunnel.cc and forward.cc code will already make any future
chaining improvements easier. Let's view this as a single step in the
right direction. I hope you do not insist on us running the whole
marathon within this bug scope :-).


Cheers,

Alex.



Re: ICAP connections under heavy loads

2012-09-10 Thread Alex Rousskov
On 09/09/2012 02:30 AM, Amos Jeffries wrote:

> IIRC the race is made worse by our internal code going timeout event
> (async) scheduling a call (async), so doubling the async queue delay.
> Which shows up worst on heavily loaded proxies.

True. Unfortunately, the alternative is to deal with ConnOpener's sync
call reentering ConnOpener object. We all know what that eventually
leads to. I would rather have double async calls (and optimize the
general queue handling) than deal with long chains of reentrant calls,
especially when it comes to dealing with an exceptional event such as a
timeout.

I am sure this can be optimized further by writing reentrant code in
performance-critical places, but we have much bigger fish to fry at the
moment.


> The patch would be this:
> 
> === modified file 'src/comm/ConnOpener.cc'
> --- src/comm/ConnOpener.cc  2012-08-28 19:12:13 +
> +++ src/comm/ConnOpener.cc  2012-09-09 08:20:09 +
> @@ -139,6 +139,10 @@
>  // it never reached fully open, so cleanup the FD handlers
>  // Note that comm_close() sequence does not happen for
> partially open FD
>  Comm::SetSelect(temporaryFd_, COMM_SELECT_WRITE, NULL, NULL, 0);
> +if (calls_.earlyAbort_ != NULL) {
> + calls_.earlyAbort_->cancel("Comm::ConnOpener::doneConnecting");
> +calls_.earlyAbort_ = NULL;

The above line is extra because we do that immediately below:

> +}
>  calls_.earlyAbort_ = NULL;



>>> I think from all this discussion the ConnOpener::DelayedConnectRetry
>>> handler needs to be investigated as well.

>> Agreed. Why is that handler reusing the same failed socket? Should not
>> we close the failed socket descriptor upon detecting a failure and then
>> open a new one just before trying again?
> 
> Hmm. Not sure. This is a straight copy of the older code.
> 
> If you want a new socket FD most of what start() does needs to be
> re-done for it. Except that the start timer needs to be left unchanged,

Exactly.

> and the async calls need to be switched to the new FD (would that be a
> reassignment, or a cancel + replace?) cancelled before replacing on the
> new FD.

Ideally, when we fix this, we should avoid attaching any callbacks that
may outlive a temporary FD to that temporary FD. For example, the
timeout callback should be converted into an event, which is not a
trivial change. AFAICT we do not have any other such callbacks (the
earlyAbort one is specific to the temporary FD and can stay), but I have
not checked closely.


Thank you,

Alex.



>> Here is the sketch for connect retries on failures, as I see it:
>>
>>  default:
>>  ++failRetries_; // TODO: rename to failedTries
>>
>>  // we failed and have to start from scratch
>>  ... close the temporary socket ...
>>
>>  // if we exhausted all retries, bail with an error
>>  if (failRetries_ > Config.connect_retries) {
>>  ... send COMM_ERR_CONNECT to the caller ...
>>  return;
>>  }
>>
>>  // we can still retry after a small pause
>>  // we might timeout before that pause ends, but that is OK
>>  eventAdd("Comm::ConnOpener::DelayedConnectRetry", ...)
>>
>> More changes would be needed to make this work correctly:
>>
>> * the timeout handler should not be associated with the socket
>> descriptor (since it may change). We can use eventAdd instead.
>>
>> * the Comm::ConnOpener::DelayedConnectRetry() wrapper should call a
>> method that will open a new socket.
> 
> Sounds okay to me.
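
[Editor's note: a hedged, self-contained sketch of the retry flow outlined
above — every failure closes the temporary socket, and a delayed retry is
queued only while the failure count stays within the configured limit. The
scaffolding (event queue, flags) is invented for illustration; only
failRetries_ and connect_retries echo names from the discussion.]

```cpp
#include <cassert>
#include <functional>
#include <queue>

struct ConnOpenerSketch {
    int failRetries_ = 0;       // TODO in the sketch: rename to failedTries
    int connect_retries = 2;    // stand-in for Config.connect_retries
    bool notifiedError = false; // would send COMM_ERR_CONNECT to the caller
    std::queue<std::function<void()>> events; // stand-in for eventAdd()

    void noteFailure() {
        ++failRetries_;
        // ... close the temporary socket here; a retry opens a fresh one ...
        if (failRetries_ > connect_retries) {
            notifiedError = true; // exhausted all retries: bail with an error
            return;
        }
        // retry after a small pause, via the event queue rather than a
        // callback attached to the (now closed) file descriptor
        events.push([this] { /* open a new socket and connect again */ });
    }
};
```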




Re: [PATCH]proposal for a fake_strore_url helper wrapping.

2012-09-10 Thread Alex Rousskov
On 09/09/2012 04:08 AM, Eliezer Croitoru wrote:
> I took the url_rewrite helper and used what ever seems needed to create
> a url_store_rewrite helper.
> I also added the needed variables for the next step and freeing them
> where and if needed.
> this is more of a basic sketch.

Please do not mark sketch patches not meant for commit with a [PATCH]
prefix. [RFC] or no prefix would be better.

When you think these changes are ready for commit, please include a
proposed commit message. It is a good idea to do that even before the
last stage, of course -- it tells us what the patch is supposed to do.
"what ever seems needed to create a url_store_rewrite helper" is too
vague. If the code is committed with such a message, most of the
knowledge gained during recent discussions will be lost and somebody
would have to start from scratch when they want to understand how the
new feature works and, more importantly, why it should work that way.

Please do not forget to document all new members (using doxygen
comments) -- this is one of the mandatory requirements for patches. And,
as Amos pointed out, remove "for storeurl" marks -- everything you
change in this patch is for the storeurl feature (or at least it should
be) so the diff blobs themselves can be used as such a mark.

If this code is not sufficient to support the storeurl feature, I do not
think it should be committed to trunk -- it has no stand-alone value.
Commit it to your own branch so that you can maintain the progress
history and post incremental patches for discussion. When the feature is
completed, your branch can then be merged into trunk.


Thank you,

Alex.



Re: Failure ratio

2012-09-10 Thread Henrik Nordström
On Mon, 2012-09-10 at 15:11 +0100, Konstantin Korsunov wrote:
> All is working well, but sometimes, up to 3 times per day, I get this error:
> 
> Failure Ratio at 1.02
> Going into hit-only-mode for 5 minutes...

That means that an unexpected number of the proxied requests have
failed, and to avoid further network damage your Squid stops responding
with ICP_MISS responses to any cache peers you have. Usually this is due
to a networking failure of some kind, but it can also have natural causes,
e.g. if some client sends a lot of bad requests because it is infested
with a spam bot or similar.

There is no difference for normal proxy users. The difference is only
for cache peers using this proxy as a parent.
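
[Editor's note: a loose sketch of a smoothed failure ratio like the one
described above — failures divided by successes, averaged so that brief
bursts decay over time, with hit-only-mode triggered above 1.0. The
smoothing constant and structure are assumptions for illustration; Squid's
real failure-ratio check differs in detail.]

```cpp
#include <cassert>

struct FailureRatioSketch {
    double ratio = 0.0;     // smoothed failures-per-success
    bool hitOnlyMode = false;

    void sample(double failures, double successes) {
        const double current = successes > 0 ? failures / successes : 1.0;
        ratio = 0.8 * ratio + 0.2 * current; // exponential smoothing (assumed)
        hitOnlyMode = ratio > 1.0;           // stop advertising ICP_MISS
    }
};
```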

Regards
Henrik