Re: Squid-2.6 Negotiate broken
Hi Henrik, At 23.15 27/02/2007, Henrik Nordstrom wrote: tis 2007-02-27 klockan 22:07 +0100 skrev Henrik Nordstrom: But regardless of this I can confirm that Negotiate seems to be broken.. Has been broken a while.. 2.6.CVS broken 2.6.STABLE9 broken 2.6.STABLE8 broken 2.6.STABLE7 broken 2.6.STABLE1 OK 2.6.STABLE4 OK 2.6.STABLE6 OK So it got broken somewhere between 2.6.STABLE6 and STABLE7.. Probably the patches for Bug #1792.. Yep.. so it's been broken since beginning of January.. Not so strange for some reasons: - Full Negotiate support is only available on Windows, and it's little used - I have missed STABLE7 and STABLE8 binary releases for Windows - Firefox seems to switch silently to NTLM when negotiate fails Regards Guido - Guido Serassio Acme Consulting S.r.l. - Microsoft Certified Partner Via Lucia Savarino, 1 10098 - Rivoli (TO) - ITALY Tel. : +39.011.9530135 Fax. : +39.011.9781115 Email: [EMAIL PROTECTED] WWW: http://www.acmeconsulting.it/
Re: Squid-2.6 Negotiate broken
Hi Henrik, At 23.53 27/02/2007, Henrik Nordstrom wrote: tis 2007-02-27 klockan 23:15 +0100 skrev Henrik Nordstrom: Looking. Ah, it's due to Negotiate returning a final response to the client, and this confuses the twisted logics here even further.. cleaning up to untwist the FINISHED/DONE states into one. Ok. Should work better now. Please give it some testing in both Negotiate and NTLM. I will test it tonight. Regards Guido
Re: locked out by partial cvsmerge
tis 2007-02-27 klockan 21:40 -0700 skrev Alex Rousskov: Thanks for the explanation and snipped details. Unfortunately, it looks like the problem is back and has even become worse as I see more locks (from other developers) getting stuck now: cvs rtag: [21:28:08] waiting for rousskov's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc cvs rtag: [21:28:38] waiting for serassio's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc cvs rtag: [21:29:08] waiting for amosjeffries's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc All seem to be gone now... Do you know whether there is something going on with SourceForge that increases the probability of these locks getting stuck? Anything we can do about it in the short term? It might actually be the cvsmerge script.. the following line isn't exactly kind to CVS.. o Check if there is any pending changes in the repository diffl=`eecvs -q rdiff -kk -r ${mergetag} -r ${mergefrom} ${module} | head | wc -l` probably should remove the head from there, letting the rdiff run to completion. Regards Henrik
Re: locked out by partial cvsmerge
ons 2007-02-28 klockan 13:01 +0800 skrev Adrian Chadd: I'm not sure to be honest; but do you think it'd be a good idea just to shift this development tree stuff back to a separate repository or server over at TMF? It sounds like it'll be less of a headache now. The main reason why SF is used for this repository is security. The developer CVS repository is a scratchpad, and anyone who likes is given write access there if they want. - Separate server from the main repository. - Server not our maintenance problem. - Fully automated developer registration. - CVS is used because of legacy reasons only (i.e. the merge scripts written for CVS many years ago) But nothing says we must use the SF services, or even CVS. It would be quite nice if we would set up a public Bzr repository for example, or why not subversion as well. Account maintenance is greatly simplified in both, as no system account is needed. Regards Henrik
Re: squid3 comments
Hi Adrian, Thanks for your answers. As I can understand, you are talking about the squid4 project. If you want an independent opinion, I believe that it is not a good idea to start stripping squid2. You will repeat the same mistakes made with squid3. For such a project, start from the ground up and then steal code (and, most importantly, knowledge) from squid2 and squid3 (and other projects too :-) ) to build what you want. Squid has a lot of parts implemented as modules (acl's, authentication, delay pools etc) which can be modified to be loadable at runtime. A modular system needs a different configuration system, different net-IO etc. You want to support multi-threading, but squid2 does not care about it and you would have to fix a lot of things to support it. What parts can you keep from a stripped squid2 version? For such work, I think the best is to start from the base server (comm_* stack?), then add the configuration system, then the http/ftp protocols, then acls as modules, etc... If the base system is quite well designed then the other developers will be able to convert parts they made for squid2 or squid3, or just add new code as modules, without having to know the overall squid structure/code... Regards, Christos Adrian Chadd wrote: Now, I don't have icap on my list as a specific thing to support, but: * I'd like to look at what's been done in the icap-2.6 patchset and Squid-3 to plan the next evolution of the client, store and server side codebases to neatly support ICAP as a module, rather than a patch or a compile-time option with lots of #defines everywhere; * But what I'd like to do is support all the modern architecture features well - lots of CPUs - fast or slow; lots of RAM or as little RAM as exists on an embedded board; support the modern disk IO patterns much more efficiently than we do at the moment, etc. This requires some pretty drastic internal changes - threading, at the outside. Maybe multi-process if people can think of a portable way of implementing it.
* I'd like to make the code as modular as possible so a lot of things are simply loadable at runtime. Don't need the URL rewriter? Don't load the module, no performance impact. * But to do all of this we need to strip Squid all the way back to its core bits, and build fast, flexible code libraries and APIs which support all the stuff we want to do - including, yes, icap. It's just too hard for me to do all of the above with the Squid codebase dragging as much history as it does.
Re: Squid3 BodyReader changes
Hi Alex, Alex Rousskov wrote: I have committed your change, simplified virgin body buffer maintenance (in the hope of minimizing the number of similar bugs), and probably fixed a bug with handling of post-preview 204 replies. Yes, I know. I am watching the branch for changes and additions... The above was done many days ago, but I also wanted to sync with HEAD before asking you to test again. Since I am not able to do that because of stale CVS locks, I suggest that you try the squid3-icap branch before I sync with HEAD. Yes, I am using and testing it. I have not seen any problems. Everything looks OK. Regards, Christos
Re: locked out by partial cvsmerge
Henrik Nordstrom wrote: tis 2007-02-27 klockan 21:40 -0700 skrev Alex Rousskov: Thanks for the explanation and snipped details. Unfortunately, it looks like the problem is back and has even become worse as I see more locks (from other developers) getting stuck now: cvs rtag: [21:28:08] waiting for rousskov's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc cvs rtag: [21:28:38] waiting for serassio's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc cvs rtag: [21:29:08] waiting for amosjeffries's lock in /cvsroot/squid/squid3/lib/cppunit-1.10.0/doc All seem to be gone now... Do you know whether there is something going on with SourceForge that increases the probability of these locks getting stuck? Anything we can do about it in the short term? It might actually be the cvsmerge script.. the following line isn't exactly kind to CVS.. o Check if there is any pending changes in the repository diffl=`eecvs -q rdiff -kk -r ${mergetag} -r ${mergefrom} ${module} | head | wc -l` probably should remove the head from there, letting the rdiff run to completion. Is that a hint for us all to edit our copies of cvsmerge? Regards Henrik Amos
Re: squid3 comments
Is squid3 faster or slower than squid2? _J Alex Rousskov [EMAIL PROTECTED] 02/27/07 5:04 PM On Tue, 2007-02-27 at 13:27 +0200, Tsantilas Christos wrote: On the other hand, I need a proxy with an icap client because I have spent time (and continue spending it) on an icap-related project. Squid3 has a good icap client. The first problem someone can see in squid3 is that there are some parts derived from C code which are not (fully) converted to real C++ code. The second is a feeling that some parts of the code are half-completed. I am thinking that maybe it is good practice for someone to start fixing these things first. I agree that many Squid3 parts should be fixed, polished, or thrown away. However, I think that we should focus on making Squid3 stable first, and the performance/polishing work you are discussing should be done for v3.1. There are plenty of users who can use a Squid3 that is stable but not very fast. An alternative for me is to try to fix the bugs and rewrite some of the icap-related parts of the squid26-icap branch. I don't know. This would be a bad idea from my biased point of view. While the code migration to Squid3 was poorly done, we are already at the point where we can make Squid3 work for your purposes instead of going back. Please do not forget that Squid2 has its own problems; it is not like you will be migrating to great code that just needs yet another ICAP patch! Alex.
Re: locked out by partial cvsmerge
tor 2007-03-01 klockan 01:25 +1300 skrev Amos Jeffries: It might actually be the cvsmerge script.. the following line isn't exactly kind to CVS.. o Check if there is any pending changes in the repository diffl=`eecvs -q rdiff -kk -r ${mergetag} -r ${mergefrom} ${module} | head | wc -l` probably should remove the head from there, letting the rdiff run to completion. Is that a hint for us all to edit our copies of cvsmerge? Remove the head part of the pipe.. (`| head`, or `head |`) but I still have it in my copy.. Regards Henrik
Re: squid3 comments
On Wed, 2007-02-28 at 09:19 -0500, Jeremy Hall wrote: Is squid3 faster or slower than squid2? I will be doing performance tests with Squid3 shortly. Will post. Alex.
Re: squid3 comments
Hi Alex, Alex Rousskov wrote: I agree that many Squid3 parts should be fixed, polished, or thrown away. However, I think that we should focus on making Squid3 stable first, and the performance/polishing work you are discussing should be done for v3.1. I am not talking about big changes. I am talking about small changes which can be done while reading the code. As an example of such changes I am attaching the rewritten parseHttpRequest, prepareTransparentURL and prepareAcceleratedURL. A second example: again in parseHttpRequest we have the HttpParser struct, which we use to parse the first line of the request. The HttpRequest::parseFirstLine method (of the HttpMsg-derived HttpRequest class) does more or less the same. Maybe they can be merged. A third example: in HttpStateData::processReplyHeader in the http.cc file, I see the line: const bool parsed = newrep->parse(readBuf, eof, &error); and some lines after I see: header_bytes_read = headersEnd(readBuf->content(), readBuf->contentSize()); What the hell did we parse in the previous line if we did not keep the end of the headers? The easiest thing we can do is to make HttpReply::parse return the size of the headers on success, or zero otherwise. Or just keep the size of the headers in an internal variable of the HttpReply class and then get it with a call like newrep->headersSize(). I think such changes are small; they can make squid3 a little faster and can simplify the code in some cases, which will help debugging. I do not have a lot of time either, but I can find some hours here or there to make such changes, if needed. Alex Rousskov wrote: There are plenty of users who can use a Squid3 that is stable but not very fast. This is true; in most setups squid does not need to be (very) fast.
But on the other hand, I am seeing that sometimes it is very difficult to explain to some people that they do not need a server with 2 or more fast CPUs, expensive hardware RAID and fast SCSI disks, to make a file server for only 10-20 clients.

Regards, Christos

int internalCheckn(const char *urlpath, int size)
{
    if (size < 16)
        return 0;
    return (0 == strncmp(urlpath, "/squid-internal-", 16));
}

char *memstr(char *s, char *pattern, int s_len)
{
    int i, k;
    int p_len = strlen(pattern);
    for (i = 0; i <= s_len - p_len; i++) {
        k = 0;
        while (k < p_len) {
            if (s[i + k] != pattern[k])
                break;
            k++;
        }
        if (k == p_len)
            return s + i;
    }
    return NULL;
}

static void
prepareAcceleratedURL(ConnStateData::Pointer conn, ClientHttpRequest *http, char *url, int url_size, const char *req_hdr)
{
    int vhost = conn->port->vhost;
    int vport = conn->port->vport;
    int uri_sz = 0, tmpsz = 0;
    char *host, *s;

    http->flags.accel = 1;

    /* BUG: Squid cannot deal with '*' URLs (RFC2616 5.1.2) */

    if (url_size >= 15 && strncasecmp(url, "cache_object://", 15) == 0)
        return; /* already in good shape */

    if (*url != '/') {
        if (conn->port->vhost)
            return; /* already in good shape */

        /* else we need to ignore the host name */
        s = NULL;
        s = memstr(url, "//", u_size);
        if (s)
            u_size -= ((s - url) + 2);
        url = s + 2;

#if SHOULD_REJECT_UNKNOWN_URLS
        if (!url)
            return parseHttpRequestAbort(conn, "error:invalid-request");
#endif

        if (url) {
            s = memchr(url, '/', u_size);
            if (s)
                u_size -= s - url;
            url = s;
        }
        if (!url) {
            url = (char *) "/";
            url_size = 1;
        }
    }

    if (internalCheckn(url, url_size)) {
        /* prepend our name & port */
        s = (char *) xcalloc(url_size + 1, 1);
        memcpy(s, url, url_size);
        s[url_size] = '\0';
        http->uri = xstrdup(internalLocalUri(NULL, s));
        xfree(s);
        return;
    }

    if (vhost && (host = mime_get_header(req_hdr, "Host")) != NULL) {
        uri_sz = url_size + 32 + Config.appendDomainLen + strlen(host);
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s", conn->port->protocol, host);
    } else if (conn->port->defaultsite) {
        uri_sz = url_size + 32 + Config.appendDomainLen + strlen(conn->port->defaultsite);
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s", conn->port->protocol, conn->port->defaultsite);
    } else if (vport == -1) {
        /* Put the local socket IP address as the hostname. */
        uri_sz = url_size + 32 + Config.appendDomainLen;
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s:%d",
                         http->getConn()->port->protocol,
                         inet_ntoa(http->getConn()->me.sin_addr),
                         ntohs(http->getConn()->me.sin_port));
    } else if (vport > 0) {
        /* Put the local socket IP address as the
Re: squid3 comments
Tsantilas Christos wrote: As an example of such changes I am attaching the rewritten parseHttpRequest, prepareTransparentURL and prepareAcceleratedURL Sorry to all, the code I sent in the previous mail will not compile. I am sending it again. It needs some more testing to be sure that it is OK.

int internalCheckn(const char *urlpath, int size)
{
    if (size < 16)
        return 0;
    return (0 == strncmp(urlpath, "/squid-internal-", 16));
}

char *memstr(char *s, const char *pattern, int s_len)
{
    int i, k;
    int p_len = strlen(pattern);
    for (i = 0; i <= s_len - p_len; i++) {
        k = 0;
        while (k < p_len) {
            if (s[i + k] != pattern[k])
                break;
            k++;
        }
        if (k == p_len)
            return s + i;
    }
    return NULL;
}

static void
prepareAcceleratedURL(ConnStateData::Pointer conn, ClientHttpRequest *http, char *url, int url_size, const char *req_hdr)
{
    int vhost = conn->port->vhost;
    int vport = conn->port->vport;
    int uri_sz = 0, tmpsz = 0;
    char *host, *s;

    http->flags.accel = 1;

    /* BUG: Squid cannot deal with '*' URLs (RFC2616 5.1.2) */

    if (url_size >= 15 && strncasecmp(url, "cache_object://", 15) == 0)
        return; /* already in good shape */

    if (*url != '/') {
        if (conn->port->vhost)
            return; /* already in good shape */

        /* else we need to ignore the host name */
        s = NULL;
        s = memstr(url, "//", url_size);
        if (s)
            url_size -= ((s - url) + 2);
        url = s + 2;

#if SHOULD_REJECT_UNKNOWN_URLS
        if (!url)
            return parseHttpRequestAbort(conn, "error:invalid-request");
#endif

        if (url) {
            s = (char *) memchr(url, '/', url_size);
            if (s)
                url_size -= s - url;
            url = s;
        }
        if (!url) {
            url = (char *) "/";
            url_size = 1;
        }
    }

    if (internalCheckn(url, url_size)) {
        /* prepend our name & port */
        s = (char *) xcalloc(url_size + 1, 1);
        memcpy(s, url, url_size);
        s[url_size] = '\0';
        http->uri = xstrdup(internalLocalUri(NULL, s));
        xfree(s);
        return;
    }

    if (vhost && (host = mime_get_header(req_hdr, "Host")) != NULL) {
        uri_sz = url_size + 32 + Config.appendDomainLen + strlen(host);
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s", conn->port->protocol, host);
    } else if (conn->port->defaultsite) {
        uri_sz = url_size + 32 + Config.appendDomainLen + strlen(conn->port->defaultsite);
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s", conn->port->protocol, conn->port->defaultsite);
    } else if (vport == -1) {
        /* Put the local socket IP address as the hostname. */
        uri_sz = url_size + 32 + Config.appendDomainLen;
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s:%d",
                         http->getConn()->port->protocol,
                         inet_ntoa(http->getConn()->me.sin_addr),
                         ntohs(http->getConn()->me.sin_port));
    } else if (vport > 0) {
        /* Put the local socket IP address as the hostname, but static port */
        uri_sz = url_size + 32 + Config.appendDomainLen;
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s:%d",
                         http->getConn()->port->protocol,
                         inet_ntoa(http->getConn()->me.sin_addr),
                         vport);
    } else
        return;

    /* OK. Append the url at the end of uri and we are OK */
    uri_sz -= tmpsz;
    url_size = XMIN(uri_sz - 1, url_size);
    memcpy(http->uri + tmpsz, url, url_size);
    http->uri[tmpsz + url_size] = '\0';
    debug(33, 5) ("ACCEL VHOST REWRITE: '%s'\n", http->uri);
}

static void
prepareTransparentURL(ConnStateData::Pointer conn, ClientHttpRequest *http, char *url, int url_size, const char *req_hdr)
{
    char *host;
    int uri_sz = 0, tmpsz = 0;

    http->flags.transparent = 1;

    if (*url != '/')
        return; /* already in good shape */

    /* BUG: Squid cannot deal with '*' URLs (RFC2616 5.1.2) */

    if ((host = mime_get_header(req_hdr, "Host")) != NULL) {
        uri_sz = url_size + 32 + Config.appendDomainLen + strlen(host);
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s", conn->port->protocol, host);
    } else {
        /* Put the local socket IP address as the hostname. */
        uri_sz = url_size + 32 + Config.appendDomainLen;
        http->uri = (char *) xcalloc(uri_sz, 1);
        tmpsz = snprintf(http->uri, uri_sz, "%s://%s:%d",
                         http->getConn()->port->protocol,
                         inet_ntoa(http->getConn()->me.sin_addr),
                         ntohs(http->getConn()->me.sin_port));
    }

    uri_sz -= tmpsz;
    url_size = XMIN(uri_sz - 1,
Re: Squid-2.6 Negotiate broken
Hi Henrik, At 10.34 28/02/2007, Guido Serassio wrote: Hi Henrik, At 23.53 27/02/2007, Henrik Nordstrom wrote: tis 2007-02-27 klockan 23:15 +0100 skrev Henrik Nordstrom: Looking. Ah, it's due to Negotiate returning a final response to the client, and this confuses the twisted logics here even further.. cleaning up to untwist the FINISHED/DONE states into one. Ok. Should work better now. Please give it some testing in both Negotiate and NTLM. I will test it tonight. Both NTLM and Negotiate seem to work fine again. Tested both squid.2-HEAD and SQUID_2_6. I have also successfully run a build test of squid.2-HEAD on the following platforms: NetBSD 3.1 OpenBSD 3.8 SGI Irix 6.5 HP Tru64 5.1 MinGW Just found some warnings: HttpStatusLine.c: In function `httpStatusLineParse': HttpStatusLine.c:100: warning: subscript has type `char' HttpStatusLine.c:116: warning: subscript has type `char' HttpStatusLine.c:130: warning: subscript has type `char' HttpMsg.c: In function `httpMsgParseRequestLine': HttpMsg.c:183: warning: subscript has type `char' HttpMsg.c:193: warning: subscript has type `char' HttpMsg.c:202: warning: subscript has type `char' HttpMsg.c:245: warning: subscript has type `char' HttpMsg.c:263: warning: subscript has type `char' HttpMsg.c:283: warning: subscript has type `char' Regards Guido
Re: squid3 comments
On Wed, 2007-02-28 at 20:48 +0200, Tsantilas Christos wrote: As an example of such changes I am attaching the rewritten parseHttpRequest, prepareTransparentURL and prepareAcceleratedURL. A second example: again in parseHttpRequest we have the HttpParser struct, which we use to parse the first line of the request. The HttpRequest::parseFirstLine method (of the HttpMsg-derived HttpRequest class) does more or less the same. Maybe they can be merged. A third example: in HttpStateData::processReplyHeader in the http.cc file, I see the line: const bool parsed = newrep->parse(readBuf, eof, &error); and some lines after I see: header_bytes_read = headersEnd(readBuf->content(), readBuf->contentSize()); What the hell did we parse in the previous line if we did not keep the end of the headers? The easiest thing we can do is to make HttpReply::parse return the size of the headers on success, or zero otherwise. Or just keep the size of the headers in an internal variable of the HttpReply class and then get it with a call like newrep->headersSize(). I think such changes are small; they can make squid3 a little faster and can simplify the code in some cases, which will help debugging. I do not have a lot of time either, but I can find some hours here or there to make such changes, if needed. If you are absolutely, positively, 100% sure that these changes are correct, then it is only a question of whether they will help us make Squid3 stable faster (by fixing bugs, improving the code, debugging flow, or whatever). If they will, we should implement them. My concern is that in many cases, an innocent-looking parsing function has a couple of well-hidden side effects. Removing a function call or changing call order introduces subtle bugs. I have been bitten by this many times. I think we should start by stating the goal of these changes in Squid 3.0.
If they are for performance improvement, I would suggest waiting until v3.1 or until performance tests indicate that we must improve Squid 3 performance before calling it stable. If the goal is to fix a common-case bug, we should make the necessary changes, of course. If the ultimate goal is different, let's discuss it. Thank you, Alex. P.S. I will try to look at your specific changes soon.
Re: squid3 comments
On Wed, 2007-02-28 at 17:54 -0700, Alex Rousskov wrote: I think we should start from stating the goal of these changes in Squid 3.0. If they are for performance improvement, I would suggest waiting until v3.1 or until performance tests indicate that we must improve Squid 3 performance before calling it stable. If the goal is to fix a common-case bug, we should make the necessary changes, of course. If the ultimate goal is different, let's discuss it. I agree. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt.
Re: squid3 comments
On Thu, Mar 01, 2007, Robert Collins wrote: On Wed, 2007-02-28 at 17:54 -0700, Alex Rousskov wrote: I think we should start from stating the goal of these changes in Squid 3.0. If they are for performance improvement, I would suggest waiting until v3.1 or until performance tests indicate that we must improve Squid 3 performance before calling it stable. If the goal is to fix a common-case bug, we should make the necessary changes, of course. If the ultimate goal is different, let's discuss it. I agree. So do I. If you want Squid-3 to go ahead in production then please, please, please, concentrate on fixing up all the bugs that are in Bugzilla without huge code restructuring and get the stable release out the door. Adrian
default Methods?
Hi, I've again been bitten by the "by default Squid doesn't support methods for application X" problem, where X is almost always Subversion. What do people think about: * adding the Subversion methods in by default? * as a more long-term goal, adding an option that allows Squid to handle any method, but only those it knows about will be considered for caching. Thoughts? Adrian
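For what it's worth, squid.conf already offers a per-install escape hatch here via the extension_methods directive; something like the following should let Subversion through today (the exact method list is an assumption, so verify it against your clients):

```
# squid.conf: let Subversion's WebDAV/DeltaV request methods through
extension_methods REPORT MERGE MKACTIVITY CHECKOUT PROPFIND PROPPATCH LOCK UNLOCK
```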
knowledge base stuff
Something I've been meaning to do for a while is assemble a knowledge base of common specific problems. Kind of like the FAQ, but less how do I do this? and more It broke like X, how do I fix it? The first article: http://wiki.squid-cache.org/KnowledgeBase/NoNTLMGroupAuth It might get merged into the FAQ, I'm not sure at this stage. I'll start populating it as I see squid-users posts w/ solutions that I think merit recording. Adrian
Re: locked out by partial cvsmerge
On Wed, 2007-02-28 at 10:35 +0100, Henrik Nordstrom wrote: It might actually be the cvsmerge script.. the following line isn't exactly kind to CVS.. o Check if there is any pending changes in the repository diffl=`eecvs -q rdiff -kk -r ${mergetag} -r ${mergefrom} ${module} | head | wc -l` probably should remove the head from there, letting the rdiff run to completion. FWIW, I was able to merge after removing 'head'. Removing 'head' may have nothing to do with merge success, but what a relief! To speed things up, how about using the -s option with rdiff instead of the 'head' trick: If you use the -s option, no patch output is produced. Instead, a summary of the changed or added files between the two releases is sent to the standard output device. This is useful for finding out, for example, which files have changed between two dates or revisions. Is it portable? Seems to be available on FreeBSD and Linux. Thanks a lot, Alex.
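Concretely, the two candidate edits to that cvsmerge line would look like this (untested sketches against the script quoted above; the -s behaviour is as described in the rdiff manual excerpt):

```
# variant 1: drop head, let rdiff run to completion
diffl=`eecvs -q rdiff -kk -r ${mergetag} -r ${mergefrom} ${module} | wc -l`

# variant 2: use rdiff -s so no patch text is generated at all
diffl=`eecvs -q rdiff -s -kk -r ${mergetag} -r ${mergefrom} ${module} | wc -l`
```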
Re: knowledge base stuff
Something I've been meaning to do for a while is assemble a knowledge base of common specific problems. Kind of like the FAQ, but less "how do I do this?" and more "It broke like X, how do I fix it?" The first article: http://wiki.squid-cache.org/KnowledgeBase/NoNTLMGroupAuth It might get merged into the FAQ, I'm not sure at this stage. I'll start populating it as I see squid-users posts w/ solutions that I think merit recording. It overlaps with parts of the FAQ, but it seems like a good idea to me. I'll try and add to it. I suggest we use the one topic, one page approach, and then let the wiki engine add the glue. Kinkie