Re: [squid-users] 0 means no limit??

2006-02-23 Thread mohinder garg
Hi,

When you say "less than 1 sec", isn't it exactly zero?

Thanks
Mohinder

On 2/23/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Thu 2006-02-23 at 17:24 +0530, mohinder garg wrote:
> > Hi,
> >
> > I have seen in squid 2.5 stable10 that at many places if i give 0
> > value, it means no limit.
>
Where this is mentioned in the squid.conf.default comments, yes.
>
> > for example: connect_timeout 0  (it means no limit to connect timeout).
>
> connect_timeout does not work like this. If you set it to 0 then
> requests will time out very quickly (less than one second).
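For clarity, a hedged squid.conf sketch of the difference (values illustrative, not defaults):

```text
# squid.conf sketch -- connect_timeout 0 does NOT mean "no limit";
# it makes connection attempts time out almost immediately.
# To approximate "no practical limit", use a generous explicit value:
connect_timeout 1 hour
```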
>
> Regards
> Henrik
>
>
>
>
>


--
Mohinder Paul
Software Engineer
NET Devices Inc.
Bangalore-95, India.
Contact No. +91 80 55171314 (O)
  +91 9886176467 (M)


Re: [squid-users] acl req_mime_type

2006-02-23 Thread mohinder garg
Thanks.

On 2/23/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Wed 2006-02-22 at 13:25 +0530, mohinder garg wrote:
> > Hi,
> >
> > can anybody tell me what this acl does? does it block downloading or 
> > uploading?
> > and how can i test it?
>
> It matches the content-type of the HTTP request data going TO the
> requested web server.
>
> It is not really related to uploading; it is more about applications sending
> data over HTTP. (Most HTTP uploads use form-based uploading.)
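As an illustration, a hedged squid.conf sketch (the ACL name is hypothetical). Note that req_mime_type matches the Content-Type of the request sent TO the server, not of the response:

```text
# squid.conf sketch -- "block_zip_upload" is a hypothetical ACL name
acl block_zip_upload req_mime_type -i ^application/zip$
http_access deny block_zip_upload
```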
>
> Regards
> Henrik
>
>
>
>
>




Re: [squid-users] no auth for one domain?

2006-02-23 Thread Mark Elsen
> Reading this, would it be possible to not require AUTH for a certain MIME
> header?
>
>

No, because in that case the object's (web server) header info would have to
be inspected, and that information is only available after the response has
already been received from the remote server.

M.


Re: [squid-users] no auth for one domain?

2006-02-23 Thread Terry Dobbs
Reading this, would it be possible to not require AUTH for a certain MIME 
header?


http_access allow header_type
http_access allow ntlm_users (provided proxy AUTH ACL is named 
'ntlm_users')


Sorry for butting in, just wondering..

Thanks


- Original Message - 
From: "Mark Elsen" <[EMAIL PROTECTED]>

To: "nairb rotsak" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, February 23, 2006 7:44 PM
Subject: Re: [squid-users] no auth for one domain?



Is it possible to have my ntlm users go around 1
domain?  We can't seem to get a state web site (which
uses a weird front end to its client... but it ends
up on the web) to go through the proxy.  When we sniff
the traffic locally, it is popping up a 407, but there
isn't any way to log in.

I tried to put an acl and http_access higher in the
list in the .conf, but that didn't seem to matter?



It would have been more productive to show the line you put in squid.conf
for that domain; offhand, it should probably
resemble something like this:

acl ntlm_go_around dstdomain name-excluded-domain
...

http_access allow ntlm_go_around
http_access allow ntlm_users (provided proxy AUTH ACL is named 
'ntlm_users')


M.





Re: [squid-users] Save clients password

2006-02-23 Thread Squidrunner Support Team
On 2/24/06, Mark Elsen <[EMAIL PROTECTED]> wrote:
> > While you are entering the username and password, you can save the
> > password by enabling the checkbox below the password field. This will
> > save your username and password and when you open the browser it will
> > not ask for the authentication.
> >
>
>  - Did you try this ?

It will still ask for authentication, but you need not retype the credentials;
you can just click and go.


Re: [squid-users] cpu usage increases over time, squid performance declines

2006-02-23 Thread Mike Solomon
All of the machines have had a full process restart and they are  
still experiencing the same problems, so it looks as if  
half_closed_clients wasn't the source of the problem.


-Mike

On Feb 22, 2006, at 1:12 PM, Mike Solomon wrote:

I added this line to the config on two of my hosts, but it did not  
have any effect. The host experienced the same amount of slowdown  
under high load and had to be restarted.


I should note that I changed the config file and did:

sudo squid -k reconfigure

I did not kill the process.

I'm not sure if I understand half_closed_clients exactly, but the  
number of active file descriptors did not change significantly.


As I mentioned before, turning down the keep-alive time and  
lowering the active file descriptors did not seem to have any  
effect previously.


Thanks,

-Mike

On Feb 21, 2006, at 2:47 PM, Henrik Nordstrom wrote:


On Tue 2006-02-14 at 22:31 -0800, Mike Solomon wrote:

This would be fantastic, but the machines "fall over" after several
hours. I have 4 machines, each configured identically. They last a
few hours - they slowly consume more and more cpu, all in user space,
until it starts affecting the median HTTP response time. Then
throughput drops precipitously.


Try

 half_closed_clients off

Regards
Henrik







Re: [squid-users] Save clients password

2006-02-23 Thread Mark Elsen
> While you are entering the username and password, you can save the
> password by enabling the checkbox below the password field. This will
> save your username and password and when you open the browser it will
> not ask for the authentication.
>

 - Did you try this ?

 M.


Re: [squid-users] no auth for one domain?

2006-02-23 Thread Mark Elsen
> Is it possible to have my ntlm users go around 1
> domain?  We can't seem to get a state web site (which
> uses a weird front end to its client... but it ends
> up on the web) to go through the proxy.  When we sniff
> the traffic locally, it is popping up a 407, but there
> isn't any way to log in.
>
> I tried to put an acl and http_access higher in the
> list in the .conf, but that didn't seem to matter?
>

It would have been more productive to show the line you put in squid.conf
for that domain; offhand, it should probably
resemble something like this:

acl ntlm_go_around dstdomain name-excluded-domain
...

http_access allow ntlm_go_around
http_access allow ntlm_users (provided proxy AUTH ACL is named 'ntlm_users')
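Expanded into a slightly fuller hedged sketch (ACL names and domain are hypothetical). Order matters: the bypass line must precede the auth line, so requests to the excluded domain never trigger a 407 challenge:

```text
# squid.conf sketch -- names and domain are hypothetical
acl ntlm_go_around dstdomain .excluded-domain.example
acl ntlm_users proxy_auth REQUIRED

http_access allow ntlm_go_around
http_access allow ntlm_users
http_access deny all
```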

M.


Re: [squid-users] squid_ldap_auth Novell and a ERR Success message...

2006-02-23 Thread Mark Elsen
> Has anybody come across this problem of getting squid_ldap_auth to get
> users off of an NDS LDAP server? ldapsearch can connect to it fine, and I
> can see the users, but when I use it to authenticate with Squid, it gives
> me an "ERR Success" message. Also, do you know where or how I can turn the
> logs on to see what is going on? squid_ldap_auth has a -d flag for debug,
> but it does nothing that I can see.
>


http://www.squid-cache.org/mail-archive/squid-users/200306/0835.html

(e.g.)

 M.


[squid-users] squid_ldap_auth Novell and a ERR Success message...

2006-02-23 Thread Patrick Gray
Has anybody come across this problem of getting squid_ldap_auth to get
users off of an NDS LDAP server? ldapsearch can connect to it fine, and I
can see the users, but when I use it to authenticate with Squid, it gives
me an "ERR Success" message. Also, do you know where or how I can turn the
logs on to see what is going on? squid_ldap_auth has a -d flag for debug,
but it does nothing that I can see.
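One common way to debug an authentication helper outside Squid is to run it by hand and type a "username password" pair; the helper answers OK or ERR per line. The path, base DN, and server below are assumptions for illustration:

```text
# Command sketch -- base DN and server are hypothetical
/usr/local/squid/libexec/squid_ldap_auth -b "o=MyOrg" -h nds.example.com
someuser somepassword
OK
```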


Thanks
Patrick




RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
FYI, I set half_closed_clients to off and that seemed to get rid of like 95% of 
those messages. 
 


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 3:03 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

On Thu 2006-02-23 at 14:26 -0800, Gregori Parker wrote:
> Well, everything is rebuilt, and my file descriptors are OK, but I'm still 
> seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ
> 
> Any more ideas?  Or are these safely ignored?  Or are they BAD?!? 

Looks like just some debug output. It's probably safe to edit the debug
statement to use log level 2.

As has been stated earlier in this thread, the epoll patch is a work in
progress. No guarantees it won't fry your computer and eat your lunch.
It's always recommended to have thick skin when trying out patches
from devel.squid-cache.org.

Regards
Henrik



RE: [squid-users] post epoll...

2006-02-23 Thread Henrik Nordstrom
On Thu 2006-02-23 at 14:26 -0800, Gregori Parker wrote:
> Well, everything is rebuilt, and my file descriptors are OK, but I'm still 
> seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ
> 
> Any more ideas?  Or are these safely ignored?  Or are they BAD?!? 

Looks like just some debug output. It's probably safe to edit the debug
statement to use log level 2.

As has been stated earlier in this thread, the epoll patch is a work in
progress. No guarantees it won't fry your computer and eat your lunch.
It's always recommended to have thick skin when trying out patches
from devel.squid-cache.org.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Well, everything is rebuilt, and my file descriptors are OK, but I'm still 
seeing the storeClientCopy3: http://whatever - clearing ENTRY_DEFER_READ

Any more ideas?  Or are these safely ignored?  Or are they BAD?!? 


-Original Message-
From: Gregori Parker 
Sent: Thursday, February 23, 2006 12:44 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

Thanks Chris - that did the trick

Also, thanks to Squidrunner Support Team - your advice resolved my file 
descriptor issue.

Bigups to all you guys, thanks!
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:53 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Ok, I need more detail - it doesn't make a lot of sense to 
> me.  I ran ./bootstrap.sh where you said, and it told me this:
> 
> Trying autoconf (GNU Autoconf) 2.59

SNIP

> autoconf failed
> Autotool bootstrapping failed. You will need to investigate 
> and correct
> before you can develop on this source tree
> 
> Obviously I need some newer files, but I don't know where to 
> get them or where to put them once I got them.  PLEASE HELP :D
> 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris




Re: [squid-users] Filtering log data

2006-02-23 Thread Henrik Nordstrom
On Thu 2006-02-23 at 19:52 +0100, David wrote:
> Hello,
> 
> I am using Squid to collect log data as part of a user study.
> 
> My problem is that logging headers ("log_mime_hdrs") together with the
> "regular" logging that takes place generates huge amounts of data. I am
> trying to minimize that load.
> 
> I am exclusively interested in logging:
> 
> - document requests (i.e. not image MIME types)
> - one specific line in the http request header
> 
> Are there some settings which would allow me to filter out these values?

http://devel.squid-cache.org/old_projects.html#customlog
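Until something like customlog is available, one hedged workaround is post-processing: filter access.log on the content-type field. This sketch assumes the native log format, where the content type is the last whitespace-separated field; the sample lines are inlined so it runs standalone:

```shell
# Write two sample native-format access.log lines (content type is last field)
cat <<'EOF' > /tmp/sample_access.log
1140700000.123    250 10.0.0.1 TCP_MISS/200 4512 GET http://example.com/page.html - DIRECT/1.2.3.4 text/html
1140700001.456     80 10.0.0.1 TCP_HIT/200 1024 GET http://example.com/logo.png - NONE/- image/png
EOF
# Keep only entries whose content type is not an image/* type
awk '$NF !~ /^image\//' /tmp/sample_access.log
```

The same awk filter can run on a live log tail, at the cost of doing the work outside Squid.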

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid transparent proxy + VPN problem

2006-02-23 Thread Henrik Nordstrom
On Thu 2006-02-23 at 10:54 -0500, David Clymer wrote:

> I am having problems with networks connected via IPSec VPN. I am using
> native 2.6 ipsec, so there are no interfaces associated with individual
> VPN connections as when using KLIPS (the openswan IPSec implementation).

The native 2.6 ipsec still has some issues with iptables/netfilter
NAT (which is what implements iptables REDIRECT), for just the above
reason.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] FTP Commands

2006-02-23 Thread Henrik Nordstrom
On Thu 2006-02-23 at 16:15 +0100, Eugen Kraynovych wrote:

> Important: it works only with the "HTTP CONNECT" client method, not with
> "HTTP proxy with FTP support" (TotalCommander's response to DELETE is "Not
> implemented"), nor anything else.

If you use CONNECT, there is no HTTP involved other than the initial
CONNECT request to open the control channel. CONNECT in HTTP is
equivalent to CONNECT in SOCKS. It gives you a full-duplex "direct"
connection to the requested server.

If you are thinking of allowing CONNECT to be used in this manner (the
default squid.conf does not, for very valid security reasons), I assure you
that you would be much better off using a SOCKS proxy.


> I am now writing documentation for a piece of software which should also
> delete some files through the proxy (Squid), so I need to know whether this
> is officially implemented, whether there are any limitations, and so on.

Squid does not implement the DELETE HTTP method on ftp:// URLs.

Squid does not allow CONNECT ports other than 443 (https) and 563
(snews) by default. Other ports may be allowed by the local proxy admin,
but great care should be taken in what ports you open up to not allow
abuse of the HTTP proxy service.
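The default policy referred to above looks roughly like this in a 2.5-era squid.conf (shown as a hedged illustration; check your own squid.conf.default for the exact lines):

```text
# squid.conf sketch of the stock CONNECT restriction
acl SSL_ports port 443 563      # https, snews
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
```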

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] unsubscribe is not working

2006-02-23 Thread Jack Pepper

To the List Manager:

I have unsubscribed several times; in each case I get the confirmation
reply, but a few hours later the squid-users mail starts coming in
again.


Very odd.  Could you please check the list and unsubscribe me.  Thanks.


jp

-
Email solutions, MS Exchange alternatives and extrication,
security services, systems integration.
Contact:[EMAIL PROTECTED]




RE: [squid-users] Interception proxy: disable errors

2006-02-23 Thread Henrik Nordstrom
On Thu 2006-02-23 at 11:20 -0500, Shoebottom, Bryan wrote:

> I realize I can do this, but the user will still receive a page.  Is
> there a way to have the client act as though it weren't going through a
> cache?

Nope.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Thanks Chris - that did the trick

Also, thanks to Squidrunner Support Team - your advice resolved my file 
descriptor issue.

Bigups to all you guys, thanks!
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:53 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Ok, I need more detail - it doesn't make a lot of sense to 
> me.  I ran ./bootstrap.sh where you said, and it told me this:
> 
> Trying autoconf (GNU Autoconf) 2.59

SNIP

> autoconf failed
> Autotool bootstrapping failed. You will need to investigate 
> and correct
> before you can develop on this source tree
> 
> Obviously I need some newer files, but I don't know where to 
> get them or where to put them once I got them.  PLEASE HELP :D
> 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris



RE: [squid-users] low squid performance?

2006-02-23 Thread Chris Robertson
> -Original Message-
> From: Tomasz Kolaj [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 11:17 AM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] low squid performance?
> 
> 
> On Thursday, 23 February 2006 at 18:32, you wrote:
> 
> > With epoll, 100 Req/sec puts my CPU at 23%.  It made a huge 
> difference.
> 
> I still have the same CPU usage... maybe I applied the wrong patch?
> 
> >
> > More memory and more spindles (drives) certainly won't 
> hurt, but you seem
> > to be CPU limited.  Taking care of that problem will likely 
> net you the
> > best improvement.
> 
> So what do you suggest?
> 
> Regards,
> -- 
> Tomasz Kolaj
>

Likely I wasn't clear about the steps involved in applying the epoll patch.
My apologies.  Some of the following steps may be redundant in your case.

1) Grab the most recent Squid source, ungzip, untar
2) Grab epoll patch
3) Grab autoconf 2.13 
(http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz)
4) Ungzip, untar, configure, make, and install.
5) Grab automake 1.5 
(http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz)
6) Ungzip, untar, etc. again.
7) Patch Squid source
8) Run bootstrap.sh in Squid source dir
9) Run "./configure --help" to verify --enable-epoll is an option
10) Configure, make, make install
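As a sanity check before step 8, it helps to confirm which autoconf version bootstrap.sh will pick up. A small hedged sketch of parsing the version string (the sample line is hardcoded so it runs standalone):

```shell
# Parse the version number out of a bootstrap.sh-style "Trying autoconf" line
version_line="Trying autoconf (GNU Autoconf) 2.59"
ver=$(printf '%s\n' "$version_line" | grep -o '[0-9][0-9.]*$')
if [ "$ver" = "2.13" ]; then
  echo "autoconf version OK"
else
  echo "wrong autoconf: $ver (squid-2.5 bootstrap wants 2.13)"
fi
```

In practice you would feed it the real first line of the bootstrap.sh output instead of the hardcoded sample.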

Chris


RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Thank you very much Chris - I think that did the trick.

When I run bootstrap again, it seems successful...but can I ignore these 
warnings?

configure.in:1392: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1493: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1494: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1495: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1496: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1497: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1498: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1499: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1500: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1501: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1502: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1904: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1933: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1957: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1392: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1488: warning: AC_TRY_RUN called without default to allow cross 
compiling
configure.in:1489: warning: AC_TRY_RUN called without default to allow cross 
compiling
(etc...)

___
 
Gregori Parker  *  Network Administrator
___
 
 Phone 206.404.7916  *  Fax 206.404.7901
[EMAIL PROTECTED]  
 
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 12:17 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:53 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Ok, I need more detail - it doesn't make a lot of sense to 
> me.  I ran ./bootstrap.sh where you said, and it told me this:
> 
> Trying autoconf (GNU Autoconf) 2.59

SNIP

> autoconf failed
> Autotool bootstrapping failed. You will need to investigate 
> and correct
> before you can develop on this source tree
> 
> Obviously I need some newer files, but I don't know where to 
> get them or where to put them once I got them.  PLEASE HELP :D
> 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris



Re: [squid-users] low squid performance?

2006-02-23 Thread Tomasz Kolaj
On Thursday, 23 February 2006 at 18:32, you wrote:

> With epoll, 100 Req/sec puts my CPU at 23%.  It made a huge difference.

I still have the same CPU usage... maybe I applied the wrong patch?

>
> More memory and more spindles (drives) certainly won't hurt, but you seem
> to be CPU limited.  Taking care of that problem will likely net you the
> best improvement.

So what do you suggest?

Regards,
-- 
Tomasz Kolaj


RE: [squid-users] post epoll...

2006-02-23 Thread Chris Robertson
> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:53 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Ok, I need more detail - it doesn't make a lot of sense to 
> me.  I ran ./bootstrap.sh where you said, and it told me this:
> 
> Trying autoconf (GNU Autoconf) 2.59

SNIP

> autoconf failed
> Autotool bootstrapping failed. You will need to investigate 
> and correct
> before you can develop on this source tree
> 
> Obviously I need some newer files, but I don't know where to 
> get them or where to put them once I got them.  PLEASE HELP :D
> 

Actually, you need older files.  :o)

As of today...

Grab http://mirrors.kernel.org/gnu/autoconf/autoconf-2.13.tar.gz

Ungzip, untar, configure, make, and install.

Grab http://mirrors.kernel.org/gnu/automake/automake-1.5.tar.gz

Ungzip, untar, etc. again.

Re-run bootstrap.sh

Chris


RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Ok, I need more detail - it doesn't make a lot of sense to me.  I ran 
./bootstrap.sh where you said, and it told me this:

Trying autoconf (GNU Autoconf) 2.59
autoheader: WARNING: Using auxiliary files such as `acconfig.h', `config.h.bot'
autoheader: WARNING: and `config.h.top', to define templates for `config.h.in'
autoheader: WARNING: is deprecated and discouraged.
autoheader:
autoheader: WARNING: Using the third argument of `AC_DEFINE' and
autoheader: WARNING: `AC_DEFINE_UNQUOTED' allows to define a template without
autoheader: WARNING: `acconfig.h':
autoheader:
autoheader: WARNING:   AC_DEFINE([NEED_FUNC_MAIN], 1,
autoheader: [Define if a function `main' is needed.])
autoheader:
autoheader: WARNING: More sophisticated templates can also be produced, see the
autoheader: WARNING: documentation.
configure.in:13: warning: do not use m4_patsubst: use patsubst or m4_bpatsubst
aclocal.m4:628: AM_CONFIG_HEADER is expanded from...
configure.in:13: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:2552: warning: do not use m4_regexp: use regexp or m4_bregexp
aclocal.m4:641: _AM_DIRNAME is expanded from...
configure.in:2552: the top level
configure.in:13: warning: do not use m4_patsubst: use patsubst or m4_bpatsubst
aclocal.m4:628: AM_CONFIG_HEADER is expanded from...
configure.in:13: the top level
configure.in:1555: warning: AC_CHECK_TYPE: assuming `u_short' is not a type
autoconf/types.m4:234: AC_CHECK_TYPE is expanded from...
configure.in:1555: the top level
configure.in:2552: warning: do not use m4_regexp: use regexp or m4_bregexp
aclocal.m4:641: _AM_DIRNAME is expanded from...
configure.in:2552: the top level
configure.in:2365: error: do not use LIBOBJS directly, use AC_LIBOBJ (see 
section `AC_LIBOBJ vs LIBOBJS'
  If this token and others are legitimate, please use m4_pattern_allow.
  See the Autoconf documentation.
autoconf failed
Autotool bootstrapping failed. You will need to investigate and correct
before you can develop on this source tree

Obviously I need some newer files, but I don't know where to get them or where 
to put them once I got them.  PLEASE HELP :D


 
 


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 11:22 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:03 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Interesting...
> 
> Here's what I did (downloaded patch from here
> http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):
> 
>   #> tar zxvf squid.tar.gz
>   #> mv squid-2.5STABLE12/ squid
>   #> patch -p0 < epoll-2_5.patch
>   #> cd squid

./bootstrap.sh <--- This will complain if you don't have the preferred versions 
of autoconf and automake.

>   #> ulimit -HSn 8192
>   #> ./configure --prefix=/usr/local/squid --enable-async-io
> --enable-snmp --enable-htcp --enable-underscores --enable-epoll
>   #> make
>   (etc..)
> 
> Can you help me understand what I missed?  I've never worked 
> with CVS or
> bootstrap.sh, so please be specific :) 
> 
> 

To the best of my knowledge, every one of the patch files on 
devel.squid-cache.org has a bootstrap.sh that needs to be run after the patch 
is applied.

Chris



RE: [squid-users] post epoll...

2006-02-23 Thread Chris Robertson
> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 10:03 AM
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] post epoll...
> 
> 
> Interesting...
> 
> Here's what I did (downloaded patch from here
> http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):
> 
>   #> tar zxvf squid.tar.gz
>   #> mv squid-2.5STABLE12/ squid
>   #> patch -p0 < epoll-2_5.patch
>   #> cd squid

./bootstrap.sh <--- This will complain if you don't have the preferred versions 
of autoconf and automake.

>   #> ulimit -HSn 8192
>   #> ./configure --prefix=/usr/local/squid --enable-async-io
> --enable-snmp --enable-htcp --enable-underscores --enable-epoll
>   #> make
>   (etc..)
> 
> Can you help me understand what I missed?  I've never worked 
> with CVS or
> bootstrap.sh, so please be specific :) 
> 
> 

To the best of my knowledge, every one of the patch files on 
devel.squid-cache.org has a bootstrap.sh that needs to be run after the patch 
is applied.

Chris


RE: [squid-users] post epoll...

2006-02-23 Thread Gregori Parker
Interesting...

Here's what I did (downloaded patch from here
http://devel.squid-cache.org/cgi-bin/diff2/epoll-2_5.patch?s2_5):

#> tar zxvf squid.tar.gz
#> mv squid-2.5STABLE12/ squid
#> patch -p0 < epoll-2_5.patch
#> cd squid
#> ulimit -HSn 8192
#> ./configure --prefix=/usr/local/squid --enable-async-io
--enable-snmp --enable-htcp --enable-underscores --enable-epoll
#> make
(etc..)

Can you help me understand what I missed?  I've never worked with CVS or
bootstrap.sh, so please be specific :)   


-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 10:04 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] post epoll...

> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 8:49 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] post epoll...
> 
> 
> So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
> some...now I'm seeing this a LOT in the cache.log (more than once per
> minute)
> 
> storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
> ENTRY_DEFER_READ
> 
> Should I 86 the patch?  By "86" I mean "get rid of" ;/~
> 
> 
>

That problem seems to have surfaced in May of 2005
(http://www.google.com/search?hl=en&lr=&q=site%3Awww.squid-cache.org%2Fm
ail-archive%2Fsquid-users%2F+ENTRY_DEFER_READ&btnG=Search), and was
(apparently) fixed at that time.

How did you go about patching the Squid source?
Were you aware that you have to run bootstrap.sh after patching (and
that doing so requires specific versions of autoconf and automake)?
Did you apply the patch to the most recent Squid source (or download the
CVS version with the epoll tag)?

FWIW, I'm running epoll on Squid2.5 STABLE11 without problem.

Chris



[squid-users] Filtering log data

2006-02-23 Thread David
Hello,

I am using Squid to collect log data as part of a user study.

My problem is that logging headers ("log_mime_hdrs") together with the
"regular" logging that takes place generates huge amounts of data. I am
trying to minimize that load.

I am exclusively interested in logging:

- document requests (i.e. not image MIME types)
- one specific line in the http request header

Are there some settings which would allow me to filter out these values?


I thank you,


/David



[squid-users] no auth for one domain?

2006-02-23 Thread nairb rotsak
Is it possible to have my ntlm users go around 1
domain?  We can't seem to get a state web site (which
uses a weird front end to its client... but it ends
up on the web) to go through the proxy.  When we sniff
the traffic locally, it is popping up a 407, but there
isn't any way to log in. 

I tried to put an acl and http_access higher in the
list in the .conf, but that didn't seem to matter?

I got that idea because after reading the FAQ, it
sounded like that is how you do it?

Thanks!



RE: [squid-users] post epoll...

2006-02-23 Thread Chris Robertson
> -Original Message-
> From: Gregori Parker [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 23, 2006 8:49 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] post epoll...
> 
> 
> So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
> some...now I'm seeing this a LOT in the cache.log (more than once per
> minute)
> 
> storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
> ENTRY_DEFER_READ
> 
> Should I 86 the patch?  By "86" I mean "get rid of" ;/~
> 
> 
>

That problem seems to have surfaced in May of 2005 
(http://www.google.com/search?hl=en&lr=&q=site%3Awww.squid-cache.org%2Fmail-archive%2Fsquid-users%2F+ENTRY_DEFER_READ&btnG=Search),
 and was (apparently) fixed at that time.

How did you go about patching the Squid source?
Were you aware that you have to run bootstrap.sh after patching (and that 
doing so requires specific versions of autoconf and automake)?
Did you apply the patch to the most recent Squid source (or download the CVS 
version with the epoll tag)?

FWIW, I'm running epoll on Squid2.5 STABLE11 without problem.

Chris


[squid-users] post epoll...

2006-02-23 Thread Gregori Parker
So, I rebuilt squid with the epoll patch, hoping to get cpu usage down
some...now I'm seeing this a LOT in the cache.log (more than once per
minute)

storeClientCopy3: http://xxx..com/xxx/abc.xyz - clearing
ENTRY_DEFER_READ

Should I 86 the patch?  By "86" I mean "get rid of" ;/~




RE: [squid-users] low squid performance?

2006-02-23 Thread Chris Robertson
> -Original Message-
> From: Tomasz Kolaj [mailto:[EMAIL PROTECTED]
> Sent: Saturday, January 28, 2006 10:01 AM
> To: Chris Robertson
> Subject: Re: [squid-users] low squid performance?
> 
> 
> On Thursday, 23 February 2006 at 01:11, you wrote:
> 
> > > > * High latency clients
> > >
> > > What do you mean "high latency clients"?
> >
> > The majority of my customers have a network path like:
> >
> > client->squid->satellite->squid->internet
> 
> many of my clients: client->[radio line 
> {12,34,54}mbps]->squid->internet
> 
> > 100 requests/second put my CPU usage in the high 80s (on a 
> > 32 bit Intel
> > Xeon 3.00GHz).
> 
> So my result isn't so bad. But I must tune squid to maximum possible 
> performance.

With epoll, 100 Req/sec puts my CPU at 23%.  It made a huge difference.

> 
> >
> > > aragorn squid # squid -v
> > > Squid Cache: Version 2.5.STABLE12
> > > configure options:  --prefix=/usr --bindir=/usr/bin
> > > --exec-prefix=/usr
> > > --sbindir=/usr/sbin --localstatedir=/var --mandir=/usr/share/man
> > > --sysconfdir=/etc/squid --libexecdir=/usr/lib/squid
> > > --enable-auth=basic,digest,ntlm --enable-removal-policies=lru,heap
> > > --enable-linux-netfilter --enable-truncate --with-pthreads
> > > --enable-epool
> >
> > Hopefully that's just a misspelling.  ;o)
> 
> Why? ;) Did I do something wrong?
> I'm testing epool patch like you said;>
> 
> > > --disable-follow-x-forwarded-for --host=x86_64-pc-linux-gnu
> > > --disable-snmp
> > > --disable-ssl --enable-underscores
> > > --enable-storeio='diskd,coss,aufs,null'
> > > --enable-async-io
> 
> ah.. async-io, maybe it would be better to specify the number of
> async-io threads?

Try anything once.  :o)

> 
> > I don't see any other likely problems (not saying there aren't any).
> Is there a chance to do something more with the hardware? I can add more 
> memory banks or hard discs (for example +2 WD Raptors).


More memory and more spindles (drives) certainly won't hurt, but you seem to be 
CPU limited.  Taking care of that problem will likely net you the best 
improvement.

> 
> Regards,
> -- 
> Tomasz Kolaj
> 

Chris


Re: [squid-users] Squid transparent proxy + VPN problem

2006-02-23 Thread Mark Elsen
>
> I can't decide if this is a squid problem or an iptables problem, so I'm
> asking here in case someone can point me in the right direction.
>
> -
> Software/Environment details:
> -
>
> jekyl:/home/david# uname -a
> Linux jekyl 2.4.27-2-686 #1 Wed Aug 17 10:34:09 UTC 2005 i686 GNU/Linux
>
> jekyl:/home/david# iptables --version
> iptables v1.2.11
>
> jekyl:/home/david# squid -v
> Squid Cache: Version 2.5.STABLE9
> configure options:  --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin 
> --sbindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid 
> --localstatedir=/var/spool/squid --datadir=/usr/share/squid --enable-async-io 
> --with-pthreads --enable-storeio=ufs,aufs,diskd,null --enable-linux-netfilter 
> --enable-arp-acl --enable-removal-policies=lru,heap --enable-snmp 
> --enable-delay-pools --enable-htcp --enable-poll --enable-cache-digests 
> --enable-underscores --enable-referer-log --enable-useragent-log 
> --enable-auth=basic,digest,ntlm --enable-carp --with-large-files 
> i386-debian-linux
>
> jekyl:/home/david# cat /etc/debian_version
> 3.1
>
> --
> Issue/action Description
> --
>
> I am attempting to do transparent HTTP proxying with squid. This works
> fine for traffic flowing in over individual interfaces, but not for
> traffic arriving over a VPN (the proxy server is also a VPN gateway).
>
> Tracking packets using logging rules, it seems that the packets are
> getting redirected, and even accepted, but are not arriving in userland,
> or squid is dropping the requests. I can see no indication in the squid
> logs that it is receiving the requests - no corresponding entries in
> access.log or cache.log. The proxy can be accessed directly in all
> cases, but not transparently via the VPN.
>
> In squid.conf I've got:
>...

http://squidwiki.kinkie.it/SquidFaq/InterceptionProxy?highlight=%28intercept%29#head-1cf13b27d5a6f8c523a4582d38a8cfaaacafb896

Especially the item concerning MTU will probably haunt you in this case, and
there's no workaround for that.

M.


Re: [squid-users] low squid performance?

2006-02-23 Thread Pedro Timóteo



Hopefully that's just a misspelling.  ;o)



Why? ;) Did I do something wrong?
I'm testing epool patch like you said;>
  


What he meant, I think, was that it's "poll", not "pool"...  therefore, 
"--enable-epool" won't do a thing.





RE: [squid-users] Interception proxy: disable errors

2006-02-23 Thread Shoebottom, Bryan
Henrik,

I realize I can do this, but the user will still receive a page.  Is
there a way to have the client act as though it weren't going through a
cache?

Thanks,
 Bryan
 

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: February 23, 2006 9:52 AM
To: Shoebottom, Bryan
Cc: Squid Users
Subject: Re: [squid-users] Interception proxy: disable errors

Tue 2006-02-21 at 09:55 -0500, Shoebottom, Bryan wrote:
> Hello,
> 
> I am running a WCCP enabled interception proxy and want the users to
be completely unaware that they are going through a proxy. I tried using
the following directive, but when trying to get to a website that
doesn't respond, I get a squid error on the client.
> deny_info TCP_RESET all
> 
> How can I disable all errors presented to the client?

Edit the error pages to your liking.

Regards
Henrik


Re: [squid-users] Solutions for transparent + proxy_auth?

2006-02-23 Thread Steve Brown
> I think educating users (yes, there are 2 different passwords) would be most
> effective.

Believe me, I wish I could.  But these are sales people, and as I
said, some of them aren't very bright.

> 1. give users the same password for mail and proxy and probably fetch them
> from the same source like LDAP (Win2000 Domain).

Thought about that, but I don't want to have to maintain it.  It's a hassle.

> 2. give users SeaMonkey for both browsing and mail, set it up to remember
> passwords, fill it with proxy and mail password, give users only the master
> password.

Multiple users may use the same computer.  We don't want them reading
each others email, number one, and number two, they would wind up
giving out someone else's email address as their own.  Like I said,
not very bright.

> 3. set up FF (and probably M$IE too) to use proxy on localhost - this way
> you will avoid interception and its problems and still give users benefit of
> local proxy server.

I posted earlier about why this won't work.  Firefox is too easy to get
around on OS X.

> I recommend using encrypted connections to protect your passwords, so you
> might need SSL patch to squid: http://devel.squid-cache.org/ssl/, at least
> for 1. and 3.

Thanks, this was going to be my next question. ;-)


[squid-users] Squid transparent proxy + VPN problem

2006-02-23 Thread David Clymer

I can't decide if this is a squid problem or an iptables problem, so I'm
asking here in case someone can point me in the right direction.

-
Software/Environment details:
-

jekyl:/home/david# uname -a
Linux jekyl 2.4.27-2-686 #1 Wed Aug 17 10:34:09 UTC 2005 i686 GNU/Linux

jekyl:/home/david# iptables --version
iptables v1.2.11

jekyl:/home/david# squid -v
Squid Cache: Version 2.5.STABLE9
configure options:  --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin 
--sbindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid 
--localstatedir=/var/spool/squid --datadir=/usr/share/squid --enable-async-io 
--with-pthreads --enable-storeio=ufs,aufs,diskd,null --enable-linux-netfilter 
--enable-arp-acl --enable-removal-policies=lru,heap --enable-snmp 
--enable-delay-pools --enable-htcp --enable-poll --enable-cache-digests 
--enable-underscores --enable-referer-log --enable-useragent-log 
--enable-auth=basic,digest,ntlm --enable-carp --with-large-files 
i386-debian-linux

jekyl:/home/david# cat /etc/debian_version
3.1

--
Issue/action Description
--

I am attempting to do transparent HTTP proxying with squid. This works
fine for traffic flowing in over individual interfaces, but not for
traffic arriving over a VPN (the proxy server is also a VPN gateway).

Tracking packets using logging rules, it seems that the packets are
getting redirected, and even accepted, but are not arriving in userland,
or squid is dropping the requests. I can see no indication in the squid
logs that it is receiving the requests - no corresponding entries in
access.log or cache.log. The proxy can be accessed directly in all
cases, but not transparently via the VPN.

In squid.conf I've got:

debug_options ALL,1 33,2

Transparent proxying works flawlessly for network traffic that comes in
over individual interfaces (eth0, mcnulty_ppp - a wanrouter interface).
These networks are connected like so:

192.168.20/24 --- [router] - [main router/proxy] --- {internet}

I am having problems with networks connected via IPSec VPN. I am using
native 2.6 ipsec, so there are no interfaces associated with individual
VPN connections as when using KLIPS (the openswan IPSec implementation).
They are connected in the following fashion:

192.168.30/24 --- [router] ==Full tunnel (0.0.0.0/0 is forwarded to main 
router)= [main router/proxy] --- {internet}

In the sense of network topology and functionality, these two
configurations are identical, as I see it. For all other purposes
besides that of transparent proxying of HTTP requests, both
configurations work flawlessly.

In my iptables set up, I've set the default policy to DROP for the
INPUT, OUTPUT, and FORWARD chains. This means that packets must be
explicitly accepted. The last rule in each chain is a LOG rule so that I
see packets that get dropped in the logs.

For the working networks, I've got these iptables rules:

nat:
-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

filter:
-A INPUT -d 192.168.10.1 -i eth0 -p tcp -m tcp --dport 3128 -j ACCEPT

Note: in testing the VPN iptables rules, I am restricting their effect
to one host on the remote network, so as to avoid breaking everyone's
internet connection, hence the extra --source switch.

For the VPN networks, I've got these rules in the nat table:

-A PREROUTING -i eth1 -p tcp -m tcp --source 192.168.30.5 --dport 80 -j LOG 
--log-prefix "matched redirect "
-A PREROUTING -i eth1 -p tcp -m tcp --source 192.168.30.5 --dport 80 -j 
REDIRECT --to-ports 3128

and these in the filter table:

-A INPUT -s 192.168.30.5 -i eth1 -j LOG --log-prefix "redirected to input "
-A INPUT -s 192.168.30.5 -d 192.168.2.2 -i eth1 -p tcp -m tcp --dport 3128 -j 
ACCEPT
-A INPUT -s 192.168.30.5 -d 192.168.2.2 -i eth1 -p tcp -m tcp --dport 3128 -j 
LOG --log-prefix "but not accepted for input "

I know that packets are matching the nat rules because the LOG rule
shows up in the logs, and redirected packets are getting logged by my
first filter LOG rule.

I know that packets are getting accepted on the interface because ACCEPT
is a terminating target, and the following LOG rule is not getting
logged. If I remove the ACCEPT rule, the following LOG rule is
evaluated.

Grasping at straws here, but I removed all entries
from /etc/hosts.{allow,deny} in case these contained entries that would
be preventing squid from accepting connections from certain IPs, and
restarted squid.

This doesn't really make any sense, as connections should (I think) be
refused rather than just dropped or ignored if it was a tcpwrappers
thing. Anyway, I tried, and it made no difference.


-davidc

--
Who is General failure and why is he reading my disk?


Re: [squid-users] low squid performance?

2006-02-23 Thread Matus UHLAR - fantomas
On 23.02 14:25, Tomasz Kolaj wrote:
> On Thursday, 23 February 2006 at 11:32, you wrote:
> > On 22.02 23:13, Tomasz Kolaj wrote:
> > > I observed too low performance. On 2x 64bit Xeon 2,8GHz 2GB DDR2, 2x
> > > WD RAPTOR Squid 2.5.STABLE12 can answer max for 120 requests/s.  115 r/s
> > > - 97-98% usage of first processor. Second is unusable for squid :/. I
> > > have two cache_dirs (aufs). One per disk.
> >
> > Maybe you have too many ACL's?
> 
> I pasted my squid.conf in one of my last posts. I have a lot of addresses
> blocked in the file spywaredomains.txt

sorry - the thread was broken and I didn't see it. (b)lame mailers who break
threads by not using References: or at least In-Reply-To: headers...

> acl spywaredomains dstdomain src "/etc/squid/spywaredomains.txt"
> http_access deny spywaredomains
> 
> but even when I remove it from the config, squid still uses a lot of processor time.
> 
> 
> What about epoll? I applied the patch for squid 2.5 for testing.

I don't think that would help you much. Maybe using an external redirector
(SquidGuard?) instead of squid itself would help - it can run on another
CPU, while squid is a single-CPU process.
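
A minimal squid.conf sketch of that offloading idea, assuming SquidGuard is installed at the path shown (the path and child count are examples only):

```text
# Hand URL filtering to an external redirector; the redirector children
# are separate processes, so they can run on the second CPU.
redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
redirect_children 8
```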

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam = (S)tupid (P)eople's (A)dvertising (M)ethod


RE: [squid-users] FTP Commands

2006-02-23 Thread Eugen Kraynovych
The problem is that, although I've read (in some old forums, 2003 and
earlier) that DELETE (DELE) is not possible through Squid, it works for
me in 2.5 STABLE5 and STABLE12.
I made a dump of the network traffic with Ethereal; it shows:
   Client -> Squid   HTTP request   DELE filename
   Squid -> FTP      FTP request    DELE filename
   FTP -> Squid      FTP response   file filename deleted
And the file is really deleted.

Important: this works only with the "HTTP CONNECT" client method; it works
neither with "HTTP proxy with FTP support" (TotalCommander's response to
DELETE is "Not implemented") nor with anything else.

I am now writing documentation for a software product which should also
delete some files through the proxy (Squid), so I need to know whether this
is officially implemented, whether there are any limitations, and so on.

Thanks for the help
EK 

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 23, 2006 3:40 PM
To: Eugen Kraynovych
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] FTP Commands

Mon 2006-02-20 at 15:15 +0100, Eugen Kraynovych wrote:

> Does anybody here have a full list of FTP commands, which SQUID 2.5 can
> (for instance 2.5Stable12; PUT, GET, DELE etc.)? The same for SQUID 3?

Squid acts as an HTTP->FTP gateway on requests for ftp:// URLs.

The currently supported HTTP methods on ftp:// URLs are:
  GET
  PUT
  HEAD

which get translated to suitable FTP commands by Squid as per the
guidelines outlined for ftp:// URLs in RFC 1738 section 3.2.

GET is used both for file retrieval and directory listings.

Binary and ASCII file retrieval is supported. If no explicit format is
requested, Squid guesses using its mime.conf table, which tells Squid
both the FTP transfer mode to use and the content-type to assign to the
reply. Explicit requests for ASCII or binary use text/plain or
application/octet-stream respectively for the content-type.

Directory listings use LIST or NLST depending on the format type
specifier in the URL. Requests without an explicit format use LIST,
while explicit requests for directory format use NLST.

PUT, in addition to storing files using STOR, also automatically
creates directories with MKD if needed, or only creates directories if
the URL ends in / and the content-length is 0. If the URL ends in / and
the content-length > 0, then STOU is used instead of STOR, allowing the
FTP server to assign a suitable file name to the uploaded content.

GET uses REST if the request is a range request for the remainder of the file.

Support could be added for more HTTP methods such as DELETE mapped to
DELE/RMD but nobody has shown any interest in this.
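
As a rough illustration of the mapping above, these client-side commands exercise the supported methods through a proxy; the host names, port, and paths are placeholders:

```shell
# GET on an ftp:// URL: file retrieval, or a directory listing (LIST/NLST)
squidclient -h proxy.example.com -p 3128 "ftp://ftp.example.com/pub/file.bin"
# PUT on an ftp:// URL: upload via STOR (MKD created for missing directories)
curl -x http://proxy.example.com:3128 -T file.bin "ftp://ftp.example.com/pub/file.bin"
```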

Regards
Henrik



Re: [squid-users] 0 means no limit??

2006-02-23 Thread Henrik Nordstrom
Thu 2006-02-23 at 17:24 +0530, mohinder garg wrote:
> Hi,
> 
> I have seen in squid 2.5 STABLE10 that in many places, if I give a 0
> value, it means no limit.

Where this is mentioned in the squid.conf.default comments, yes.

> for example: connect_timeout 0  (it means no limit to connect timeout).

connect_timeout does not work like this. If you set it to 0 then
requests will time out very quickly (less than one second).
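
In other words, raise the limit by giving connect_timeout an explicit value rather than 0; a squid.conf sketch (the value shown is only an example):

```text
# 0 does NOT mean "no limit" here; in 2.5 it times out almost at once.
# Set a generous explicit value instead.
connect_timeout 2 minutes
```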

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] acl req_mime_type

2006-02-23 Thread Henrik Nordstrom
Wed 2006-02-22 at 13:25 +0530, mohinder garg wrote:
> Hi,
> 
> Can anybody tell me what this acl does? Does it block downloading or
> uploading? And how can I test it?

It matches the content-type of the HTTP request data going TO the
requested web server.

It is not really related to uploading; it's more related to applications
sending data over HTTP. (Most HTTP uploads use forms-based uploading.)
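
A hypothetical squid.conf fragment showing how such an acl is typically written; the acl name and pattern are examples only:

```text
# Match request bodies sent TO the server (e.g. form uploads) by their
# Content-Type, then deny them.
acl formupload req_mime_type -i ^multipart/form-data
http_access deny formupload
```

To test it, send a POST with that Content-Type through the proxy and check access.log for the denial.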

Regards
Henrik




Re: [squid-users] Interception proxy: disable errors

2006-02-23 Thread Henrik Nordstrom
Tue 2006-02-21 at 09:55 -0500, Shoebottom, Bryan wrote:
> Hello,
> 
> I am running a WCCP enabled interception proxy and want the users to be 
> completely unaware that they are going through a proxy. I tried using the 
> following directive, but when trying to get to a website that doesn't 
> respond, I get a squid error on the client.
> deny_info TCP_RESET all
> 
> How can I disable all errors presented to the client?

Edit the error pages to your liking.

Regards
Henrik




Re: AW: [squid-users] Squid 2.5.STABLE9 and Kernel 2.6.11 SMP

2006-02-23 Thread Henrik Nordstrom
Tue 2006-02-21 at 12:42 +0100, Christian Herzberg wrote:
> Hi Bart,
> 
> Squid isn't crashing. Squid is waiting for whatever. You can wait one hour
> without any response from squid. After a restart of squid everything is
> working fine.
> The same squid on a system with kernel 2.4 has been working for the last 2
> years without any problem.

Have you tried upgrading your Squid?

Not that I know of any "hanging" issues in 2.5.STABLE9, but it's still
worth a try.

Regards
Henrik




Re: [squid-users] Solutions for transparent + proxy_auth?

2006-02-23 Thread Henrik Nordstrom
Tue 2006-02-21 at 10:51 -0600, Steve Brown wrote:

> Yes there is a user/pass.  Everyone is saying that the browser
> shouldn't indiscriminately provide credentials, which I agree with. 
> However, in the setup I am proposing, the browser isn't submitting
> credentials.  The traffic is intercepted by a local proxy, which does
> *not* have authentication and only responds to localhost traffic.  The
> local proxy then queries the parent cache with the u/p provided by the
> login parameter in the cache_peer config option.  So the
> authentication is there, it just doesn't require any user interaction.

This is entirely fine.

The relation between your Squid and the parent is not transparent
interception, and using cache_peer to specify the login to use in this
relation is perfectly fine. It's only in the relation your Squid <-> the
client where proxy authentication can not be used (including pass-thru
using login=PASS in the cache_peer line..)

Regards
Henrik




Re: [squid-users] FTP Commands

2006-02-23 Thread Henrik Nordstrom
Mon 2006-02-20 at 15:15 +0100, Eugen Kraynovych wrote:

> Does anybody here have a full list of FTP commands, which SQUID 2.5 can
> (for instance 2.5Stable12; PUT, GET, DELE etc.)? The same for SQUID 3?

Squid acts as an HTTP->FTP gateway on requests for ftp:// URLs.

The currently supported HTTP methods on ftp:// URLs are:
  GET
  PUT
  HEAD

which get translated to suitable FTP commands by Squid as per the
guidelines outlined for ftp:// URLs in RFC 1738 section 3.2.

GET is used both for file retrieval and directory listings.

Binary and ASCII file retrieval is supported. If no explicit format is
requested, Squid guesses using its mime.conf table, which tells Squid
both the FTP transfer mode to use and the content-type to assign to the
reply. Explicit requests for ASCII or binary use text/plain or
application/octet-stream respectively for the content-type.

Directory listings use LIST or NLST depending on the format type
specifier in the URL. Requests without an explicit format use LIST,
while explicit requests for directory format use NLST.

PUT, in addition to storing files using STOR, also automatically
creates directories with MKD if needed, or only creates directories if
the URL ends in / and the content-length is 0. If the URL ends in / and
the content-length > 0, then STOU is used instead of STOR, allowing the
FTP server to assign a suitable file name to the uploaded content.

GET uses REST if the request is a range request for the remainder of the file.

Support could be added for more HTTP methods such as DELETE mapped to
DELE/RMD but nobody has shown any interest in this.

Regards
Henrik




Re: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Squidrunner Support Team
Before compiling squid, try changing the value of the __FD_SETSIZE in
the /usr/include/bits/typesizes.h file as

#define __FD_SETSIZE 8192

and then in the shell prompt

ulimit -HSn 8192

and try

echo 1024 32768 > /proc/sys/net/ipv4/ip_local_port_range

then compile and install squid

This should help you out with the file descriptor problem.

Thanks,
-Squid Runner Support

On 2/23/06, Gix, Lilian (CI/OSR) * <[EMAIL PROTECTED]> wrote:
> Hello,
>
>
> I always had some problems with file descriptors.
> One day, I changed my Linux version to a Debian (I don't know if it's the
> reason) and I updated Squid. Since then, I have a configuration file under:
> /etc/default/squid
> On this file, there is:
>
> #
> # /etc/default/squid    Configuration settings for the Squid proxy server.
> #
>
> # Max. number of filedescriptors to use. You can increase this on a busy
> # cache to a maximum of (currently) 4096 filedescriptors. Default is 1024.
> SQUID_MAXFD=4096
>
> I don't know if this can help
>
> Gix Lilian
>
>
> -Original Message-
> From: Mark Elsen [mailto:[EMAIL PROTECTED]
> Sent: Donnerstag, 23. Februar 2006 09:26
> To: Gregori Parker
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] FILE DESCRIPTORS
>
> > Sorry to be pounding the list lately, but I'm about to lose it with
> > these file descriptors...
> >
> > I've done everything I have read about to increase file descriptors on
> > my caching box, and now I just rebuilt a fresh clean squid.  Before I
> > ran configure, I did ulimit -HSn 8192, and I noticed that while
> > configuring it said "Checking File Descriptors... 8192".  I even
> > double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
> > everything was good, even ran a "ulimit -n" right before starting
> squid
> > and saw 8192!  So I start her up, and in cache.log I see...
> >
> > 2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
> > x86_64-unknown-linux-gnu...
> > 2006/02/22 19:05:08| Process ID 3657
> > 2006/02/22 19:05:08| With 1024 file descriptors available
> >
>
> To make sure that this is not bogus w.r.t. the real available number of
> FDs: do you still get warnings in cache.log about FD shortage when
> reaching the 1024 (bogus-reported?) limit?
>
> The reason I ask is, that I have been playing with the FAQ guidelines
> too,
> ultimately getting the same result (stuck?) as you did.
>
> M.
>
>


Re: [squid-users] Problem with intercept squid and boinc

2006-02-23 Thread Henrik Nordstrom
Wed 2006-02-22 at 10:16 -0300, Oliver Schulze L. wrote:

> and in the problematic squid server I see:
> 1140566460.404   2060 192.168.2.90 TCP_MISS/100 123 POST 
> http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler - 
> DIRECT/66.28.250.125 -
> 
> What does TCP_MISS/100 mean? As I see, the correct value should be 
> TCP_MISS/200

Correct. You should never see a 100 response code in Squid. This
indicates there is something upstream which malfunctions and sends a 100
Continue to your Squid even if the HTTP standard forbids this. Squid is
HTTP/1.0, and 100 Continue requires HTTP/1.1.

Something upstream ranges from

  Parent proxy
  Another intercepting proxy
  The origin server

Regards
Henrik




Re: [squid-users] Question.

2006-02-23 Thread Henrik Nordstrom
Thu 2006-02-16 at 15:36 -0600, Fernando Rodriguez wrote:
> Let's say you are using squid with a redirector that matches users and IP
> addresses.
>  
> You have your ACL where you match the request of a client based on its IP
> address; squid then sends the request to a redirector for authorization,
> which will either return a new webpage or a \n in the case of a successful
> authorization. In the case of returning a webpage, it would be good to
> match the result via an ACL so that a login and password can be requested.
>  
> Is this possible?

Yes, there are two options:


a) Instead of merely rewriting the URL, have the redirector return a
browser redirect. This redirects the browser to the other URL, making it
hit http_access again.

b) The rproxy patch from devel.squid-cache.org adds an http_access2
directive running after redirectors.

I would recommend starting with the first. If browser redirects are not
desirable then perhaps look into the second.
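
Option (a) can be sketched as a tiny redirector script. The domains and the redirect target below are placeholders; the line format assumed is the squid 2.5 redirector protocol (one "URL ip/fqdn ident method" line per request on stdin, one reply line back):

```shell
#!/bin/sh
# handle_url decides what to answer for one request URL:
#   "302:URL"  -> squid sends the browser an HTTP redirect (option (a))
#   empty line -> leave the URL unchanged
handle_url() {
  case "$1" in
    http://members.example.com/*) echo "302:http://login.example.com/" ;;
    *) echo "" ;;
  esac
}
# Squid feeds one request per line on stdin and reads one reply line back.
while read url rest; do
  handle_url "$url"
done
```

Hooked in with redirect_program, this makes matching requests re-enter http_access at the new URL.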

Regards
Henrik




Re: [squid-users] Squid and Subversion - "Forwarding loop"

2006-02-23 Thread Henrik Nordstrom
Thu 2006-02-16 at 11:53 +0100, brak danych wrote:
> 2006/02/15 13:52:02| WARNING: Forwarding loop detected for:
> Client: 127.0.0.1 http_port: 127.0.0.1:80
> REPORT http://clown.domain.com:81/svn/SwordFish/Trunk/SwordFish/SwordSite 
> HTTP/1.0

> So it's not about access rights but about a forwarding loop! :O But again - 
> this is only when I do "svn status -u"...


Maybe svn sends a full URL in the request, confusing your accelerator.

Is there a difference in the error returned if you do not enable
httpd_accel_with_proxy?


I'd recommend running the real web server on port 80 as well, but a
different IP (i.e. 127.0.0.1 instead of your public IP, or an internal
server). Simplifies accelerator setups considerably.
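
A hedged squid 2.5 accelerator fragment for that layout (addresses and ports are examples): squid listens on the public port 80 and forwards to the real web server moved to 127.0.0.1:

```text
# squid.conf (2.5-style accelerator directives)
http_port 80
httpd_accel_host 127.0.0.1       # the real web server, off the public IP
httpd_accel_port 80
httpd_accel_uses_host_header off
```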

Regards
Henrik




Re: [squid-users] Save clients password

2006-02-23 Thread Squidrunner Support Team
While you are entering the username and password, you can save the
password by enabling the checkbox below the password field. This will
save your username and password and when you open the browser it will
not ask for the authentication.

Note: You cannot use authentication with Transparent proxy

-Squid Runner Support

On 2/22/06, Franco, Battista <[EMAIL PROTECTED]> wrote:
> Hi
> I use squid ldap users authentication.
> From my client PCs every time I start IE I need to insert username and
> password.
> Is it possible to configure squid user and password popup with a
> checkbox to permit to save password?
> So next time I'll not retype password.
>


Re: [squid-users] Can't get squid to work

2006-02-23 Thread Henrik Nordstrom
Wed 2006-02-15 at 20:12 -0500, Joe Commisso wrote:

> I am trying to get squid working and need help since I have been at this 
> for a long time.
> I am including my squid.conf file and also the output from 'squid -k 
> parse' & 'squidclient ...'
> When I put my http proxy information in firefox, the connection times out.
> My firefox http proxy info is 'http proxy: 192.168.4.1' 'port: 3128'

Is DNS properly configured on the Squid server?

Do you have a firewall?

Does
   squidclient -h www.squid-cache.org -p 80 /
work?

Regards
Henrik




Re: [squid-users] low squid performance?

2006-02-23 Thread Tomasz Kolaj
On Thursday, 23 February 2006 at 11:32, you wrote:
> On 22.02 23:13, Tomasz Kolaj wrote:
> > I observed too low performance. On 2x 64bit Xeon 2,8GHz 2GB DDR2, 2x
> > WD RAPTOR Squid 2.5.STABLE12 can answer max for 120 requests/s.  115 r/s
> > - 97-98% usage of first processor. Second is unusable for squid :/. I
> > have two cache_dirs (aufs). One per disk.
>
> Maybe you have too many ACL's?

I pasted my squid.conf in one of my last posts. I have a lot of addresses
blocked in the file spywaredomains.txt

acl spywaredomains dstdomain src "/etc/squid/spywaredomains.txt"
http_access deny spywaredomains

but even when I remove it from the config, squid still uses a lot of processor time.


What about epoll? I applied the patch for squid 2.5 for testing.

Regards,
-- 
Tomasz Kolaj


Re: [squid-users] Save clients password

2006-02-23 Thread Squidrunner Support Team
On 2/22/06, Franco, Battista <[EMAIL PROTECTED]> wrote:
> Hi
> I use squid ldap users authentication.
> From my client PCs every time I start IE I need to insert username and
> password.
> Is it possible to configure squid user and password popup with a
> checkbox to permit to save password?
> So next time I'll not retype password.

While you are entering the username and password, you can save the
password by enabling the checkbox below the password field. This will
save your username and password and when you open the browser it will
not ask for the authentication.

Note: You cannot use authentication with Transparent proxy

-Squid Runner Support


[squid-users] 0 means no limit??

2006-02-23 Thread mohinder garg
Hi,

I have seen in squid 2.5 STABLE10 that in many places, if I give a 0
value, it means no limit.

For example: connect_timeout 0 (it means no limit to the connect timeout).

But I didn't find any documentation about it. Can anybody tell me which
directives treat 0 as no limit?

Thanks
Mohinder


Re: [squid-users] low squid performance?

2006-02-23 Thread Matus UHLAR - fantomas
On 22.02 23:13, Tomasz Kolaj wrote:
> I observed too low performance. On 2x 64bit Xeon 2,8GHz 2GB DDR2, 2x
> WD RAPTOR Squid 2.5.STABLE12 can answer max for 120 requests/s.  115 r/s -
> 97-98% usage of first processor. Second is unusable for squid :/. I have
> two cache_dirs (aufs). One per disk.

Maybe you have too many ACL's?
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Eagles may soar, but weasels don't get sucked into jet engines. 


Re: [squid-users] R: [squid-users] Save clients password

2006-02-23 Thread Matus UHLAR - fantomas
> > Yes my program talks with Windows 2003 AD.

On 22.02 17:31, Mark Elsen wrote:
> Please ( !-> again) , keep discussions into the same original-thread

does hypermail on squid-cache.org compare Thread-Topic: and Thread-Index: 
headers? Note that lame Outlook and Exchange use these headers instead of
the standard References: or at least In-Reply-To: headers.

>- You are friendly-er to the community
>- Archives en search-tools will be able to organize and
>  operate, themselves in a more optimal manner;
>  which will also benefit >you<.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Depression is merely anger without enthusiasm. 


Re: [squid-users] Mirror sites

2006-02-23 Thread Matus UHLAR - fantomas
On 22.02 09:50, Nolan Rumble wrote:
> I would just like to know.  When downloading a file from a site say for
> example a link like so: http://host1.example.com/test.zip and the file
> has a size say 300kb (and timestamp 2006/01/01 ?) and then downloading
> the same file from another (mirror) site say
> http://host2.example.com/test.zip with the filesize also at 300kb and
> the same timestamp  Will squid redownload the file?  Or will it be
> "clever" enough to say that I've already downloaded that file, I'll send
> you the cached version?  Do timestamps matter?  Does squid just check
> the filesizes and filenames?

There was a patch for this issue:

http://devel.squid-cache.org/projects.html#dsa

Currently, squid does not compare objects with different URIs.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
He who laughs last thinks slowest. 


Re: [squid-users] Need help to improve squid performance

2006-02-23 Thread Matus UHLAR - fantomas
On 23.02 09:41, Raj wrote:
> After I upgrade the memory to 2gb can I increase the cache_mem value
> to 256MB. At the moment it is 64MB.

Funny, since Kenny said it's expandable only to 384 MB (not much,
although better than nothing).

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Nothing is fool-proof to a talented fool. 


Re: [squid-users] Solutions for transparent + proxy_auth?

2006-02-23 Thread Matus UHLAR - fantomas
On 21.02 10:51, Steve Brown wrote:
> > How is there "authentication" without credentials?  I have misunderstood
> > your setup.  What are you referring to when you say "authentication" because
> > the knee-jerk reaction is to assume a username and password is
> > authenticating...
> 
> Yes there is a user/pass.  Everyone is saying that the browser
> shouldn't indiscriminately provide credentials, which I agree with. 
> However, in the setup I am proposing, the browser isn't submitting
> credentials.  The traffic is intercepted by a local proxy, which does
> *not* have authentication and only responds to localhost traffic.  The
> local proxy then queries the parent cache with the u/p provided by the
> login parameter in the cache_peer config option.  So the
> authentication is there, it just doesn't require any user interaction.

I think educating users (yes, there are 2 different passwords) would be most
effective.

Some other solutions are maybe possible too:

1. give users the same password for mail and proxy and probably fetch them
from the same source like LDAP (Win2000 Domain).

2. give users SeaMonkey for both browsing and mail, set it up to remember
passwords, fill it with proxy and mail password, give users only the master
password.

3. set up FF (and probably M$IE too) to use proxy on localhost - this way
you will avoid interception and its problems and still give users benefit of
local proxy server.

I recommend using encrypted connections to protect your passwords, so you
might need SSL patch to squid: http://devel.squid-cache.org/ssl/, at least
for 1. and 3.
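
For option 3, a minimal localhost-proxy sketch (directive names as in squid
2.5; the parent host, port, and credentials are placeholders, not values from
this thread):

```
# listen only on the loopback interface
http_port 127.0.0.1:3128
acl localhost src 127.0.0.1/32

# forward everything to the authenticated parent cache,
# supplying credentials via the login= option mentioned above
cache_peer parent.example.com parent 8080 0 no-query default login=user:password
never_direct allow all

http_access allow localhost
http_access deny all
```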

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Despite the cost of living, have you noticed how popular it remains? 


Re: [squid-users] Squid Efficiency - What else to tweak?

2006-02-23 Thread Matus UHLAR - fantomas
On 21.02 11:57, Denis Vlasenko wrote:
> On Tuesday 21 February 2006 10:31, Matus UHLAR - fantomas wrote:
> > On 21.02 12:33, Ow Mun Heng wrote:
> > > I've got a squid cache running as transparent proxy on an _very_ old
> > > machine.
> > > 
> > > P 133 Mhz
> > > 128MB Ram
> > > 20GB Hard Disk
> > > 1GB Cache in aufs
> > > Fedora Core 3
> > > 
> > > Here are the stats reported by calamaris 2.99
> > > Total amount:   requests  10471 
> > > Total amount cached:requests   1330 
> > > Request hit rate:   %  12.70 
> > > Total Bandwidth:Byte   107M 
> > > Bandwidth savings:  Byte  1767K 
> > > Bandwidth savings in Percent (Byte hit rate): %   1.62
> 
> I think he is mostly worried about having only 1.62% of traffic
> saved by squid by supplying cached content instead.
> 
> What veterans are doing to get it to some decent numbers?

...add more memory and disk space. I haven't seen his cache_mem and memory
usage, but maybe using more than one GB for disk cache would help a bit
without machine starting to swap...

But since the question was what to do "besides the hardware"...

Also: What replacement policy do you (Ow Mun Heng) use? I use heap LFUDA for
disk objects and heap GDSF for the memory cache. LFUDA gave me a 2% better
byte hit ratio and GDSF a 2% better hit ratio, IIRC.
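
In squid.conf terms, that combination looks like the following (a sketch; it
assumes squid was built with --enable-removal-policies=heap):

```
# heap-based policies require --enable-removal-policies=heap at build time
cache_replacement_policy heap LFUDA     # disk cache: frequency with aging
memory_replacement_policy heap GDSF     # memory cache: favors small popular objects
```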

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I feel like I'm diagonally parked in a parallel universe. 


[squid-users] Detecting invalid CONNECT requests

2006-02-23 Thread HAND,Nathan
Hi,

I have a question about CONNECT statements. Squid doesn't try to
interpret the contents of CONNECT statements - how could it, when the
contents are encrypted? - and some applications are abusing that fact by
using CONNECT statements to tunnel non-SSL protocols. The biggest
offender is Skype which uses CONNECT to get a connection to a peer, but
doesn't bother with TLS or SSL on that connection.

I've seen the tutorial on using Squid ACLs to block Skype by matching
CONNECT and dropping those requests. That's fine. However, I was
wondering if it is possible for Squid to be more generic and block all
CONNECT requests that don't look like a TLS or SSL connection. Although
Squid can't view the encrypted contents, it should be possible to detect
a valid TLS session being set up (e.g., the negotiation and key exchange).

It would be really nice if I could write something similar to

  acl TLS_proto proto TLS
  acl SSL_ports port 443 563
  acl CONNECT method CONNECT
  http_access deny CONNECT !SSL_ports
  http_access deny CONNECT !TLS_proto

I can't find anything like this with a quick Google search but I'm
hoping more knowledgeable people can point me towards a suitable patch
or module.

Thanks,
Nathan Hand

NB: Somebody suggested I could achieve something similar with an
auth_helper that attempted an openssl s_client and looked for a
certificate, but I think that would open an undesirable second
connection for every HTTPS request.





Re: [squid-users] low squid performance?

2006-02-23 Thread Tomasz Kolaj
On Thursday, 23 February 2006 at 01:11, you wrote:

> > > * High latency clients
> >
> > What do you mean by "high latency clients"?
>
> The majority of my customers have a network path like:
>
> client->squid->satellite->squid->internet

many of my clients: client->[radio line {12,34,54}mbps]->squid->internet

> 100 requests/second put my CPU usage in the high 80s (on a 32 bit Intel
> Xeon 3.00GHz).

So my result isn't so bad. But I must tune squid for the best possible
performance.

>
> > aragorn squid # squid -v
> > Squid Cache: Version 2.5.STABLE12
> > configure options:  --prefix=/usr --bindir=/usr/bin
> > --exec-prefix=/usr
> > --sbindir=/usr/sbin --localstatedir=/var --mandir=/usr/share/man
> > --sysconfdir=/etc/squid --libexecdir=/usr/lib/squid
> > --enable-auth=basic,digest,ntlm --enable-removal-policies=lru,heap
> > --enable-linux-netfilter --enable-truncate --with-pthreads
> > --enable-epool
>
> Hopefully that's just a misspelling.  ;o)

Why? ;) Did I do something wrong?
I'm testing the epoll patch like you said. ;>

> > --disable-follow-x-forwarded-for --host=x86_64-pc-linux-gnu
> > --disable-snmp
> > --disable-ssl --enable-underscores
> > --enable-storeio='diskd,coss,aufs,null'
> > --enable-async-io

Ah, async-io. Maybe it would be better to specify the number of async-io threads?
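
A hedged build sketch along those lines, assuming this squid version supports
the --with-aufs-threads configure option (check ./configure --help first); the
value 32 is illustrative, and note the spelling --enable-epoll:

```
./configure --enable-storeio=aufs,diskd,coss,null \
            --enable-async-io \
            --with-aufs-threads=32 \
            --enable-epoll
```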

> I don't see any other likely problems (not saying there aren't any).
Is there a chance to do something more with the hardware? I can add more
memory banks or hard discs (for example, two more WD Raptors).

Regards,
-- 
Tomasz Kolaj


RE: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Gix, Lilian (CI/OSR) *
Hello,


I have always had some problems with file descriptors.
One day, I changed my Linux distribution to Debian (I don't know if that's the
reason) and updated Squid. Since then, I have had a configuration file under
/etc/default/squid. In this file, there is:

#
# /etc/default/squid    Configuration settings for the Squid proxy server.
#

# Max. number of file descriptors to use. You can increase this on a busy
# cache to a maximum of (currently) 4096 file descriptors. Default is 1024.
SQUID_MAXFD=4096

I don't know if this can help.
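
As a sketch of how such a defaults file is typically consumed by an init
script (this is an assumption about the Debian wrapper, not a quote from it;
the path and variable name are taken from the file above):

```shell
#!/bin/sh
# read the defaults file if present, otherwise fall back to the stated default
[ -f /etc/default/squid ] && . /etc/default/squid
SQUID_MAXFD=${SQUID_MAXFD:-1024}

# raise the per-process limit before starting squid from this same shell
ulimit -n "$SQUID_MAXFD" 2>/dev/null || echo "warning: could not set FD limit" >&2
echo "starting squid with $SQUID_MAXFD file descriptors"
```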

Gix Lilian


-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Donnerstag, 23. Februar 2006 09:26
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] FILE DESCRIPTORS

> Sorry to be pounding the list lately, but I'm about to lose it with
> these file descriptors...
>
> I've done everything I have read about to increase file descriptors on
> my caching box, and now I just rebuilt a fresh clean squid.  Before I
> ran configure, I did ulimit -HSn 8192, and I noticed that while
> configuring it said "Checking File Descriptors... 8192".  I even
> double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
> everything was good, even ran a "ulimit -n" right before starting
squid
> and saw 8192!  So I start her up, and in cache.log I see...
>
> 2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
> x86_64-unknown-linux-gnu...
> 2006/02/22 19:05:08| Process ID 3657
> 2006/02/22 19:05:08| With 1024 file descriptors available
>

To make sure that this is not bogus w.r.t. the real available number of
FDs: do you still get warnings in cache.log about FD shortage when
reaching the 1024 (bogus-reported?) limit?

The reason I ask is that I have been playing with the FAQ guidelines too,
ultimately getting the same result (stuck?) as you did.

M.



Re: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Mark Elsen
> Sorry to be pounding the list lately, but I'm about to lose it with
> these file descriptors...
>
> I've done everything I have read about to increase file descriptors on
> my caching box, and now I just rebuilt a fresh clean squid.  Before I
> ran configure, I did ulimit -HSn 8192, and I noticed that while
> configuring it said "Checking File Descriptors... 8192".  I even
> double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
> everything was good, even ran a "ulimit -n" right before starting squid
> and saw 8192!  So I start her up, and in cache.log I see...
>
> 2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
> x86_64-unknown-linux-gnu...
> 2006/02/22 19:05:08| Process ID 3657
> 2006/02/22 19:05:08| With 1024 file descriptors available
>

To make sure that this is not bogus w.r.t. the real available number of
FDs: do you still get warnings in cache.log about FD shortage when
reaching the 1024 (bogus-reported?) limit?

The reason I ask is that I have been playing with the FAQ guidelines too,
ultimately getting the same result (stuck?) as you did.

M.


RE: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Gregori Parker

My /etc/init.d/squid ... I'm doing this already:

#!/bin/bash
echo 1024 32768 > /proc/sys/net/ipv4/ip_local_port_range
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog
SQUID="/usr/local/squid/sbin/squid"

# increase file descriptor limits
echo 8192 > /proc/sys/fs/file-max
ulimit -HSn 8192

case "$1" in

start)
   $SQUID -s
   echo 'Squid started'
   ;;

stop)
   $SQUID -k shutdown
   echo 'Squid stopped'
   ;;

esac
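
One way to sanity-check that the ulimit set in this script actually reaches
squid: a child process inherits the soft FD limit of the shell that execs it,
which can be verified with plain POSIX shell (nothing squid-specific):

```shell
#!/bin/sh
# the soft file-descriptor limit is inherited across fork/exec,
# so squid started from this script sees whatever ulimit sets here
soft=$(ulimit -Sn)
child=$(sh -c 'ulimit -Sn')
echo "parent soft limit: $soft"
echo "child soft limit:  $child"
[ "$soft" = "$child" ] && echo "limit inherited"
```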



From: kabindra shrestha [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 22, 2006 7:28 PM
To: Gregori Parker
Subject: Re: [squid-users] FILE DESCRIPTORS

You have to run the same command, "ulimit -HSn 8192", before starting squid. It
is working fine on my server.

>>---

I've done everything I have read about to increase file descriptors on 
my caching box, and now I just rebuilt a fresh clean squid.  Before I
ran configure, I did ulimit -HSn 8192, and I noticed that while
configuring it said "Checking File Descriptors... 8192".  I even
double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
everything was good, even ran a "ulimit -n" right before starting squid
and saw 8192!  So I start her up, and in cache.log I see...

2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
x86_64-unknown-linux-gnu...
2006/02/22 19:05:08| Process ID 3657
2006/02/22 19:05:08| With 1024 file descriptors available

Arggghh.

Can anyone help me out?  This is on Fedora Core 4 64-bit

Thanks, sigh - Gregori