[squid-users] Freebsd -Squid - danguardian- Winbind- XMalloc Error

2006-09-01 Thread Erick Dantas Rotole
I have a box with freebsd 6.1, squid-2.5.13_1 (NTLM authentication),
dansguardian-2.9.6.1_1 and samba-3.0.22,1, 2GB RAM, 2 CPUs. When the box
reaches 800MB active memory, 1.2GB inactive memory and 0 free memory, I get
the error FATAL: xmalloc: Unable to allocate 65535 bytes and the squid
process restarts. I really need help; I have already searched Google and the
list but haven't found a solution. Thanks




RE: [squid-users] Regex url lists and DNS blacklist acls

2006-09-01 Thread Thomas Nilsen

Thanks for the reply Henrik.

As utils like squidguard/dansguardian are able to handle regex files
with good performance, I was hoping to achieve the same with asqredir or
similar light tools.

I assume Squid caches any external regex_url file?

I'll go ahead and see if I can get dnsbl_redir and perhaps asqredir to
work as external ACL helpers and do some testing to see if there is any
performance gain from it.

Thanks again.

Regards,
Thomas

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Friday, September 01, 2006 12:07 AM
To: Thomas Nilsen
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Regex url lists and DNS blacklist acls

Thu 2006-08-31 at 15:06 +0200, Thomas Nilsen wrote:

 The shadowserver.org and bleedingsnort.com lists could easily be
 integrated as dstdomain acl, but the malware.com.br is a regex_url
 list and I don't want to take the performance hit using a regex_url
 acl. So the idea was to try and use a redirector like asqredir for the
 regex_url files.

Regex performance is about the same, I am afraid. The problem is not
where the patterns are implemented but the fact that regex patterns are not
well structured, so the whole list must be searched every time...

 I also want to use the dnsbl_redir to check dns blacklists (which
 potentially could replace the dstdomain acl as well if that is of any
 performance benefit).

I would recommend implementing that using an external ACL instead of a
redirector. Much better performance.

 Problem is to use the two redirectors at the same time.

Not really a problem. Look in the archives (search for Open2). But I
wouldn't recommend it in this case, as an external acl is a much better
design.

 I expect the dnsbl_redir has a lower overhead as a helper application
 than asqredir would if changed into an external acl helper, or does
 that not matter? Has anyone tried this?

External acls have a very noticeable performance benefit compared to
redirectors, largely thanks to the lookup cache available in the
external acl construct.
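A minimal squid.conf sketch of what Henrik suggests (the helper path and the ttl values are hypothetical; the performance win comes from squid caching each helper verdict for the configured ttl instead of re-asking the helper on every request):

```
# squid-2.6 sketch: one DNSBL lookup per destination, cached for an hour
external_acl_type dnsbl ttl=3600 negative_ttl=600 children=5 %DST /usr/local/bin/dnsbl_helper
acl on_blacklist external dnsbl
http_access deny on_blacklist
```

The helper reads one destination host per line on stdin and answers OK or ERR; squid keeps up to `children` helpers running in parallel.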

Regards
Henrik

DISCLAIMER:
This message contains information that may be privileged or confidential and is 
the property of the Roxar Group. It is intended only for the person to whom it 
is addressed. If you are not the intended recipient, you are not authorised to 
read, print, retain, copy, disseminate, distribute, or use this message or any 
part thereof. If you receive this message in error, please notify the sender 
immediately and delete all copies of this message.


RE: [squid-users] swap.log size continuing to grow?

2006-09-01 Thread wangzicai
Thanks.
If I change InterScan's working port to 8080, and change the squid
configuration line to: cache_peer 127.0.0.1 parent 8080 3130 no-query
will it work?
Regards 
garlic
-Original Message-
From: Thomas Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 1:56 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?


You can't with that setup. If you want the user's details logged in
squid, you need to swap it around so that squid uses InterScan as its
parent. As long as InterScan is passing the request on to Squid, squid
is always going to log the server IP.

Regards,
Thomas

-Original Message-
From: wangzicai [mailto:[EMAIL PROTECTED]

Sent: Friday, September 01, 2006 2:54 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?

Hello everyone,
I have a proxy server machine. I am running squid-2.5.STABLE14 and
InterScan VirusWall for Unix on the same computer. InterScan VirusWall
listens on port 80, and squid listens on port 8080.

 ______        ___________        _______
| user |----->| InterScan |----->| squid |
|______|      |___________|      |_______|

In the users' browsers I enter the server's IP and port 80.
Now, in access.log I cannot see the users' access records; the IP logged
is the server's own IP, like this:
127.0.0.1 - - [24/Aug/2006:12:53:53 +0800] GET http://www.google.co.jp/
HTTP/1.0 200 4328 TCP_MISS:FIRST_UP_PARENT .
How can I solve this?




Regards
garlic






RE: [squid-users] swap.log size continuing to grow?

2006-09-01 Thread Thomas Nilsen
You need to change the squid port as well (http_port) of course, so both
don't listen on 8080 - unless InterScan and squid bind to different
interfaces. Apart from that, it should be fine.

Thomas
-Original Message-
From: wangzicai [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 8:22 AM
To: Thomas Nilsen
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?

Thanks.
If I change InterScan's working port to 8080, and change the squid
configuration line to: cache_peer 127.0.0.1 parent 8080 3130 no-query
will it work?
Regards
garlic




Re: [squid-users] Freebsd -Squid - danguardian- Winbind- XMalloc Error

2006-09-01 Thread Henrik Nordstrom
Fri 2006-09-01 at 03:21 -0300, Erick Dantas Rotole wrote:

 800MB active memory, 1.2GB inactive memory and 0 free memory. I get the
 error FATAL: xmalloc: Unable to allocate 65535 bytes and the squid
 process restarts. I really need help; I have already searched Google and
 the list but haven't found a solution. Thanks

See the FAQ. It's not related to the amount of memory available but to
OS configuration limiting process size.

Regards
Henrik




RE: [squid-users] swap.log size continuing to grow?

2006-09-01 Thread wangzicai
Thanks.
I have changed the squid port to 80, but the server cannot connect to the
internet directly; it must go through another proxy (a server in another
company). In InterScan I set the proxy to that proxy, and the port is
also 8080. When I try to access the internet, this error occurs:
InterScan Error
InterScan HTTP Version 3.81-Build_1084 $Date: 04/06/2005 18:36:0048$
Can't connect to the original server: (any proxy server's name):8080

-Original Message-
From: Thomas Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 2:32 PM
To: wangzicai
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?

You need to change the squid port as well (http_port) of course, so both
don't listen to 8080 - unless interscan and squid bind to different
interfaces. Apart from that it should be fine.

Thomas




RE: [squid-users] swap.log size continuing to grow?

2006-09-01 Thread Thomas Nilsen
Suggest you configure this in stages.

1. Get InterScan to work on port 8080 and pass its requests on to the
parent proxy in the other company. Configure your browser to use your
InterScan server IP and port 8080 as the proxy, and test.

2. Once step 1 works, configure squid to use the parent proxy on
localhost:8080 and make sure it works. I think there are some
restrictions with running squid on port 80 (like having to run as root
to bind to it), so you might want to choose a different port - like
3128. But that's up to you.
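Step 2 might look like this minimal squid.conf sketch (the `no-query`/`default` options and ICP port 0 are assumptions, reasonable since InterScan speaks no ICP; the ports follow the thread's setup):

```
# squid listens on 3128 and forwards everything to InterScan
# running on the same host (port 8080)
http_port 3128
cache_peer 127.0.0.1 parent 8080 0 no-query default
never_direct allow all        # force all traffic through the InterScan parent
```

With this in place, access.log records the real client IPs because the clients now talk to squid first.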

Thomas

-Original Message-
From: wangzicai [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 8:49 AM
To: Thomas Nilsen
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?

Thanks.
I have changed the squid port to 80, but the server cannot connect to
the internet directly; it must go through another proxy (a server in
another company). In InterScan I set the proxy to that proxy, and the
port is also 8080. When I try to access the internet, this error occurs:
InterScan Error
InterScan HTTP Version 3.81-Build_1084 $Date: 04/06/2005 18:36:0048$
Can't connect to the original server: (any proxy server's name):8080





RE: [squid-users] swap.log size continuing to grow?

2006-09-01 Thread wangzicai
Thanks 
I have solved the problem!
Regards 
Garlic

-Original Message-
From: Thomas Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 01, 2006 2:32 PM
To: wangzicai
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] swap.log size continuing to grow?

You need to change the squid port as well (http_port) of course, so both
don't listen to 8080 - unless interscan and squid bind to different
interfaces. Apart from that it should be fine.

Thomas




RE: [squid-users] Regex url lists and DNS blacklist acls

2006-09-01 Thread Henrik Nordstrom
Fri 2006-09-01 at 08:22 +0200, Thomas Nilsen wrote:

 As utils like squidguard/dansguardian are able to handle regex files
 with good performance, I was hoping to achieve the same with asqredir or
 similar light tools.

squidguard doesn't handle large regex expression lists any better than
Squid. The problem with large regex lists is not the tool used, but the
fact that it's a large regex list which takes time to match.

 I assume Squid caches any external regex_url file?

If you mean acl xxx url_regex /path/to/file then this is the same as
having all the patterns inside squid.conf. It's read into memory and
compiled on startup/reconfigure.

The problem with regex lists is the evaluation of the acl on each request.
As regex patterns cannot be sorted, Squid (or any other url-regex-based
acl lookup) has to walk the complete list of patterns on each request,
testing whether the request matches each pattern. Because of this, lookup
time in a regex list is linear in the number of patterns, while lookup
time in most other acl types is nearly constant, independent of the acl
size.
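The difference shows up directly in how the two acl types are declared; a squid.conf sketch with hypothetical list files:

```
# Near-constant lookup time regardless of list size (indexed by domain):
acl banned_domains dstdomain "/usr/local/squid/etc/banned_domains.txt"

# Linear scan: every pattern is tried against every request URL:
acl banned_regex url_regex -i "/usr/local/squid/etc/banned_regex.txt"

http_access deny banned_domains
http_access deny banned_regex
```

Putting the dstdomain acl first means the cheap check rejects most blocked requests before the expensive regex list is ever consulted.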


From SquidGuard documentation:


  * While the size of the domain and urllists only has marginal
influence on the performance, too many large or complex expressions
will quickly degrade the performance of squidGuard. Though it may
depend heavily on the performance of the regex library you link
with.


And it's exactly the same for Squid, except that we don't have a close
equivalent of squidGuard's urllists.

Regards
Henrik





Re: [squid-users] delivering stale content while fetching fresh

2006-09-01 Thread Gregor Reich

Henrik Nordstrom schrieb:

Thu 2006-08-31 at 16:14 +0200, Gregor Reich wrote:

  
But is there a possibility to have squid respond - to a client that
requests a document that is (in some way) expired - with the stale
document, while at the same time fetching the new one and storing it in
the cache, ready to deliver to the next requesting client?



Kind of. See the collapsed_forwarding and refresh_stale_hit options.

Thank you for the hint; I searched the archives, googled, and searched
squid.conf (Squid3), but I didn't find any description of these options
(what they do, how to configure them, possible pitfalls and so on). Can
anyone - possibly Henrik - give me a hint where to look? (Other than in
the source code; I'm not really able to read it...)


Thanks, Gregor.

--

Jud Grafik+Internet
Grynaustrasse 21
8730 Uznach
Tel. 055 290 16 59
Fax 055 290 16 26
Skype: gregreich (Internettelefonie www.skype.com)
www.juhui.ch



[squid-users] CacheMgr: Process Filedescriptor Allocation

2006-09-01 Thread Michał Margula

Hello!

	For the last two days I had trouble with squid performance. It turned
out that some kind of virus was heavily flooding the proxy with SYN
packets, which reduced the available number of file descriptors and
degraded performance.

I used the cachemgr menu Process Filedescriptor Allocation to finally
find that guy. But it would be much easier if there were:

	a) an option to detect the IP with the most accesses (those SYNs do not
show up in access.log, so cachemgr is the only way, short of using
tcpdump)

b) at least a way to sort that menu in cachemgr by IP address

Maybe you have other ideas? I know the simplest approach would be to
limit SYN packets per second, but I have no idea of a proper limit :)
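For (a), one way to spot the noisiest source without cachemgr is to group netstat output by foreign address. A sketch, run here against an embedded sample (on a live box you would pipe `netstat -an` in instead; the sample and IPs are invented, and column positions vary per OS - this assumes FreeBSD-style output where field 5 is the foreign address and field 6 the TCP state):

```shell
# Count half-open (SYN_RCVD) connections per source IP, busiest first.
netstat_sample='tcp4 0 0 10.0.0.5.3128 192.168.1.77.4521 SYN_RCVD
tcp4 0 0 10.0.0.5.3128 192.168.1.77.4522 SYN_RCVD
tcp4 0 0 10.0.0.5.3128 192.168.1.80.1044 SYN_RCVD
tcp4 0 0 10.0.0.5.3128 192.168.1.90.2210 ESTABLISHED'

printf '%s\n' "$netstat_sample" \
  | awk '$6 == "SYN_RCVD" { sub(/\.[0-9]+$/, "", $5); count[$5]++ }
         END { for (ip in count) print count[ip], ip }' \
  | sort -rn
```

The `sub()` strips the trailing port so all connections from one host are counted together; the top line of the output is the likely infected machine.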


--
Michał Margula, [EMAIL PROTECTED], http://alchemyx.uznam.net.pl/
W życiu piękne są tylko chwile [Ryszard Riedel]


[squid-users] Authentication problem

2006-09-01 Thread Strandell, Ralf
Hi

I try to access a page that requires a username and a password. The page
is hosted on IIS.

1) If I bypass Squid completely, I get through (after the XP
authentication dialog).

2) If I use Squid, I am asked for the username and password three times
(auth dialog by web browser) and then I get HTTP Error 401.1 -
Unauthorized: Access is denied due to invalid credentials. Each of
these three attempts generates a TCP_MISS/401 in access.log.

3) I then logged in to the proxy server and used lynx
-auth=user:password www.domain.com
Messages from Lynx:
Alert!: Invalid header 'WWW-Authenticate: Negotiate'
Alert!: Invalid header 'WWW-Authenticate: NTLM'
401.2 Unauthorized: Access is denied due to server configuration.

Any ideas how to solve this?



RES: [squid-users] Freebsd -Squid - danguardian- Winbind- XMallocError

2006-09-01 Thread Erick Dantas Rotole
Henrik,

I had already read the FAQ, but I think the FreeBSD configuration
described there is for an older version. Searching the web I found the
following.

Adding these values to /boot/loader.conf worked:
kern.maxdsiz=1073741824 # 1GB
kern.dfldsiz=1073741824 # 1GB
kern.maxssiz=134217728 # 128MB

Is that right? Do I have to set kern.maxdsiz or kern.maxssiz? Thanks!



8.7 xmalloc: Unable to allocate 4096 bytes! 
by Henrik Nordstrom

Messages like FATAL: xcalloc: Unable to allocate 4096 blocks of 1 bytes!
appear when Squid can't allocate more memory, and on most operating systems
(including BSD) there are only two possible reasons: 

The machine is out of swap 
The process' maximum data segment size has been reached 

The first case is detected using the normal swap monitoring tools available
on the platform (pstat on SunOS; perhaps pstat is used on BSD as well).

To tell if it is the second case, first rule out the first case and then
monitor the size of the Squid process. If it dies at a certain size with
plenty of swap left, then the maximum data segment size has been reached,
without doubt.

The data segment size can be limited by two factors: 

A kernel-imposed maximum, which no user can exceed 
The size set with ulimit, which the user can control.   

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 1, 2006 03:50
To: Erick Dantas Rotole
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Freebsd -Squid - danguardian- Winbind-
XMallocError

Fri 2006-09-01 at 03:21 -0300, Erick Dantas Rotole wrote:

 800MB active memory, 1.2GB inactive memory and 0 free memory. I get the 
 error FATAL: xmalloc: Unable to allocate 65535 bytes and the squid 
 process restarts. I really need help; I have already searched Google and 
 the list but haven't found a solution. Thanks

See the FAQ. It's not related to the amount of memory available but to OS
configuration limiting process size.

Regards
Henrik



[squid-users] Squid LDAP authentication with 2003 AD

2006-09-01 Thread Saqib Khan \(horiba/eu\)


Hello List members,

I am having a problem after authenticating a user over LDAP. After being
authenticated, I get the following error message:

ERROR
The requested URL could not be retrieved


While trying to retrieve the URL: http://www.google.de/

The following error was encountered:

   Access Denied.

Access control configuration prevents your request from being allowed at
this time. Please contact your service provider if you feel this is
incorrect.

I am sure that it is authenticating the user, because if I use a username
that is not a member of the group meant for internet access, I get the
authentication window again and again. I also checked by using an LDAP
browser, and I was able to browse the Active Directory. I am using SuSE
9.1 and squid 2.5 STABLE.

Any Ideas?


Best Regards,

Saqib




[squid-users] use squid for just images

2006-09-01 Thread Nick Duda

I've read about the issues with NTLM passthrough on squid. Is there any
way a client can be configured to use squid for its cached content (like
images) but go directly to a server for NTLM (NT auth)?

The scenario is that one of our branch offices has a squid cache. They
have dedicated private line circuits to the corporate office only. In
the corporate office they get internet access. All clients in the branch
office use squid as the proxy for internet traffic, but have exclusions
in the browser to not use squid for local traffic and specific servers.
We require that they use the proxy to access an internal server that is
located in our corporate office, but this server requires NT
authentication when accessing its web page.

I understand squid has an issue with this, as I've tried to get this to
work once and was even told by some of you very smart people that I was
beating a dead horse because Microsoft can't write NTLM properly :) Can
squid be configured in a way that serves up images and such from this
server but does the NT auth without going through squid?

Does anyone even follow what I'm trying to say?

Regards,
Nick


-
Confidentiality note
The information in this email and any attachment may contain confidential and 
proprietary information of VistaPrint and/or its affiliates and may be 
privileged or otherwise protected from disclosure. If you are not the intended 
recipient, you are hereby notified that any review, reliance or distribution by 
others or forwarding without express permission is strictly prohibited and may 
cause liability. In case you have received this message due to an error in 
transmission, please notify the sender immediately and delete this email and 
any attachment from your system.
-


Re: [squid-users] delivering stale content while fetching fresh

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 09:37 +0200, Gregor Reich wrote:

  Kind of. See the collapsed_forwarding and refresh_stale_hit options.

 Thank you for the hint; I searched the archives, googled and searched 
 squid.conf (Squid3) but I didn't found any description of these options 

See the current STABLE release (2.6).

Squid-3 is still under development, and not all features of 2.6 are
available in Squid-3 yet.
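In Squid-2.6's squid.conf the two options mentioned above might be set like this (the values shown are illustrative assumptions; check squid.conf.default in your 2.6 install for the exact syntax and defaults):

```
# Merge concurrent requests for the same URL into a single upstream fetch
collapsed_forwarding on

# Serve a slightly-stale cached hit while the object is being refreshed
refresh_stale_hit 30 seconds
```

Together these give the behaviour Gregor asked about: the first client triggers the refresh while others keep getting the stale copy until the fresh object arrives.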

Regards
Henrik



Re: [squid-users] Authentication problem

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 12:42 +0300, Strandell, Ralf wrote:
 Hi
 
 I try to access a page that requires a username and a password. The page
 is hosted on IIS.

Which Squid version? Should work with current STABLE release
(2.6.STABLE3).

Squid-2.5 can only forward HTTP compliant authentication schemes (Basic
and Digest), not Microsoft broken authentication schemes (NTLM,
Negotiate and Kerberos).

Regards
Henrik



Re: [squid-users] Squid LDAP authentication with 2003 AD

2006-09-01 Thread Alejandro Decchi
Hi, my squid friend! Can you explain how you got everything installed?
A long time ago I tried, but I could not get this method of
authentication working. Can you give me a hand and explain step by step
how you did it? I have read a lot of articles on how to set up LDAP and
squid with Active Directory, but I could not get it to work.

Thanks
- Original Message - 
From: Saqib Khan (horiba/eu) [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Friday, September 01, 2006 10:07 AM
Subject: [squid-users] Squid LDAP authentication with 2003 AD








Re: RES: [squid-users] Freebsd -Squid - danguardian- Winbind- XMallocError

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 07:27 -0300, Erick Dantas Rotole wrote:
 Henrik,
 
 I had already read the FAQ. But the configuration for FREEBSD I think it is
 for older version.

Possible. The reasoning is the same.

  Searching the web I found. 
 
 add the following values to /boot/loader.conf which worked:
 kern.maxdsiz=1073741824 # 1GB
 kern.dfldsiz=1073741824 # 1GB
 kern.maxssiz=134217728 # 128MB
 
 Is that rigth? I have to set kern.maxdsiz or kern.maxssiz. Thanks!


maxdsiz is the key parameter. Not sure if dfldsiz needs to be set.

You should not need to change maxssiz.

Regards
Henrik



Re: [squid-users] Squid LDAP authentication with 2003 AD

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 15:07 +0200, Saqib Khan (horiba/eu) wrote:
 
 Hello List members,
 
 I am getting problem after authenticating a user over ldap. After getting
 authenticated I get the following error message:
 
 ERROR
 The requested URL could not be retrieved
 
 
 While trying to retrieve the URL: http://www.google.de/
 
 The following error was encountered:
 
Access Denied.

Which says that the request was denied by your http_access directives (or
maybe http_reply_access or miss_access).

The authentication as such most likely worked fine.
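When the failure is group-based as described here, the usual squid-2.5 pattern is an external acl built on the bundled squid_ldap_group helper. A sketch with hypothetical AD names, bind account and helper path (check the squid_ldap_group documentation for the exact flags your build supports):

```
# Allow only members of the InternetUsers AD group (%v = user, %a = group)
external_acl_type ad_group %LOGIN /usr/sbin/squid_ldap_group -R \
    -b "dc=example,dc=local" \
    -D "cn=squid,cn=Users,dc=example,dc=local" -w "secret" \
    -f "(&(sAMAccountName=%v)(memberOf=cn=%a,cn=Users,dc=example,dc=local))" \
    -h dc1.example.local
acl internet_users external ad_group InternetUsers
http_access allow internet_users
http_access deny all
```

If the `http_access allow` line referencing the group acl is missing or comes after a `deny`, you get exactly the Access Denied page quoted above even though authentication succeeded.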

Regards
Henrik



RE: [squid-users] Authentication problem

2006-09-01 Thread Nick Duda

Are you saying 2.6 can work with the broken Microsoft authentication
schemes? This would be so nice... and would solve lots of my problems.

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Friday, September 01, 2006 10:44 AM
To: Strandell, Ralf
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Authentication problem

On Fri, 2006-09-01 at 12:42 +0300, Strandell, Ralf wrote:
 Hi

 I try to access a page that requires a username and a password. The
 page is hosted on IIS.

Which Squid version? Should work with current STABLE release
(2.6.STABLE3).

Squid-2.5 can only forward HTTP compliant authentication schemes (Basic
and Digest), not Microsoft broken authentication schemes (NTLM,
Negotiate and Kerberos).

Regards
Henrik




Re: [squid-users] use squid for just images

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 09:58 -0400, Nick Duda wrote:
 I've read about the issues with NTLM passthrough on squid.

Those should be pretty much an issue of the past now with the release of
Squid-2.6 with support for NTLM passthrough. If you still have problems
with 2.6.STABLE3 please file a bug report.

An alternative which is recommended and works for all proxies is to have
the web site use https on authenticated content. https is tunneled via
the proxy, not proxied, and therefore works fine even with
non-HTTP-compliant authentication such as NTLM.

 Is there any
 way a client can be configured to use squid for its cached content (like
 images) but go directly to a server for NTLM (nt auth)?

Only by URL-based exclusions.

 I understand squid has an issue with this, as I've tried to get this to
 work once and was even told by some of you very smart people that i was
 beating a dead horse because Microsoft cant write ntml properly :)

Microsoft knows NTLM reasonably well.. it's HTTP they don't understand..

 Can
 squid be configured in a way that serves up images and such from this
 server but does the nt auth not going through squid?

Only if

a) These images can be identified by URL.

and

b) Access to these images does not require authentication.


For 'a', use a PAC file, which gives you detailed control over which URLs to
proxy or not.
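A minimal PAC sketch of that idea (hostnames are placeholders, not from the
poster's setup): send the NTLM-protected server direct, everything else through
Squid.

```javascript
// Hypothetical proxy.pac: bypass the proxy for the NTLM-protected server
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".intranet.example.com"))
        return "DIRECT";                         // NT auth goes straight to the server
    return "PROXY squid.example.com:3128";       // images etc. are fetched via Squid
}
```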


But as I said above: With Squid-2.6 it should just work.

Regards
Henrik



RE: [squid-users] Authentication problem

2006-09-01 Thread Henrik Nordstrom
On Fri, 2006-09-01 at 10:47 -0400, Nick Duda wrote:
 Are you saying 2.6 can work with the microsoft broken authentication
 schemes?

Yes.

Regards
Henrik



[squid-users] Errors during make

2006-09-01 Thread Robert Shatford
Hey guys, this is my first post on this forum, so I would like to say
hello.
 
I am having troubles when I run make.  It throws a bunch of errors
that I don't understand.  I was wondering if it outputs those errors
to a log somewhere, and how I can find out what these
errors mean.  I am very new to Linux and Squid; this is my third
attempt at building a server and it is the furthest I have gotten.  Any help
would
be wonderful.
 
Thank you for your time.
Bob Shatford
Asst. Network Administrator
Keller ISD



Re: [squid-users] Errors during make

2006-09-01 Thread fulan Peng

Probably you did not upgrade your libxml2.
If you configure with SSL, you need the newest OpenSSL.
I have set up Squid on Linux, FreeBSD and Windows, all in accelerator
mode with SSL. I can help you in my chat server at
http://breakevilaxis.org if you want to do the same mode.
I have scripts to create all SSL certificates and files. If you are
not interested in accelerator mode with SSL, then I would not be able to help;
I do not have time to learn the other modes.

Fulan Peng.


On 9/1/06, Robert Shatford [EMAIL PROTECTED] wrote:

Hey guys, this is my first post on this forum, so I would like to say
hello.

I am having troubles when I run make.  It throws a bunch of errors
that I don't understand.  I was wondering if it outputs those errors
to
log somewhere and then I was wondering how do I find out what these
errors mean.  I am very new to Linux and Squid, this is my third
attempt
at making a server and it is the furthest I have gotten.  Any help
would
be wonderful.

Thank you for your time.
Bob Shatford
Asst. Network Administrator
Keller ISD




[squid-users] Help with acl's

2006-09-01 Thread Jason
Hello, I have searched and read until I wanna bang my head. What I want
squid to do: I have 6 internet computers that will access the internet (they
have static ip's) and 2 homework computers (also static) that will only have
access (whitelist) to a couple of websites (www.tutor.com for example). I am
running squid 2.6.Stable3. My squid.conf looks like this:

#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

  a bunch of comments, then:

#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks
acl scorpio src 192.168.5.21
http_access allow scorpio


# And finally deny all other access to this proxy
http_access deny all

In this configuration I get access denied on any website I go to. When I
change it to http_access allow all, obviously everything works. So I enabled
debug_options ALL,1 28,9 and this is what I get:

2006/09/01 11:05:33| Reconfiguring Squid Cache (version 2.6.STABLE3)...
2006/09/01 11:05:33| FD 9 Closing HTTP connection
2006/09/01 11:05:33| FD 11 Closing ICP connection
2006/09/01 11:05:33| DNS Socket created at 0.0.0.0, port 32775, FD 8
2006/09/01 11:05:33| Adding nameserver 192.168.5.5 from /etc/resolv.conf
2006/09/01 11:05:33| Adding nameserver 192.168.5.7 from /etc/resolv.conf
2006/09/01 11:05:33| Accepting proxy HTTP connections at 192.168.5.249, port
3128, FD 9.
2006/09/01 11:05:33| Accepting ICP messages at 0.0.0.0, port 3130, FD 11.
2006/09/01 11:05:33| WCCP Disabled.
2006/09/01 11:05:33| Loaded Icons.
2006/09/01 11:05:33| Ready to serve requests.
2006/09/01 11:05:46| aclCheckFast: list: 0x926b228
2006/09/01 11:05:46| aclMatchAclList: checking all
2006/09/01 11:05:46| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2006/09/01 11:05:46| aclMatchIp: '192.168.5.249' found
2006/09/01 11:05:46| aclMatchAclList: returning 1
2006/09/01 11:05:47| aclCheck: checking 'http_access allow manager
localhost'
2006/09/01 11:05:47| aclMatchAclList: checking manager
2006/09/01 11:05:47| aclMatchAcl: checking 'acl manager proto cache_object'
2006/09/01 11:05:47| aclMatchAclList: no match, returning 0
2006/09/01 11:05:47| aclCheck: checking 'http_access deny manager'
2006/09/01 11:05:47| aclMatchAclList: checking manager
2006/09/01 11:05:47| aclMatchAcl: checking 'acl manager proto cache_object'
2006/09/01 11:05:47| aclMatchAclList: no match, returning 0
2006/09/01 11:05:47| aclCheck: checking 'http_access deny !Safe_ports'
2006/09/01 11:05:47| aclMatchAclList: checking !Safe_ports
2006/09/01 11:05:47| aclMatchAcl: checking 'acl Safe_ports port 80
# http'
2006/09/01 11:05:47| aclMatchAclList: no match, returning 0
2006/09/01 11:05:47| aclCheck: checking 'http_access deny CONNECT
!SSL_ports'
2006/09/01 11:05:47| aclMatchAclList: checking CONNECT
2006/09/01 11:05:47| aclMatchAcl: checking 'acl CONNECT method CONNECT'
2006/09/01 11:05:47| aclMatchAclList: no match, returning 0
2006/09/01 11:05:47| aclCheck: checking 'http_access allow scorpio'
2006/09/01 11:05:47| aclMatchAclList: checking scorpio
2006/09/01 11:05:47| aclMatchAcl: checking 'acl scorpio src 192.168.5.21'
2006/09/01 11:05:47| aclMatchIp: '192.168.5.249' NOT found
2006/09/01 11:05:47| aclMatchAclList: no match, returning 0
2006/09/01 11:05:47| aclCheck: checking 'http_access deny all'
2006/09/01 11:05:47| aclMatchAclList: checking all
2006/09/01 11:05:47| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2006/09/01 11:05:47| aclMatchIp: '192.168.5.249' found
2006/09/01 11:05:47| aclMatchAclList: returning 1
2006/09/01 11:05:47| aclCheck: match found, returning 0
2006/09/01 11:05:47| aclCheckCallback: answer=0

There are a few things in there I don't get. Maybe somebody does and can tell
me what I am missing.

[squid-users] access.log stopped working after 2.5 to 2.6 upgrade

2006-09-01 Thread Nick Duda

The 2.5 to 2.6 upgrade went great, everything appears to work, except
it's throwing some NTLM stuff which I think I can figure out... however,
the access.log has stopped working; it doesn't write to it anymore. The
store.log and cache.log work fine... any ideas?


Regards,
Nick




[squid-users] Authenticate Squid Using Digital Certificate

2006-09-01 Thread Zaki Akhmad

Hello

I am trying to use a digital certificate for squid authentication. I
have exported my certificate to an LDAP server. Is it possible to
authenticate to squid using a digital certificate? Should I install an
extra package? And what configuration should I add to squid.conf?

I browsed this mailing-list archive, but I didn't find the answer.
Thank you for your attention.

Regards.
--
Zaki Akhmad


[squid-users] getting rid of 304's

2006-09-01 Thread Nick Duda

Client -- Squid --- server (IIS)

We just put squid into play like this for a test we are doing. Prior to
the test the client would hit the server directly for its pages. The web
log showed the clients hits and downloading the pages, images...etc. We
then put squid inbetween them, and the server no longer shows the client
making the calls, sweet. It shows the proxy doing so, sweet.

The problem we are seeing is that the web server is showing a lot of
304's from the proxy, and the byte count in the log files for stuff
like images is the full size, as if the proxy is still pulling from the
server every couple of seconds. What is misconfigured? We would like the
IIS server's log file to show only the occasional hit for the pages/gif files
from the proxy.

Any settings I should be running in the config file on squid to hold the
files in cache longer?

Regards,
Nick




[squid-users] reverse proxy v2.6

2006-09-01 Thread dale wilhelm

it appears that reverse proxy has been removed from the 2.6
version... does anyone know why this was removed and if there is
a workaround??? i have the following in my config for 2.5:

httpd_accel_host ( ip addr )
httpd_accel_port 8083
httpd_accel_single_host on
httpd_accel_with_proxy on

all httpd_accel* directives are now gone... any help would be appreciated.


Re: [squid-users] reverse proxy v2.6

2006-09-01 Thread Odhiambo WASHINGTON
* On 01/09/06 12:28 -0700, dale wilhelm wrote:
| it appears that reverse proxy has been removed from the 2.6
| version... does anyone know of a reason why this rm'd and if there is
| a work around??? i have the following in my config for 2.5:
| 
| httpd_accel_host ( ip addr )
| httpd_accel_port 8083
| httpd_accel_single_host on
| httpd_accel_with_proxy on
| 
| all httpd_accel* directives are now gone... any help would be

Please don't expect a file from 2.5 to work in 2.6 just like that ;)
Read the squid.conf.default that comes with 2.6


-Wash

http://www.netmeister.org/news/learn2quote.html

DISCLAIMER: See http://www.wananchi.com/bms/terms.php

--
+==+
|\  _,,,---,,_ | Odhiambo Washington[EMAIL PROTECTED]
Zzz /,`.-'`'-.  ;-;;,_ | Wananchi Online Ltd.   www.wananchi.com
   |,4-  ) )-,_. ,\ (  `'-'| Tel: +254 20 313985-9  +254 20 313922
  '---''(_/--'  `-'\_) | GSM: +254 722 743223   +254 733 744121
+==+

If there is a possibility of several things going wrong, the one that
will cause the most damage will be the one to go wrong.


[squid-users] Re: Just a question?

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 15:37 -0500 skrev Greg Bledsoe:

 Ok, one I've tried to look on your website for a windows .exe file on
 your website and was unsuccessful. So, do you have to use CYGWin or
 some other program to run it, or if not could you tell me where it is.
 Another thing, if I can't find it on your website, what were the other
 three programs like squid called. Thank you! 

It's there.

www.squid-cache.org - Binary Distributions - Windows - Download

or alternatively

www.squid-cache.org - FAQ - About Squid - Does Squid run on Windows
NT? - Native NT Port - Download


Both paths end up at the website of the native Windows NT port of Squid.

http://www.acmeconsulting.it/SquidNT/


You can also compile Squid yourself using either Visual Studio, MinGW or
Cygwin. The sources to the Windows port is found at the same place, or
alternatively at devel.squid-cache.org.

Regards
Henrik




Re: [squid-users] Errors during make

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 10:56 -0500 skrev Robert Shatford:
 Hey guys, this is my first post on this forum, so I would like to say
 hello.
  
 I am having troubles when I run make.  It throws a bunch of errors
 that I don't understand.

Golden rule: Start with the first error, ignore the rest.

If you don't get what it's about, post the error here and we will try to
help you.

Regards
Henrik




Re: [squid-users] Help with acl's

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 11:35 -0500 skrev Jason:

 # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
 
 # Example rule allowing access from your local networks. Adapt
 # to list your (internal) IP networks from where browsing should
 # be allowed
 #acl our_networks src 192.168.1.0/24 192.168.2.0/24
 #http_access allow our_networks
 acl scorpio src 192.168.5.21
 http_access allow scorpio

The above allows 192.168.5.21 access. There are no other allow rules, so
that is the only address allowed access.

 2006/09/01 11:05:47| aclMatchAcl: checking 'acl scorpio src 192.168.5.21'
 2006/09/01 11:05:47| aclMatchIp: '192.168.5.249' NOT found

This says the request came from 192.168.5.249, not 192.168.5.21.

Are you running some other proxy in front of Squid, perhaps?
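For the homework-machine whitelist described in the original post, a hedged
sketch (the addresses and the second domain are placeholders) would be
something like:

```
# Hypothetical fragment: homework PCs may only reach whitelisted sites
acl homework src 192.168.5.31 192.168.5.32
acl homework_sites dstdomain .tutor.com
http_access allow homework homework_sites
http_access deny homework
```

These lines would go before the allow rules for the unrestricted machines and
before the final deny all.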

Regards
Henrik




Re: [squid-users] access.log stopped working after 2.5 to 2.6 upgrade

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 14:33 -0400 skrev Nick Duda:
 The 2.5 to 2.6 upgrade went great, everything appears to work , except
 its throwong some ntlm stuff which I think I can figure out...however,
 the access.log has stopped working, it doesnt write to it anymore. The
 store.log and cache.log work fineany ideas?

You probably don't have an access_log directive in your squid.conf
telling Squid to log.
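In 2.6 that directive looks like this (the path shown is a common default;
adjust to your installation):

```
# squid.conf: log client requests in the native squid log format
access_log /var/log/squid/access.log squid
```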

Regards
Henrik




Re: [squid-users] Authenticate Squid Using Digital Certificate

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 11:32 -0700 skrev Zaki Akhmad:

 I am trying to use digital certificate for squid authentication. I
 have my certicate export to LDAP server. Is it possible to
 authenticate squid using digital certificate?

Yes, but browsers only support this when using Squid as a reverse proxy
in front of your web servers, not when using it as an Internet proxy.

Squid doesn't use LDAP to verify the client certificate. Instead normal
X509 CA based chain of trust is used.
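A hedged sketch of what that looks like in a reverse-proxy squid.conf (paths
and site name are placeholders; check your version's https_port documentation
for the exact option set):

```
# Hypothetical: terminate SSL and request a client certificate,
# verified against the CA bundle given in clientca=
https_port 443 accel defaultsite=www.example.com \
    cert=/etc/squid/server.crt key=/etc/squid/server.key \
    clientca=/etc/squid/client-ca.pem
```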

Regards
Henrik




Re: [squid-users] reverse proxy v2.6

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 22:37 +0300 skrev Odhiambo WASHINGTON:
 * On 01/09/06 12:28 -0700, dale wilhelm wrote:
 | it appears that reverse proxy has been removed from the 2.6
 | version... does anyone know of a reason why this rm'd and if there is
 | a work around??? i have the following in my config for 2.5:
 | 
 | httpd_accel_host ( ip addr )
 | httpd_accel_port 8083
 | httpd_accel_single_host on
 | httpd_accel_with_proxy on
 | 
 | all httpd_accel* directives are now gone... any help would be
 
 Please don't expect a file from 2.5 to work in 2.6 just like that ;)
 Read the squid.conf.default that comes with 2.6

and the release notes for hints on what to look for.

Acceleration is very well supported in 2.6, just configured somewhat
differently than in 2.5.
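As a hedged sketch, the 2.5 httpd_accel_* example from the question maps onto
2.6 roughly as follows (the origin address and site name are placeholders for
the poster's real values):

```
# 2.6 replacement for httpd_accel_host/_port/_single_host
http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 8083 0 no-query originserver
```

httpd_accel_with_proxy on corresponds to simply keeping a normal http_port
3128 line alongside the accelerated one.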

Regards
Henrik




Re: [squid-users] Authenticate Squid Using Digital Certificate

2006-09-01 Thread Henrik Nordstrom
fre 2006-09-01 klockan 15:04 -0700 skrev Zaki Akhmad:
 On 9/1/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 
  Yes, but browses only support this when using Squid as reverse proxy
  infront of your web servers, not when using it as an Internet proxy.
 
  Squid doesn't use LDAP to verify the client certificate. Instead normal
  X509 CA based chain of trust is used.
 
 Hai Henrik, thank you for your attention. Is there any hint how to
 modify the squid.conf? So that the squid can access the certificate
 from the LDAP server. Such as
 
 auth_param basic program ... -x -D (cn=username) certificateFile; 

Squid just doesn't do this.

But in theory you should be able to write an external acl helper to
verify the certificate against LDAP after the connection has been
accepted by Squid.
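A very rough sketch of that idea (the format token and helper path are
assumptions, not confirmed for this Squid version; check the external_acl_type
documentation):

```
# Hypothetical: pass certificate details to a helper that checks LDAP
# and answers OK or ERR, one line per lookup
external_acl_type cert_in_ldap %USER_CERT /usr/local/squid/libexec/cert_ldap_check
acl cert_ok external cert_in_ldap
http_access allow cert_ok
```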

Regards
Henrik




[squid-users] cache peering question

2006-09-01 Thread Daniel Appleby

Hi,

Just a question about cache_peer. If you have proxies set up in a
sibling relationship, does one proxy require authentication to access the
cache of another proxy if both proxies use proxy_auth?


i.e. does proxy A have to auth to proxy B in order to get its cache?

Thanks in Advance
Regards
Daniel


[squid-users] Large Files

2006-09-01 Thread Mark Nottingham
I'd appreciate some enlightenment as to how Squid handles large files  
WRT memory and disk.


In particular;
  - squid.conf says that memory is used for in-transit objects.  
What exactly is kept in memory for in-transit objects; just metadata,  
or the whole thing?
  - if something is in memory cache, does it get copied when it is  
requested (because it is in-transit)?

  - How does sendfile support in 2.6 affect this?
  - Does anyone have any experiences they'd care to relate regarding  
memory-caching very large objects?


Thanks!

--
Mark Nottingham
[EMAIL PROTECTED]





Re: [squid-users] Large Files

2006-09-01 Thread Adrian Chadd
On Fri, Sep 01, 2006, Mark Nottingham wrote:
 I'd appreciate some enlightenment as to how Squid handles large files  
 WRT memory and disk.
 
 In particular;
   - squid.conf says that memory is used for in-transit objects.  
 What exactly is kept in memory for in-transit objects; just metadata,  
 or the whole thing?

Squid used to keep the whole thing in memory. It now:

* can keep the whole object in memory
* If memory is needed, and the object is being swapped out Squid will
  'free' the start of the object in memory - the in-memory copy is
  then not used to serve replies but is just there to be written to
  disk. All subsequent hits come from disk.

   - if something is in memory cache, does it get copied when it is  
 requested (because it is in-transit)?

The whole object isn't copied during a memory hit; only the current
'4k' range being read.

   - How does sendfile support in 2.6 affect this?

Sendfile support? :)

   - Does anyone have any experiences they'd care to relate regarding  
 memory-caching very large objects?

Not yet; but there's a project on my plate to evaluate this to
begin caching p2p and youtube stuff a bit better..
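The relevant squid.conf knobs, with illustrative values only (tune to your
workload and available RAM):

```
# Illustrative values -- not recommendations
cache_mem 512 MB                      # RAM for hot and in-transit objects
maximum_object_size_in_memory 64 KB   # largest object kept in the memory cache
maximum_object_size 1 GB              # largest object cached at all (on disk)
```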



Adrian



Re: [squid-users] cache peering question

2006-09-01 Thread Adrian Chadd
On Sat, Sep 02, 2006, Daniel Appleby wrote:
 Hi,
 
 Just a question about cache_peering. If you have proxys setup in a 
 sibling relationship does one proxy require authentication to access 
 cache of another proxy if both proxys use prxoy_auth?
 
 i.e. does proxy A have to auth to proxy B in ordered to get its cache?

The proxy doesn't have to require authentication for all clients.

I.e., the ACLs are:

acl proxy_hosts src 1.1.1.1 1.1.1.2 1.1.1.3 1.1.1.4
acl local src 192.168.0.0/16
acl proxyauth proxy_auth REQUIRED

http_access allow proxy_hosts
http_access allow local proxyauth
http_access deny all

This way the proxy_hosts have access without needing to authenticate.

That said, I believe you can also configure basic authentication on
the cache_peer line.
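A hedged sketch of that option (host and credentials are placeholders):

```
# On proxy A: authenticate to sibling B with fixed basic credentials
cache_peer proxy-b.example.com sibling 3128 3130 login=peeruser:peerpass
# login=PASS instead forwards the client's own proxy credentials
```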




adrian



[squid-users] proxy.pac file problem

2006-09-01 Thread Raj

Hi All,

I am running Version 2.5.STABLE10 on an Open BSD operating system. I
am having problems with proxy.pac file. I have the following proxy.pac
file.

if (isInNet(myIpAddress(), "172.26.96.0", "255.255.240.0")) return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";

if (isInNet(myIpAddress(), "172.26.112.0", "255.255.240.0")) return "PROXY 172.26.11.150:3128; PROXY 172.26.11.50:3128";

else
   return "PROXY 172.26.11.50:3128; PROXY 172.26.11.150:3128";
   return "PROXY 172.26.11.150:3128; PROXY 172.26.11.50:3128";

So when the proxy server 172.26.11.50 goes down, all the clients
failover to 172.26.11.150. But when the proxy server 172.26.11.150
goes down, clients are not failing over to 172.26.11.50.

Why is it failing over from 172.26.11.50 to 172.26.11.150 but not vice versa.
Could someone help me if there is any syntax error in my proxy.pac
file. I would really appreciate it.

Thanks.


[squid-users] strange, dont have cache peers but installer adds it to the default configuration

2006-09-01 Thread SSCR Internet Admin
Hi,

I just updated my squid to squid-2.6.1, but strangely it adds a default
cache_peer entry pointing at itself.  Could someone help me out? I can't
figure it out from the configure options...

here are my squid configure switches:

./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu
--target=x86_64-redhat-linux-gnu \
--enable-arp-acl --enable-delay-pools --program-prefix= --prefix=/usr
--exec-prefix=/usr --bindir=/usr/bin \
--sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share
--includedir=/usr/include --libdir=/usr/lib64 \
--libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com
--mandir=/usr/share/man \
--infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin
--libexecdir=/usr/lib64/squid --localstatedir=/var \
--sysconfdir=/etc/squid --enable-snmp --enable-poll
--enable-removal-policies=heap,lru \
--with-pthreads --enable-storeio=aufs,ufs --enable-ssl
--with-openssl=/usr/kerberos --enable-delay-pools \
--enable-ssl --enable-linux-netfilter --enable-useragent-log
--enable-referer-log --disable-dependency-tracking  \
--enable-cachemgr-hostname=proxy.sscrmnl.edu.ph --disable-ident-lookups
--enable-truncate --enable-underscores \
--enable-err-languages=English --enable-htcp --datadir=/usr/share
--enable-icmp

TIA

Nats


-- 
All messages that are coming from this domain
is certified to be virus and spam free.  If
ever you have received any virus infected 
content or spam, please report it to the
internet administrator of this domain 
[EMAIL PROTECTED]



[squid-users] WCCPv2 GRE with 2.6 on Linux

2006-09-01 Thread Stephen Fletcher
Hi
I have compiled the Debian Unstable package of Squid 2.6.3 and cannot get
WCCPv2 GRE working.
I have built with standard configure options, so WCCPv2 support should be
available. I configure my wccp2_router and leave the other wccp2 options at
their defaults, so it is using service ID 0 and GRE. I see the squid proxy IP
has registered itself with my PIX. However, when GRE packets are sent to the
Squid cache there is no response from Squid. I can't see squid listening on
protocol 47, and nothing shows in the squid access.log.

gre0  Link encap:UNSPEC  HWaddr
00-00-00-00-07-08-00-00-00-00-00-00-00-00-00-00  
  UP RUNNING NOARP  MTU:1476  Metric:1
  RX packets:394 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:18912 (18.4 KiB)  TX bytes:0 (0.0 b)

netstat -plane | egrep -i '(squid|47|gre)'
tcp        0      0 0.0.0.0:3128     0.0.0.0:*        LISTEN       0   70715  364/(squid)
udp        0      0 172.16.1.7:2048  172.16.1.1:2048  ESTABLISHED  13  70717  364/(squid)
udp        0      0 0.0.0.0:3130     0.0.0.0:*                     0   70716  364/(squid)
udp        0      0 0.0.0.0:1088     0.0.0.0:*                     13  70709  364/(squid)
unix  2      [ ]         DGRAM                    70708  364/(squid)
unix  2      [ ]         DGRAM                    70704  362/squid


From Cache.log
2006/09/02 17:28:52| Accepting proxy HTTP connections at 0.0.0.0, port 3128,
FD 13.
2006/09/02 17:28:52| Accepting ICP messages at 0.0.0.0, port 3130, FD 14.
2006/09/02 17:28:52| HTCP Disabled.
2006/09/02 17:28:52| WCCP Disabled.
2006/09/02 17:28:52| Accepting WCCPv2 messages on port 2048, FD 15.
2006/09/02 17:28:52| Initialising all WCCPv2 lists

Registered with Pix...
WCCP-PKT:S00: Received valid Here_I_Am packet from 172.16.1.7 w/rcv_id
1AA4
WCCP-PKT:S00: Sending I_See_You packet to 172.16.1.7 w/ rcv_id 1AA5

I also decided to try using the ip_wccp module instead of ip_gre but it
wouldn't compile with 2.6.17.8. I would prefer to not pursue this method
however.



Re: [squid-users] WCCPv2 GRE with 2.6 on Linux

2006-09-01 Thread Adrian Chadd
On Sat, Sep 02, 2006, Stephen Fletcher wrote:
 Hi
 I have compiled the Debian Unstable package of Squid 2.6.3 and cannot get
 WCCPv2 GRE working.
 I have built with standard confiure options so WCCPv2 support should be
 available. I configure my wccp2_router and leave it as other default wccp2
 options such that it is using ID 0 and GRE. I see the squid proxy ip has
 register itself with my Pix. However when GRE packets are sent to the Squid
 cache there is no response from Squid. I can't see squid listening on
 protocol 47, and nothing shows in the squid access.log.

Can you post a squid -v?

I've been running squid-2.6 and squid-3 with wccpv2 and it works fine.
The thing I initially forgot was --enable-linux-netfilter.
It'll run; it just won't work. :)


 Registered with Pix...
 WCCP-PKT:S00: Received valid Here_I_Am packet from 172.16.1.7 w/rcv_id
 1AA4
 WCCP-PKT:S00: Sending I_See_You packet to 172.16.1.7 w/ rcv_id 1AA5
 
 I also decided to try using the ip_wccp module instead of ip_gre but it
 wouldn't compile with 2.6.17.8. I would prefer to not pursue this method
 however.

Have you brought up a 'fake' gre interface just so the kernel will
handle incoming GRE?

also, have you turned on ip forwarding and turned off rp_filter ?




adrian



RE: [squid-users] WCCPv2 GRE with 2.6 on Linux

2006-09-01 Thread Stephen Fletcher
My config options 

configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin'
'--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid'
'--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid'
'--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,diskd,null' '--enable-linux-netfilter'
'--enable-linux-proxy' '--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools'
'--enable-htcp' '--enable-cache-digests' '--enable-underscores'
'--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp' '--with-large-files'
'i386-debian-linux' 'build_alias=i386-debian-linux'
'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux'

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 2 September 2006 12:46 PM
To: Stephen Fletcher
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] WCCPv2 GRE with 2.6 on Linux

On Sat, Sep 02, 2006, Stephen Fletcher wrote:
 Hi
 I have compiled the Debian Unstable package of Squid 2.6.3 and cannot get
 WCCPv2 GRE working.
 I have built with standard confiure options so WCCPv2 support should be
 available. I configure my wccp2_router and leave it as other default wccp2
 options such that it is using ID 0 and GRE. I see the squid proxy ip has
 register itself with my Pix. However when GRE packets are sent to the
Squid
 cache there is no response from Squid. I can't see squid listening on
 protocol 47, and nothing shows in the squid access.log.

Can you post a squid -v?

I've been running squid-2.6 and squid-3 with wccpv2 and it works fine.
The thing I initially forgot was --enable-linux-netfilter.
It'll run; it just won't work. :)


 Registered with Pix...
 WCCP-PKT:S00: Received valid Here_I_Am packet from 172.16.1.7 w/rcv_id
 1AA4
 WCCP-PKT:S00: Sending I_See_You packet to 172.16.1.7 w/ rcv_id 1AA5
 
 I also decided to try using the ip_wccp module instead of ip_gre but it
 wouldn't compile with 2.6.17.8. I would prefer to not pursue this method
 however.

Have you bought up a 'fake' gre interface just so the kernel will
handle incoming GRE?

also, have you turned on ip forwarding and turned off rp_filter ?




adrian



Re: [squid-users] WCCPv2 GRE with 2.6 on Linux

2006-09-01 Thread Adrian Chadd
Just to compare:

Squid Cache: Version 3.0.PRE4-CVS
(same options for 2.6 work fine.)
configure options: '--prefix=/usr/local/squid' '--enable-storeio=ufs aufs null' 
'--enable-linux-netfilter'

Config:

cache_effective_user adrian

wccp2_service standard 0
#wccp2_service dynamic 80
#wccp2_service_info 80 protocol=tcp ports=80 priority=240

tcp_outgoing_address 203.56.15.78

wccp2_router 192.168.1.1:2048

http_port 192.168.1.10:3128 transparent vport=80
http_port localhost:3128

(I have this server doing wccp on a NATted interface; so it has a non-NATted 
public
IP for external outbound connections..)

Then:

[EMAIL PROTECTED]:~/work/squid3# cat /root/wccp.sh 
#!/bin/sh

ifconfig gre0 inet 1.2.3.4 netmask 255.255.255.0 up
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/gre0/rp_filter

iptables -F -t nat
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i gre0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.10:3128

eth0 is external, eth1 is internal.

Cisco config is simple - enable wccp2 + web-cache, ip wccp web-cache redirect 
in on the internal
interface.
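Spelled out, that IOS fragment is roughly (the interface name is a
placeholder):

```
! Enable the standard web-cache service and redirect inbound client
! traffic on the internal (client-facing) interface
ip wccp web-cache
interface FastEthernet0/0
 ip wccp web-cache redirect in
```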

I've not got a spare PIX/ASA device here to try it against.



Adrian