Re: [squid-users] how to integrate virus detection with squid proxy

2008-10-07 Thread Henrik K
On Mon, Oct 06, 2008 at 11:03:10AM -0700, simon benedict wrote:
 Dear All,
 
 I have the following setup, which has been working fine for a long time:
 
 Redhat 9 
 squid-2.4.STABLE7-4
 
 I also have Shorewall (Shoreline Firewall) on the same Squid server.
 
 Now I would appreciate it if someone could advise and help me with:
 
 1) Real-time virus scanning for the proxy server, including scanning of HTTP 
 traffic and downloaded files

Try HAVP.

http://server-side.de/
http://havp.hege.li/

 2) Real-Time Content Scanning for HTTP traffic.

DansGuardian or such.

 3) POP Up Filter
 4) Scanning of active content like JavaScript, Java or ActiveX, and blocking 
 scripts that access the hard disks

Privoxy? Dunno..
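
For (1), a minimal sketch of chaining Squid to HAVP as a parent proxy
(hedged: 8080 is an assumed HAVP listen port, check havp.config):

  # send everything through HAVP; no-query/no-digest since HAVP speaks no ICP
  cache_peer 127.0.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default
  # force all traffic through the scanner, nothing goes direct
  never_direct allow all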



Re: [squid-users] Cache_dir more than 10GB

2008-10-07 Thread Matus UHLAR - fantomas
  Yes, but using data=writeback is not tuning, it is risking data. Using that on
  a squid cache dir may require cleaning the cache_dir after each crash; otherwise
  you risk providing invalid data

On 06.10.08 22:06, Rafael Gomes wrote:
 What does the data=writeback option really do?

see man mount(8), tune2fs(8). 

"This may increase throughput, however, it may allow old data to appear in
files after a crash and journal recovery."
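
For reference, a hedged sketch of pinning the safer default journaling mode
(device and mount point are placeholders):

  # /etc/fstab: explicit data=ordered on the cache filesystem (ext3)
  /dev/sdb1  /var/spool/squid  ext3  defaults,data=ordered  0  2

  # or bake it into the filesystem defaults:
  tune2fs -o journal_data_ordered /dev/sdb1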

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #9: Out of error messages.


Re: [squid-users] squid busy even when its not working...(?) bug?

2008-10-07 Thread Linda W

Is that configurable in the config script or is it hard-coded?

Amos Jeffries wrote:

Linda W wrote:

With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up doing a busy-wait so often?

It's the most active process -- even when it is supposedly doing nothing?

I'm running it on suse10.3, squid-beta-3.0-351 so maybe it is something
that has been fixed?


Wakeups-from-idle per second : 102.2    interval: 10.0s

Top causes for wakeups:
 58.4% ( 62.6) squid : schedule_timeout (process_timeout)


Ah, wakeups. Looks like one of the inefficient listening methods is being 
used. Squid _polls_ its listening sockets in one of several ways. Some 
of these can cause a lot of wakeups.
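
If the build is using select()/poll(), rebuilding with a kernel-event method
usually cuts the idle wakeups. A sketch, assuming a 2.6/3.0-era source tree
(flag names vary by release; verify with ./configure --help):

  # the configure options printed by 'squid -v' show which loop was built in
  squid -v

  ./configure --enable-epoll
  make && make install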


Amos


Re: [squid-users] acl website block

2008-10-07 Thread Amos Jeffries

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

I have been using
acl website dstdomain /etc/website

to block some websites.

But how do I make an exception for some PCs (clients)?
eg :

192.168.1.100-200 -- my clients' IPs
I want only 192.168.1.100 and 192.168.1.110 to have no blocked


acl clientsA src 192.168.1.100/32 192.168.1.110/32
http_access allow clientsA


sites (free access),
and I want 192.168.1.101 - 192.168.1.109 to be able to browse only yahoo.com.


acl clientsB src 192.168.1.101-192.168.1.109
acl yahoo dstdomain .yahoo.com
http_access allow clientsB yahoo


And I want 192.168.1.111 - 192.168.1.120 to be able to browse only google.com.


acl clientsC src 192.168.1.111-192.168.1.120
acl google dstdomain .google.com
http_access allow clientsC google


And I want 192.168.1.121 - 192.168.1.30 to be able to browse only yahoo.com and google.com.


(NP: I assume you made a typo and meant that last range to end at *.130.)

acl clientsD src 192.168.1.121-192.168.1.130
http_access allow clientsD yahoo
http_access allow clientsD google


And I want the rest to have no access at all.


http_access deny all
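
For clarity, the pieces above assembled in order (http_access is checked
top-down and the first match wins, so the final deny must come last):

  acl clientsA src 192.168.1.100/32 192.168.1.110/32
  acl clientsB src 192.168.1.101-192.168.1.109
  acl clientsC src 192.168.1.111-192.168.1.120
  acl clientsD src 192.168.1.121-192.168.1.130
  acl yahoo  dstdomain .yahoo.com
  acl google dstdomain .google.com

  http_access allow clientsA
  http_access allow clientsB yahoo
  http_access allow clientsC google
  http_access allow clientsD yahoo
  http_access allow clientsD google
  http_access deny all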


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] squid busy even when its not working...(?) bug?

2008-10-07 Thread Henrik Nordstrom
On mån, 2008-10-06 at 21:09 -0700, Linda W wrote:
 With no processes attaching to squid -- no activity -- no open
 network connections -- only squid listening for connections --
 why is squid waking up doing a busy-wait so often?
 
 It's the most active process -- even when it is supposedly doing nothing?
 
 I'm running it on suse10.3, squid-beta-3.0-351 so maybe it is something
 that has been fixed?

There has been a bit of work in that area indeed. Not sure which
release, probably 3.1.

Squid-2.7 also behaves better in this regard.

Regards
Henrik





Re: [squid-users] auth_param basic children

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 18:24 +1300, Amos Jeffries wrote:

 NP: Some helpers though have a max concurrency of 1.

Most actually. Supporting a concurrency level other than 0 (default)
requires a change in the helper.
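
As a sketch of where each knob lives (helper path and numbers are
illustrative only):

  # A) more helper processes -- works with any helper:
  auth_param basic children 10

  # B) concurrency -- only for helpers rewritten to speak the
  #    channel-ID protocol, e.g. on an external ACL helper:
  external_acl_type my_group ttl=300 concurrency=5 %LOGIN /usr/local/bin/my-helper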

Regards
Henrik




Re: [squid-users] Cache settings per User Agent?

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 19:12 +0800, howard chen wrote:

 E.g. for a dynamic page,
 
 http://www.example.com/product.php?id=123
 
 
 Normally this page is not cached by Squid as it contains some user
 information such as login name.
 
 However, is it possible to enable caching, if UA =  Googlebot or  Slurp etc?

Best done by the origin server using the Vary header and Cache-Control:
max-age..

Regards
Henrik




Re: [squid-users] Bandwidth shaping is lost when 'squid -k reconfigure' is executed

2008-10-07 Thread Amos Jeffries

Murilo Opsfelder Araújo wrote:

Hi squid users,

I'm using delay pools to shape the bandwidth from some ip addresses.

If a user starts to download some file from the Internet, the download
falls into the shaping,
i.e., the delay pools are working as expected.

Whenever a 'squid -k reconfigure' is executed during the user's download, the
shaping is lost, i.e., the download proceeds at full speed.

Here are my delay pool settings in squid.conf:

delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 8000/8000
acl pool_1_acl src 192.168.8.100 10.10.10.0/255.255.255.0
delay_access 1 allow pool_1_acl
delay_access 1 deny all

I'm using an old version of squid cache, as you can see in the output
of the command
'squid -v':

Squid Cache: Version 2.5.STABLE12-20060202
configure options:  --prefix=/usr --exec_prefix=/usr --enable-snmp
--sysconfdir=/etc/squid
--enable-icmp --enable-underscores --datadir=/etc/squid --enable-linux-netfilter
--enable-auth=ntlm,basic --enable-ssl --enable-delay-pools

I'd like to know if this situation is normal or if it was a bug
corrected in newer versions
of squid cache.



Yes, this is a known bug (219), fixed in 2.7.STABLE3+.

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Bandwidth shaping is lost when 'squid -k reconfigure' is executed

2008-10-07 Thread Murilo Opsfelder Araújo
 Yes, this is a known bug (219), fixed in 2.7.STABLE3+.

Thanks Amos.

 Please use Squid 2.7.STABLE4 or 3.0.STABLE9

I'm gonna do that.

Thanks again.

Murilo.


Re[2]: [squid-users] Squid dying

2008-10-07 Thread Dietmar Braun
Thursday, October 2, 2008, 12:44:01 PM, you wrote:
 Bug 2447:
 http://www.squid-cache.org/bugs/show_bug.cgi?id=2447

 Not really sure it's the same bug, but perhaps related.
 Please get a stack trace of your crash and file a new bug report.

Until now with the patch applied, the two systems are running
stable... so I am at the moment unable to get a trace of any crash...
;)

Dietmar




[squid-users] Cache settings per User Agent?

2008-10-07 Thread howard chen
Hello,

Is it possible to set the cache rules, per user agent?

E.g. for a dynamic page,

http://www.example.com/product.php?id=123


Normally this page is not cached by Squid as it contains some user
information such as login name.

However, is it possible to enable caching, if UA =  Googlebot or  Slurp etc?


Thanks.


Res: Res: [squid-users] BUG 740

2008-10-07 Thread NBBR
NBBR wrote:
 
 
 NBBR wrote:
 I'm having problems getting Squid (3.0) to send Content-Type to my Perl script 
 using external ACLs. Is this the problem described in bug 740?

 If it is, will it be resolved in Squid 3.0?

 
 3.0 is already restricted to only serious bugs. You can patch your own 
 though if you want:
  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9216.patch
  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9223.patch
  http://www.squid-cache.org/Versions/v3/3.1/changesets/b9226.patch

 3.1 has just been branched, which means test releases will come very 
 soon which you can use.
 
 Amos
 -- 
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9
 
 I downloaded Squid 3.0.STABLE9 and applied the patches, but when I go 
 to compile it, I get the following error:
 
 external_acl.cc: In function ‘void parse_externalAclHelper(external_acl**)’:
 external_acl.cc:353: erro: ‘DBG_IMPORTANT’ was not declared in this scope
 make[3]: ** [external_acl.o] Erro 1
 
 Do I need to apply some more patches?
 

 Ah, replace DBG_IMPORTANT with 1 (and DBG_CRITICAL with 0, too, if any).


 Amos
 -- 
Please use Squid 2.7.STABLE4 or 3.0.STABLE9

I compiled and installed Squid correctly, but when I configure squid.conf to 
get anything from the header (e.g. Content-Type), it shows an error on startup:

#-#
/usr/local/squid/sbin/squid -d1
FATAL: Bungled squid.conf line 409: external_acl_type bla %SRC %DST 
%{Content-Type} /tmp/teste.pl
Squid Cache (Version 3.0.STABLE9): Terminated abnormally.
CPU Usage: 0.004 seconds = 0.000 user + 0.004 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
#-#

If I do not configure %{Content-Type} or any other header parameter, it 
functions normally.

Thanks for the help. []s

Andre Oliveira




Re: [squid-users] Web accelerator settings for multiple sites on different servers

2008-10-07 Thread Amos Jeffries

Odiobill wrote:

Hi, I configured an OpenVZ cluster for my company, where there is a VE with a
public IP address that provides web acceleration for other VEs that are
running on a different network, with private addresses.

To implement it, I used a configuration similar to this one in my
squid.conf:


-- CUT HERE --

http_port 80 accel vhost
http_port 3128

acl our_network src 192.168.10.0/24
http_access allow our_network

acl our_domains dstdomain /etc/squid/sites.conf
http_access allow our_domains

http_access deny all

cache_peer 192.168.10.100 parent 80 0 no-query originserver login=PASS
name=100
cache_peer_domain 100 1st.example.org 3rd.example.org

cache_peer 192.168.10.101 parent 80 0 no-query originserver login=PASS
name=101
cache_peer_domain 101 2nd.example.org 4th.example.org 5th.example.org

-- CUT HERE --


/etc/squid/sites.conf lists every hostname referred to by cache_peer_domain
directives.

The problem is that we have ten virtual servers to accelerate, and some of
these host 20 or 30 websites each.
Is there a better way to accomplish this, perhaps without having to write the
hostnames both in the external file and in squid.conf itself?


Yes there is :-)   cache_peer_access.

I use it roughly like this:

  cache_peer  name=serverA
  acl serverA dstdomain /etc/squid/serverA-domains
  http_access allow serverA
  cache_peer_access serverA allow serverA
  cache_peer_access serverA deny all

... repeat for each host.

You can then automate the squid.conf generation if you need.
Any alterations to the data files still require a squid 
reload/reconfigure to activate.
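
A hedged sketch of such generation, assuming one /etc/squid/<name>-domains
file per backend plus a peers.txt of "name ip" pairs (all paths hypothetical):

  #!/bin/sh
  # Emit one cache_peer stanza per backend listed in peers.txt
  # ("name ip", one pair per line); paste the output into squid.conf.
  while read name ip; do
    echo "cache_peer $ip parent 80 0 no-query originserver login=PASS name=$name"
    echo "acl $name dstdomain \"/etc/squid/$name-domains\""
    echo "http_access allow $name"
    echo "cache_peer_access $name allow $name"
    echo "cache_peer_access $name deny all"
    echo
  done < /etc/squid/peers.txt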


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Transparent proxy from different networks

2008-10-07 Thread Amos Jeffries

Matus UHLAR - fantomas wrote:

I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
from 192.168.1.0/24 can browse the Internet without problems thanks to a
REDIRECT rule in my shorewall config.

But users from different networks (192.168.2.0/24, 192.168.3.0/24,
etc.) can't browse the Internet. Those networks are connected to
192.168.1.0/24 via a VPN connection.

My redirect rule in iptables syntax is like this:

iptables -t nat -A PREROUTING -s 0.0.0.0/24 -i eth2 -p tcp --dport 80 -j
REDIRECT --to-ports

Is there a restriction on transparent proxying for networks other
than 192.168.1.0/24? Do I have to configure squid to listen on
each range of network addresses?


On 07.10.08 16:09, Amos Jeffries wrote:

Your current rule is restricting the REDIRECT to a specific interface and
a 0.0.0.0 source. Not sure how that 0.0.0.0 bit works.


It probably has to be 0.0.0.0/0, which matches ALL IPs. 0.0.0.0/24 matches
only 0.0.0.*, which is nearly the same as nothing.



Can we get any confirmation on that? Because I thought the -s meant 
source IP, and the 0.0.0.0/8 range is invalid bogon space. It only makes 
sense, as you say, as an inverted mask.


The issue could be the eth2 setting.
Or if you are right about the 0.0.0.0/24, Matus, that bit may need 
changing to 0.0.0.0/16 or similar to catch more subnets.
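
For the concrete case in this thread, a hedged rewrite (3128 is the port
Squid listens on per the original post; dropping the -i eth2 restriction
assumes the VPN traffic arrives on other interfaces):

  # 192.168.0.0/16 covers 192.168.1.0/24, 192.168.2.0/24, 192.168.3.0/24, ...
  iptables -t nat -A PREROUTING -s 192.168.0.0/16 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128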



Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Transparent proxy from different networks

2008-10-07 Thread Matus UHLAR - fantomas
  I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
  from 192.168.1.0/24 can browse the Internet without problems thanks to a
  REDIRECT rule in my shorewall config.
 
  But users from different networks (192.168.2.0/24, 192.168.3.0/24,
  etc.) can't browse the Internet. Those networks are connected to
  192.168.1.0/24 via a VPN connection.
 
  My redirect rule in iptables syntax is like this:
 
  iptables -t nat -A PREROUTING -s 0.0.0.0/24 -i eth2 -p tcp --dport 80 -j
  REDIRECT --to-ports
 
  Is there a restriction on transparent proxying for networks other
  than 192.168.1.0/24? Do I have to configure squid to listen on
  each range of network addresses?

On 07.10.08 16:09, Amos Jeffries wrote:
 Your current rule is restricting the REDIRECT to a specific interface and
 a 0.0.0.0 source. Not sure how that 0.0.0.0 bit works.

It probably has to be 0.0.0.0/0, which matches ALL IPs. 0.0.0.0/24 matches
only 0.0.0.*, which is nearly the same as nothing.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
(R)etry, (A)bort, (C)ancer


Re: [squid-users] Cache settings per User Agent?

2008-10-07 Thread howard chen
Hello,

On Tue, Oct 7, 2008 at 8:11 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 Best done by the origin server using the Vary header and Cache-Control:
 max-age..


It can't, since that would make my Squid cache the page for normal
users. It should not be cached for normal requests. Robots don't need the
most up-to-date result and don't need personalized content, etc.

I only want to cache if, and only if, the UA is a robot. Squid will
then answer the request from cache and the robots will not hit my backend.


[squid-users] WCCP and Squid both through Linux

2008-10-07 Thread Johnson, S
Does anyone know of a good HowTo on running WCCP and Squid together?
(Specifically running WCCP on the linux box itself and not a Cisco
router.)

 Thanks 
   Scott


RE: [squid-users] auth_param basic children

2008-10-07 Thread Andrew Struiksma
 When Squid needs to authenticate a user their details are
 passed to the auth helper. It then waits (doing other stuff
 meanwhile) for the helper to send back its result.

 There are two things which affect performance.

   A) children - number of helpers squid can send data to.

   B) helper concurrency - number of requests squid is allowed
 to queue up for a single helper.

OK, but I'm still not 100% sure when the helper is actually called. Is the helper 
only used when a user is prompted for a password, or is there an authentication 
process that takes place for each request?

My thought is that our LAN users, since they are never forced to authenticate, 
are not affected by the children parameter, but I wanted to double-check.

Thanks,

Andrew


[squid-users] Bandwidth shaping is lost when 'squid -k reconfigure' is executed

2008-10-07 Thread Murilo Opsfelder Araújo
Hi squid users,

I'm using delay pools to shape the bandwidth from some ip addresses.

If a user starts to download some file from the Internet, the download
falls into the shaping,
i.e., the delay pools are working as expected.

Whenever a 'squid -k reconfigure' is executed during the user's download, the
shaping is lost, i.e., the download proceeds at full speed.

Here are my delay pool settings in squid.conf:

delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 8000/8000
acl pool_1_acl src 192.168.8.100 10.10.10.0/255.255.255.0
delay_access 1 allow pool_1_acl
delay_access 1 deny all

I'm using an old version of squid cache, as you can see in the output
of the command
'squid -v':

Squid Cache: Version 2.5.STABLE12-20060202
configure options:  --prefix=/usr --exec_prefix=/usr --enable-snmp
--sysconfdir=/etc/squid
--enable-icmp --enable-underscores --datadir=/etc/squid --enable-linux-netfilter
--enable-auth=ntlm,basic --enable-ssl --enable-delay-pools

I'd like to know if this situation is normal or if it was a bug
corrected in newer versions
of squid cache.

Thanks in advance.

Murilo.


[squid-users] Web accelerator settings for multiple sites on different servers

2008-10-07 Thread Odiobill

Hi, I configured an OpenVZ cluster for my company, where there is a VE with a
public IP address that provides web acceleration for other VEs that are
running on a different network, with private addresses.

To implement it, I used a configuration similar to this one in my
squid.conf:

-- CUT HERE --
http_port 80 accel vhost
http_port 3128

acl our_network src 192.168.10.0/24
http_access allow our_network

acl our_domains dstdomain /etc/squid/sites.conf
http_access allow our_domains

http_access deny all

cache_peer 192.168.10.100 parent 80 0 no-query originserver login=PASS
name=100
cache_peer_domain 100 1st.example.org 3rd.example.org

cache_peer 192.168.10.101 parent 80 0 no-query originserver login=PASS
name=101
cache_peer_domain 101 2nd.example.org 4th.example.org 5th.example.org
-- CUT HERE --

/etc/squid/sites.conf lists every hostname referred to by cache_peer_domain
directives.

The problem is that we have ten virtual servers to accelerate, and some of
these host 20 or 30 websites each.
Is there a better way to accomplish this, perhaps without having to write the
hostnames both in the external file and in squid.conf itself?

Thanks,

//Davide

-- 
View this message in context: 
http://www.nabble.com/Web-accelerator-settings-for-multiple-sites-on-different-servers-tp19854704p19854704.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Cache_dir more than 10GB

2008-10-07 Thread Rafael Gomes
So, it is very risky.

A user may get an old page after a crash and journal recovery.

Thanks.

On Tue, Oct 7, 2008 at 6:54 AM, Matus UHLAR - fantomas
[EMAIL PROTECTED] wrote:
  Yes, but using data=writeback is not tuning, it is risking data. Using that on
  a squid cache dir may require cleaning the cache_dir after each crash; otherwise
  you risk providing invalid data

 On 06.10.08 22:06, Rafael Gomes wrote:
 What does the data=writeback option really do?

 see man mount(8), tune2fs(8).

 "This may increase throughput, however, it may allow old data to appear in
 files after a crash and journal recovery."

 --
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 WinError #9: Out of error messages.




-- 
Rafael Gomes
Consultor em TI
Embaixador Fedora
LPIC-1
(71) 8709-1289


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
On Sat, Oct 04, 2008 at 12:55:15PM -0400, Chris Nighswonger wrote:
 On Tue, Sep 30, 2008 at 6:13 PM, Dave Dykstra [EMAIL PROTECTED] wrote:
  On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
   I am running squid on over a thousand computers that are filtering data
   coming out of one of the particle collision detectors on the Large
   Hadron Collider.
 
 A bit off-topic here, but I'm wondering if these squids are being used
 in CERN's new computing grid? I noticed Fermi was helping out with
 this. 
 (http://devicedaily.com/misc/cern-launches-the-biggest-computing-grid-in-the-world.html)

The particular squids I was talking about are not considered to be part
of the grid, they're part of the High-Level Trigger filter farm that
is installed at the location of the CMS detector.  There are other
squids that are considered to be part of the grid, however, at each of
the locations around the world where CMS collision data is being
analyzed.  I own the piece of the software involved in moving detector
alignment & calibration data from CERN out to all the processors at all
the collaboration sites, which is needed to be able to understand the
collision data.  This data is on the order of 100MB but needs to get
sent to all the analysis jobs (and some of it changes every day or so),
unlike the collision data which is much larger but gets sent separately
to individual processors.  The software I own converts the data from a
database to http where it is cached in squids and then converts the data
from http to objects in memory.  The home page is frontier.cern.ch.

That article is misleading, by the way; the very nature of a computing
grid is that it doesn't belong to a single organization, so it's not
CERN's new computing grid.  It is a collaboration of many
organizations; many different organizations provide the computing
resources, and many different organizations provide the software that
controls the grid and the software that runs on the grid.

- Dave


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
Mark,

Thanks for that suggestion.  I had independently come to the same idea,
after posting my message, but haven't yet had a chance to try it out.  I
currently have hierarchies of cache_peer parents but stop the hierarchies
just before the last step to the origin servers because they were
selected by the host  port number in the URLs.  The origin servers have
their own squids configured in accelerator mode so I think I will just
extend the hierarchies all the way to them and let the squids (the ones
which were formerly the top of the hierarchies) take care of detecting
when an origin server goes down (using the cache_peer monitorurl
option).  I did a little experiment and found out that it doesn't matter
what the host and port number are in a URL if the top of a cache_peer
parent hierarchy is an accelerator mode squid, so I don't think I'll
even have to change the application.
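
A hedged sketch of what those parent definitions might look like (hostnames
and the monitor URL are placeholders):

  cache_peer origin1.example.org parent 80 0 no-query originserver name=o1 monitorurl=http://origin1.example.org/alive
  cache_peer origin2.example.org parent 80 0 no-query originserver name=o2 monitorurl=http://origin2.example.org/alive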

- Dave


On Fri, Oct 03, 2008 at 11:21:19AM +1000, Mark Nottingham wrote:
 Have you considered setting squid up to know about both origins, so it  
 can fail over automatically?
 
 
 On 26/09/2008, at 5:04 AM, Dave Dykstra wrote:
 
 I am running squid on over a thousand computers that are filtering data
 coming out of one of the particle collision detectors on the Large
 Hadron Collider.  There are two origin servers, and the application
 layer is designed to try the second server if the local squid returns a
 5xx HTTP code (server error).  I just recently found that before squid
 2.7 this could never happen because squid would just return stale data
 if the origin server was down (more precisely, I've been testing with
 the server up but the listener process down so it gets 'connection
 refused').  In squid 2.7STABLE4, if squid.conf has 'max_stale 0' or if
 the origin server sends 'Cache-Control: must-revalidate' then squid will
 send a 504 Gateway Timeout error.  Unfortunately, this timeout error
 does not get cached, and it gets sent upstream every time no matter what
 negative_ttl is set to.  These squids are configured in a hierarchy
 where each feeds 4 others so loading gets spread out, but the fact that
 the error is not cached at all means that if the primary origin server
 is down, the squids near the top of the hierarchy will get hammered with
 hundreds of requests for the server that's down before every request
 that succeeds from the second server.
 
 Any suggestions?  Is the fact that negative_ttl doesn't work with
 max_stale a bug, a missing feature, or an unfortunate interpretation of
 the HTTP 1.1 spec?
 
 By the way, I had hoped that 'Cache-Control: max-stale=0' would work the
 same as squid.conf's 'max_stale 0' but I never see an error come back
 when the origin server is down; it returns stale data instead.  I wonder
 if that's intentional, a bug, or a missing feature.  I also note that
 the HTTP 1.1 spec says that there MUST be a Warning 110 (Response is
 stale) header attached if stale data is returned and I'm not seeing
 those.
 
 - Dave


[squid-users] AD groups / wbinfo_group.pl problem

2008-10-07 Thread Jakob Curdes

Hi,

When trying to set up NTLM authentication against an AD controller, I ran 
into an issue with testing Windows group membership.


Here's what works:
- authorizing against AD controller via winbindd and ntlm_auth helper 
from samba package

i.e. without group restrictions the authorization works

- testing group membership with wbinfo_group.pl via the command line:

[EMAIL PROTECTED] libexec]# ./wbinfo_group.pl
DOMAIN+guest DOMAIN+WebEnabled
ERR
DOMAIN+service DOMAIN+WebEnabled
OK

What does not work is letting squid check the group membership.
Here are the relevant conf settings:

external_acl_type nt_group ttl=0 concurrency=5 %LOGIN 
/usr/local/squid/libexec/wbinfo_group.pl -d

acl WebEnabled  external nt_group WebEnabled
acl allowed_users proxy_auth REQUIRED
(...)
http_access allow WebEnabled
http_access allow allowed_users
http_access deny all

What happens in cache.log is (wbinfo_group.pl debug is on) :
[2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(63)
 Got NTLMSSP neg_flags=0xa208b207
[2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(739)
 Got user=[guest] domain=[DOMAIN] workstation=[WS1] len1=24 len2=24
[2008/10/07 18:30:57, 3] libsmb/ntlmssp_sign.c:ntlmssp_sign_init(338)
 NTLMSSP Sign/Seal - Initialising with flags:
[2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(63)
 Got NTLMSSP neg_flags=0xa2088205
Got 0 guest2 WebEnabled from squid
Could not convert sid S- to gid
User:  -0-
Group: -guest-
SID:   -
GID:   --
Could not get groups for user 0
Sending OK to squid
2008/10/07 18:30:58| helperHandleRead: unexpected reply on channel -1 
from nt_group #1 'OK'


Why is squid not able to look up the groups if wbinfo on the command line 
can? I changed the permissions of the winbindd_privileged directory to 
match the squid effective group.


Any ideas ?

Regards,
Jakob


Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
Henrik,

Thanks so much for your very informative reply!

On Thu, Oct 02, 2008 at 12:31:03PM +0200, Henrik Nordstrom wrote:
 By default Squid tries to use a parent 10 times before declaring it
 dead.

Ah, I never would have guessed that I needed to try 10 times before
negative_ttl would take effect for a dead host.  That wouldn't be
bad at all.

I just tried this now by having two squids, one a cache_peer parent of
the other.  I requested a URL while the origin server was up in order to
load the cache, with a CC max-age of 180.  Both squids have max_stale 0
and negative_ttl of 3 minutes.  Next, I put the origin server name as an
alias for localhost in /etc/hosts on both machines the squids were on,
so they both see connection refused when they try to connect to the
origin server.  I also restarted nscd and did squid -k reconfigure to
make sure the new host name was seen by squid.  After the (small) object
in the cache expired, I retried the request 20 times in a row.  Every
time I still saw the request get sent from the child squid to the parent
squid and return a 504 error.  This is unexpected to me; is it to you,
Henrik?  I would have thought the 504 error would get cached for three
minutes after the tenth try.

 Each time Squid retries a request it falls back on the next possible
 path for forwarding the request. What that is depends on your
 configuration. In normal forwarding without never_direct there are usually
 at most two selected active paths: the selected peer, if
 any, plus going direct. In accelerator mode or with never_direct more peers
 are selected as candidates (one sibling, and all possible parents).
 
 These retries happens on
 
 * 504 Gateway Timeout  (including local connection failure)
 * 502 Bad gateway
 
 or if retry_on_error is enabled also on
 
 * 401 Unauthorized
 * 500 Server Error
 * 501 Not Implemented
 * 503 Service not available
 
 Please note that there is a slight name confusion relating to max-stale.
 Cache-Control: max-stale is not the same as the squid.conf directive. 
 
 Cache-Control: max-stale=N is a permissive request directive, saying
 that responses up to the given staleness is accepted as fresh without
 needing a cache validation. It's not defined for responses.
 
 The squid.conf setting is a restrictive directive, placing an upper
 limit on how stale content may be returned if cache validations fail.
 
 The Cache-Control: stale-if-error response header is equivalent to the
 squid.conf max-stale setting, and overrides squid.conf.

That's very good to know.  I didn't see that in the HTTP 1.1 spec, but
I see that Mark Nottingham submitted a draft protocol extension with
this feature.
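
For concreteness, a hedged example of the response-header form (the numbers
are arbitrary): serve fresh for 60 seconds, but fall back to a stale copy for
up to a day if revalidation fails with an error:

  Cache-Control: max-age=60, stale-if-error=86400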

 The default for stale-if-error if not specified (and squid.conf
 max-stale) is infinite.
 
 Warning headers is not yet implemented by Squid. This is on the todo.

Sounds good.

- Dave


Re: [squid-users] Cache settings per User Agent?

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 23:02 +0800, howard chen wrote:

 On Tue, Oct 7, 2008 at 8:11 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
  Best done by the origin server using the Vary header and Cache-Control:
  max-age..
 
 
  It can't, since that would make my Squid cache the page for normal
  users.

No, it won't.

Your server, when seeing a request from one of the spiders, responds with:

Vary: user-agent
Cache-Control: max-age=72000

Or whatever age you want for the search content..

If you can throw an ETag or Last-Modified in there as well, and have the
server react reasonably on If-None-Match / If-Modified-Since conditional
requests, then even better, but that's a bonus.

If the request is NOT from one of these spiders, you respond with
personalized content and headers looking like:

Vary: user-agent
Cache-Control: private
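
A quick hedged way to verify the split through the proxy (proxy address and
UA strings are illustrative; watch the X-Cache header in the reply):

  # as a spider -- the second request should show X-Cache: HIT
  curl -s -D - -o /dev/null -x localhost:3128 -A 'Googlebot/2.1' 'http://www.example.com/product.php?id=123'

  # as a normal browser -- Cache-Control: private keeps it uncacheable
  curl -s -D - -o /dev/null -x localhost:3128 -A 'Mozilla/5.0' 'http://www.example.com/product.php?id=123'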

Regards
Henrik




RE: [squid-users] auth_param basic children

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 08:19 -0700, Andrew Struiksma wrote:

 OK, but I'm still not 100% sure when the helper is actually called. Is the
 helper only used when a user is prompted for a password, or is there an
 authentication process that takes place for each request?

The helper is called each time Squid sees a new username:password
combination, or if the previous results have expired (auth cache expiry
time is set by auth_param basic credentialsttl=NN parameter, default 2
hours)
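
A sketch of the relevant knobs together (helper path illustrative):

  auth_param basic program /usr/libexec/squid/ncsa_auth /etc/squid/passwd
  auth_param basic children 10
  auth_param basic credentialsttl 2 hours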

 My thought is that our LAN users, since they are never forced to 
 authenticate, are not affected by the children parameter, but I wanted to 
 double-check.

If LAN users are allowed access without requesting authentication then
Squid naturally won't query the helpers either as there is no
login:password combination to verify.

Regards
Henrik




Re: Re[2]: [squid-users] Squid dying

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 14:31 +0200, Dietmar Braun wrote:
 Thursday, October 2, 2008, 12:44:01 PM, you wrote:
  Bug 2447:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2447
 
  Not really sure it's the same bug, but perhaps related.
  Please get a stack trace of your crash and file a new bug report.
 
 Until now with the patch applied, the two systems are running
 stable... so I am at the moment unable to get a trace of any crash...
 ;)

Good.

Regards
Henrik




Re: [squid-users] WCCP and Squid both through Linux

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 09:45 -0500, Johnson, S wrote:
 Does anyone know of a good HowTo on running WCCP and Squid together?
 (Specifically running WCCP on the linux box itself and not a Cisco
 router.)

Normally you don't run WCCP in such setups. Instead use LVS + ldirectord
if you need to have the linux box balance traffic on multiple cache
servers (possibly including itself).

Regards
Henrik




Re: [squid-users] Cache_dir more than 10GB

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 12:46 -0300, Rafael Gomes wrote:
 So, it is very risky.
 
 A user may get an old page after a crash and journal recovery.

It's worse. The user may get a corrupted page with content mixed from
various old files after a system crash and journal recovery.

With the default data=ordered this is avoided entirely, but at the
expense of somewhat higher memory pressure under high load and somewhat
lower write performance.

Regards
Henrik




Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 11:49 -0500, Dave Dykstra wrote:

 Ah, I never would have guessed that I needed to try 10 times before
 negative_ttl would take effect for a dead host.  That wouldn't be
 bad at all.

You don't. Squid does that for you automatically. 

 time I still saw the request get sent from the child squid to the parent
 squid and return a 504 error.  This is unexpected to me; is it to you,
 Henrik?  I would have thought the 504 error would get cached for three
 minutes after the tenth try.

Agreed.


  The Cache-Control: stale-if-error response header is equivalent to the
  squid.conf max-stale setting, and overrides squid.conf.
 
 That's very good to know.  I didn't see that in the HTTP 1.1 spec, but
 I see that Mark Nottingham submitted a draft protocol extension with
 this feature.

Correct.

Regards
Henrik




Re: [squid-users] How get negative cache along with origin server error?

2008-10-07 Thread Dave Dykstra
On Tue, Oct 07, 2008 at 08:38:12PM +0200, Henrik Nordstrom wrote:
 On tis, 2008-10-07 at 11:49 -0500, Dave Dykstra wrote:
 
  Ah, I never would have guessed that I needed to try 10 times before
  negative_ttl would take effect for a dead host.  That wouldn't be
  bad at all.
 
 You don't. Squid does that for you automatically. 

I meant, in my testing I needed to try 10 times to see if negative_ttl
caching was working.  Or are you saying that squid tries to contact the
origin server 10 times on the first request before it even returns the
first 504?  I thought you meant it kept track of the number of client
attempts and should start caching it after 10 failures.

  time I still saw the request get sent from the child squid to the parent
  squid and return a 504 error.  This is unexpected to me; is it to you,
  Henrik?  I would have thought the 504 error would get cached for three
  minutes after the tenth try.
 
 Agreed.

Ok, then I will file a bugzilla report.

Meanwhile, I believe I have a workaround as I discussed in another post
on this thread
http://www.squid-cache.org/mail-archive/squid-users/200810/0171.html

Thanks,

- Dave


[squid-users] storeDirWriteCleanLogs() blocking queries

2008-10-07 Thread Chris Woodfield

Hi,

We've been noticing lately that the logrotation process is taking  
longer and longer as our caches fill up - currently, with ~18 million  
on-disk objects, we've seen it take as long as 12 seconds, during  
which time squid is not answering queries.


Searching on this issue found the following prior thread on this:

http://www.mail-archive.com/squid-users@squid-cache.org/msg24326.html

Is this still the case that the storeDirWriteCleanLogs() function is  
expected to take this long when the cache_dirs get this large? Is  
there anything that can be done to mitigate this? The issue is that we  
rotate logs fairly frequently (multiple times per hour), which  
amplifies this issue.


As a workaround, we may disable the storeDirWriteCleanLogs() in  
mainRotate() and trigger it on a different signal instead. Sound like  
a reasonable workaround? If so, what should the maximum time between  
rotating swap.state be?


-Chris


[squid-users] cache performance: flash drive substitute vs. fast hard drive

2008-10-07 Thread Chuck Kollars
Anybody have performance experience (or benchmark results) putting Squid's 
cache on a Flash Drive? 

Devices that plug into a disk cable but that contain only what you'd find in a 
thumb drive are available. They have zero latency and they have much faster 
transfer speed than a moving disk. On the other hand they don't have any 
internal cache memory; even small repetitive accesses always go directly to the 
flash memory. (A regular hard drive typically has 4-32MB cache memory, so 
although overall access is only as fast as the disk spins, a few repetitive 
accesses can be very fast.) How do these two opposing tendencies (better 
average transfer rate but no internal cache memory) net out with Squid's cache 
access pattern?

For a Squid cache, am I better off buying a small but really fast hard drive, 
or one of these flash drive substitutes? 

-Chuck Kollars


  


Re: [squid-users] cache performance: flash drive substitute vs. fast hard drive

2008-10-07 Thread Joel Jaeggli
Chuck Kollars wrote:
 Anybody have performance experience (or benchmark results) putting
 Squid's cache on a Flash Drive?
 
 Devices that plug into a disk cable but that contain only what you'd
 find in a thumb drive are available. They have zero latency and they

They have no rotational latency; however, they're far from zero-latency
devices. The fastest examples you can get now are in the ~80-100usec
range instead of the 8-15ms range.

 have much faster transfer speed than a moving disk. 

Only in high-end parts. Many of the ones you see in laptops are
actually quite a bit slower than high-end winchester disks.

 On the other hand
 they don't have any internal cache memory;

That's not a generalization that can be made; some enterprise models need
battery- or capacitor-backed write caches to order write/erase cycles for
wear leveling. In general there's little point in having a read cache;
however, in places where it makes sense, some devices in fact do have one.

 even small repetitive
 accesses always go directly to the flash memory. 

Highly repetitive or extremely fragmented writes may be treated differently
by the controller's state machine, e.g. by block shadowing, so that large
regions don't have to be constantly rewritten for small writes.

 (A regular hard
 drive typically has 4-32MB cache memory, so although overall access
 is only as fast as the disk spins, a few repetitive accesses can be
 very fast.) How do these two opposing tendencies (better average
 transfer rate but no internal cache memory) net out with Squid's
 cache access pattern?

You're going to have to benchmark a particular variant in order to come
to grips with how that nets out. The 16GB SATA SSDs I'm using from
last year in some security appliances are 1/2 the speed reading and
1/4 the speed writing compared to an analogous 10k RPM 2.5" SAS disk in the
same box. Compared to a 4200 RPM Fujitsu ruggedized disk on the same
platform they are faster. Looking at the Intel X25-M SATA disk you can
see what a difference a year makes.

 For a Squid cache, am I better off buying a small but really fast
 hard drive, or one of these flash drive substitutes?

The other part of the equation is that the SSD is still around an order of
magnitude or more costly per gigabyte than the SAS/SATA winchester
drive, which is non-trivial when you're talking $700 or so for 80GB of
genuinely faster flash. If the alternative were buying 7x300GB 10k RPM
SAS disks, the flash route is a lot spendier for the equivalent capacity.

 -Chuck Kollars
 
 
 
 



Re: [squid-users] storeDirWriteCleanLogs() blocking queries

2008-10-07 Thread Henrik Nordstrom
On tis, 2008-10-07 at 15:25 -0400, Chris Woodfield wrote:

 
 We've been noticing lately that the logrotation process is taking  
 longer and longer as our caches fill up - currently, with ~18 million  
 on-disk objects, we've seen it take as long as 12 seconds, during  
 which time squid is not answering queries.

Yes..

 Is this still the case that the storeDirWriteCleanLogs() function is  
 expected to take this long when the cache_dirs get this large?

Time is expected to grow with size, yes.

  Is  
 there anything that can be done to mitigate this?

Disable swap if you have any, to make sure the index data never gets
swapped out. You generally don't need swap in a Squid server.

 The issue is that we  
 rotate logs fairly frequently (multiple times per hour), which  
 amplifies this issue.

Indeed.

 As a workaround, we may disable the storeDirWriteCleanLogs() in  
 mainRotate() and trigger it on a different signal instead. Sound like  
 a reasonable workaround?

Yes.

 If so, what should the maximum time between  
 rotating swap.state be?

Depends on the churn rate in your caches. The more that has changed in
the cache since swap.state was last cleaned, the longer Squid will take
on an unscheduled restart.

swap.state is not used during normal operations, only as a journal of
the cache contents in case Squid gets restarted with no chance of
writing out a clean index (power failure, kernel panic, kill -9, double
seg fault, etc...)

Regards
Henrik




[squid-users] squid memory usage and SNMP

2008-10-07 Thread Leonardo Rodrigues Magalhães


   Hello guys,

   From cachemgr.cgi, General Runtime Information, I have among other
information:


Memory usage for squid via mallinfo():
Total space in arena:   2780 KB
Ordinary blocks:        2437 KB     26 blks
Small blocks:           2780 KB      0 blks
Holding blocks:            0 KB      0 blks
Free Small blocks:         0 KB
Free Ordinary blocks:    342 KB
Total in use:           5217 KB 188%
Total free:              342 KB  12%
Total size:             2780 KB
Memory accounted for:
Total accounted:         199 KB
memPoolAlloc calls: 4146753
memPoolFree calls:  4145285



   With SNMP I can already grab the 'Total accounted' value, but I
would like the real memory allocated by Squid, which seems to be the
'Total in use' value.


   Is it possible to grab that value using SNMP queries? I have
searched mib.txt from Squid 2.6 and Squid 2.7, but there's no reference
to memory other than the 'Total accounted' value.
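
For reference, a hedged example of the query that does work (MIB path,
community string and port are local assumptions; cacheMemUsage is the
'Total accounted' figure, in KB):

  snmpget -v2c -c public -m /usr/share/squid/mib.txt localhost:3401 cacheMemUsage.0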




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






Re: [squid-users] Cache settings per User Agent?

2008-10-07 Thread Amos Jeffries
 Hello,

 On Tue, Oct 7, 2008 at 8:11 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 Best done by the origin server using the Vary header and Cache-Control:
 max-age..


 It can't, since that would make my Squid cache the page for normal
 users. It should not be cached for normal requests. Robots don't need the
 most up-to-date result and don't need personalized content, etc.

Robots DO need the latest version of your page. How can people find your
new content if it's not indexed on search engines?

True about personalized content, but that's done by the web server,
right? So it can send a generic page with correct Vary, ETag, and Cache-Control
for robots and all other unknown agents.


 I only want to cache if, and only if, the UA is a robot. Squid will
 then answer the request from cache and the robots will not hit my backend.


Any idea how many robots there are on the web? I've found it FAR better on
bandwidth and processing to have a generic default version of a page that
unknowns get, personalizing only for known agents who can be personalized
accurately.

Amos




Re: [squid-users] storeDirWriteCleanLogs() blocking queries

2008-10-07 Thread Amos Jeffries
 On tis, 2008-10-07 at 15:25 -0400, Chris Woodfield wrote:


 We've been noticing lately that the logrotation process is taking
 longer and longer as our caches fill up - currently, with ~18 million
 on-disk objects, we've seen it take as long as 12 seconds, during
 which time squid is not answering queries.

 Yes..

 Is this still the case that the storeDirWriteCleanLogs() function is
 expected to take this long when the cache_dirs get this large?

 Time is expected to grow with size, yes.

  Is
 there anything that can be done to mitigate this?

 Disable swap if you have any, to make sure the index data never gets
 swapped out. You generally don't need swap in a Squid server.

 The issue is that we
 rotate logs fairly frequently (multiple times per hour), which
 amplifies this issue.

 Indeed.

 As a workaround, we may disable the storeDirWriteCleanLogs() in
 mainRotate() and trigger it on a different signal instead. Sound like
 a reasonable workaround?

 Yes.

How about a global 'skip' timer like that used to silence NAT errors?

if (last_cleanup < squid_curtime - 3600) {   /* at most once per hour */
  last_cleanup = squid_curtime;
  storeDirWriteCleanLogs();
}

That way it doesn't need a new signal, but still gets run at reasonable
regularity. 3600, 86400 or similar maybe.


 If so, what should the maximum time between
 rotating swap.state be?

 Depends on the churn rate in your caches. The more that has changed in
 the cache since swap.state was last cleaned, the longer Squid will take
 on an unscheduled restart.

 swap.state is not used during normal operations, only as a journal of
 the cache contents in case Squid gets restarted with no chance of
 writing out a clean index (power failure, kernel panic, kill -9, double
 seg fault, etc...)

 Regards
 Henrik





Re: [squid-users] Re: cannot browse website

2008-10-07 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
It works!
I used http://wiki.squid-cache.org/SquidFaq/ReverseProxy:
acl our_sites dstdomain your.main.website
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

The question is:

What if I have 2 or more domains?

On Fri, Sep 26, 2008 at 9:14 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

 this is my squid
 using apt-get install squid from latest ubuntu


 [EMAIL PROTECTED]:/home/mirza# squid -v
 Squid Cache: Version 2.6.STABLE18

 Okay, your config file was full of configuration things that only work in
 2.5.

 The wiki pages I sent are correct for your squid 2.6.



 On Fri, Sep 26, 2008 at 2:59 PM, Amos Jeffries [EMAIL PROTECTED]
 wrote:

 Upgrade your Squid. 2.5 is rather broken with interception and
 acceleration
 modes.

 After upgrading to a later squid. Remove the NAT interception hack. These
 two How-To's tell you everything you need to get started.


 For reverse-proxy (accelerating) of websites using Squid:
  http://wiki.squid-cache.org/SquidFaq/ReverseProxy

 For interception of outbound network port 80 traffic:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
 or http://wiki.squid-cache.org/ConfigExamples/Intercept


 Amos
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9






 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9




-- 
-=-=-=-=


[squid-users] Bungled squid.conf line - reply_body_max_size

2008-10-07 Thread Лесовский Алексей

Hello all, and sorry for my English.
I have line in squid.conf
...
reply_body_max_size 1228800 deny asu
...
and when squid starts, I got error
FATAL: Bungled squid.conf line 65: reply_body_max_size 1228800 deny asu
Squid Cache (Version 3.0.STABLE8): Terminated abnormally.

I don't understand why.



[squid-users] SQUID with NTLM prompts password window

2008-10-07 Thread Tanveer Chowdhury
Hi all,

I have set up NTLM authentication with squid-2.6.STABLE20, samba-3.0.10
and winbind. My purpose is to get the username in both the squid and DG
access logs, which I am getting fine. But the problem is that sometimes (not
frequently) IE prompts a pop-up window for authentication, and if it is not
given, i.e. Cancel is pressed, then it gives a message like "Cache access
denied". But if you then press the Refresh button, it loads again
fine.

But if you provide the username and password at the login prompt, it
also works. My question is how to STOP this password-prompting
pop-up window.

Below is the output of /var/log/squid/cache.log when the password window prompts

[2008/09/29 13:39:11, 3] utils/ntlm_auth.c:winbind_pw_check(427)
Login for user [EMAIL PROTECTED] failed due to [Reading winbind
reply failed!]
2008/09/29 13:39:11| The request GET
http://search.live.com/LS/GLinkPing.aspx?/_1_9SE..

Below is my NTLM part of squid.conf file

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

.
...
acl manager proto cache_object
acl authenticated_users proxy_auth REQUIRED
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8

...
.
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost

##http_access deny !Safe_ports
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
#http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
http_access allow authenticated_users

# cat /etc/nsswitch.conf
passwd: compat winbind
group:  compat winbind
shadow: compat

hosts:  files dns wins
networks:   files dns
protocols:  db files
services:   db files
ethers: db files
rpc:db files


# cat /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = DOMAIN.COM

[realms]
DOMAIN.COM = {
 default_domain = DOMAIN.COM
 kdc = abc.domain.com
 kdc = efg.domain.com
 kdc = xx.xx.xx.xx
 kdc = xx.xx.xx.xx
}

[domain_realm]
.kerberos.server = DOMAIN.COM


Re: [squid-users] Bungled squid.conf line - reply_body_max_size

2008-10-07 Thread Amos Jeffries

Лесовский Алексей wrote:

Hello ALL, and Sorry my English.
I have line in squid.conf
...
reply_body_max_size 1228800 deny asu
...
and when squid starts, I got error
FATAL: Bungled squid.conf line 65: reply_body_max_size 1228800 deny asu
Squid Cache (Version 3.0.STABLE8): Terminated abnormally.

I don't understand, why?



The line logged above that fatal should say what is wrong. In this case 
there are no units listed ('deny' is not a known measure of size).


The format of that line is:

  reply_body_max_size size units [acl ...]

ie.

 reply_body_max_size 1228800 bytes deny asu


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9