Re: [squid-users] Information flooded in logfiles

2009-09-17 Thread Henrik Nordstrom
On Wed 2009-09-16 at 06:39 -0700, sandiphw wrote:
 
 Recently I found that the logfiles are flooding with entries like 
 
 access.log
 
 1253094090.451  0 192.168.42.30 TCP_DENIED/407 1725 OPTIONS
 http://ab-desktop/ - NONE/- text/html

Seems that client is running some malfunctioning program.

Hunt it down and fix it.

Regards
Henrik



Re: [squid-users] Information flooded in logfiles

2009-09-17 Thread Henrik Nordstrom
On Wed 2009-09-16 at 06:39 -0700, sandiphw wrote:

 
 Logfiles grow to over a GB within 7 days and squid stops working. We need
 to manually replace these files with new ones. debug_options is set to the
 default. How do I stop this information from filling the logfiles?

These are normal requests and should be logged. The issue is that the
client is not behaving well: it continuously retries the same unsuccessful
request and gets back an authentication-required response each time.

but you don't need store.log. Disable it in squid.conf.

  How can I set
 the maximum size of logfile?

The already mentioned logrotate is a good tool for keeping track of log
file size and automatically pruning data to keep logs at comfortable
levels.
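
For example, a logrotate rule along these lines keeps the logs bounded. This is a sketch only: the paths, schedule, and squid binary location are assumptions, and with logfile_rotate 0 the postrotate signal simply makes Squid reopen its log files while logrotate handles the renaming:

```
# /etc/logrotate.d/squid (sketch; adjust paths and schedule to your setup)
/var/log/squid/access.log /var/log/squid/cache.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```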

Regards
Henrik



[squid-users] How to tell if request is cached

2009-09-17 Thread Matias

Hi!

How can I tell by reading the log files if a certain request is returned 
to the browser from cache or from the internet?



Thanks!



Re: [squid-users] How to tell if request is cached

2009-09-17 Thread Kinkie
On Thu, Sep 17, 2009 at 10:17 AM, Matias matiassu...@gmail.com wrote:
 Hi!

 How can I tell by reading the log files if a certain request is returned to
 the browser from cache or from the internet?

Please see 
http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-2914f3a846d41673d4ae34018142e672b8f258ce

-- 
/kinkie


Re: [squid-users] How to tell if request is cached

2009-09-17 Thread Jeff Pang
2009/9/17 Matias matiassu...@gmail.com:
 Hi!

 How can I tell by reading the log files if a certain request is returned to
 the browser from cache or from the internet?



You can tell the difference by the result code in access.log: HIT-type
codes mean the reply was served from cache, MISS-type codes mean it was
fetched from the origin server.
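
As an illustration, the result code sits in the fourth field of a native-format access.log line. The log lines below are synthetic samples, not taken from a real server:

```python
# Classify synthetic access.log lines (squid native format).
# Field 4 is "RESULT_CODE/HTTP_STATUS"; HIT-type codes were served
# from cache, MISS-type codes were fetched from the origin server.
sample = """\
1253094090.451    120 192.168.42.30 TCP_HIT/200 1725 GET http://example.com/a - NONE/- text/html
1253094091.451    340 192.168.42.30 TCP_MISS/200 9932 GET http://example.com/b - DIRECT/203.0.113.9 text/html
"""

for line in sample.splitlines():
    code = line.split()[3].split("/")[0]   # e.g. "TCP_HIT"
    where = "cache" if "HIT" in code else "origin server"
    print(f"{code}: served from {where}")
```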


Re: [squid-users] Squid 3.1.12 - Parent Proxy and DNS queries

2009-09-17 Thread Silamael
Amos Jeffries wrote:
 This is usually a configuration problem.
 
 Please provide your squid.conf file contents (minus empty and comment
 lines)
 
 Amos

Hello Amos,

Here is our configuration.
Thank you for your help.

-- Matthias
#
# WARNING: Do not edit this file, it has been automatically generated.
#
# Prepends
append_domain .domain.de
unlinkd_program /usr/local/libexec/unlinkd
ipcache_high 95
icp_port 0
ipcache_size 1024
http_port 127.0.0.1:8000
cache_dir ufs /var/squid/cache/cache-8000 100MB 8 16
debug_options ALL,1
server_persistent_connections on
cache_swap_high 95
log_ip_on_direct off
maximum_object_size 2 KB
minimum_direct_hops 4
udp_incoming_address 127.0.0.1
pid_filename /var/squid/logs/squid-8000.pid
ftp_user sq...@domain.de
forwarded_for off
cache_access_log /var/squid/logs/access-8000.log
visible_hostname domaind193.domain.de
client_persistent_connections on
cache_swap_low 90
logfile_rotate 0
ipcache_low 90
cache_effective_user _squid
cache_log /var/squid/logs/cache-8000.log
cache_effective_group _squid
hosts_file none
refresh_pattern . 0 20% 14400
cache_mem 8 MB
cache_store_log none
hierarchy_stoplist cgi-bin ?
error_directory /usr/local/share/squid/errors/de
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localdomain srcdomain domain.de
acl localdst dstdomain .domain.de
acl localhost-dst dst 127.0.0.1/32
# user defined ACLs
always_direct deny all
refresh_pattern .domain.de 0 1% 0
refresh_pattern www.domain.de 0 1% 0
cache_peer 10.254.0.17 parent  0 default no-query
always_direct allow localdst
never_direct allow all

# Authentication

# User options

# Append
acl Dangerous_ports port 7 9 19
acl CONNECT method CONNECT
http_access deny Dangerous_ports
http_access deny manager !localhost
acl SSL_ports port 443 563 881
http_access deny CONNECT !SSL_ports
http_access deny localhost-dst
http_access allow all



Re: [squid-users] Information flooded in logfiles

2009-09-17 Thread Sakhi Louw
Dear sandiphw,

The best option would be to fix the misbehaving request or application
on http://ab-desktop. Log rotation on the squid proxy works best for me,
and if you can, set up a syslog server; this will help ensure that logs
are moved off the production server and reduce downtime on the proxy
server. One more thing that works well for me is Munin (monitoring); I
check it regularly for my servers, and it is especially useful for
identifying disk space, CPU usage, etc.

--
Sakhi

On 9/17/09, Henrik Nordstrom hen...@henriknordstrom.net wrote:
 On Wed 2009-09-16 at 06:39 -0700, sandiphw wrote:


 Logfiles grow to over a GB within 7 days and squid stops working. We need
 to manually replace these files with new ones. debug_options is set to the
 default. How do I stop this information from filling the logfiles?

 These are normal requests and should be logged. The issue is that the
 client is not behaving well: it continuously retries the same unsuccessful
 request and gets back an authentication-required response each time.

 but you don't need store.log. Disable it in squid.conf.

  How can I set
 the maximum size of logfile?

 The already mentioned logrotate is a good tool for keeping track of log
 file size and automatically pruning data to keep logs at comfortable
 levels.

 Regards
 Henrik




-- 
Sakhi Louw
Cell:083 951 7760
Fax: 086 632 3670
sa...@jabber.org
sip:sa...@ekiga.net


Re: [squid-users] Reverse proxy and site2site vpn question

2009-09-17 Thread ilinktech

Thanks Henrik, I figured that out after posting but I appreciate the
assistance anyway =)

Henrik Nordstrom-5 wrote:
 
 On Tue 2009-09-15 at 06:37 -0700, ilinktech wrote:
 
 My question - I know that Squid can do reverse proxy for OWA (I've seen
 the
 examples) but since ISA is already doing the auth / reverse proxy
 functions
 and as all I really need is a way to get the traffic from site1 to site2,
 what would the best configuration be for Squid? Would a generic reverse
 proxy configuration be sufficient or will I need to do something more
 involved? Do I even need Squid in this case?
 
 If your routing for some reason prevents the clients from talking
 directly to the ISA via your VPN tunnel, then I would probably use NAT
 for that case, not proxying.
 
 Regards
 Henrik
 
 
 
-- 
View this message in context: 
http://www.nabble.com/Reverse-proxy-and-site2site-vpn-question-tp25454198p25489913.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] daemon.log - Squid stops responding - LTSP and WinXP clients

2009-09-17 Thread Avinash Rao
On Thu, Sep 17, 2009 at 5:48 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Wed, 16 Sep 2009 17:01:49 +0530, Avinash Rao avinash@gmail.com
 wrote:
 On Wed, Sep 16, 2009 at 4:30 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:
 Avinash Rao wrote:

 On Wed, Sep 16, 2009 at 4:20 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Avinash Rao wrote:

 On Wed, Sep 16, 2009 at 1:49 PM, Avinash Rao avinash@gmail.com
 wrote:

 On Tue, Sep 15, 2009 at 4:46 AM, Henrik Nordstrom
 hen...@henriknordstrom.net wrote:

 On Mon 2009-09-14 at 19:33 +0530, Avinash Rao wrote:

 I am having problems with Squid every day, twice a day to be precise.
 Squid stops responding, and I have to restart the squid service to
 resume service. Even the old cache_mem 100MB setting had the same
 problem; the current setting of cache_dir null /tmp is giving the same
 problem.

 How do I resolve this?

 By first figuring out how and why it's stopping.

 Is the process still running?

 Is there anything in cache.log saying why it stopped?

 Regards
 Henrik


 Hi,

 I don't see anything related to squid in /var/log/messages, but there
 are a lot of entries that say -- MARK -- and a few entries like the
 ones below

 Sep 14 12:13:18 sunbox kernel: [  112.579221] system 00:0e: iomem
 range 0x10 -0xdfff could not be reserved

 Sep 14 12:13:18 sunbox kernel: [    0.00] swsusp: Registered
 nosave memory region: 0009f000 - 000a

 The cache.log is not populated:

 r...@sunbox:/var/log/squid# more cache.log.1
 2009/09/15 08:02:19| storeDirWriteCleanLogs: Starting...
 2009/09/15 08:02:19|   Finished.  Wrote 0 entries.
 2009/09/15 08:02:19|   Took 0.0 seconds (   0.0 entries/sec).
 2009/09/15 08:02:19| logfileRotate: /var/log/squid/store.log
 2009/09/15 08:02:19| logfileRotate: /var/log/squid/access.log
  2009/09/15 14:55:30| storeLocateVary: Not our vary marker object,
  78ED95245D6EA8CBB231047CAEE1E30F = 'http://m1.2mdn.net/879366/flashwrite_1_2.js',
  'accept-encoding=gzip,deflate'/'-'





  2009/09/15 14:59:49| storeLocateVary: Not our vary marker object,
  78ED95245D6EA8CBB231047CAEE1E30F = 'http://m1.2mdn.net/879366/flashwrite_1_2.js',
  'accept-encoding=gzip,deflate'/'-'

  more cache.log
 2009/09/16 07:58:04| storeDirWriteCleanLogs: Starting...
 2009/09/16 07:58:04|   Finished.  Wrote 0 entries.
 2009/09/16 07:58:04|   Took 0.0 seconds (   0.0 entries/sec).
 2009/09/16 07:58:04| logfileRotate: /var/log/squid/store.log
 2009/09/16 07:58:04| logfileRotate: /var/log/squid/access.log
  2009/09/16 08:38:01| httpReadReply: Excess data from GET
  http://webcs.msg.yahoo.com/crossdomain.xml

 Avinash

 Daemon.log
 Here's what the daemon.log file reads if i restart squid.

 Sep 16 15:14:28 sunbox squid[6382]: Preparing for shutdown after
 38603
 requests
 Sep 16 15:14:28 sunbox squid[6382]: Waiting 30 seconds for active
 connections to finish
 Sep 16 15:14:28 sunbox squid[6382]: FD 10 Closing HTTP connection
 Sep 16 15:14:59 sunbox squid[6382]: Shutting down...
 Sep 16 15:14:59 sunbox squid[6382]: FD 11 Closing ICP connection
 Sep 16 15:14:59 sunbox squid[6382]: WARNING: Closing client
 10.10.10.200 connection due to lifetime timeout
 Sep 16 15:14:59 sunbox squid[6382]:



 ^Ihttp://mail.google.com/a/aolcep.org/channel/bind?VER=6it=200825at=xn3j31vv6ibmsnkg4qg2ctmvxmdqruRID=rpcSID=EBB5D5B4EDA6065BCI=0AID=163TYPE=xmlhttpzx=z3okb8-nrvdq2t=1
 Sep 16 15:14:59 sunbox squid[6382]: WARNING: Closing client
 10.10.10.200 connection due to lifetime timeout
 Sep 16 15:14:59 sunbox squid[6382]:



 ^Ihttp://mail.google.com/mail/channel/bind?VER=6it=417745at=xn3j2ue36lbqrpvy8tuyo5qm4yr368RID=rpcSID=ACC227B9816E025DCI=0AID=561TYPE=xmlhttpzx=ag3xlj-tk3pkmt=1
 6Zb3tnokBgp5TovAALAIKL_Q2qeeRC_OTEe6m9cmy-U67iM3IF8j0TdQQ==
 Sep 16 15:14:59 sunbox squid[6382]: storeDirWriteCleanLogs:
 Starting...
 Sep 16 15:14:59 sunbox squid[6382]:   Finished.  Wrote 0 entries.
 Sep 16 15:14:59 sunbox squid[6382]:   Took 0.0 seconds (   0.0
 entries/sec).
 Sep 16 15:14:59 sunbox squid[6382]: Squid Cache (Version
 2.6.STABLE18): Exiting normally.
 Sep 16 15:14:59 sunbox squid[6379]: Squid Parent: child process 6382
 exited with status 0
 Sep 16 15:15:01 sunbox squid[17927]: Creating Swap Directories
 Sep 16 15:15:01 sunbox squid[17931]: Squid Parent: child process
 17934
 started
 Sep 16 15:15:01 sunbox squid[17934]: Starting Squid Cache version
 2.6.STABLE18 for amd64-debian-linux-gnu...
 Sep 16 15:15:01 sunbox squid[17934]: Process ID 17934
 Sep 16 15:15:01 sunbox squid[17934]: With 1024 file descriptors
 available
 Sep 16 15:15:01 sunbox squid[17934]: Using epoll for the IO loop
 Sep 16 15:15:01 sunbox squid[17934]: DNS Socket created at 0.0.0.0,
 port 57386, FD 7
 Sep 16 15:15:01 sunbox squid[17934]: Adding nameserver 192.168.1.1
 from /etc/resolv.conf
 Sep 16 15:15:01 sunbox squid[17934]: User-Agent logging is disabled.
 Sep 16 15:15:01 sunbox squid[17934]: Referer logging is disabled.
 Sep 16 15:15:01 sunbox squid[17934]: Swap maxSize 0 KB, estimated 0
 

Re: [squid-users] acl using Content-Length

2009-09-17 Thread Mikio Kishi
Hi, Amos and Henrik

 The only problem will be objects without any Content-Length, of which there
 are still many.

In this case, I would expect acl MAX100Mbyte to evaluate to false.
What do you think?

Sincerely,

--
Mikio Kishi

On Tue, Sep 15, 2009 at 8:55 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Mon, 14 Sep 2009 22:44:36 +0900, Mikio Kishi mki...@104.net wrote:
 Hi, Leonardo

 Not directly that way; you'll have to use reply_body_max_size
 for that.

 You'll have to define your other ACLs and merge them with
 reply_body_max_size, which takes the maximum size as an argument.

 I'd like to use it to control icap access.

 ACLs for Squid-3 are easily created.  If you are able to sponsor the work
 I'm sure we can get something done soon that uses Content-Length.

 The only problem will be objects without any Content-Length, of which there
 are still many. These will have to be covered by some 'other' setting.

 Amos


 For example,

 acl MAX100Mbyte rep_max_content_length 100M
 icap_service av respmod_precache 1 icap://127.0.0.1:1344/av/respmod
 icap_class respmod av
 icap_access respmod deny MAX100Mbyte
 icap_access respmod allow all

 I can't apply reply_body_max_size to the above.

 Sincerely,

 --
 Mikio Kishi


 On Mon, Sep 14, 2009 at 10:26 PM, Leonardo Rodrigues
 leolis...@solutti.com.br wrote:
 Mikio Kishi escreveu:



 For example

 acl MAX100Mbyte rep_max_content_length 100M



 Is it possible ?


     Not directly that way; you'll have to use reply_body_max_size
 for that.

     You'll have to define your other ACLs and merge them with
 reply_body_max_size, which takes the maximum size as an argument.



 #  TAG: reply_body_max_size     bytes allow|deny acl acl...
 #       This option specifies the maximum size of a reply body in bytes.
 #       It can be used to prevent users from downloading very large files,
 #       such as MP3's and movies. When the reply headers are received,
 #       the reply_body_max_size lines are processed, and the first line with
 #       a result of allow is used as the maximum body size for this reply.
 #       This size is checked twice. First when we get the reply headers,
 #       we check the content-length value.  If the content length value exists
 #       and is larger than the allowed size, the request is denied and the
 #       user receives an error message that says the request or reply
 #       is too large. If there is no content-length, and the reply
 #       size exceeds this limit, the client's connection is just closed
 #       and they will receive a partial reply.
 #
 #       WARNING: downstream caches probably can not detect a partial reply
 #       if there is no content-length header, so they will cache
 #       partial responses and give them out as hits.  You should NOT
 #       use this option if you have downstream caches.
 #
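
To put the quoted documentation into practice, a minimal squid.conf fragment might look like the following. This is a sketch only: the 100 MB figure and the acl name are illustrative, using the "bytes allow|deny acl" form described above:

```
# Cap reply bodies at roughly 100 MB for all clients (sketch).
acl everyone src 0.0.0.0/0.0.0.0
reply_body_max_size 104857600 allow everyone
```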

 --


      Atenciosamente / Sincerily,
      Leonardo Rodrigues
      Solutti Tecnologia
      http://www.solutti.com.br

      Minha armadilha de SPAM, NÃO mandem email
      gertru...@solutti.com.br
      My SPAMTRAP, do not email it






[squid-users] cache store type

2009-09-17 Thread Mikio Kishi
Hi, all

Now, which type of cache store do you recommend
(ufs, aufs, or diskd)?

squid version: 3.1.0.13

Sincerely,

--
Mikio Kishi


Re: [squid-users] Information flooded in logfiles

2009-09-17 Thread sandiphw

Thank you all for the valuable assistance. I am working in a corporate
environment. Squid is installed on a Linux server, and all the desktops/
laptops (Windows) generating these logs connect through a Samba client.
This started very recently, and the requests are coming from hundreds of
clients. We have not installed any new software on any client machine.

Anyhow, I shall try to build a syslog server, but it may take time due to
my limited knowledge. If you can advise me how to cap log sizes through the
squid configuration, it will give me temporary relief.

Regards,

SKS
-- 
View this message in context: 
http://www.nabble.com/Information-flodded-in-logfiles-tp25472578p25491116.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] ntlm on distributed samba PDC system

2009-09-17 Thread Andreas Calvo Gómez
Hi,
I'm trying to set up squid with NTLM to do automated auth based on
windows credentials.
I'm running a samba/openldap PDC server, and squid is in a separate
computer.
Is it necessary to install another samba plus winbind in the machine
that has squid running?

If I run wbinfo -t on the squid machine it works; however, running
gpasswd -a proxy winbindd_priv does not work (obviously, winbind is not
installed there).
When I try to run ntlm_auth --helper-protocol=squid-2.5-basic, it
reports an ERR message, but I don't know where to look for logs (neither
syslog nor the squid logs have any information about it).

So, what I think I'm missing is what should be done when running squid
on a machine without Samba.

Any hints?

Thanks
-
Andreas Calvo Gómez andreas.ca...@admi.esci.es
Dept. Informàtica ESCI
Pg. Pujades, 1 08003 Barcelona
tel. (34) 932954710 ext.233 fax. (34) 932954720
http://www.esci.es
 







Re: [squid-users] ntlm on distributed samba PDC system

2009-09-17 Thread Kinkie
On Thu, Sep 17, 2009 at 5:32 PM, Andreas Calvo Gómez
andreas.ca...@admi.esci.es wrote:
 Hi,
 I'm trying to set up squid with NTLM to do automated auth based on
 windows credentials.
 I'm running a samba/openldap PDC server, and squid is in a separate
 computer.
 Is it necessary to install another samba plus winbind in the machine
 that has squid running?

Only winbindd is needed on the squid system; smbd performs some
additional activities (changing the machine account password) which
can also be performed by scripted calls to the net command.

 If I run wbinfo -t in the squid computer it works, however runnig the
 gpasswd -a proxy winbindd_priv does not work (obviously, it does not
 have the winbind).
 When I try to run the ntlm_auth --helper-protocol=squid-2.5-basic it
 reports an ERR message, but I don't know where to look for logs (nor
 syslog nor squid logs have information about it).

 So, what I think I'm missing is what should be done if I'm running squid
 in a non samba machine.

 Any hints?

These questions are better asked to the Samba user-groups.


-- 
/kinkie


Re: [squid-users] An Old Question: Cache Query/Extraction

2009-09-17 Thread Genaro Flores

One way to do it reasonably efficiently is to track store.log, where you
have [...]


Many thanks for the idea -- store.log itself is pretty much all I need,
since mostly entries from a short interval before the present concern me.
I didn't know it held that much information. RTFM, as they say :-)



There may be some small drift between this shadow database and the
actual content if you miss some log entries, but it's self-healing over
time as the cache content gets replaced.


For my purpose, even that wouldn't matter. A few lost entries in tens of 
thousands is negligible for the use case.


--On Tuesday, September 15, 2009 20:44 +0200 Henrik Nordstrom 
hen...@henriknordstrom.net wrote:



On Tue 2009-09-15 at 18:03 +0100, Genaro Flores wrote:


I guessed so but I was thinking a specialized tool could do the indexing
for whoever wants/needs it. Maybe I'll try making a couple short scripts
for that purpose and for searching the index and retrieving the targets.
I  was wishing somebody had done something similar before :-D


Quite likely someone has built such tools, but I am not aware of any
published on the Internet.

One way to do it reasonably efficiently is to track store.log, where you
have
   - Squid object id
   - URL
   - Mime type
   - time
   - HTTP status
   - last-modified
   - content-length
   - object size
   - expires

and some other small details.

Just feed this into a database keyed by the Squid object id, and indexed
on the relevant pieces of the rest.

There may be some small drift between this shadow database and the
actual content if you miss some log entries, but it's self-healing over
time as the cache content gets replaced.
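
A minimal sketch of that shadow-database idea, using SQLite keyed on the object id. The rows here are synthetic stand-ins for parsed store.log entries, not something from this thread; a real tracker would tail store.log and parse each line into this shape:

```python
# Shadow database keyed by Squid object id, with an index on URL.
import sqlite3

# Synthetic rows: (object_id, url, mime_type, http_status, content_length).
rows = [
    ("78ED95245D6EA8CBB231047CAEE1E30F", "http://example.com/a.js",
     "application/x-javascript", 200, 1725),
    ("11AA22BB33CC44DD55EE66FF00112233", "http://example.com/b.html",
     "text/html", 200, 9932),
]

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE objects (
                  object_id TEXT PRIMARY KEY,
                  url TEXT, mime_type TEXT,
                  http_status INTEGER, content_length INTEGER)""")
db.execute("CREATE INDEX objects_by_url ON objects (url)")
# INSERT OR REPLACE keeps the table self-healing as cache objects get replaced.
db.executemany("INSERT OR REPLACE INTO objects VALUES (?, ?, ?, ?, ?)", rows)

# Query back by URL, which is what the index enables:
(oid,) = db.execute("SELECT object_id FROM objects WHERE url = ?",
                    ("http://example.com/a.js",)).fetchone()
print(oid)
```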

Regards
Henrik








[squid-users] keep cached objects permanently

2009-09-17 Thread Matthew Morgan
Is there a way to configure squid so that objects from a certain domain 
are cached permanently?  I am trying to get a windows update caching 
server up, and I want to make sure the LRU manager doesn't delete any of 
the cached updates.  Other cached data can be deleted as needed.
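
There is no true pin-forever option in Squid of this era, but aggressive refresh_pattern tuning is the usual approach. The fragment below is a sketch only: the domain regex, sizes, and option flags are assumptions for illustration, not advice from this thread, and the replacement policy can still evict objects when the cache fills:

```
# squid.conf sketch: hold Windows Update payloads as long as possible.
maximum_object_size 200 MB
refresh_pattern -i windowsupdate\.com/.*\.(cab|exe|psf) 43200 100% 129600 override-expire ignore-reload
# Everything else keeps a normal policy.
refresh_pattern . 0 20% 4320
```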


Re: [squid-users] Squid Multiple ACL

2009-09-17 Thread ScarEye

I will give the suggestion below a try; thank you for your time.  I am
running OpenWrt, which has Squid.  What I actually did was download the
OpenWrt trunk and modify the Makefile so that I don't need to store log
files.  I believe the switch I used was --enable-storeio=null; then just
compile the image for your platform, depending on what hardware you're
running, and you're good to go.  If you have other questions let me know.

Thanks,
ScarEye


Amos Jeffries-2 wrote:
 
 ScarEye wrote:
 Squid is installed on an embedded device, With 16MB of RAM and 8MB of
 PROM it
 would fill up within a few seconds. 
 
 Cool.
One of my long- to medium-term objectives is to make it easy to build 
 a slimline Squid for these types of environment.  Are you able to share 
 the build options and patches you used to get Squid to run in less than 
 16MB of RAM?
 
 
 We have this one computer that needs access to 3 websites that I don't
 want 192.168.1.2-192.168.1.10 to have access to. The IP of that device
 will be 192.168.1.60. How would I create a separate ACL for that device
 that won't interfere with 192.168.1.2-192.168.1.10?
 
 
 acl specialWebsites dstdomain .website.example.com
 acl specialSrc src 192.168.1.60
 
 adding:
http_access allow specialSrc specialWebsites
http_access deny specialWebsites
 
 above the # users  part of your config.
 
 Amos
 
 Thanks for your time
 
 ScarEye
 
 
 
 Amos Jeffries-2 wrote:

 So

 On Tue, 15 Sep 2009 10:39:11 -0700 (PDT), ScarEye scar...@gmail.com
 wrote:
 #Squid Config Stuff
 cache_access_log none
 cache_store_log none
 cache_log /dev/null
 That log is where you find out what critical and important system errors
 are happening.
 So you as administrator can fix them.

 cache_effective_user nobody
 cache_dir null /dev/null
 http_port 3128 transparent
 pid_filename /var/run/squid.pid
 visible_hostname router
 # Supervisors With Unlimited Access
 ## Match by MAC 
 acl supmac arp /etc/mac.txt
 http_access allow supmac 
 http_reply_access allow supmac
 ## Match By IP 
 acl supip src /etc/supip.txt
 http_access allow supip 
 http_reply_access allow supip
 # users
 acl users src 192.168.1.2-192.168.1.10
 acl allowedsites dstdomain /etc/squid/acl/acl
 http_access allow allowedsites users
 http_access deny !allowedsites users
 deny_info http://www.my-site.com/ users
 http_reply_access allow users
 # Safe Ports
 acl Safe_ports port 80 21 443 563 70 210 1025-65535
 http_access deny !Safe_ports
 # Not Safe Ports
 acl Dangerous_ports port 7 9 19 22 23 25 53 109 110 119
 http_access deny Dangerous_ports
 # Anyone Not Already Matched
 acl all src 0.0.0.0/0.0.0.0
 http_access deny all

 So the above rules work perfectly; they do exactly what I need them to do.
 Now, what I need to do is the following.

 Add a rule to allow an IP 192.168.1.60 to look at a different acl. Like
 acl2
 or something.
 ... huh?



 Amos


 
 
 
 -- 
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE19
Current Beta Squid 3.1.0.13
 
 

-- 
View this message in context: 
http://www.nabble.com/Squid-Multiple-ACL-tp25458501p25492202.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Website problem

2009-09-17 Thread Carlos E. Vargas
Hi list, I'm having this problem at the company where I work as a network
admin. We moved from ISA Server 2004 to a squid proxy server with NTLM
auth + dansguardian + ClamAV + adzapper + WPAD via DHCP on Debian Stable
5.0.

With ISA I can see this website: http://www.sic.gov.do/, which is a
security website from my country, and our security department needs to
access it. But with Squid I see only the head part of the site and not
the body.

This is the error message:

The page cannot be displayed
Explanation: There is a problem with the page you are trying to reach
and it cannot be displayed. 


Try the following:

  * Refresh page: Search for the page again by clicking the Refresh
button. The timeout may have occurred due to Internet
congestion.
  * Check spelling: Check that you typed the Web page address
correctly. The address may have been mistyped.
  * Access from a link: If there is a link to the page you are
looking for, try accessing the page from that link. 



Technical Information (for support personnel)

  * Error Code: 500 Internal Server Error. An internal error
occurred. (1359)

My SQUID contiguration is:

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl Java browser Java/1.4 Java/1.5 Java/1.6
acl cachemgrcgi src 172.20.0.36/32
acl to_localhost dst 127.0.0.0/8
acl ntlm_users proxy_auth REQUIRED
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl QUERY urlpath_regex cgi-bin \? \.css \.asp \.aspx
http_access allow manager localhost
http_access allow manager cachemgrcgi
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow ntlm_users
http_access allow ntlm_users Java
http_access allow localhost Java
http_access allow localhost
http_access deny all
icp_access allow ntlm_users
icp_access deny all
follow_x_forwarded_for allow localhost
follow_x_forwarded_for allow ntlm_users
http_port 172.20.0.36:3128 transparent
cache_peer 127.0.0.1 parent 3128 0 no-query login=*:nopassword
hierarchy_stoplist cgi-bin ?
hierarchy_stoplist jsp asp aspx
cache_mem 16 MB
cache_dir null /dev/null
maximum_object_size 20480 KB
store_avg_object_size 50 KB
cache_swap_low 85
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 5
redirect_rewrites_host_header off
pid_filename /var/run/squid.pid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Package(.gz)*)$ 0      20%     2880
refresh_pattern .               0       20%     4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
header_access WWW-Authenticate deny all
header_access Link deny all
header_access Warning deny all
header_access Via deny all
#header_access User-Agent deny all
header_access Proxy-Connection deny all
header_access X-Forwarded-For deny all
half_closed_clients off
shutdown_lifetime 1 seconds
cache_mgr cvar...@incadr.com
cache_effective_user proxy
cache_effective_group proxy
visible_hostname lnxisaprx01.inca.local
snmp_port 0
snmp_access deny all
icp_port 0
htcp_port 0
error_directory /usr/share/squid/errors/Spanish
acl FTP proto FTP
always_direct allow FTP
no_cache deny QUERY
dns_defnames on
hosts_file /etc/hosts
append_domain .inca.local
ipcache_size 4192
ipcache_low 90
ipcache_high 95
fqdncache_size 3600
forwarded_for off
cachemgr_passwd secret all
coredump_dir /var/spool/squid
pipeline_prefetch on
log_icp_queries on
buffered_logs on

[squid-users] URL issue (Internet explorer vs Firefox)

2009-09-17 Thread Juan Cardoza
Hello All

I have an issue when someone accesses the link below:
http://portal.infonavit.org.mx/wps/portal/TRABAJADORES/!ut/p/c5/04_SB8K8xLLM9MSSzPy8xBz9CP0os3hnd0cPE3MfAwMLfwsLAyM_1wAXIxNvA_dAU30_j_zcVP2CbEdFABfWMig!/dl3/d3/L2dBISEvZ0FBIS9nQSEh/

Through Internet Explorer I get a download prompt for a file that says:
We do not recognize the kind of file, do you want to download it?

But when I try to access the same link through Firefox, the link works.

Also, if I send the request directly to the Internet without the proxy on
the same PC, the link works perfectly.

Does anyone know what is happening?
Do I need to add something else in the proxy to get the correct access?

Thanks 
Best regards
Jhon





RE: [squid-users] deny access with squid_ldap_group

2009-09-17 Thread vincent.blondel


 Hello,

 I am trying to block Internet access for people who are members of one
 specific AD security group called GSIFBENoInternetAccess, but I am
 having an issue with it.

 When I try the squid_ldap_group helper from a shell, the mechanism works
 well: my service account correctly queries our Active Directory and gets
 the right ERR/OK response.

 When I try this mechanism from the squid process, allow/deny works well,
 but before being blocked by squid_ldap_group I also receive an
 authentication popup box. I simply press CANCEL and receive the
 personalized error page.


 I have read on the net that this may come from multiple authentication
 challenges, but I do not see that in my case; if it is the case, please
 explain what is wrong. Is this caused by the ntlmauth line just
 afterwards, and how can I make this work without the authentication box?

Yes it is.


 # my config

 ...
 auth_param ntlm program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 32
 auth_param ntlm keep_alive on
 acl ntlmauth proxy_auth REQUIRED
 ...
 external_acl_type gg_nointernet ttl=3600 children=8 %LOGIN
 /usr/local/bin/squid_ldap_group ... -p 389 -P -t 2 -c 3 -R -S +
 acl GSIFBENoInternetAccess external gg_nointernet
GSIFBENoInternetAccess
 ...

Replace this:

 http_access deny GSIFBENoInternetAccess
 deny_info ERR_LDAP GSIFBENoInternetAccess

with this:

   # maybe needed to force credentials to be present
   #
   http_access deny !ntlmauth

   # do the group checking and custom denial page
   # without another auth popup.
   #
   acl ldapErrPage src all
   deny_info ERR_LDAP ldapErrPage
   http_access deny GSIFBENoInternetAccess ldapErrPage


 http_access allow ntlmauth
 http_reply_access allow all
 http_access deny all

First of all, many thanks for the quick reply.

I tried your proposal and it seems to work. I still have to check that
everything is OK at the LDAP and NTLM level, but it looks fine so far.
About your config, there is something I do not understand.

When I look at what I tried before, I deny all members of the group
GSIFBENoInternetAccess before requesting authentication, so afaik
processing stops after the first line. Is this correct, or am I
misunderstanding something?

http_access deny GSIFBENoInternetAccess
http_access allow ntlmauth
http_reply_access allow all
http_access deny all

When I look at your proposal, what I understand is: the client is first
asked to authenticate (407), then you define an acl matching
everything, you deny all members of GSIFBENoInternetAccess for
everybody (ldapErrPage matches 0.0.0.0/0.0.0.0 in this case), and,
last but not least (this part is not clear to me), you request
credentials for a second time

http_access deny !ntlmauth
acl ldapErrPage src all
deny_info ERR_LDAP ldapErrPage
http_access deny GSIFBENoInternetAccess ldapErrPage
http_access allow ntlmauth
http_reply_access allow all
http_access deny all

in other words ...

Why did you force authentication both before and after the ldap group?
I see ntlmauth two times, so you should authenticate two times for the
same request, right?
Why did you define an acl called ldapErrPage? Is deny_info without
ldapErrPage not enough?

many thks for your answers.



 many thks to help me.
 Vincent.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE19
   Current Beta Squid 3.1.0.13

-
ATTENTION:
The information in this electronic mail message is private and
confidential, and only intended for the addressee. Should you
receive this message by mistake, you are hereby notified that
any disclosure, reproduction, distribution or use of this
message is strictly prohibited. Please inform the sender by
reply transmission and delete the message without copying or
opening it.

Messages and attachments are scanned for all viruses known.
If this message contains password-protected attachments, the
files have NOT been scanned for viruses by the ING mail domain.
Always scan attachments before opening them.
-




Re: [squid-users] Squid + Trendmicro

2009-09-17 Thread Luis Daniel Lucio Quiroz
On Monday 7 September 2009 01:04:49, Amos Jeffries wrote:
 Luis Daniel Lucio Quiroz wrote:
  Hi all,
 
  Well, I have a really big problem. We have deployed a Squid with digest
  auth + LDAP. It was working perfectly, but another department has installed a
  Trendmicro antivirus solution.
 
  The problem is that when the Trendmicro client asks squid to access an
  url, it fails at the first acl related to auth.
 
  My log is this:
  Request:
  2009/09/05 23:56:30.829| parseHttpRequest: Request Header is
  Host: licenseupdate.trendmicro.com:80
  User-Agent: Mozilla/4.0 (compatible;MSIE 5.0; Windows 98)
  Accept: */*
  Pragma: no-cache
  Cache-Control: no-cache,no-store
  Proxy-Authorization: Digest username=avedstrend, realm=XXX,
  nonce=/kCjSgB4/JcCAKLZuWMA, uri
  =http://licenseupdate.trendmicro.com:80/ollu/license_update.aspx?Protoco
 l_version=1AC=OSVMX49VN7GTUMQ8QYQAX
  SGJ72QENXKProduct_Code=OSAP_Name=OCOS=WWLanguage=EProduct_Version=R3
 CnAGQAyAA, response=5bd515897ca2f1
  84b196eae2fafc654a
  Proxy-Connection: Keep-Alive
  Connection: Close
 
 
  Acl who fails:
  2009/09/05 23:56:30.832| ACLChecklist::preCheck: 0x146e1b0 checking
  'http_access deny !plUexception !plU'
  2009/09/05 23:56:30.832| ACLList::matches: checking !plUexception
  2009/09/05 23:56:30.832| ACL::checklistMatches: checking 'plUexception'
  2009/09/05 23:56:30.832| authenticateAuthenticate: no connection
  authentication type
  2009/09/05 23:56:30.832| AuthUserRequest::AuthUserRequest: initialised
  request 0x189cc30
  2009/09/05 23:56:30.832| authenticateValidateUser: Validated Auth_user
  request '0x189cc30'.
  2009/09/05 23:56:30.832| authenticateValidateUser: Validated Auth_user
  request '0x189cc30'.
  FATAL: Received Segment Violation...dying.
 
  As you see, plUexception is failing. This acl is declared as follows:
 
  plUexception acl auth user1
 
 
  I wonder if anyone knows how to fix it.
 
 Segment violation crashes require a code fix.  What release of Squid is
 this?
 
 ... and can you get any stack trace info?
 
 http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d
 
 
 Amos
 
I filed ticket 2773 in Bugzilla; as soon as possible I will send the stack trace.

Thanx

LD


Re: [squid-users] keep cached objects permanently

2009-09-17 Thread Matthew Morgan

Kinkie wrote:

On Thu, Sep 17, 2009 at 7:11 PM, Matthew Morgan atcs.matt...@gmail.com wrote:
  

Is there a way to configure squid so that objects from a certain domain are
cached permanently?  I am trying to get a windows update caching server up,
and I want to make sure the LRU manager doesn't delete any of the cached
updates.  Other cached data can be deleted as needed.



Check refresh_pattern, just make sure you know what you're doing :)
  
I see stuff in refresh_pattern that keeps squid from reloading its 
copy, but does this stop the LRU cleanup routine from deleting it from 
the cache?  Currently I'm using the following lines (copied from an 
online tutorial).  Will they do the job?  It seems like some of the 
updates get cached, then deleted from the cache a few days later.


refresh_pattern windowsupdate.com/.*\.(cab|exe|dll|msi) 10080 100% 43200 
reload-into-ims
refresh_pattern download.microsoft.com/.*\.(cab|exe|dll|msi) 10080 100% 
43200 reload-into-ims
refresh_pattern www.microsoft.com/.*\.(cab|exe|dll|msi) 10080 100% 43200 
reload-into-ims
refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|dll|msi) 4320 
100% 43200 reload-into-ims


  




Re: [squid-users] keep cached objects permanently

2009-09-17 Thread Kinkie
On Thu, Sep 17, 2009 at 10:26 PM, Matthew Morgan atcs.matt...@gmail.com wrote:
 Kinkie wrote:

 On Thu, Sep 17, 2009 at 7:11 PM, Matthew Morgan atcs.matt...@gmail.com
 wrote:


 Is there a way to configure squid so that objects from a certain domain
 are
 cached permanently?  I am trying to get a windows update caching server
 up,
 and I want to make sure the LRU manager doesn't delete any of the cached
 updates.  Other cached data can be deleted as needed.


 Check refresh_pattern, just make sure you know what you're doing :)


 I see stuff in refresh_pattern that keeps squid from reloading its copy,
 but does this stop the LRU cleanup routine from deleting it from the cache?
  Currently I'm using the following lines (copied from an online tutorial).
  Will they do the job?  It seems like some of the updates get cached, then
 deleted from the cache a few days later.

 refresh_pattern windowsupdate.com/.*\.(cab|exe|dll|msi) 10080 100% 43200
 reload-into-ims
 refresh_pattern download.microsoft.com/.*\.(cab|exe|dll|msi) 10080 100%
 43200 reload-into-ims
 refresh_pattern www.microsoft.com/.*\.(cab|exe|dll|msi) 10080 100% 43200
 reload-into-ims
 refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|dll|msi) 4320
 100% 43200 reload-into-ims

There are a few more options you can add (override-expire,
override-must-revalidate, etc.). I don't know the right mix for
this specific scenario, but you can look into those.
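[Editor's note] As a sketch only of how those options combine, here is one possible refresh_pattern rule. The option names are from squid 2.6/2.7; the domain, times, and sizes are illustrative. Note that none of these options prevent LRU eviction; the only real guard against that is a cache_dir large enough that the update objects never become eviction candidates.

```
# Illustrative only -- tune the times to your needs.
# override-expire: ignore the server's Expires header.
# reload-into-ims: turn client force-reloads into cheap
#   If-Modified-Since revalidations instead of full refetches.
refresh_pattern -i windowsupdate\.com/.*\.(cab|exe|dll|msi) 10080 100% 43200 reload-into-ims override-expire

# Update packages are large; make sure squid is allowed to cache them,
# and give the cache enough room that LRU never needs to evict them.
maximum_object_size 512 MB
cache_dir aufs /var/spool/squid 20000 16 256
```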

-- 
/kinkie


[squid-users] Windows update through the proxy

2009-09-17 Thread Juan Cardoza
Does anyone know how to get access to Windows Update through the proxy?
Is there a way to configure the proxy for Windows Update, or to configure
the proxy to download the update files?

Best regards
Juan Cardoza


Teleperformance values: Integrity - Respect - Professionalism - Innovation - 
Commitment

The information contained in this communication is privileged and confidential. 
 The content is intended only for the use of the individual or entity named 
above. If the reader of this message is not the intended recipient, you are 
hereby notified that any dissemination, distribution or copying of this 
communication is strictly prohibited.  If you have received this communication 
in error, please notify me immediately by telephone or e-mail, and delete this 
message from your systems.
Please consider the environmental impact of needlessly printing this e-mail.


RE: [squid-users] URL issue (Internet explorer vs Firefox)

2009-09-17 Thread Juan Cardoza
Does anyone have any idea about this issue?


-----Original Message-----
From: Juan Cardoza [mailto:jcard...@tpmex.com] 
Sent: Thursday, 17 September 2009 02:41 p.m.
To: squid-users@squid-cache.org
Subject: [squid-users] URL issue (Internet explorer vs Firefox)

Hello All

I have an issue when someone accesses the link below:
http://portal.infonavit.org.mx/wps/portal/TRABAJADORES/!ut/p/c5/04_SB8K8xLLM9MSSzPy8xBz9CP0os3hnd0cPE3MfAwMLfwsLAyM_1wAXIxNvA_dAU30_j_zcVP2CbEdFABfWMig!/dl3/d3/L2dBISEvZ0FBIS9nQSEh/

Through Internet Explorer I get a download prompt for a file that
says:
We do not recognize the kind of file, do you want to download it?

But when I try to access the same link through the firefox the link works.

Also, if I send the request directly to the internet without the proxy on
the same PC, the link works perfectly.

Does anyone know what is happening?
Do I need to add something else in the proxy to get the correct access?

Thanks 
Best regards
Jhon





[squid-users] Squid Configuration

2009-09-17 Thread ashley08

Hello, I have been trying this for about 3 days. I've struggled to read
through the manuals and I still haven't been able to get it to work.
What I need is just to be able to connect from my Windows laptop on a
different network to my Windows home computer.  I don't care about
authentication at all, I just want to get it to work.  I've tried all sorts
of configs, and they do work on localhost, but if I try to connect to the
server (serverip:port) from a different computer, it doesn't do anything.
Nothing shows up in the logs unless I'm connecting through the proxy on the
same computer as the proxy server, and I disabled all firewalls on both
computers.  Now, I tried to make a bare minimum config, since none of the
full ones work any better even after trying lots of adjustments:


acl all src 0.0.0.0/0.0.0.0
http_port 27960
http_access allow all


That allows me to connect on localhost, but fails anywhere else!

I also tried adding random things like the host name, but they don't help.



I've tried all kinds of ports, including the default 3128. I tried 27960
since I know for a fact that port is open.

Do I need to specifically define in the config the IP of the client PC
that will connect?  Isn't ALL enough to allow everyone?

What can I do to allow ALL connections? Is acl all not enough?

I also want to use this from other networks where my IP will be different.





Any help would be greatly appreciated. Thanks to anyone who can at least
try to help.








-- 
View this message in context: 
http://www.nabble.com/Squid-Configuration-tp25495815p25495815.html
Sent from the Squid - Users mailing list archive at Nabble.com.
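[Editor's note] A minimal config that accepts remote clients usually looks like the sketch below (squid 2.6+ syntax assumed; the port and subnet are placeholders). Also note that a port being "open" on the server does not guarantee it is reachable from another network: home routers typically need explicit port forwarding for inbound connections.

```
# Listen on all interfaces; binding http_port to a specific IP would
# reject connections arriving on other addresses.
http_port 27960

# Replace with the actual network(s) your clients connect from.
acl localnet src 192.168.0.0/16

http_access allow localnet
http_access deny all
```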



[squid-users] How can I expire cache object by Apache's expire time?

2009-09-17 Thread Calvin Park
Hello Squid users

I use squid as a reverse proxy and my setup is:

* Clients -- Squid ( 2.7 or 3.1 ) -- Apache ( many image files ,
use mod_expire rule )

I need to expire cached objects by 2 rules:

  1. Last-Modified ( this works well )
  2. The HTTP Expires header ( I heard Squid does not support this )


How can I do this?

May I get some source patch files for this?

Please help me.


[squid-users] about cache object expire

2009-09-17 Thread Ken Peng

Hello,

When an object stays in the cache for some time and then gets expired, will 
squid delete this object at that moment?


Thanks.


Re: [squid-users] How can I expire cache object by Apache's expire time?

2009-09-17 Thread Jeff Pang
2009/9/18 Calvin Park car...@carrotis.com:
 Hello Squid users

 I use squid as reverse proxy and my setup is :

 * Clients -- Squid ( 2.7 or 3.1 ) -- Apache ( many image files ,
 use mod_expire rule )

 I need to expire cached objects by 2 rules:

  1. Last-Modified ( this works well )

Having a Last-Modified header is good enough for caching,
and you want to set refresh_pattern in squid.conf as well.

  2. The HTTP Expires header ( I heard Squid does not support this )


Who said that? Squid handles mod_expire's added headers well.


Jeff.


Re: [squid-users] Squid Configuration

2009-09-17 Thread Jeff Pang
2009/9/18 ashley08 doubleflashst...@gmail.com:

 Hello, I have been trying this for about 3 days. I've struggled to read
 through the manuals and I still haven't been able to get it to work.
 What I need is just to be able to connect from my Windows laptop on a
 different network to my Windows home computer.  I don't care about
 authentication at all, I just want to get it to work.  I've tried all sorts
 of configs, and they do work on localhost, but if I try to connect to the
 server (serverip:port) from a different computer, it doesn't do anything.
 Nothing shows up in the logs unless I'm connecting through the proxy on the
 same computer as the proxy server, and I disabled all firewalls on both
 computers.  Now, I tried to make a bare minimum config, since none of the
 full ones work any better even after trying lots of adjustments:


 acl all src 0.0.0.0/0.0.0.0
 http_port 27960
 http_access allow all


Are you sure you set up squid as a proxy for HTTP only?
If you want to pass anything else (like pop3/ssh/telnet etc.) through
squid, it won't work.

Jeff.


Re: [squid-users] Windows update through the proxy

2009-09-17 Thread Jeff Pang
2009/9/18 Juan Cardoza jcard...@tpmex.com:
 Does anyone know how to get Access to the windows update through the proxy.
 Is there a way to configure the proxy into the windows update or into the 
 proxy to download the update files.


see:
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
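[Editor's note] The advice on that page comes down to a few squid.conf directives; a hedged sketch of the commonly cited ones follows (names from squid 2.6/2.7, values illustrative; check the wiki for current guidance):

```
# Windows Update fetches large files with Range requests; -1 removes
# the limit so squid fetches whole objects and can cache them.
range_offset_limit -1

# Update packages can be very large; raise the cacheable size limit.
maximum_object_size 512 MB

# Keep fetching an object even if the client aborts the download.
quick_abort_min -1 KB
```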


Re: [squid-users] cache store type

2009-09-17 Thread Jeff Pang
2009/9/17 Mikio Kishi mki...@104.net:
 Hi, all

 Now, which type of cache store do you recommend ?
 (ufs, aufs and diskd ...)

 squid version: 3.1.0.13


Many people have said that aufs on Linux and diskd on BSD give
better performance.
I have also tried COSS under Linux; it is very good too.

Jeff.


Re: [squid-users] How can I expire cache object by Apache's expire time?

2009-09-17 Thread Jeff Pang
2009/9/18 Calvin Park car...@carrotis.com:

 HTTP/1.0 200 OK
 Date: Tue, 01 Sep 2009 07:54:40 GMT
 Server: thumbd-64bits/0.2.51
 Content-Length: 2798
 Content-Type: image/gif
 Cache-Control: max-age=604800
 Last-Modified: Mon, 17 Aug 2009 04:40:33 GMT    === It's ok
 Expires: Tue, 08 Sep 2009 07:54:40 GMT              === It's problem for me
 Accept-Ranges: bytes


Here you have both Cache-Control and Expires headers,
so squid will use Cache-Control first.
That doesn't mean squid will ignore the rules set by mod_expire.


Jeff.


[squid-users] Appending multiple domains for DNS resolution

2009-09-17 Thread dmorton




Hiya,

SITUATION:
As at a lot of companies, we're cursed with a legacy of
internal and external services sharing the same domain space (separate
authoritative servers internally and externally). We have three domains that
services can exist on internally; these addresses are not advertised on
internet-facing DNS. Squid is also set up as a caching DNS server, with
forwarders to the appropriate servers for the three internal domains as
well as a default for internet-based resolution. This setup works perfectly
from the server command line: I can ping any non-FQDN and get the correct
result (three domains in the resolv.conf search string, plus localhost
as nameserver) for internal servers, as well as internet-based FQDNs.

ISSUE:
The issue is that Squid does NOT apply the domain suffixes specified in
resolv.conf for a non-FQDN; it queries the name literally and fails.
append_domain works perfectly for my purpose, but I understand it can only
be used for one domain and not the three I require. As people send links
around for web-based services that are simply
http://internalserver/site.html, I'm a bit stuck as to how to resolve
against DNS correctly to return the result.

I do not want to implement (or continue to use) browser-based bypass lists,
as the overhead is too high. It would be much cleaner if our proxy could
transparently direct requests to the correct server on the various internal
domains.

Hope this is clear, its my first post so go easy ;)
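[Editor's note] For context, the single-domain form of the directive is shown below (the domain is a placeholder). squid appends the one configured suffix to unqualified hostnames before the DNS lookup, which is exactly why it does not cover the three-domain case by itself:

```
# squid.conf -- append_domain takes exactly one value and it must
# begin with a dot. Unqualified hostnames such as "internalserver"
# become internalserver.internal.example.com before lookup.
append_domain .internal.example.com
```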





This e-mail contains privileged and confidential information intended for
the use of the addressees named above.  If you are not the intended
recipient of this e-mail, you are hereby notified that you must not
disseminate, copy or take any action in respect of any information
contained in it.  If you have received this e-mail in error, please notify
the sender immediately by e-mail and immediately destroy this e-mail and
its attachments.

ATTENTION RECIPIENT This email may contain privileged, confidential and/or 
personal information and is intended only for the use of the addressee. If you 
are not the intended recipient of this email you must not disseminate, copy or 
take action in reliance on it. If you have received this email in error please 
notify the sender immediately and delete the email. The confidential nature of 
and/or privilege in the documents transmitted is not waived or lost as a result 
of a mistake or error in transmission. Any personal information in this email 
must be handled in accordance with the prevailing Privacy legislation in the 
country of receipt of this email. This email does not necessarily constitute an 
official representation of Tyco. The content of this email may be reviewed by 
Tyco and has been logged for archival purposes. Emails may be interfered with, 
may contain computer viruses or other defects and may not be successfully 
replicated on other systems. Tyco gives no warranties in relation to these 
matters. 

Re: [squid-users] Squid stops responding-LTSP and WinXP clients

2009-09-17 Thread Avinash Rao
On Wed, Sep 16, 2009 at 7:04 PM, Henrik Nordstrom
hen...@henriknordstrom.net wrote:

 On Wed 2009-09-16 at 18:02 +0530, Avinash Rao wrote:
  3) How do I use truss? It is not found on my machine.

 What OS are you running on the server? If Linux then use strace instead.

  4) r...@sunbox:~# gdb /usr/sbin/squid 21557
  This GDB was configured as x86_64-linux-gnu...
  (no debugging symbols found)

 Sorry, to use GDB your squid binary needs to be built with debug
 information and not stripped.

  (no debugging symbols found)
  0x7ff17d4ad315 in waitpid () from /lib/libpthread.so.0

 Hmm.. which of the Squid processes did you attach to?

 Regards
 Henrik




Hi,

1) I am using Ubuntu 8.04 Server 64-bit on a Sun Fire X4150 server
with 8GB RAM + RAID 5.
2) I attached gdb to the /usr/sbin/squid -D -sYC process.
3) I will use strace when squid stops responding today.

Thanks
Avinash


[squid-users] Not able to access Thunderbird from a linux client through squid

2009-09-17 Thread Avinash Rao
Hi,

I am using squid 2.6.STABLE18 on an Ubuntu 8.04 server.
I have configured squid as a very basic proxy; my squid.conf is below.
I am not able to access email in Thunderbird through this proxy
configuration. I am using Thunderbird from an Ubuntu client, and I am
able to access the internet using the Mozilla Firefox browser, but not
Thunderbird. How can I get this working?

The Thunderbird client uses ports 110 and 25 to access email, and I
have enabled them here.

squid.conf

visible_hostname server
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
hosts_file /etc/hosts
http_port 10.10.10.10:3128
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
debug_options ALL,9
cache_access_log /var/log/squid/access.log
cache_dir null /tmp

acl abc urlpath_regex -i \.(mp3|exe|mp4|mov|sex)(\?.*)?$
acl videos dstdomain .youtube.com .yimg.com .orkut.com .sex.com
.teen.com .adult.com .mp3.com
#cache_mem 256 MB

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 110
acl Safe_ports port 25
acl Safe_ports port 80# http
acl Safe_ports port 21# ftp
acl Safe_ports port 443 563   # https, snews
acl Safe_ports port 70# gopher
acl Safe_ports port 210   # wais
acl Safe_ports port 1025-65535# unregistered ports
acl Safe_ports port 280   # http-mgmt
acl Safe_ports port 488   # gss-http
acl Safe_ports port 591   # filemaker
acl Safe_ports port 631   # cups
acl Safe_ports port 777   # multiling http
acl Safe_ports port 901   # SWAT
acl Safe_ports port 993   # IMAP
acl Safe_ports port 587   # SMTP
acl Safe_ports port 22# SSH
acl purge method PURGE
acl special_urls url_regex /etc/squid/squid-noblock.acl
acl extndeny url_regex -i /etc/squid/blocks.files.acl
acl malware_block_list url_regex -i /etc/squid/malware_block_list.txt

acl lan src 192.168.1.0 10.10.10.0/24
acl stud ident_regex babu
acl download method GET
acl CONNECT method CONNECT

ident_lookup_access allow all

http_access deny extndeny
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny abc
http_access deny videos

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access allow special_urls
http_access deny extndeny download
http_access deny malware_block_list
deny_info http://malware.hiperlinks.com.br/denied.shtml malware_block_list
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
coredump_dir /var/spool/squid


Thanks
Avinash


[squid-users] squid3 with dansguardian in non-transparent mode

2009-09-17 Thread sameer shinde
Hi All,

I'm having squid 3.0 running on my Ubuntu 8.04 server in NON-transparent mode.
(I want it that way). I'm trying to configure DansGuardian for content
filtering and restricting access to unwanted sites, for which I've
installed DansGuardian 2.9.9.7.
DansGuardian also seems to be configured, but it is not doing
the filtering.
Can someone tell me how to link it with squid3 in non-transparent mode?

I'm searching Google, but everyone is talking about transparent proxying
with iptables. I don't want to do all these things; I want to configure
it in non-transparent mode, so I just want to link it with squid3 so
that it works.
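[Editor's note] In non-transparent mode DansGuardian is normally chained in front of squid: browsers point at DansGuardian's port, and DansGuardian forwards filtered requests to squid. A sketch of the relevant dansguardian.conf lines (ports shown are the usual defaults; adjust to the local setup):

```
# dansguardian.conf -- browsers use this host:port as their proxy.
# Leave filterip empty to listen on all interfaces.
filterip =
filterport = 8080

# Where DansGuardian forwards filtered requests (the squid instance).
proxyip = 127.0.0.1
proxyport = 3128
```

With this chain, squid itself needs no transparent/iptables setup; it simply sees DansGuardian as an ordinary HTTP client.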


~~
Sameer Shinde.
M:- +91 98204 61580
Every man is the architect of his own fortune.