Re: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-23 Thread FredB
Maybe a problem with the /var/cache/RockStore-0 directory? Permissions?
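For anyone wanting to rule that out, a minimal check along those lines (a shell sketch, not from the original mail; it assumes the cache_effective_user is squid, as the stat output further down suggests):

# confirm the squid user owns and can traverse the whole path
ls -ld /var/cache /var/cache/RockStore-0 /var/cache/RockStore-0/rock
# if anything is owned by the wrong user, hand the stores back to squid
chown -R squid:squid /var/cache/RockStore-0 /var/cache/RockStore-1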


- Original message -
From: David Touzeau da...@touzeau.eu
To: squid-users@squid-cache.org
Sent: Tuesday, 22 November 2011 17:35:37
Subject: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No 
such file or directory

Dear all,

I have enabled RockStore on my squid 3.2.0.13-20111027-r11388

as this:

workers 2
cache_dir rock /var/cache/RockStore-0 256 max-size=32768
cache_dir rock /var/cache/RockStore-1 256 max-size=32768
cache_dir   ufs /var/cache/squid 2000 16 256

Squid claims:

2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory

but the rock file exists:

 stat /var/cache/RockStore-0/rock
  File: «/var/cache/RockStore-0/rock»
  Size: 268435456   Blocks: 32 IO Block: 4096   fichier
Device: 801h/2049d  Inode: 262542  Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/   squid)   Gid: ( 1001/   squid)
Access: 2011-11-22 17:16:32.401809206 +0100
Modify: 2011-11-22 17:23:17.601809860 +0100
Change: 2011-11-22 17:25:17.601309064 +0100



Squid Cache (Version 3.2.0.13-20111027-r11388): Terminated abnormally.
CPU Usage: 0.036 seconds = 0.032 user + 0.004 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%



Is it a misconfiguration?

Here is the full dump sequence:


Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%
2011/11/22 17:25:28 kid1| Starting Squid Cache version
3.2.0.13-20111027-r11388 for i686-pc-linux-gnu...
2011/11/22 17:25:28 kid1| Process ID 31315
2011/11/22 17:25:28 kid1| Process Roles: worker
2011/11/22 17:25:28 kid1| With 1024 file descriptors available
2011/11/22 17:25:28 kid1| Initializing IP Cache...
2011/11/22 17:25:28 kid1| DNS Socket created at [::], FD 9
2011/11/22 17:25:28 kid1| DNS Socket created at 0.0.0.0, FD 10
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.105
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.1
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile: opening log tcp:127.0.0.1:54424
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Unlinkd pipe opened on FD 19
2011/11/22 17:25:28 kid1| Local cache digest enabled; rebuild/rewrite
every 3600/3600 sec
2011/11/22 17:25:28 kid1| Logfile: opening log
stdio:/var/log/squid/store.log
2011/11/22 17:25:28 kid1| Swap maxSize 2048000 + 8192 KB, estimated
158168 objects
2011/11/22 17:25:28 kid1| Target number of buckets: 7908
2011/11/22 17:25:28 kid1| Using 8192 Store buckets
2011/11/22 17:25:28 kid1| Max Mem  size: 8192 KB [shared]
2011/11/22 17:25:28 kid1| Max Swap size: 2048000 KB
2011/11/22 17:25:28 kid1| Version 1 of swap file with LFS support
detected...
2011/11/22 17:25:28 kid1| Rebuilding storage in /var/cache/squid (DIRTY)
2011/11/22 17:25:28 kid1| Using Least Load store dir selection
2011/11/22 17:25:28 kid1| Set Current Directory to /var/squid/cache
2011/11/22 17:25:28 kid1| Loaded Icons.
2011/11/22 17:25:28 kid1| HTCP Disabled.
2011/11/22 17:25:28 kid1| Squid plugin modules loaded: 0
2011/11/22 17:25:28 kid1| Adaptation support is off.
2011/11/22 17:25:28 kid1| Ready to serve requests.
2011/11/22 17:25:28 kid1| Done reading /var/cache/squid swaplog (39
entries)
2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory
Squid Cache (Version 3.2.0.13-20111027-r11388): Terminated abnormally.
CPU Usage: 0.036 seconds = 0.032 user + 0.004 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena: 

Re: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-23 Thread FredB
Please add swap-timeout like this

workers 2
cache_dir rock /cache1 13 max-size=31000 max-swap-rate=250 swap-timeout=350
cache_dir rock /cache2 13 max-size=31000 max-swap-rate=250 swap-timeout=350

Under 250 I have

2011/11/23 09:21:28 kid1| DiskIO/IpcIo/IpcIoFile.cc(137) openCompleted: error: 
timeout
FATAL: Rock cache_dir at /cache1/rock failed to open db file: (111) Connection 
refused


- Original message -
From: David Touzeau da...@touzeau.eu
To: squid-users@squid-cache.org
Sent: Tuesday, 22 November 2011 17:35:37
Subject: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No 
such file or directory

Dear

I have enabled RockStore on my squid 3.2.0.13-20111027-r11388

has this

workers 2
cache_dir rock /var/cache/RockStore-0 256 max-size=32768
cache_dir rock /var/cache/RockStore-1 256 max-size=32768
cache_dir   ufs /var/cache/squid 2000 16 256

squid claim

2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory

but the rock file exists:

 stat /var/cache/RockStore-0/rock
  File: «/var/cache/RockStore-0/rock»
  Size: 268435456   Blocks: 32 IO Block: 4096   fichier
Device: 801h/2049d  Inode: 262542  Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/   squid)   Gid: ( 1001/   squid)
Access: 2011-11-22 17:16:32.401809206 +0100
Modify: 2011-11-22 17:23:17.601809860 +0100
Change: 2011-11-22 17:25:17.601309064 +0100



Squid Cache (Version 3.2.0.13-20111027-r11388): Terminated abnormally.
CPU Usage: 0.036 seconds = 0.032 user + 0.004 sys
Maximum Resident Size: 31328 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%



is it a misconfiguration ?

Here it is the full dump sequence :


Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:2448 KB
Ordinary blocks: 2308 KB 14 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1024 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 139 KB
Total in use:3332 KB 136%
Total free:   139 KB 6%
2011/11/22 17:25:28 kid1| Starting Squid Cache version
3.2.0.13-20111027-r11388 for i686-pc-linux-gnu...
2011/11/22 17:25:28 kid1| Process ID 31315
2011/11/22 17:25:28 kid1| Process Roles: worker
2011/11/22 17:25:28 kid1| With 1024 file descriptors available
2011/11/22 17:25:28 kid1| Initializing IP Cache...
2011/11/22 17:25:28 kid1| DNS Socket created at [::], FD 9
2011/11/22 17:25:28 kid1| DNS Socket created at 0.0.0.0, FD 10
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.105
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding nameserver 192.168.1.1
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Adding domain touzeau.com
from /etc/resolv.conf
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/access.log
2011/11/22 17:25:28 kid1| Logfile: opening log tcp:127.0.0.1:54424
2011/11/22 17:25:28 kid1| Logfile: opening log
daemon:/var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Logfile Daemon: opening
log /var/log/squid/sarg.log
2011/11/22 17:25:28 kid1| Unlinkd pipe opened on FD 19
2011/11/22 17:25:28 kid1| Local cache digest enabled; rebuild/rewrite
every 3600/3600 sec
2011/11/22 17:25:28 kid1| Logfile: opening log
stdio:/var/log/squid/store.log
2011/11/22 17:25:28 kid1| Swap maxSize 2048000 + 8192 KB, estimated
158168 objects
2011/11/22 17:25:28 kid1| Target number of buckets: 7908
2011/11/22 17:25:28 kid1| Using 8192 Store buckets
2011/11/22 17:25:28 kid1| Max Mem  size: 8192 KB [shared]
2011/11/22 17:25:28 kid1| Max Swap size: 2048000 KB
2011/11/22 17:25:28 kid1| Version 1 of swap file with LFS support
detected...
2011/11/22 17:25:28 kid1| Rebuilding storage in /var/cache/squid (DIRTY)
2011/11/22 17:25:28 kid1| Using Least Load store dir selection
2011/11/22 17:25:28 kid1| Set Current Directory to /var/squid/cache
2011/11/22 17:25:28 kid1| Loaded Icons.
2011/11/22 17:25:28 kid1| HTCP Disabled.
2011/11/22 17:25:28 kid1| Squid plugin modules loaded: 0
2011/11/22 17:25:28 kid1| Adaptation support is off.
2011/11/22 17:25:28 kid1| Ready to serve requests.
2011/11/22 17:25:28 kid1| Done reading /var/cache/squid swaplog (39
entries)
2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at 

Re: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-23 Thread FredB


 Please add swap-timeout like this

 workers 2 
 cache_dir rock /cache1 13 max-size=31000 max-swap-rate=250 
 swap-timeout=350
 cache_dir rock /cache2 13 max-size=31000 max-swap-rate=250 
 swap-timeout=350

 Under 250 I have

I meant with max-swap-rate under 250.




Re: [squid-users] compilation error

2011-11-23 Thread Amos Jeffries

On 23/11/2011 5:34 p.m., benjamin fernandis wrote:

Hi,

I am trying to compile the Squid code on Linux. Compilation completes
properly, but afterwards when I checked config.log I can see some
errors and warnings. So I wonder: are they related to the OS side or
to the Squid side?


Squid ./configure does a lot of tests to see what capabilities your OS 
has. Errors and problems are normal during this process.


The tricky part is figuring out which failures (if any) are supposed to 
be succeeding. Are there any particular features you need and  your OS 
is supposed to have which are failing to be found properly?


Amos


Re: [squid-users] Squid losing connectivity for 30 seconds

2011-11-23 Thread Amos Jeffries

On 23/11/2011 8:32 p.m., Elie Merhej wrote:

 Hi,

I am currently facing a problem that I wasn't able to find a solution 
for in the mailing list or on the internet,
My squid is dying for 30 seconds every one hour at the same exact 
time, squid process will still be running,
I lose my wccp connectivity, the cache peers detect the squid as a 
dead sibling, and the squid cannot serve any requests
The network connectivity of the server is not affected (a ping to the 
squid's ip doesn't timeout)


The problem doesn't start immediately when the squid is installed on 
the server (The server is dedicated as a squid)

It starts when the cache directories start to fill up.
I started my setup with 10 cache directories; the squid will start 
having the problem when the cache directories are above 50% filled.
When I change the number of cache directories (9, 8, ...) the squid works 
for a while, then the same problem returns.

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage
Finally I created two cache directories (one on each HDD) but the 
problem persisted.


You have 2 HDD? But, but, you have 10 cache_dir.
 We repeatedly say "one cache_dir per disk" or similar. In particular, 
one cache_dir per physical drive spindle (for disks made up of 
multiple physical spindles) wherever possible, with physical 
drives/spindles mounted separately to ensure the pairing. Squid 
performs a very unusual pattern of disk I/O which stresses them down to 
the hardware controller level and makes this kind of detail critical for 
anything like good speed. Avoiding cache_dir object limitations by 
adding more UFS-based dirs to one disk does not improve the situation.


That is a problem which will be affecting your Squid all the time 
though, possibly making the source of the pause worse.


From the description I believe it is garbage collection on the cache 
directories. The pauses can be visible when garbage collecting any 
caches over a few dozen GB. The squid default swap_high and swap_low 
values are 5 apart, and at minimum they can be 0 apart. These 
are whole % points of the total cache size, being erased from disk in a 
somewhat random-access style across the cache area. I did mention 
uncommon disk I/O patterns, right?


To be sure what it is, you can attach the strace tool to the squid worker 
process (the second PID in current stable Squids) and see what is 
running. But given the hourly regularity and past experience with others 
on similar cache sizes, I'm almost certain it's the garbage collection.
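(For illustration only, not part of the original reply: a couple of stock strace invocations that would do this, with <worker_pid> standing in for the worker's PID.)

# summarise which system calls the worker spends its time in; stop with Ctrl-C
strace -c -p <worker_pid>
# or watch file and descriptor activity live, with timestamps
strace -tt -e trace=file,desc -p <worker_pid>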


Amos


[squid-users] External allow/deny

2011-11-23 Thread Matvey Teplov

Hi everyone,

I have a short question: is there an option to offload the allow/deny 
decision from Squid itself to an external script? The goal is to run an 
external script that queries a database for the allow/deny decision 
based on a certain type of logic.


Cheers


Re: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-23 Thread Amos Jeffries

On 23/11/2011 9:12 p.m., FredB wrote:

Maybe a problem with /var/cache/RockStore-0 directory ? Permission ?


- Original message -
From: David Touzeau da...@touzeau.eu
To: squid-users@squid-cache.org
Sent: Tuesday, 22 November 2011 17:35:37
Subject: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No 
such file or directory

Dear

I have enabled RockStore on my squid 3.2.0.13-20111027-r11388

has this

workers 2
cache_dir rock /var/cache/RockStore-0 256 max-size=32768
cache_dir rock /var/cache/RockStore-1 256 max-size=32768
cache_dir   ufs /var/cache/squid 2000 16 256

squid claim

2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
error: timeout
FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
file: (2) No such file or directory

but the rock file exists:

  stat /var/cache/RockStore-0/rock
   File: «/var/cache/RockStore-0/rock»
   Size: 268435456  Blocks: 32 IO Block: 4096   fichier
Device: 801h/2049d  Inode: 262542  Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/   squid)   Gid: ( 1001/   squid)
Access: 2011-11-22 17:16:32.401809206 +0100
Modify: 2011-11-22 17:23:17.601809860 +0100
Change: 2011-11-22 17:25:17.601309064 +0100



Hmm, yes. I'm a little suspicious about the 0755 permission and the 
"cannot open" message.
We have an unresolved bug about Squid reporting its UID wrongly (as 
"squid" when it is still running as whatever UID started the app) which 
could be biting you here.


Amos


Re: [squid-users] compilation error

2011-11-23 Thread Amos Jeffries

On 23/11/2011 10:22 p.m., Benjamin wrote:

 Hi Amos,

After compiling Squid it is working fine, meaning I am not having 
any issue. But while looking in config.log I can see many errors and 
warnings, so I wonder whether these might be due to an OS package 
problem or related to Squid.


No worries then.



Amos, what is the ideal value for L1 and L2 while configuring cache_dir in 
squid.conf? We are using aufs.


You mean the defaults? http://www.squid-cache.org/Doc/config/cache_dir/
Amos



Re: [squid-users] External allow/deny

2011-11-23 Thread Amos Jeffries

On 23/11/2011 10:33 p.m., Matvey Teplov wrote:

Hi everyone,

I have a short question: Is there an option to offload allow/deny 
decision to the external script from squid itself? The target is to 
run external script that will enquire the database for the allow/deny 
action based on the certain type of logic.


Cheers


No, but yes.

No ... the allow/deny is just a human-readable label for an if..else 
code branch Squid performs based on some ACL boolean tests. It can't be 
detached from Squid.


Yes ... one particular ACL test can be pushed externally and do any 
amount or type of logic you may want in order to push an OK/ERR 
(true vs false boolean) result back to Squid.

see http://www.squid-cache.org/Doc/config/external_acl_type
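A rough sketch of that wiring (the ACL name, helper path and TTLs below are illustrative, not from the thread; the helper reads the client IP on stdin, queries the database, and prints OK or ERR):

external_acl_type db_check ttl=300 negative_ttl=60 %SRC /usr/local/bin/check_db.sh
acl db_allowed external db_check
http_access allow db_allowed
http_access deny all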

Amos



Re: [squid-users] Squid losing connectivity for 30 seconds

2011-11-23 Thread Elie Merhej



 Hi,

I am currently facing a problem that I wasn't able to find a solution 
for in the mailing list or on the internet,
My squid is dying for 30 seconds every one hour at the same exact 
time, squid process will still be running,
I lose my wccp connectivity, the cache peers detect the squid as a 
dead sibling, and the squid cannot serve any requests
The network connectivity of the server is not affected (a ping to the 
squid's ip doesn't timeout)


The problem doesn't start immediately when the squid is installed on 
the server (The server is dedicated as a squid)

It starts when the cache directories starts to fill up,
I have started my setup with 10 cache directories, the squid will start 
having the problem when the cache directories are above 50% filled
when i change the number of cache directory (9,8,...) the squid works 
for a while then the same problem

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage
Finally I created two cache directories (one on each HDD) but the 
problem persisted


You have 2 HDD?  but, but, you have 10 cache_dir.
 We repeatedly say one cache_dir per disk or similar. In particular 
one cache_dir per physical drive spindle (for disks made up of 
multiple physical spindles) wherever possible with physical 
drives/spindles mounting separately to ensure the pairing. Squid 
performs a very unusual pattern of disk I/O which stress them down to 
the hardware controller level and make this kind of detail critical 
for anything like good speed. Avoiding cache_dir object limitations by 
adding more UFS-based dirs to one disk does not improve the situation.


That is a problem which will be affecting your Squid all the time 
though, possibly making the source of the pause worse.


From the description I believe it is garbage collection on the cache 
directories. The pauses can be visible when garbage collecting any 
caches over a few dozen GB. The squid default swap_high and 
swap_low values are 5 apart, with at minimum being a value of 0 
apart. These are whole % points of the total cache size, being erased 
from disk in a somewhat random-access style across the cache area. I 
did mention uncommon disk I/O patterns, right?


To be sure what it is, you can use the strace tool to the squid 
worker process (the second PID in current stable Squids) and see what 
is running. But given the hourly regularity and past experience with 
others on similar cache sizes, I'm almost certain it's the garbage 
collection.


Amos



Hi Amos,

Thank you for your fast reply,
I have 2 HDD (450GB and 600GB)
df -h displays that I have 357 GB and 505 GB available
In my last test, my cache dirs were:
cache_swap_low 90
cache_swap_high 95
maximum_object_size 512 MB
maximum_object_size_in_memory 20 KB
cache_dir aufs /cache1/squid 32 480 256
cache_dir aufs /cache2/squid 48 700 256

Is this Ok?

Thank you

Elie Merhej


[squid-users] ERROR: No forward-proxy ports configured

2011-11-23 Thread David Touzeau
Dear all, I'm using squid 3.2.0.13-2021-r11422.
For each request I receive an event:

Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.


Here is a piece of the config file:

# IS 3.2 YES
# IS 3.1 YES
acl localhost src 127.0.0.1/8 0.0.0.0/32
acl to_localhost dst 127.0.0.1/8 0.0.0.0/32
auth_param basic credentialsttl 2 hour
authenticate_ttl 1 hour
authenticate_ip_ttl 60 seconds
#- TWEEKS PERFORMANCES
# http://blog.last.fm/2007/08/30/squid-optimization-guide
memory_pools off
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
buffered_logs on
half_closed_clients off

#- UfdbGuard

#Disabled enable_UfdbGuard=0

#- squidGuard

#Disabled enable_squidguard= 0
#- SQUID PARENTS (feature not enabled)

#- acls
acl blockedsites url_regex /etc/squid3/squid-block.acl
acl CONNECT method CONNECT
acl purge method PURGE
acl FTP proto FTP

acl office_network src all


#- MAIN RULES...
always_direct allow FTP
# - SAFE ports
acl Safe_ports port 80  #http
acl Safe_ports port 22  #ssh
acl Safe_ports port 443 563 #https, snews
acl Safe_ports port 1863#msn
acl Safe_ports port 70  #gopher
acl Safe_ports port 210 #wais
acl Safe_ports port 1025-65535  #unregistered ports
acl Safe_ports port 280 #http-mgmt
acl Safe_ports port 488 #gss-http
acl Safe_ports port 591 #filemaker
acl Safe_ports port 777 #multiling http
acl Safe_ports port 631 #cups
acl Safe_ports port 873 #rsync
acl Safe_ports port 901 #SWAT
acl Safe_ports port 20  #ftp-data
acl Safe_ports port 21  #ftp#
# - Use x-forwarded-for for local Dansguardian or load balancers
log_uses_indirect_client on
follow_x_forwarded_for allow localhost
acl SSL_ports port 9000 #Artica
acl SSL_ports port 443  #HTTPS
acl SSL_ports port 563  #https, snews
acl SSL_ports port 6667 #tchat



# -  RULES DEFINITIONS
url_rewrite_access deny localhost
url_rewrite_access allow all
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow manager localhost
http_access allow purge localhost
http_access deny purge
http_access deny blockedsites
http_access allow office_network
http_access deny to_localhost
http_access deny all
# - ICAP Services.(1 service(s))


# - icap_service KASPERSKY mode 3.1.1

icap_service is_kav_resp respmod_precache routing=on bypass=on icap://127.0.0.1:1344/av/respmod
icap_service is_kav_req reqmod_precache routing=on bypass=on icap://127.0.0.1:1344/av/reqmod


# - adaptation For Kaspersky Antivirus

adaptation_service_set class_antivirus_kav_resp is_kav_resp
adaptation_service_set class_antivirus_kav_req is_kav_req
adaptation_access class_antivirus_kav_req allow all
adaptation_access class_antivirus_kav_resp allow all


icap_enable on
icap_preview_size 128
icap_service_failure_limit -1
icap_preview_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Authenticated-User
icap_client_username_encode on




# - ident_lookup_access
hierarchy_stoplist cgi-bin ?

# - General settings 
visible_hostname proxyweb


# - time-out 
dead_peer_timeout 10 seconds
dns_timeout 2 minutes
connect_timeout 1600 seconds
persistent_request_timeout 3 minutes
pconn_timeout 1600 seconds


maximum_object_size 300 MB
minimum_object_size 0 KB
maximum_object_size_in_memory 1024 KB


#http/https ports
http_port 33070 transparent


# - SSL Rules 

# - Caches 
cache_effective_user squid
cache_effective_group squid
#cache_replacement_policy heap LFUDA
cache_mem 512 MB
cache_swap_high 90
cache_swap_low 95
# - DNS and ip caches 
ipcache_size 51200
ipcache_low 90
ipcache_high 95
fqdncache_size 51200


# - SPECIFIC DNS SERVERS 
dns_nameservers resolve.conf

#- FTP specific parameters
ftp_passive on
ftp_sanitycheck off
ftp_epsv off
ftp_epsv_all off
ftp_telnet_protocol off




Re: [squid-users] External allow/deny

2011-11-23 Thread Amos Jeffries

On 23/11/2011 11:08 p.m., Matvey Teplov wrote:

Amos,

That should do! It caches the result to improve the performance based 
on the parameters supplied?


Regards Matvey


Yes. With configurable ttl= for OK and negative_ttl= for ERR results.

Amos


Re: [squid-users] ERROR: No forward-proxy ports configured

2011-11-23 Thread Amos Jeffries

On 23/11/2011 11:14 p.m., David Touzeau wrote:

Dear i'm using squid 3.2.0.13-2021-r11422
for each request i receive an event

Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.
Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
forward-proxy ports configured.


here it is a piece of config file


snip


#http/https ports
http_port 33070 transparent



So, only one port, and it's a NAT traffic receiving port. Looks like 
Squid was right: no forward-proxy ports configured.


It should not appear on every request; that is a sign of some other problem 
happening. It should appear just on the ones which need to produce internal URLs (ie 
error page icons, FTP directory icons, cache manager requests, etc). 
Squid requires a forward-proxy port for the URL it tells the browser to 
fetch those icons from. A NAT or TPROXY receiving port is somewhat 
vulnerable to forgery attacks, and Squid now avoids publishing its 
details. Accel ports have some problematic cases in the parser, and until 
those are figured out properly they are avoided as well. Which just 
leaves the regular old forward-proxy port for now.


Adding http_port 3128 or somesuch is enough to avoid this message.
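Applied to the configuration quoted above, that would look something like the following (3128 is simply the conventional forward-proxy port; any free port works):

# NAT-intercepted traffic keeps its dedicated port
http_port 33070 transparent
# plain forward-proxy port, used for internal URLs (icons, error pages, cachemgr)
http_port 3128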

Amos


Re: [squid-users] compilation error

2011-11-23 Thread Amos Jeffries

On 23/11/2011 11:14 p.m., Benjamin wrote:

 On 11/23/2011 03:13 PM, Amos Jeffries wrote:

On 23/11/2011 10:22 p.m., Benjamin wrote:

 Hi Amos,

After compilation of squid, it is working fine means i m not having 
any issue.But while looking in config.log i can see much erros and  
warning so i wonder that these might be due to OS package problem or 
related with squid.


No worries then.



Amos, what is idle value for L1 and L2 while configuring CACHE_DIR 
in squid.conf.we are using aufs .


You mean the defaults? http://www.squid-cache.org/Doc/config/cache_dir/
Amos


Hi Sir,

My question is: while we configure cache_dir in squid.conf, the 
syntax is:


size in mb  L1   L2
cache_dir aufs /cache/1 76800 256 512


How do we decide L1 and L2 while configuring the cache_dir option in squid.conf?


Even numbers only, whatever your OS plays best with. Most people like 
powers of 2 for simplicity of understanding.


They are counts of folders inside the UFS directory structure. L1 is 
the count of folders inside the top-level directory; inside each L1 folder 
are L2 folders; inside the L2 folders are the individual files, with a maximum 
of 2^27 files spread across the structure. Some systems work better with 
fewer files per folder. Calculate L1/L2 as needed for your OS's 
files-per-directory capabilities.
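A worked example of that sizing (the ~13 KB figure is Squid's default store_avg_object_size assumption, so treat the numbers as rough estimates rather than a recommendation):

# 76800 MB cache with 16 x 256 = 4096 leaf directories;
# 76800 MB / ~13 KB = ~6 million objects, i.e. roughly 1500 files per leaf directory
cache_dir aufs /cache/1 76800 16 256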


Amos


Re: [squid-users] Squid losing connectivity for 30 seconds

2011-11-23 Thread Amos Jeffries

On 23/11/2011 11:11 p.m., Elie Merhej wrote:



 Hi,

I am currently facing a problem that I wasn't able to find a 
solution for in the mailing list or on the internet,
My squid is dying for 30 seconds every one hour at the same exact 
time, squid process will still be running,
I lose my wccp connectivity, the cache peers detect the squid as a 
dead sibling, and the squid cannot serve any requests
The network connectivity of the server is not affected (a ping to the 
squid's ip doesn't timeout)


The problem doesn't start immediately when the squid is installed on 
the server (The server is dedicated as a squid)

It starts when the cache directories starts to fill up,
I have started my setup with 10 cache directories, the squid will 
start having the problem when the cache directories are above 50% 
filled
when i change the number of cache directory (9,8,...) the squid 
works for a while then the same problem

cache_dir aufs /cache1/squid 9 140 256
cache_dir aufs /cache2/squid 9 140 256
cache_dir aufs /cache3/squid 9 140 256
cache_dir aufs /cache4/squid 9 140 256
cache_dir aufs /cache5/squid 9 140 256
cache_dir aufs /cache6/squid 9 140 256
cache_dir aufs /cache7/squid 9 140 256
cache_dir aufs /cache8/squid 9 140 256
cache_dir aufs /cache9/squid 9 140 256
cache_dir aufs /cache10/squid 8 140 256

I have 1 terabyte of storage
Finally I created two cache directories (one on each HDD) but the 
problem persisted


You have 2 HDD?  but, but, you have 10 cache_dir.
 We repeatedly say one cache_dir per disk or similar. In particular 
one cache_dir per physical drive spindle (for disks made up of 
multiple physical spindles) wherever possible with physical 
drives/spindles mounting separately to ensure the pairing. Squid 
performs a very unusual pattern of disk I/O which stress them down to 
the hardware controller level and make this kind of detail critical 
for anything like good speed. Avoiding cache_dir object limitations 
by adding more UFS-based dirs to one disk does not improve the 
situation.


That is a problem which will be affecting your Squid all the time 
though, possibly making the source of the pause worse.


From the description I believe it is garbage collection on the cache 
directories. The pauses can be visible when garbage collecting any 
caches over a few dozen GB. The squid default swap_high and 
swap_low values are 5 apart, with at minimum being a value of 0 
apart. These are whole % points of the total cache size, being erased 
from disk in a somewhat random-access style across the cache area. I 
did mention uncommon disk I/O patterns, right?


To be sure what it is, you can use the strace tool to the squid 
worker process (the second PID in current stable Squids) and see what 
is running. But given the hourly regularity and past experience with 
others on similar cache sizes, I'm almost certain it's the garbage 
collection.


Amos



Hi Amos,

Thank you for your fast reply,
I have 2 HDD (450GB and 600GB)
df -h displays that i have 357Gb and 505GB available
In my last test, my cache dir where:
cache_swap_low 90
cache_swap_high 95


This is not. For anything more than 10-20 GB I recommend setting them 
no more than 1 apart, possibly to the same value if that works.
Squid has a light but CPU-intensive and possibly long garbage removal 
cycle above cache_swap_low, and a much more aggressive but faster and 
less CPU-intensive removal above cache_swap_high. On large caches it is 
better in terms of downtime to go straight to the aggressive removal and 
clear disk space fast, despite the bandwidth cost of replacing any items 
the light removal would have left.
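In squid.conf terms that advice comes down to something like this (the exact values are illustrative):

# keep the watermarks close together so Squid goes straight to the aggressive removal
cache_swap_low 94
cache_swap_high 95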



Amos


[squid-users] Yet another thread about problems with Internet Explorer

2011-11-23 Thread Antonio Gutiérrez Mayoral
Hi there,
We are having some troubles with Internet Explorer users using Squid.
On the server:

Squid: 2.6.7
OS: Suse Linux Enterprise Server 10 sp2
RAM: 2 GB

On the Client:

IE Version: 8
Using Identd auth
Windows XP sp3

The problem is: loading almost any web page over IE takes a lot of time,
and the program shows "not responding" in Task Manager on Windows.
We don't have this problem with other browsers like Firefox or Google
Chrome. Loading the site without the proxy works well (on all browsers).

If I watch the logs (tail -f access.log | grep IP), I can see that
IE spends a lot of time before the logs show the set of lines for the
request; this doesn't happen with other browsers. Also I see a lot of
TCP_MISS the first time the page is loaded, and TCP_HITs afterwards
(except for the dynamic content). My intuition says that the problem
is in the communication between IE and squid.

Is there any known issue related to IE, or is there a preferred
configuration for this browser? In that case, could you give me a
pointer for rechecking my configuration?

Many thanks to all, and excuse my bad English.

Antonio.
--
--
Antonio Gutiérrez Mayoral aguti...@gmail.com


Re: [squid-users] ERROR: No forward-proxy ports configured

2011-11-23 Thread David Touzeau
On Wednesday, 23 November 2011 at 23:37 +1300, Amos Jeffries <squ...@treenet.co.nz> wrote:
 
 
 On 23/11/2011 11:14 p.m., David Touzeau wrote:
  Dear i'm using squid 3.2.0.13-2021-r11422
  for each request i receive an event
 
  Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
  forward-proxy ports configured.
  Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
  forward-proxy ports configured.
  Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
  forward-proxy ports configured.
  Nov 23 11:08:24 kav4proxy64Coatpont squid[16844]: ERROR: No
  forward-proxy ports configured.
 
 
  here it is a piece of config file
 
 snip
 
  #http/https ports
  http_port 33070 transparent
 
 
 So, only one port, and its a NAT traffic receiving port. Looks like 
 Squid was right, no forward-proxy ports configured.
 
 Should not be on every request, that is a sign of some other problem 
 happening. Just on the ones which need to produce internal URLs (ie 
 error page icons, FTP directory icons, cache manager requests, etc). 
 Squid requires a forward-proxy port for the URL it tells the browser
 to 
 fetch those icons from. A NAT or TROXY receiving port is somewhat 
 vulnerable to forgery attacks and Squid now avoids publishing its 
 details. Accel ports have some problematic cases in the parser and
 until 
 those are figured out properly they are avoided as well. Which just 
 leaves the regular old forward-proxy port for now.
 
 Adding http_port 3128 or somesuch is enough to avoid this message.
 
 Amos 


This fixed the error, thanks!



Re: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / No such file or directory

2011-11-23 Thread David Touzeau
On Wednesday, 23 November 2011 at 22:40 +1300, Amos Jeffries wrote:
 On 23/11/2011 9:12 p.m., FredB wrote:
  Maybe a problem with /var/cache/RockStore-0 directory ? Permission ?
 
 
  - Original message -
  From: David Touzeau da...@touzeau.eu
  To: squid-users@squid-cache.org
  Sent: Tuesday, 22 November 2011 17:35:37
  Subject: [squid-users] [3.2.0.13]: DiskIO/IpcIo/IpcIoFile.cc for RockStore / 
  No such file or directory
 
  Dear
 
  I have enabled RockStore on my squid 3.2.0.13-20111027-r11388
 
  has this
 
  workers 2
  cache_dir rock /var/cache/RockStore-0 256 max-size=32768
  cache_dir rock /var/cache/RockStore-1 256 max-size=32768
  cache_dir   ufs /var/cache/squid 2000 16 256
 
  squid claim
 
  2011/11/22 17:25:31 kid2| DiskIO/IpcIo/IpcIoFile.cc(132) openCompleted:
  error: timeout
  FATAL: Rock cache_dir at /var/cache/RockStore-0/rock failed to open db
  file: (2) No such file or directory
 
  but the rock file exists:
 
stat /var/cache/RockStore-0/rock
 File: «/var/cache/RockStore-0/rock»
 Size: 268435456  Blocks: 32 IO Block: 4096   fichier
  Device: 801h/2049d  Inode: 262542  Links: 1
  Access: (0755/-rwxr-xr-x)  Uid: ( 1001/   squid)   Gid: ( 1001/   squid)
  Access: 2011-11-22 17:16:32.401809206 +0100
  Modify: 2011-11-22 17:23:17.601809860 +0100
  Change: 2011-11-22 17:25:17.601309064 +0100
 
 
 Hmm, yes. I'm a little suspicious about the 755 permission and cannot 
 open message.
 We have an unresolved bug about Squid reporting its UID wrongly (as 
 squid when its still running as whatever UID started the app) which 
 could be biting you here.
 
 Amos

Thanks Amos. While waiting for the fix, how can I resolve this behavior?




[squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
Hello!! I want to know if Squid Cache Version 3.0.STABLE1 permits
"include". I tried to use it but it tells me it is not recognized. Why isn't
this option available in all versions? It could be helpful for organizing
squid.conf into many separate files holding the parameters that we never or
almost never touch. Sorry about my English!!


Re: [squid-users] about include

2011-11-23 Thread Matus UHLAR - fantomas

On 23.11.11 09:56, Carlos Manuel Trepeu Pupo wrote:

Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 permits
include. I tried to use it but tell me not recognized . Why don't
use this option in all versions? This could be helpful to organize the
squid.conf in many single files with the parameter that we never or
almost never touch. Sorry about my english !!


Try upgrading to a newer version, one that
- is supported
- has fewer bugs
- supports include.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Save the whales. Collect the whole set.


Re: [squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
But are all the newer versions supported, even 3.1.x?

On Wed, Nov 23, 2011 at 10:36 AM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
 On 23.11.11 09:56, Carlos Manuel Trepeu Pupo wrote:

 Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 permits
 include. I tried to use it but tell me not recognized . Why don't
 use this option in all versions? This could be helpful to organize the
 squid.conf in many single files with the parameter that we never or
 almost never touch. Sorry about my english !!

 Try upgrading to newer version, that
 - is supported
 - has less bugs
 - supports include.

 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 Save the whales. Collect the whole set.



Re: [squid-users] New user - few questions

2011-11-23 Thread Sw@g

Hi Again,

I had a further look at it, and understand a bit more now. What I would 
like to achieve is to have the access log looking as below, and only 
registering the individual web pages accessed rather than every object 
(within the page requested by the user):


day:month:year-hour:minute:second url_of_the_page_requested_by_the_user

Looking forward to your reply,

Kind regards,


On 22/11/11 22:13, Amos Jeffries wrote:

On Tue, 22 Nov 2011 18:11:26 +, Sw@g wrote:

Hi all,

First of all, I would like to thank you for your time and effort for
providing such a great tool.

I am a new user on archlinux, using Squid locally.  I have a few
questions, regarding the setup most of all.

- Is it possible to change the information logged into access.log? I
would like it like that

= date +%#F_%T address_visited (I would like to replace the
timestamps with a human readable time/date and just the website
visited)


http://wiki.squid-cache.org/SquidFaq/SquidLogs
http://www.squid-cache.org/Doc/config/access_log
http://www.squid-cache.org/Doc/config/logformat



= Is it possible to limit the size of the logs from within the
squid.conf file?



No. You need to integrate log management tools like logrotate.d or 
cron jobs to control when log rotation occurs.




And the last question, I have that error coming up from the cache.log

IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not
available

And the browsing become really slow, even page aren't opening
anymore? Any advice?


Squid is unable to locate the client details in the kernel NAT table. 
NAT *must* be done on the Squid box.


Also ensure that you have separate http_port lines for the different 
types of traffic arriving at your Squid.


Amos


[squid-users] Pass username and group to peer

2011-11-23 Thread Andreas Moroder

Hello,

Is it possible to configure Squid so that it sends the username and group of 
the authenticated user to a peer proxy?


Thanks
Andreas



Re: [squid-users] New user - few questions

2011-11-23 Thread Naira Kaieski

Hi,

You can try:

create your logformat with the information that you want:

logformat my_log %{%d/%b/%Y:%H:%M:%S}tl %ru

# default for sarg
cache_access_log /var/log/squid/access.log squid
# your log
cache_access_log /var/log/squid/my_access.log my_log

where in logformat:
%ru = Request URL, without the query string
%{format}tl = Date of request, strftime format (localtime)

see http://devel.squid-cache.org/customlog/logformat.html

Regards,
Naira Kaieski
Linux Professional Institute - LPI 101


On 23/11/2011 14:38, Sw@g wrote:

Hi Again,

I had a further look at it, and understand a bit more now. What I 
would like to achieve is to have the access log looking as below, and 
only registering individual web page accessed rather than every object 
(within the page requested by the user)


day:month:year-hour:minute:second url_of_the_page_requested_by_the_user

Looking forward for your reply,

Kind regards,


On 22/11/11 22:13, Amos Jeffries wrote:

On Tue, 22 Nov 2011 18:11:26 +, Sw@g wrote:

Hi all,

First of all, I would like to thank you for your time and effort for
providing such a great tool.

I am a new user on archlinux, using Squid locally.  I have a few
questions, regarding the setup most of all.

- Is it possible to change the information logged into access.log? I
would like it like that

= date +%#F_%T address_visited (I would like to replace the
timestamps with a human readable time/date and just the website
visited)


http://wiki.squid-cache.org/SquidFaq/SquidLogs
http://www.squid-cache.org/Doc/config/access_log
http://www.squid-cache.org/Doc/config/logformat



= Is it possible to limit the size of the logs from within the
squid.conf file?



No. You need to integrate log management tools like logrotate.d or 
cron jobs to control when log rotation occurs.




And the last question, I have that error coming up from the cache.log

IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not
available

And the browsing become really slow, even page aren't opening
anymore? Any advice?


Squid is unable to locate the client details in the kernel NAT table. 
NAT *must* be done on the Squid box.


Also ensure that you have separate http_port lines for the different 
types of traffic arriving at your Squid.


Amos


[squid-users] scheduling not working for me, what am I doing wrong?

2011-11-23 Thread someone
I am unable to limit the hours squid will accept requests



squid3 -v
Squid Cache: Version 3.0.STABLE8 --- yes, I know it's older but it will do
for my needs.


I've tried this:

acl ACLTIME time SMTWHFA 06:00-23:30

http_access allow ACLTIME



and I've tried this:

acl ACLTIME time 06:00-23:30

http_access allow ACLTIME

None of the above actually works. 

I am trying to shut off squid's access between 23:30 and 06:00.



Re: [squid-users] scheduling not working for me, what am I doing wrong?

2011-11-23 Thread Andrew Beverley
On Wed, 2011-11-23 at 13:28 -0800, someone wrote:
 I am unable to limit the hours squid will accept requests
 
 
 
 squid3 -v
 Squid Cache: Version 3.0.STABLE8 ---yes I know its older but will do
 for my needs.
 
 
 Ive tried this:
 
 acl ACLTIME time SMTWHFA 06:00-23:30
 
 http_access allow ACLTIME

Have you got a deny rule somewhere? Are you following the ACLTIME allow with
another allow that is giving them access?

Maybe you want something like:

acl ACLTIME time 06:30-21:00
http_access deny !ACLTIME
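A minimal ordering sketch of that, where localnet is only a placeholder for whichever ACL currently grants the access:

acl ACLTIME time 06:00-23:30
http_access deny !ACLTIME
http_access allow localnet
http_access deny all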

Andy




Re: [squid-users] about include

2011-11-23 Thread Amos Jeffries

On Wed, Nov 23, 2011 at 10:36 AM, Matus UHLAR - fantomas wrote:

On 23.11.11 09:56, Carlos Manuel Trepeu Pupo wrote:


Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 permits
include. I tried to use it but tell me not recognized . Why don't
use this option in all versions? This could be helpful to organize 
the

squid.conf in many single files with the parameter that we never or
almost never touch. Sorry about my english !!


Try upgrading to newer version, that
- is supported
- has less bugs
- supports include.



On Wed, 23 Nov 2011 10:58:44 -0500, Carlos Manuel Trepeu Pupo wrote:

But all the newer version supported, even the 3.1.x ?


It exists in all Squid-3.x releases since it was added in March 2008. 
Each Squid-3 package builds upon the earlier ones, as indicated by the 
numerical version numbering.

http://wiki.squid-cache.org/Squid-3.0
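For reference, a minimal sketch of the directive in a new-enough Squid-3 (the file paths are illustrative):

# pull rarely-touched settings out of the main squid.conf
include /etc/squid3/conf.d/acls.conf
include /etc/squid3/conf.d/tuning.conf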

Amos



Re: [squid-users] New user - few questions

2011-11-23 Thread Amos Jeffries

On Wed, 23 Nov 2011 16:38:06 +, Sw@g wrote:

Hi Again,

I had a further look at it, and understand a bit more now. What I
would like to achieve is to have the access log looking as below, and
only registering individual web page accessed rather than every 
object

(within the page requested by the user)


(sorry. Rant warning.)

There is no page. HTTP simply does not have any such concept. This is 
where it differs from FTP and Gopher protocols of the 1980's. They have 
easily identifiable URLs which can be considered pages. In HTTP 
everything is an object and any type of object can appear at any URL, 
at any time, as negotiated between the client and server.
 For dynamic websites things get extremely tricky. The HTML part most 
developers think of as the page, may be a collection of snippets 
spread over many requests when Squid sees it. Sometimes a script or 
media object creates the HTML after arrival and essentially *is* the 
page (think Shockwave Flash web pages that were the popular fad a few 
years back).


A web page is a data model which only exists inside web browsers and 
users' heads these days. Although browser people call it the DOM 
instead of a page (reflecting what it actually is: a model made up of 
many data objects).


/rant


The best you can hope for in HTTP is to log the Referer: request 
header, and hope that all clients will actually send it (the 
privacy-obsessed people won't, nor will a lot of automated agents). Even then you 
will face a little skew, since Referer: contains the *last* page that was 
requested, not the current one. It works only because most pages have 
sub-objects whose requests might contain a Referer: during the actual 
page load+display time.


The logformat documentation describes the tokens available. access_log 
documentation describes how to use logformat, with some examples at the 
bottom of the page, and also describes how to use ACL tests to log only 
specific things to one particular file.
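A minimal sketch of that approach, reusing the strftime-style format shown earlier in this digest (the format name and log path are illustrative):

# log the timestamp, the requested URL, and whatever Referer the client sent
logformat pageview %{%d/%b/%Y:%H:%M:%S}tl %ru "%{Referer}>h"
access_log /var/log/squid/pageview.log pageview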


Amos



day:month:year-hour:minute:second 
url_of_the_page_requested_by_the_user


Looking forward for your reply,

Kind regards,


On 22/11/11 22:13, Amos Jeffries wrote:

On Tue, 22 Nov 2011 18:11:26 +, Sw@g wrote:

Hi all,

First of all, I would like to thank you for your time and effort 
for

providing such a great tool.

I am a new user on archlinux, using Squid locally.  I have a few
questions, regarding the setup most of all.

- Is it possible to change the information logged into access.log? 
I

would like it like that

= date +%#F_%T address_visited (I would like to replace the
timestamps with a human readable time/date and just the website
visited)


http://wiki.squid-cache.org/SquidFaq/SquidLogs
http://www.squid-cache.org/Doc/config/access_log
http://www.squid-cache.org/Doc/config/logformat



= Is it possible to limit the size of the logs from within the
squid.conf file?



No. You need to integrate log management tools like logrotate.d or 
cron jobs to control when log rotation occurs.



And the last question, I have that error coming up from the 
cache.log


IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not
available

And the browsing become really slow, even page aren't opening
anymore? Any advice?


Squid is unable to locate the client details in the kernel NAT 
table. NAT *must* be done on the Squid box.


Also ensure that you have separate http_port lines for the different 
types of traffic arriving at your Squid.


Amos




Re: [squid-users] New user - few questions

2011-11-23 Thread Amos Jeffries

On Wed, 23 Nov 2011 15:23:19 -0200, Naira Kaieski wrote:

Hi,

You can try:

create your logfomat with infomation that you want:

logformat my_log %{%d/%b/%Y:%H:%M:%S}tl %ru

# default for sarg
cache_access_log /var/log/squid/access.log squid
# your log
cache_access_log /var/log/squid/my_access.log my_log


Remove the "cache_" prefix and that will be right.
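That is, the earlier example becomes:

logformat my_log %{%d/%b/%Y:%H:%M:%S}tl %ru
# default for sarg
access_log /var/log/squid/access.log squid
# your log
access_log /var/log/squid/my_access.log my_log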



where in logformat:
%ru = Request URL, without the query string
%{format}tl = Date of request, strftime format (localtime)

see http://devel.squid-cache.org/customlog/logformat.html



NP: devel.squid-cache.org is obsolete documentation.


http://www.squid-cache.org/Doc/config/logformat is the current version 
for all squid-2.6 and later releases. 2.5 configuration information can 
also be inferred from the entries which say nothing about being added or 
altered in 2.6+.



Amos



Regards,
Naira Kaieski
Linux Professional Institute - LPI 101


On 23/11/2011 14:38, Sw@g wrote:

Hi Again,

I had a further look at it, and understand a bit more now. What I 
would like to achieve is to have the access log looking as below, and 
only registering individual web page accessed rather than every object 
(within the page requested by the user)


day:month:year-hour:minute:second 
url_of_the_page_requested_by_the_user


Looking forward for your reply,

Kind regards,


On 22/11/11 22:13, Amos Jeffries wrote:

On Tue, 22 Nov 2011 18:11:26 +, Sw@g wrote:

Hi all,

First of all, I would like to thank you for your time and effort 
for

providing such a great tool.

I am a new user on archlinux, using Squid locally.  I have a few
questions, regarding the setup most of all.

- Is it possible to change the information logged into access.log? 
I

would like it like that

= date +%#F_%T address_visited (I would like to replace the
timestamps with a human readable time/date and just the website
visited)


http://wiki.squid-cache.org/SquidFaq/SquidLogs
http://www.squid-cache.org/Doc/config/access_log
http://www.squid-cache.org/Doc/config/logformat



= Is it possible to limit the size of the logs from within the
squid.conf file?



No. You need to integrate log management tools like logrotate.d or 
cron jobs to control when log rotation occurs.



And the last question, I have that error coming up from the 
cache.log


IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 29: (92) Protocol not
available

And the browsing become really slow, even page aren't opening
anymore? Any advice?


Squid is unable to locate the client details in the kernel NAT 
table. NAT *must* be done on the Squid box.


Also ensure that you have separate http_port lines for the 
different types of traffic arriving at your Squid.


Amos




Re: [squid-users] Pass username and group to peer

2011-11-23 Thread Amos Jeffries

On 24/11/2011 5:41 a.m., Andreas Moroder wrote:

Hello,

is it possible to configure squid the way it sends username and group 
of the authenticated user to a peer proxy ?




Consider: which part of the username:passkey credentials is the group?

In short, no. Group is not sent to peer proxies.


Username and password can be configured with the login= option on the 
relevant cache_peer line. You get a choice of Basic authentication, with 
one or other of the username:password details removed/replaced. In newer Squid 
you can also send Negotiate/Kerberos authentication security hashes.
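A couple of illustrative forms of that option (the hostname and credentials below are placeholders):

# relay each client's own proxy-auth credentials to the parent
cache_peer parent.example.com parent 3128 0 login=PASS
# or send one fixed set of credentials for all users
cache_peer parent.example.com parent 3128 0 login=proxyuser:secret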


Amos



Re: [squid-users] Yet another thread about problems with Internet Explorer

2011-11-23 Thread Amos Jeffries

On 24/11/2011 12:42 a.m., Antonio Gutiérrez Mayoral wrote:

Hi there,
We are having some troubles with Internet Explorer users using Squid.
On the server:

Squid: 2.6.7
OS: Suse Linux Enterprise Server 10 sp2
RAM: 2 GB

On the Client:

IE Version: 8
Using Identd auth
Windows XP sp3

The problem is: loading almost web pages over IE spends a lot of time,
and The program is not responding in task manager in Windows.
We dont have this problem across other browsers like Firefox or Google
Chrome. Loading the site without proxy, load well (on all browsers).

If I see the logs (tail -f access.log | grep IP), I can detect that
using IE spends a lot of time
after the logs shows up a set of lines of the request, this doesnt
happens with other
browsers. Also I see a lot of TCP_MISS the first time the page is
loaded, and TCP_HITs
after (except the dynamic content). My intution says that the problem
is on the comunication between IE
and squid.


Possibly. Looked at the traffic yet?




Is there any issue related to IE, or there is a preferred
configuration for this browser? In this case,
could you give me a pointer to recheck all my configuration?


Nothing specific. This is end-user software remember, where the people 
in charge are often incapable of changing settings. Any configurable 
workarounds are usually in Squid where the admin/users have more 
knowledge and ability.


IIRC the one that used to come up often was the IE setting for sending 
HTTP/1.1 requests to the proxy. Note that squid-2.6 is HTTP/1.0-only 
software with very limited 1.1 abilities (only around 20-40% of the RFC 
2616 features supported).


Amos


Re: [squid-users] Sibling issue

2011-11-23 Thread Amos Jeffries

On 22/11/2011 10:43 p.m., Chia Wei LEE wrote:

Hi

I had an issue when configuring the sibling.
below is my configuration in my proxy01 (10.1.1.2)

cache_peer 10.1.1.8 sibling 3128 0 proxy-only no-query
acl sibling1 src 10.1.1.8
cache_peer_access 10.1.1.8 deny sibling1



but when i go to browse Internet, the access.log in my proxy02 (10.1.1.8)
show the below log.

22/Nov/2011:17:15:43 +0800  248 10.1.1.2 TCP_MISS/404 330 GET
internal://proxy01/squid-internal-periodic/store_digest - NONE/- text/plain

even though my proxy02 already has the related cache, my proxy01 still cannot
find the cache and fetches the content from the Internet.

Any idea on this ?


Squid version?

What do you mean by "have the cache"? That the storage digest has already 
finished being built by proxy-2?


404 seems a little strange. As does the ://proxy01/ part. AFAIK the 
request from proxy01 to proxy02, given the above config, in any current 
Squid should be one of:


  GET internal://10.1.1.8:3128/squid-internal-periodic/store_digest
or
  GET /squid-internal-periodic/store_digest

Amos


Re: [squid-users] Sibling issue

2011-11-23 Thread Chia Wei LEE
Hi Amos

Proxy01 is using version 3.1 and the Proxy02 is using version 2.7

Cheers
Chia Wei






   
On 24-11-2011 12:12 PM, Amos Jeffries <squid3@treenet.co.nz> wrote to squid-users@squid-cache.org (Subject: Re: [squid-users] Sibling issue):




On 22/11/2011 10:43 p.m., Chia Wei LEE wrote:
 Hi

 I had an issue when configure the sibling.
 below is my configuration in my proxy01 (10.1.1.2)

 cache_peer 10.1.1.8 sibling 3128 0 proxy-only no-query
 acl sibling1 src 10.1.1.8
 cache_peer_access 10.1.1.8 deny sibling1



 but when i go to browse Internet, the access.log in my proxy02 (10.1.1.8)
 show the below log.

 22/Nov/2011:17:15:43 +0800248 10.1.1.2 TCP_MISS/404 330 GET
 internal://proxy01/squid-internal-periodic/store_digest - NONE/-
text/plain

 even my proxy02 already have the related cache, but my proxy01 keep
cannot
 find the cache and fetch the content from Internet.

 Any idea on this ?

Squid version?

What do you mean by have the cache the storage digest is already
finished building by proxy-2?

404 seems a little strange. As does the ://proxy01/ part. AFAIK the
request from proxy01 to proxy02 given that above config in any current
Squid should be one of:

   GET internal://10.1.1.8:3128/squid-internal-periodic/store_digest
or
   GET /squid-internal-periodic/store_digest

Amos
