Re: [squid-users] Squid monitoring, access report shows up to 5% to 7% cache usage

2013-07-30 Thread John Joseph


Hi Antony
Thanks for the reply. I have filled in the corresponding information below.


- Original Message -
From: Antony Stone antony.st...@squid.open.source.it
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc: 
Sent: Monday, 29 July 2013 2:45 PM
Subject: Re: [squid-users] Squid monitoring, access report shows up to 5% to 7% cache usage

On Monday 29 July 2013 at 12:36:05, John Joseph wrote:

 Hi All
 
 For the purpose of demonstrating the value of the Squid cache, I installed and started
 monitoring Squid using mysar (http://sourceforge.net/projects/mysar/).
 
 Is this % too low or average, and how much should an optimized Squid setup give?
 Or should I dig into some more configuration to raise the cache %?

 How large is your cache?
  It is around 556GB; my cache_dir details in squid.conf are:

  cache_dir aufs /opt/var/spool/squid 57 32 256


 How many users are going via Squid?
  At present not more than 250 users, and concurrent users will not be
more than 50.


 How long have they been using it (how long has Squid been building up its 
cache)?
 3 months 


 How many connection requests do you have per minute/hour/day/whatever makes 
sense?
   Can you please advise me on how to check for this parameter?

 How many cache HITs do you see in the log file compared to MISSes (ie: how 
often are cached objects being requested)?
 Can you please advise me on how to check for this parameter?



 Any other relevant information you can supply might be helpful in providing 
 an 
answer.

  From mysar, I get the following information, which I am pasting here:

USERS     HOSTS     SITES     TRAFFIC    Cache Percent
12    990    18336    47.71G     5.00%
21    1020    26877    86.57G     5.00%
31    1017    24811    105.67G 4.00%
59    1024    29185    119.15G 4.00%
41    1022    27245    106.82G 5.00%
25    1022    26785    86.68G     5.00%
29    1019    26609    71.94G     6.00%
23    1019    24863    79.48G     5.00%
20    1016    24334    77.44G     5.00%
43    1015    25206    72.18G     5.00%
18    1016    24187    81.28G     5.00%
38    1021    25538    80.74G     6.00%
30    1021    27427    85.19G     5.00%
31    1021    27369    96.13G     5.00%
29    1019    23879    74.26G     5.00%
28    1014    20715    70.90G     5.00%
22    1010    22537    69.59G     5.00%
22    1021    23031    86.52G     4.00%
19    1015    22542    73.21G     5.00%
16    1020    22408    74.67G     5.00%
23    1021    23594    72.99G     5.00%
28    1021    23408    71.97G     6.00%
17    1006    21390    64.28G     5.00%
23    994    22685    61.42G     5.00%
20    1016    25792    71.54G     5.00%
24    1017    25178    74.03G     5.00%
37    1019    29740    83.19G     5.00%
22    1020    25175    77.47G     5.00%


Guidance and advice requested.
Thanks 

Joseph John


 
 Regards,


 Antony.

-- 
Users don't know what they want until they see what they get.

                                                     Please reply to the list;
                                                           please don't CC me.



[squid-users] question in cache Manager about # of clients and # of hits http

2013-07-30 Thread Ahmad
Hi,
I have Squid Cache: Version 3.1.6.

I have squidclient installed on it.

Here is the result from squidclient:
HTTP/1.0 200 OK
Server: squid
Mime-Version: 1.0
Date: Tue, 30 Jul 2013 07:18:45 GMT
Content-Type: text/plain
Expires: Tue, 30 Jul 2013 07:18:45 GMT
Last-Modified: Tue, 30 Jul 2013 07:18:45 GMT
X-Cache: MISS from dft
X-Cache-Lookup: MISS from dft:22001
Via: 1.0 dft (squid)
Proxy-Connection: close

Squid Object Cache: Version 3.1.6
Start Time: Sun, 28 Jul 2013 13:37:00 GMT
Current Time:   Tue, 30 Jul 2013 07:18:45 GMT
Connection information for squid:
*Number of clients accessing cache:  2211
Number of HTTP requests received:   80437449*
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   32152.5
Average ICP messages per minute since start:0.0
Select loop called: 2096328654 times, 0.072 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 44.1%, 60min: 43.4%
Hits as % of bytes sent:5min: 36.8%, 60min: 38.6%
Memory hits as % of hit requests:   5min: 18.3%, 60min: 16.9%
Disk hits as % of hit requests: 5min: 45.3%, 60min: 43.1%
Storage Swap size:  272496340 KB
Storage Swap capacity:  98.6% used,  1.4% free
Storage Mem size:   1014520 KB
Storage Mem capacity:   100.0% used, 7276133255119654.0% free
Mean Object Size:   108.79 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.07014  0.07014
Cache Misses:  0.19742  0.19742
Cache Hits:0.0  0.0
Near Hits: 0.64968  0.20843
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.06083  0.06364
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:150104.835 seconds
CPU Time:   105900.170 seconds
CPU Usage:  70.55%
CPU Usage, 5 minute avg:58.96%
CPU Usage, 60 minute avg:   52.03%
Process Data Segment Size via sbrk(): 4192192 KB
Maximum Resident Size: 16999120 KB
Page faults with physical i/o: 89
Memory usage for squid via mallinfo():
Total space in arena:   -1980 KB
Ordinary blocks:   -165956 KB 453500 blks
Small blocks:   0 KB  0 blks
Holding blocks: 66172 KB 15 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  163975 KB
Total in use:  -99784 KB -154%
Total free:163975 KB 255%
Total size: 64192 KB
Memory accounted for:
Total accounted:   -1252542 KB -1950%
memPool accounted: -1252542 KB -1950%
memPool unaccounted:   1316733 KB 2051%
memPoolAlloc calls:12
memPoolFree calls:  22462254711
File descriptor usage for squid:
Maximum number of file descriptors:   65536
Largest file desc currently in use:   34282
Number of file desc currently in use: 23259
Files queued for open:   0
Available number of file descriptors: 42277
Reserved number of file descriptors:   100
Store Disk files open:11827
Internal Data Structures:
2547268 StoreEntries
157597 StoreEntries with MemObjects
156565 Hot Object Cache Items
2504776 on-disk objects





=

As we see above, we have:
*Number of clients accessing cache:  2211*

Does that mean I have 2211 distinct IPs accessing Squid?
I checked from my side & from my router, and found that my real clients
accessing Squid number about 1580, so why is it greater than 1580?

Does this value count NATted clients behind routers as new clients?

I mean, is this value accurate? Or does it count the # of IPs over a range of
time?

==
I wish I could understand it.


regards







Re: [squid-users] Squid monitoring, access report shows up to 5% to 7% cache usage

2013-07-30 Thread Amos Jeffries

On 30/07/2013 6:13 p.m., John Joseph wrote:


Hi Antony
Thanks for the reply. I have filled in the corresponding information below.


- Original Message -

From: Antony Stone antony.st...@squid.open.source.it
To: squid-users@squid-cache.org squid-users@squid-cache.org
Cc:
Sent: Monday, 29 July 2013 2:45 PM
Subject: Re: [squid-users] Squid monitoring, access report shows up to 5% to 7% cache usage
On Monday 29 July 2013 at 12:36:05, John Joseph wrote:

Hi All

For the purpose of demonstrating the value of the Squid cache, I installed and started
monitoring Squid using mysar (http://sourceforge.net/projects/mysar/).

Is this % too low or average, and how much should an optimized Squid setup give?
Or should I dig into some more configuration to raise the cache %?

How large is your cache?

   It is around 556GB; my cache_dir details in squid.conf are:

   cache_dir aufs /opt/var/spool/squid 57 32 256



How many users are going via Squid?

   At present not more than 250 users, and concurrent users will not be
more than 50.



How long have they been using it (how long has Squid been building up its

cache)?
  3 months



How many connection requests do you have per minute/hour/day/whatever makes

sense?
Can you please advise me on how to check for this parameter?


How many cache HITs do you see in the log file compared to MISSes (ie: how

often are cached objects being requested)?
  Can you please advise me on how to check for this parameter?


The simplest way is:
  grep -c HIT .../access.log
  grep -c REFRESH .../access.log
  grep -c MISS .../access.log

The next best is:
 squidclient mgr:info
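
For a rough percentage from the same log, something like this works (a
sketch; the log path is an example, yours may differ):

  log=/var/log/squid/access.log
  total=$(wc -l < $log)
  hits=$(grep -c HIT $log)
  # hit lines as a percentage of all requests logged
  echo "scale=1; 100 * $hits / $total" | bc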




Any other relevant information you can supply might be helpful in providing an

answer.

   From mysar, I get the following information, which I am pasting here:


Ah. This last column (5.00%) seems to be the HIT percentage over all 
that traffic.



USERS     HOSTS     SITES     TRAFFIC    Cache Percent
12    990    18336    47.71G     5.00%
21    1020    26877    86.57G     5.00%
31    1017    24811    105.67G 4.00%
59    1024    29185    119.15G 4.00%
41    1022    27245    106.82G 5.00%
25    1022    26785    86.68G     5.00%
29    1019    26609    71.94G     6.00%
23    1019    24863    79.48G     5.00%
20    1016    24334    77.44G     5.00%
43    1015    25206    72.18G     5.00%
18    1016    24187    81.28G     5.00%
38    1021    25538    80.74G     6.00%
30    1021    27427    85.19G     5.00%
31    1021    27369    96.13G     5.00%
29    1019    23879    74.26G     5.00%
28    1014    20715    70.90G     5.00%
22    1010    22537    69.59G     5.00%
22    1021    23031    86.52G     4.00%
19    1015    22542    73.21G     5.00%
16    1020    22408    74.67G     5.00%
23    1021    23594    72.99G     5.00%
28    1021    23408    71.97G     6.00%
17    1006    21390    64.28G     5.00%
23    994    22685    61.42G     5.00%
20    1016    25792    71.54G     5.00%
24    1017    25178    74.03G     5.00%
37    1019    29740    83.19G     5.00%
22    1020    25175    77.47G     5.00%


Guidance and advice requested.
Thanks

Joseph John


Amos


[squid-users] Re: Squid monitoring, access report shows up to 5% to 7% cache usage

2013-07-30 Thread babajaga
You should install and use
 http://wiki.squid-cache.org/Features/CacheManager

This gives you a lot of info regarding cache performance, like hit rate etc.


Having 556 GB of cache within one cache_dir might already hit the upper
limit on the number of cached objects, depending upon the average size of
the objects in the cache.
This could mean that only part of the 556GB will ever be used.

Solution: create separate cache_dirs for the various object-size classes
(see the sketch below).
But before doing this, post the info you get from CacheManager, like average
object size, cache fill rate etc.
squidclient is the alternative to CacheManager, in case you do not want to
use web access.
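
As a sketch only (the paths, sizes and thresholds here are examples, not a
recommendation), splitting by object size looks like:

  # small objects up to 64 KB
  cache_dir aufs /cache/small  100000 32 256 min-size=0 max-size=65536
  # medium objects up to 10 MB
  cache_dir aufs /cache/medium 200000 32 256 min-size=65537 max-size=10485760
  # everything larger
  cache_dir aufs /cache/large  250000 32 256 min-size=10485761

min-size/max-size are in bytes; each line gets its own directory and its
own object-count headroom.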





Re: [squid-users] question in cache Manager about # of clients and # of hits http

2013-07-30 Thread Amos Jeffries

On 30/07/2013 7:31 p.m., Ahmad wrote:

Hi,
I have Squid Cache: Version 3.1.6.

I have squidclient installed on it.

Here is the result from squidclient:
HTTP/1.0 200 OK
Server: squid
Mime-Version: 1.0
Date: Tue, 30 Jul 2013 07:18:45 GMT
Content-Type: text/plain
Expires: Tue, 30 Jul 2013 07:18:45 GMT
Last-Modified: Tue, 30 Jul 2013 07:18:45 GMT
X-Cache: MISS from dft
X-Cache-Lookup: MISS from dft:22001
Via: 1.0 dft (squid)
Proxy-Connection: close

Squid Object Cache: Version 3.1.6
Start Time: Sun, 28 Jul 2013 13:37:00 GMT
Current Time:   Tue, 30 Jul 2013 07:18:45 GMT
Connection information for squid:
 *Number of clients accessing cache:  2211
 Number of HTTP requests received:   80437449*
 Number of ICP messages received:0
 Number of ICP messages sent:0
 Number of queued ICP replies:   0
 Number of HTCP messages received:   0
 Number of HTCP messages sent:   0
 Request failure ratio:   0.00
 Average HTTP requests per minute since start:   32152.5
 Average ICP messages per minute since start:0.0
 Select loop called: 2096328654 times, 0.072 ms avg
Cache information for squid:
 Hits as % of all requests:  5min: 44.1%, 60min: 43.4%
 Hits as % of bytes sent:5min: 36.8%, 60min: 38.6%
 Memory hits as % of hit requests:   5min: 18.3%, 60min: 16.9%
 Disk hits as % of hit requests: 5min: 45.3%, 60min: 43.1%
 Storage Swap size:  272496340 KB
 Storage Swap capacity:  98.6% used,  1.4% free
 Storage Mem size:   1014520 KB
 Storage Mem capacity:   100.0% used, 7276133255119654.0% free
 Mean Object Size:   108.79 KB
 Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
 HTTP Requests (All):   0.07014  0.07014
 Cache Misses:  0.19742  0.19742
 Cache Hits:0.0  0.0
 Near Hits: 0.64968  0.20843
 Not-Modified Replies:  0.0  0.0
 DNS Lookups:   0.06083  0.06364
 ICP Queries:   0.0  0.0
Resource usage for squid:
 UP Time:150104.835 seconds
 CPU Time:   105900.170 seconds
 CPU Usage:  70.55%
 CPU Usage, 5 minute avg:58.96%
 CPU Usage, 60 minute avg:   52.03%
 Process Data Segment Size via sbrk(): 4192192 KB
 Maximum Resident Size: 16999120 KB
 Page faults with physical i/o: 89
Memory usage for squid via mallinfo():
 Total space in arena:   -1980 KB
 Ordinary blocks:   -165956 KB 453500 blks
 Small blocks:   0 KB  0 blks
 Holding blocks: 66172 KB 15 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:  163975 KB
 Total in use:  -99784 KB -154%
 Total free:163975 KB 255%
 Total size: 64192 KB
Memory accounted for:
 Total accounted:   -1252542 KB -1950%
 memPool accounted: -1252542 KB -1950%
 memPool unaccounted:   1316733 KB 2051%
 memPoolAlloc calls:12
 memPoolFree calls:  22462254711
File descriptor usage for squid:
 Maximum number of file descriptors:   65536
 Largest file desc currently in use:   34282
 Number of file desc currently in use: 23259
 Files queued for open:   0
 Available number of file descriptors: 42277
 Reserved number of file descriptors:   100
 Store Disk files open:11827
Internal Data Structures:
 2547268 StoreEntries
 157597 StoreEntries with MemObjects
 156565 Hot Object Cache Items
 2504776 on-disk objects





=

As we see above, we have:
*Number of clients accessing cache:  2211*

Does that mean I have 2211 distinct IPs accessing Squid?
I checked from my side & from my router, and found that my real clients
accessing Squid number about 1580, so why is it greater than 1580?


Did you check both IPv6 and IPv4 clients?


Does this value count NATted clients behind routers as new clients?


Only if they reach Squid as identifiably different IPs.


I mean, is this value accurate? Or does it count the # of IPs over a range of
time?


It is the count of distinct IPs currently in the client-DB.

The client-DB is the set of distinct IPs connected to your Squid from 
any direction (both LAN and WAN) over a period of up to 24hrs. They are 
faded / aged out of the DB dynamically based on several factors, 
including last access and frequency of access, when the client-DB garbage 
collector gets run to remove old entries.
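
The raw entries behind that count can be inspected via the client-DB
cache manager report, e.g. (a sketch):

  squidclient mgr:client_list | less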


Amos


Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-30 Thread Amm




- Original Message -
 From: csn233 csn...@gmail.com
 To: Amm ammdispose-sq...@yahoo.com
 Cc: 
 Sent: Tuesday, 30 July 2013 2:03 PM
 Subject: Re: [squid-users] Basic questions on transparent/intercept proxy

Thanks to all who replied. Looks like "ssl_bump none all" is
 required to stop those pop-up warnings about self-signed certificates.
 
 Another related question: what do people do about ftp://... URLs that no
 longer work in an intercepted proxy?


Please use "reply all" instead of "reply"!

For an intercepting proxy, you only intercept HTTP/HTTPS. So the browser
will access the FTP site directly (unless you have blocked/redirected the FTP port).
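
For reference, the HTTP side of such an interception setup is typically a
firewall redirect along these lines (a sketch; assumes Squid listens with
"http_port 3129 intercept", and the port number is an example):

  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129

FTP traffic on port 21 is simply not matched by a rule like that, so it
bypasses Squid.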

Amm.



[squid-users] Re: Squid 3.3.6 FATAL: Received Segment Violation...dying.

2013-07-30 Thread x-man
I produced another binary by compiling on another platform - this time Ubuntu
12.04 - and it looks like this new binary doesn't have this problem... strange
enough.

Compiled with all the same options/flags.







Re: [squid-users] Re: Squid 3.3.6 FATAL: Received Segment Violation...dying.

2013-07-30 Thread Antony Stone
On Tuesday 30 July 2013 at 11:21:55, x-man wrote:

 I produced another binary by compiling on another platform - this time
 Ubuntu 12.04 - and it looks like this new binary doesn't have this
 problem... strange enough.
 
 Compiled with all the same options/flags.

Which platform did you compile it on the first time, and which platform are you 
running the binary on?


Antony.

-- 
In fact I wanted to be John Cleese and it took me some time to realise that 
the job was already taken.

 - Douglas Adams

 Please reply to the list;
   please don't CC me.


Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-30 Thread csn233
 Please use "reply all" instead of "reply"!

 For an intercepting proxy, you only intercept HTTP/HTTPS. So the browser
 will access the FTP site directly (unless you have blocked/redirected the FTP port).

 Amm.

Clicked wrong button... It's to do with the requirement to log all
traffic, including FTP, as well as the caching benefits.


[squid-users] icp_query_timeout directive is not working in 3.3.8 for some reason

2013-07-30 Thread x-man
I'm setting icp_query_timeout 9000 because my cache peer is responding a
little slowly. This setting was doing a great job in Squid 3.1, but after
migrating a test installation to 3.3 I again started seeing the following
in the access log:

30/Jul/2013:15:12:54 +0530  28700 115.127.14.209 TCP_MISS/200 238022 GET
http://r14---sn-cvh7zn7r.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  23079 115.127.17.13 TCP_MISS/200 242118 GET
http://r13---sn-cvh7zn7k.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  22254 115.127.19.239 TCP_MISS/200 9663 GET
http://r14---sn-cvh7zn7d.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  22283 115.127.17.80 TCP_MISS/200 14214 GET
http://r13---sn-cvh7zn76.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream


This same setup was working fine with Squid 3.1; all the rest of the
settings are the same.

Could the icp_query_timeout directive be broken in recent versions, or could it
be related to or dependent on some compilation setting?
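
For reference, the relevant configuration is along these lines (a sketch;
the peer ports are examples, the address is taken from the log above):

  # wait up to 9 seconds (9000 ms) for ICP replies from peers
  icp_query_timeout 9000
  cache_peer 10.240.254.50 parent 3128 3130

icp_query_timeout is in milliseconds and overrides the dynamically
computed default.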





[squid-users] Re: question in cache Manager about # of clients and # of hits http

2013-07-30 Thread Ahmad
Hi Amos, thanks a lot for the reply.

How can I see the client IPs that hit Squid within the last minute?
I don't want to see distinct IPs over 24 hours.

Is that possible?





Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-30 Thread Amos Jeffries

On 30/07/2013 9:28 p.m., csn233 wrote:

Please use "reply all" instead of "reply"!

For an intercepting proxy, you only intercept HTTP/HTTPS. So the browser
will access the FTP site directly (unless you have blocked/redirected the FTP port).

Amm.

Clicked wrong button... It's to do with the requirement to log all
traffic, including FTP, as well as the caching benefits.


As stated, that requirement is impossible to implement via Squid. You 
need to chop it down to a smaller size. In particular, there are many 
overheads in the TCP/IP layer and in other non-HTTP protocols which 
Squid can neither measure nor log. Only the system firewall and related 
Layer-2 software has sufficient access to all the information a full 
measurement needs.


For all protocols other than plain-text HTTP there are *no* caching 
benefits from Squid. Squid will simply *add* processing overheads and 
possibly the few hundred bytes necessary to set up CONNECT tunnels to 
peers. Unless you are using ssl-bump to decrypt HTTPS into plain-text 
HTTP for Squid's use, HTTPS is also one of those other protocols where you 
get no caching benefit, because everything a cache needs to use is 
locked away inside the encryption.



NP: adding SSL-bump just to get a measurement is a very bad reason to do 
it on a production proxy. Better to accept that HTTPS has no cache gains 
and leave it for now.


Amos



Re: [squid-users] Re: question in cache Manager about # of clients and # of hits http

2013-07-30 Thread Amos Jeffries

On 30/07/2013 10:38 p.m., Ahmad wrote:

Hi Amos, thanks a lot for the reply.

How can I see the client IPs that hit Squid within the last minute?
I don't want to see distinct IPs over 24 hours.

Is that possible?


You need to scan / grep your log files for that.

The Squid client-DB is only presenting this information as a side 
effect. The DB itself is used for retaining performance metrics about 
the clients, for optimizing future traffic.
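
For example, distinct client IPs seen in the last 60 seconds (a sketch;
assumes the default native log format, with a UNIX timestamp in field 1
and the client IP in field 3, and the log path is an example):

  awk -v cutoff=$(( $(date +%s) - 60 )) '$1 >= cutoff { print $3 }' \
      /var/log/squid/access.log | sort | uniq -c | sort -rn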


Amos


[squid-users] Meaning of negative sizes in store.log

2013-07-30 Thread squid
In the sizes field of store.log, what do negative sizes mean?  For
instance, I'm getting this, and I'm interested in knowing the meaning of
the -312:

...  -1 application/octet-stream 96508744/-312 GET
http://au.v4.download.windowsupdate.com/msdownload/update/software

Thanks
Mark



[squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Ralf Hildebrandt
I had good results replacing 3.3.8 with 3.4.0.1 - no changes to the
config were needed.

One interesting observation: The dnsreq statistics are different. With
3.3.8 the graphs for requests and replies were identical. Plotted on
top of each other -- only one graph could be seen.

Since switching to 3.4.0.1 I'm seeing MORE requests than replies. Not much,
but enough for the graphs to be seen individually. Currently I'm
seeing 10.17 requests and 8.21 replies per second.

Is this to be expected?

Graph:
http://www.arschkrebs.de/bugs/dnsreq1d.png
on the left side you can see the point in time I changed the squid versions.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


[squid-users] Squid monitoring Tool

2013-07-30 Thread javed_samtiah
Hi,

Is there any Squid monitoring tool?
1. I want to monitor web traffic: which IP or PC is accessing which
website.
2. How much bandwidth is utilized by each PC.
3. I want to monitor on a real-time basis (see the sketch below).
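
If all that is needed is a quick real-time view, the access log itself can
be watched (a sketch; assumes the default native log format, with the
client IP in field 3, response size in field 5 and URL in field 7; the
path is an example):

  tail -f /var/log/squid/access.log | awk '{ print $3, $5, $7 }'

For per-PC bandwidth totals over time, a dedicated log analyser (like
those suggested in the replies) is the usual answer.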








Re: [squid-users] Squid monitoring Tool

2013-07-30 Thread Edmonds Namasenda
Squid Analyzer (http://squidanalyzer.darold.net) served me well, thanks to Darold.

On Tue, Jul 30, 2013 at 2:24 PM, javed_samtiah
javed.iq...@northbaysolutions.net wrote:
 Hi,

 Is there any Squid monitoring tool?
 1. I want to monitor web traffic: which IP or PC is accessing which
 website.
 2. How much bandwidth is utilized by each PC.
 3. I want to monitor on a real-time basis.









-- 
Thank you and kind regards,

I.P.N Edmonds
ICTs Practitioner: Systems | Networks | Applications
Mob: +256 71 227 3374 / +256 75 327 3374 | Tel: +256 41 466 3066
Skype: edsend | P.O. Box 22249, Kampala UGANDA


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Amos Jeffries

On 30/07/2013 11:15 p.m., Ralf Hildebrandt wrote:

I had good results replacing 3.3.8 with 3.4.0.1 - no changes to the
config were needed.

One interesting observation: The dnsreq statistics are different. With
3.3.8 the graphs for requests and replies were identical. Plotted on
top of each other -- only one graph could be seen.

Since switching to 3.4.0.1 I'm seeing MORE requests than replies. Not much,
but enough for the graphs to be seen individually. Currently I'm
seeing 10.17 requests and 8.21 replies per second.

Is this to be expected?


Yes and No.

Yes, 3.4 added mDNS support, which has no particular guarantee of 
getting any response. If you do not have mDNS set up, the .local requests 
will time out instead, before moving on to the global resolution methods.


No, because the above event should only show up on single-label domain 
names in the URL or Host: header. And if you do have .local mDNS set up in 
the network, most of them should be getting responses anyway.


Amos


[squid-users] Uneven load distribution between SMP Workers

2013-07-30 Thread Tim Murray
Good Afternoon Everyone

I'm running Squid 3.3.5 on 3 multicore systems here, using SMP and 6
workers per server, each dedicated to its own core. Each one is running
RHEL6 U4 with a 2.6.32 kernel.

I'm noticing as time goes on, some workers seem to be favoured and
doing the majority of the work. I've read the article regarding SMP
Scaling here:

http://wiki.squid-cache.org/Features/SmpScale

However, I'm finding our workers' CPU time differs quite substantially:

Server 1:

TIME+  COMMAND
287:01.16 (squid-3) -f /etc/squid/squid.conf
248:36.07 (squid-2) -f /etc/squid/squid.conf
146:04.90 (squid-5) -f /etc/squid/squid.conf
140:59.06 (squid-1) -f /etc/squid/squid.conf
111:24.22 (squid-6) -f /etc/squid/squid.conf
120:41.21 (squid-4) -f /etc/squid/squid.conf

Server 2:

TIME+  COMMAND
618:05.08 (squid-1) -f /etc/squid/squid.conf
405:59.84 (squid-5) -f /etc/squid/squid.conf
362:29.37 (squid-3) -f /etc/squid/squid.conf
318:56.54 (squid-2) -f /etc/squid/squid.conf
211:11.80 (squid-6) -f /etc/squid/squid.conf
204:48.51 (squid-4) -f /etc/squid/squid.conf

Server 3:

TIME+  COMMAND
497:21.70 (squid-5) -f /etc/squid/squid.conf
389:32.63 (squid-1) -f /etc/squid/squid.conf
171:31.28 (squid-6) -f /etc/squid/squid.conf
177:15.38 (squid-4) -f /etc/squid/squid.conf
346:28.21 (squid-3) -f /etc/squid/squid.conf
174:05.69 (squid-2) -f /etc/squid/squid.conf


I can also see the connections differ massively between the workers:

Server 1:

(Client and Server side connections)

squid-1 145 ESTABLISHED
squid-2 547 ESTABLISHED
squid-3 929 ESTABLISHED
squid-4 118 ESTABLISHED
squid-5 298 ESTABLISHED
squid-6 276 ESTABLISHED

Server 2:

(Client and Server side connections)

squid-1 899 ESTABLISHED
squid-2 215 ESTABLISHED
squid-3 311 ESTABLISHED
squid-4 96 ESTABLISHED
squid-5 516 ESTABLISHED
squid-6 70 ESTABLISHED

Server 3:

(Client and Server side connections)
squid-1 517 ESTABLISHED
squid-2 96 ESTABLISHED
squid-3 366 ESTABLISHED
squid-4 83 ESTABLISHED
squid-5 1030 ESTABLISHED
squid-6 189 ESTABLISHED

I'm a little concerned that the more people I migrate to this solution,
the more the first 1 or 2 workers will become saturated. Do the
workers happen to have some form of source or destination persistence
for (SSL?) connections, or something else that might be causing this to
occur?

And is there anything I can do to improve the distribution between
workers? Or have I missed something along the line?
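
For reference, the worker/core pinning described above is configured
along these lines (a sketch; the core numbering is an example):

  workers 6
  cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=1,2,3,4,5,6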


Cheers


[squid-users] PURGE not purging with memory_cache_shared on

2013-07-30 Thread Alexandre Chappaz
Hi,

From what I have seen, with v3.3.8, the PURGE method is not purging if
memory_cache_shared is on.

I am posting in pastebin 2 logs with the same requests.

debug log with 1 worker /  memory_cache_shared on :

cache.log is here :
http://pastebin.archlinux.fr/467269


corresponding to these requests :
1375190125.156 12 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:15:25 +0200
- Wget/1.14 (linux-gnu)
1375190127.367  2 ::1 TCP_MISS/200 255 PURGE
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- - 1.0
30/Jul/2013:15:15:27 +0200 - squidclient/3.3.8
1375190130.735  2 ::1 TCP_MEM_HIT/200 1390 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- text/html
1.1 30/Jul/2013:15:15:30 +0200 - Wget/1.14 (linux-gnu)


This is wrong: the PURGE request should have cleared the object from the
cache, so the last GET should be a MISS, but it is a HIT.

Now the same with memory_cache_shared off :

cache.log
http://pastebin.archlinux.fr/467270

corresponding to these requests :
1375190414.749 14 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:20:14 +0200
- Wget/1.14 (linux-gnu)
1375190417.550  3 ::1 TCP_MISS/200 255 PURGE
http://bofip.impots.gouv.fr/bofip/1-PGP.html - HIER_NONE/- - 1.0
30/Jul/2013:15:20:17 +0200 - squidclient/3.3.8
1375190420.694 15 ::1 TCP_MISS/200 1295 GET
http://bofip.impots.gouv.fr/bofip/1-PGP.html -
FIRSTUP_PARENT/10.154.61.1 text/html 1.1 30/Jul/2013:15:20:20 +0200
- Wget/1.14 (linux-gnu)


This is right: the PURGE request has cleared the object from
the cache, so the last GET is a MISS.
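
For anyone reproducing this: the PURGE request above can be issued with
squidclient, and PURGE has to be allowed in squid.conf first (a sketch;
the ACL name is conventional, not built in):

  squidclient -m PURGE http://bofip.impots.gouv.fr/bofip/1-PGP.html

  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access deny PURGE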



Guess I should file a bug on this.

Regards
Alex


Re: [squid-users] Uneven load distribution between SMP Workers

2013-07-30 Thread Eliezer Croitoru
On 07/30/2013 03:44 PM, Tim Murray wrote:
 I'm a little concerned that the more people I migrate to this solution
 the more the first 1 or 2 workers will become saturated. Do the
 workers happen to have some form of source or destination persistance
 for (SSL?) connections or something that might be causing this to
 occur?
 
 And is there anything I can do to improve the distribution between
 workers? Or have I missed something along the line?
 
 
 Cheers
Hey,

It's the OS that distributes the load across the processes, based on the
source and destination routing path.
It should be checked and tested whether iptables does fair load
balancing if you use separate processes vs. SMP.
When using separate processes with iptables load-balancing the ports, it
shows you the load from a specific source IP:port to a specific destination IP:port.
If the proxy works and only the load on one specific worker rises, is it
actually affecting performance?

Eliezer






Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Eliezer Croitoru
On 07/30/2013 03:15 PM, Amos Jeffries wrote:
 Yes and No.
 
 Yes, 3.4 added mDNS support, which has no particular guarantee of
 getting any response. If you do not have mDNS set up, the .local requests
 will time out instead, before moving on to the global resolution methods.
 
 No, because the above event should only show up on single-label domain
 names in URL or Host: header. And if you do have .local mDNS setup in
 the network most of them should be getting responses anyway.
 
 Amos
It's most likely the Internet DNS infrastructure that is the
problem, rather than local mDNS.

In many cases you can get no response when querying a DNS server if you
have a routing problem for a second.
Once a BGP session ends, it takes a couple of seconds or milliseconds to
get the path re-established correctly.

You can test it using a local DNS server that acts as a proxy and
responds in any case, and then see whether the DNS server is failing to
respond or the route is the problem (see the dig sketch below).
This way all the tests will be done on the local network rather than
over the Internet, and then you can ask the next person in the chain in
order to dig deeper.
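
One quick way to make that comparison (a sketch; the hostname and the
remote resolver address are examples):

  dig @127.0.0.1 www.example.com | grep 'Query time'
  dig @8.8.8.8   www.example.com | grep 'Query time'

If the local resolver answers consistently while the remote one times
out, the route rather than the DNS data is the suspect.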

Eliezer


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Ralf Hildebrandt
* Eliezer Croitoru elie...@ngtech.co.il:

 It's most likely the Internet DNS infrastructure that is the
 problem, rather than local mDNS.
 
 In many cases you can get no response when querying a DNS server if you
 have a routing problem for a second.

And the routing problems suddenly started when I installed 3.4.0.1?

 Once a BGP session ends, it takes a couple of seconds or milliseconds to
 get the path re-established correctly.
 
 You can test it using a local DNS server that acts as a proxy and
 responds in any case, and then see whether the DNS server is failing to
 respond or the route is the problem.

We're using a local cache on all proxies.

 This way all the tests will be done on the local network rather than
 over the Internet, and then you can ask the next person in the chain in
 order to dig deeper.
 
 Eliezer

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Ralf Hildebrandt
* Amos Jeffries squ...@treenet.co.nz:
 On 30/07/2013 11:15 p.m., Ralf Hildebrandt wrote:
 I had good results replacing 3.3.8 with 3.4.0.1 - no changes to the
 config were needed.
 
 One interesting observation: The dnsreq statistics are different. With
 3.3.8 the graphs for requests and replies were identical. Plotted on
 top of each other -- only one graph could be seen.
 
 Since switching to 3.4.0.1 I'm seeing MORE requests than replies. Not much,
 but enough for the graphs to be seen individually. Currently I'm
 seeing 10.17 requests and 8.21 replies per second.
 
 Is this to be expected?
 
 Yes and No.
 
 Yes, 3.4 added mDNS support, which has no particular guarantee of
 getting any response. If you do not have mDNS set up, the .local
 requests will time out instead, before moving on to the global
 resolution methods.

So I would see *.local queries in the query log on the local caching
DNS on the machine?

 No, because the above event should only show up on single-label
 domain names in URL or Host: header.

Like accessing http://www ?

 And if you do have .local mDNS setup in the network

We don't use that in the server networks.

 most of them should be getting responses anyway.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Eliezer Croitoru
On 07/30/2013 05:18 PM, Ralf Hildebrandt wrote:
 * Eliezer Croitoru elie...@ngtech.co.il:
 
 It's most likely to be the internet DNS infrastructure that is the
 problem compared to local mDNS.

 In many cases you can get no response when querying a DNS if you do have
 a route problem for a sec.
 
 And the routing problems suddenly started when I installed 3.4.0.1?
 
Well, not related to Squid directly, but yes, it can be possible, since on
the Internet there are a lot of packets that simply get dropped.
Also, I do not know who is in charge of the routing, but it can
be traced using simple sampling and some Nagios dig and ping checks.

 Once a BGP session ends, it takes a couple of seconds or milliseconds to
 get the path re-established correctly.

 You can test it using a local DNS server that acts as a proxy and
 responds in any case, and then see whether the DNS server is failing to
 respond or the route is the problem.
 
 We're using a local cache on all proxies.
A DNS cache?
Try to force the DNS server to bind to a specific port, to differentiate
the proxy's requests from other DNS requests.
I am just suggesting some good tests that can be done easily, to
narrow the problem down to specific points.

Eliezer
 
 This way all the tests will be done on the local network rather than
 over the Internet, and then you can ask the next person in the chain in
 order to dig deeper.

 Eliezer
 



Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Ralf Hildebrandt
* Eliezer Croitoru elie...@ngtech.co.il:

 Well not related to squid directly but yes it can be possible since in
 the INTERNET there are a lot of packets that just drops..

But that hasn't changed... I should have observed the same drops
before going to 3.4.x - looking at the numbers right now:

30 requests/s
20 replies/s

That would mean 33.3% unanswered queries. At the same time, the
IP cache hits & misses and FQDN cache hits & misses statistics look like
before.



  We're using a local cache on all proxies.

 DNS cache???

Yep.

 Try to force the DNS server to bind to a specific port, to differentiate
 the proxy's requests from other DNS requests.

The proxy is the only program querying the local DNS server. It's bound
to 127.0.0.1.

I'm looking at the query.log, but I'm not seeing any queries to .local
names at all.

Maybe some new code path is not adding to the statistics?
-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Eliezer Croitoru
On 07/30/2013 05:33 PM, Ralf Hildebrandt wrote:
 The proxy is the only program querying the local DNS server. It's bound
 to 127.0.0.1.
 
 I'm looking at the query.log, but I'm not seeing any queries to .local
 names at all.
 
 Maybe some new code path is not adding to the statistics?
Like what?
OK, so the DNS queries are not the .local ones?
In that case you can start monitoring your network infrastructure to
see what happens, just to make sure that the infrastructure works fine.
It's a simple task, and it will make a stronger argument than just
plain old statistics.
It would also give you a long history.
Do you see any real situation that the DNS dropping packets can
affect? I think the F5 or Ctrl-F5 keys should be
enough of a workaround for whatever problems are out there,
for most users' cases.

Eliezer


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Ralf Hildebrandt
* Eliezer Croitoru elie...@ngtech.co.il:
 On 07/30/2013 05:33 PM, Ralf Hildebrandt wrote:
  The proxy is the only program querying the local DNS server. It's bound
  to 127.0.0.1.
  
  I'm looking at the query.log, but I'm not seeing any queries to .local
  names at all.
  
  Maybe some new code path is not adding to the statistics?

 Like what?
 OK, so the DNS queries are not the .local ones?

Exactly, there are only queries for internal IPs & external
domain names:

30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.777 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.778 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.778 client 127.0.0.1#43716: query: 
227.56.248.78.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.778 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.778 client 127.0.0.1#43716: query: 
249.138.42.141.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.781 client 127.0.0.1#43716: query: 
227.56.248.78.in-addr.arpa IN PTR + (127.0.0.1)
30-Jul-2013 16:33:42.826 client 127.0.0.1#43716: query: www.viewster.com IN A + 
(127.0.0.1)
30-Jul-2013 16:33:42.901 client 127.0.0.1#43716: query: banner.congstar.de IN A 
+ (127.0.0.1)
30-Jul-2013 16:33:42.934 client 127.0.0.1#43716: query: n28.ad.ad-srv.net IN A 
+ (127.0.0.1)
30-Jul-2013 16:33:42.935 client 127.0.0.1#43716: query: platform.twitter.com IN 
A + (127.0.0.1)
30-Jul-2013 16:33:43.004 client 127.0.0.1#43716: query: 
www.googletagmanager.com IN A + (127.0.0.1)
30-Jul-2013 16:33:43.168 client 127.0.0.1#43716: query: cdn.gmxpro.net IN A + 
(127.0.0.1)
30-Jul-2013 16:33:43.177 client 127.0.0.1#43716: query: divaag.vo.llnwd.net IN 
A + (127.0.0.1)
30-Jul-2013 16:33:43.312 client 127.0.0.1#43716: query: aidps.atdmt.com IN A + 
(127.0.0.1)
30-Jul-2013 16:33:43.327 client 127.0.0.1#43716: query: dc8.s317.meetrics.net 
IN A + (127.0.0.1)
30-Jul-2013 16:33:43.337 client 127.0.0.1#43716: query: viewster.ivwbox.de IN A 
+ (127.0.0.1)
30-Jul-2013 16:33:43.372 client 127.0.0.1#43716: query: viewster.tv IN A + 
(127.0.0.1)
30-Jul-2013 16:33:43.456 client 127.0.0.1#43716: query: us-mg5.mail.yahoo.com 
IN A + (127.0.0.1)
30-Jul-2013 16:33:43.676 client 127.0.0.1#43716: query: 
download.cdn.mozilla.net IN A + (127.0.0.1)

 In that case you can start monitoring your network infrastructure to
 see what happens, just to make sure that the infrastructure works fine.

Everything is working fine & is already being monitored.

 It's a simple task, and it will make a stronger argument than just
 plain old statistics.

They are part of the monitoring. If they look different, I tend to ask
questions:

Like:
http://www.squid-cache.org/mail-archive/squid-users/200712/0465.html
(broken memory statistics in 3.0.x)

Or:
http://comments.gmane.org/gmane.comp.web.squid.general/97869
(sudden increase of the HttpErrors counter with 3.2.0.19 -- due to a
change of what counts as an error page)

 It would also give you a long history.
 Do you see any real situation that it can affect?

Nope. Just asking why the stats would suddenly look different.
I'll check the other machines in the cluster.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Amos Jeffries

On 31/07/2013 2:22 a.m., Ralf Hildebrandt wrote:

* Amos Jeffries squ...@treenet.co.nz:

On 30/07/2013 11:15 p.m., Ralf Hildebrandt wrote:

I had good results replacing 3.3.8 with 3.4.0.1 - no changes to the
config were needed.

One interesting observation: The dnsreq statistics are different. With
3.3.8 the graphs for requests and replies were identical. Plotted on
top of each other -- only one graph could be seen.

Since switching to 3.4.0.1 I'm seeing MORE requests than replies. Not much,
but enough for the graphs to be seen individually. Currently I'm
seeing 10.17 requests and 8.21 replies per second.

Is this to be expected?

Yes and No.

Yes, 3.4 added mDNS support, which has no particular guarantee of
getting any response. If you do not have mDNS set up, the .local
requests will time out instead, before moving on to the global
resolution methods.

So I would see *.local queries in the query log on the local caching
DNS on the machine?


I would expect so, yes. At least that was the behaviour when I tested it.
 - lookup for invalid.local in mDNS server - timeout
 - lookup for invalid.local in local NS server - NXDOMAIN



No, because the above event should only show up on single-label
domain names in URL or Host: header.

Like accessing http://www ?


Ah, sorry, I steered you a bit wrong there. Squid will not add the .local part 
itself. resolv.conf or equivalent search-list settings are still 
required to do that mapping part (see the sketch below). The mDNS code only 
steps in with a different resolver if the domain tail matches .local.
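
For illustration, the search-list mapping referred to here lives in
resolv.conf (a sketch; the domain is an example):

  search example.local
  nameserver 127.0.0.1

With that in place, a single-label name like www is first tried as
www.example.local, and only then does the .local tail trigger the mDNS
resolver.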





And if you do have .local mDNS setup in the network

We don't use that in the server networks.


Okay. Then mDNS should be irrelevant.

What does the idns cache manager report say about #QUERIES vs #REPLIES 
on each of your NS ?
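
E.g. (a sketch):

  squidclient mgr:idns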


Amos


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Amos Jeffries
Aha. Digging around in the code I found another way that the queries and 
replies counters may be getting separated.

 = all queries are recorded at the point they are sent.
 = replies are recorded only if the nameserver they are received from 
is a known NS.


So if you have ignore_unknown_nameservers set to ON, the difference 
would be the replies dropped from unknown servers.



NP: I am still suspicious that this may be related to mDNS, since I 
think the mDNS responses come back from the LAN machines as unicast 
replies and would hit that known/unknown security check.


Amos


Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Amos Jeffries

On 31/07/2013 2:15 a.m., Eliezer Croitoru wrote:

On 07/30/2013 03:15 PM, Amos Jeffries wrote:

Yes and No.

Yes, 3.4 added mDNS support which have no particular guarantee of
getting any response. If you do not have mDNS setup the .local requests
will timeout instead, before moving on to the global resolution methods.

No, because the above event should only show up on single-label domain
names in URL or Host: header. And if you do have .local mDNS setup in
the network most of them should be getting responses anyway.

Amos

It's most likely to be the internet DNS infrastructure that is the
problem compared to local mDNS.


Only if the NS being used is something remote, such as Google's 
resolvers. With the recommended practice of using a local recursive 
resolver, the resolver will send a SERVFAIL response back to Squid if 
there is any kind of Internet DNS failure, which does get recorded as a 
received response even if it is not useful.


Amos


Re: [squid-users] Meaning of negative sizes in store.log

2013-07-30 Thread Amos Jeffries

On 30/07/2013 10:21 p.m., sq...@peralex.com wrote:

In the sizes field of store.log, what do negative sizes mean?  For
instance, I'm getting this, and I'm interested in knowing the meaning of
the -312:

...  -1 application/octet-stream 96508744/-312 GET
http://au.v4.download.windowsupdate.com/msdownload/update/software

Thanks
Mark



Good question. And one which will require digging through the code to 
answer I'm afraid.


FWIW I suspect that is a bug in the header vs object calculations. The 
wiki has this to say about the value:



The sizes column consists of two slash-separated fields:

 * The advertised content length from the HTTP Content-Length reply
   header.
 * The size actually read.

If the advertised (or expected) length is missing, it will be
set to zero. If the advertised length is not zero, but not equal
to the real length, the object will be released from the cache.
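
For instance, entries whose advertised and actual sizes disagree can be
listed like this (a sketch; it matches the slash-separated sizes field
anywhere on the line, and the log path is an example):

  awk '{ for (i = 1; i <= NF; i++)
           if ($i ~ /^-?[0-9]+\/-?[0-9]+$/) {
             split($i, s, "/");
             if (s[1] != s[2]) print
           }
       }' /var/log/squid/store.log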



Amos



Re: [squid-users] Uneven load distribution between SMP Workers

2013-07-30 Thread Alex Rousskov
On 07/30/2013 06:44 AM, Tim Murray wrote:

 I'm running Squid 3.3.5 on 3 multicore systems here, using SMP and 6
 workers per server dedicated to their own core. Each one running OS
 RHEL6 U4 with 2.6.32 kernel.
 
 I'm noticing as time goes on, some workers seem to be favoured and
 doing the majority of the work. I've read the article regarding SMP
 Scaling here:
 
 http://wiki.squid-cache.org/Features/SmpScale
 
 However I'm find our workers CPU time is differing quite substantially;

As discussed on the above wiki page, this is expected. We see it all the
time on many boxes, especially if Squid is not very loaded. IIRC, the
patch working around that problem has not been submitted for the
official review yet -- no free cycles to finish its polishing at the moment.


 I can also see the connections differ massively between the workers:

Same thing.


 I'm a little concerned that the more people I migrate to this solution
 the more the first 1 or 2 workers will become saturated. Do the
 workers happen to have some form of source or destination persistance
 for (SSL?) connections or something that might be causing this to
 occur?

The wiki page provides the best explanation of the phenomena I know
about. In short, some kernels (including their TCP stacks) are not very
good at balancing this kind of server load.


 And is there anything I can do to improve the distribution between
 workers?

I am not aware of any specific fix, except for the workaround patch
mentioned on the wiki.


Alex.



[squid-users] negotiate_kerberos_auth helpers stay busy

2013-07-30 Thread Klaus Walter

Hi,

I am running squid 3.2.1 on CentOS 6.3 with kerberos authentication  
using negotiate_kerberos_auth.
Generally this is working fine, but after some time more and more  
helper instances stay busy and cannot finish the given request.
Therefore squid starts new helper processes to have enough working  
helpers for kerberos authentication.


This is going on until squid has no more memory for the helpers:

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500  
'negotiate_kerberos_auth' processes

2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/07/30 08:48:04 kid1| WARNING: Cannot run  
'/usr/lib64/squid/negotiate_kerberos_auth' process.


The problem can only be solved by restarting squid.

squidclient mgr:negotiateauthenticator shows the problem (I have 
stripped out the large Kerberos requests waiting to be finished):


Negotiate Authenticator Statistics:
program: /usr/lib64/squid/negotiate_kerberos_auth
number active: 39 of 500 (0 shutting down)
requests sent: 11141
replies received: 11133
queue length: 0
avg service time: 4 msec

  #   FD  PID    # Requests  Flags  Time      Offset  Request
  1   19  31373  753         B R    3887.019  0
  1   37  31390  755         B R    3637.061  0
  1   39  31391  2539        B R    2053.518  0
  1   41  31392  78          B R    3859.365  0
  1   43  31393  807         B R    2008.036  0
  1   57  31396  415         B R    2003.899  0
  1   63  31397  363         B R    1975.126  0
  1   95  31401  329         B R    1944.980  0
  1   29  31491  1891               0.009     0       (none)
  1   77  31492  813                0.011     0       (none)
  1   88  31493  578                0.009     0       (none)
  1   99  31494  430                0.009     0       (none)
  1  111  31512  320                0.010     0       (none)
  1  115  31513  200                0.018     0       (none)
  1  117  31514  158                0.014     0       (none)
  1  119  31515  122                0.013     0       (none)
  1  121  31516  99                 0.011     0       (none)
  1  123  31517  82                 0.014     0       (none)
  1  125  31518  66                 0.012     0       (none)
  1  118  31519  58                 0.010     0       (none)
  1  113  32414  44                 0.013     0       (none)
  1  116  32415  36                 0.015     0       (none)
  1  124  367    29                 0.014     0       (none)
  1  128  368    28                 0.015     0       (none)
  1  137  375    24                 0.012     0       (none)
  1  138  376    21                 0.015     0       (none)
  1  140  377    16                 0.040     0       (none)
  1  142  378    15                 0.036     0       (none)
  1  144  379    14                 0.033     0       (none)
  1  139  3490   11                 0.037     0       (none)
  1  143  3491   10                 0.036     0       (none)
  1  146  3495   8                  0.037     0       (none)
  1  148  3496   7                  0.046     0       (none)
  1  150  3497   6                  0.047     0       (none)
  1  145  3498   5                  0.047     0       (none)
  1  149  3499   4                  0.041     0       (none)
  1  152  3500   3                  0.104     0       (none)
  1  154  3501   2                  0.105     0       (none)
  1  156  3502   2                  0.089     0       (none)

Flags key:

   B = BUSY
   C = CLOSING
   R = RESERVED
   S = SHUTDOWN PENDING
   P = PLACEHOLDER

The first eight helper processes are busy and will never return to  
normal state until squid is restarted.

Gradually more and more helpers stay in busy state.

strace shows me that these helpers are blocked in a read() call:

read(0, "r", 1) = 1
read(0, "r", 1) = 1
read(0, "7", 1) = 1
read(0, "+", 1) = 1
read(0, "a", 1) = 1
read(0, "G", 1) = 1
read(0, <unfinished ...>

After this, the process never continues.

I cannot find any error messages in cache.log even when I switch on 
debugging for the helper.


Thank you for help!

Klaus




Re: [squid-users] 3.4.0.1 dnsreq statistics question

2013-07-30 Thread Eliezer Croitoru
On 07/30/2013 07:25 PM, Amos Jeffries wrote:
 Aha. Digging around in the code I found another way that the queries and
 replies counters may be getting separated.
  = all queries are recorded at the point they are sent.
  = replies are recorded only if the nameserver they are received from
 is a known NS.
 
 So if you have ignore_unknown_nameservers set to ON, the difference
 would be the replies dropped from unknown servers.
 
 
 NP: I am still suspicious that this may be related to mDNS, since I
 think the mDNS responses come back from the LAN machines as unicast
 replies and would hit that known/unknown security check.
 
 Amos
I really suspect that a recursive lookup by the BIND (or whatever) server
would do that.
If it can be resolved, I would expect it not to work?

Eliezer
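
For reference, the squid.conf directives involved in the known/unknown 
check described above -- a hedged sketch, with an example resolver 
address:

  # resolvers Squid treats as "known"
  dns_nameservers 192.0.2.53
  # drop DNS replies arriving from any other address (the default)
  ignore_unknown_nameservers on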


Re: [squid-users] negotiate_kerberos_auth helpers stay busy

2013-07-30 Thread Amos Jeffries

On 31/07/2013 9:18 a.m., Klaus Walter wrote:

Hi,

I am running squid 3.2.1 on CentOS 6.3 with kerberos authentication 
using negotiate_kerberos_auth.
Generally this is working fine, but after some time more and more 
helper instances stay busy and cannot finish the given request.
Therefore squid starts new helper processes to have enough working 
helpers for kerberos authentication.


This release of Squid is quite old now. Are you able to upgrade your 
proxy to the current stable release and see if the problem disappears? 
(today that would be 3.3.8)
You can find packages of recent Squid versions for CentOS at 
http://wiki.squid-cache.org/KnowledgeBase/CentOS.





This is going on until squid has no more memory for the helpers:

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500 
'negotiate_kerberos_auth' processes

2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory


That is bad, though it is unrelated to the helpers getting locked up.

How much RAM is the Squid worker process using at the time this appears? 
Starting helpers with fork() requires the system to allocate virtual 
memory equal to 2x what the worker process is using at that moment.
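
A hedged shell sketch for gathering those numbers, assuming standard 
procps and the usual process names:

  # virtual and resident size of the Squid worker(s)
  ps -o pid,vsz,rss,args -C squid
  # the same for each running Kerberos helper (match on full command line)
  ps -o pid,vsz,rss,args -p $(pgrep -f negotiate_kerberos_auth)

If the worker's VSZ is large, fork() can fail with ENOMEM under a 
strict overcommit policy (see vm.overcommit_memory in sysctl).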


And how much memory is currently in use by each of those 8 BUSY helpers?



Negotiate Authenticator Statistics:
program: /usr/lib64/squid/negotiate_kerberos_auth
number active: 39 of 500 (0 shutting down)
requests sent: 11141
replies received: 11133
queue length: 0
avg service time: 4 msec

  #   FD  PID    # Requests  Flags  Time      Offset  Request
  1   19  31373  753         B R    3887.019  0
  1   37  31390  755         B R    3637.061  0
  1   39  31391  2539        B R    2053.518  0
  1   41  31392  78          B R    3859.365  0
  1   43  31393  807         B R    2008.036  0
  1   57  31396  415         B R    2003.899  0
  1   63  31397  363         B R    1975.126  0
  1   95  31401  329         B R    1944.980  0
  1   29  31491  1891               0.009     0       (none)
  1   77  31492  813                0.011     0       (none)
  1   88  31493  578                0.009     0       (none)


The first eight helper processes are busy and will never return to 
normal state until squid is restarted.

Gradually more and more helpers stay in busy state.

strace shows me that these helpers are blocked in a read() call:

read(0, "r", 1) = 1
read(0, "r", 1) = 1
read(0, "7", 1) = 1
read(0, "+", 1) = 1
read(0, "a", 1) = 1
read(0, "G", 1) = 1
read(0, <unfinished ...>

After this, the process never continues.


That does not look blocked to me. The value arriving is changing, just 
veeerrryyy ssslloowwwlllyyy -- one byte per I/O cycle, to be exact. 
Since Kerberos credentials can be up to 32 KB in size, it's easy to see 
why these helpers sit in BUSY state for such long times (a 32 KB token 
at one byte per cycle means tens of thousands of I/O cycles).




I cannot find any error messages in cache.log even when I switch on 
debugging for the helper.


At this rate of I/O in the helper it is unlikely that they will be able 
to send a message to cache.log in any reasonable time.




Thank you for help!

Klaus






Re: [squid-users] Uneven load distribution between SMP Workers

2013-07-30 Thread Tim Murray
On Wed, Jul 31, 2013 at 1:44 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 07/30/2013 06:44 AM, Tim Murray wrote:

 I'm running Squid 3.3.5 on 3 multicore systems here, using SMP and 6
 workers per server dedicated to their own core. Each one runs
 RHEL6 U4 with the 2.6.32 kernel.

 I'm noticing as time goes on, some workers seem to be favoured and
 doing the majority of the work. I've read the article regarding SMP
 Scaling here:

 http://wiki.squid-cache.org/Features/SmpScale

 However I'm finding our workers' CPU time differs quite substantially;

 As discussed on the above wiki page, this is expected. We see it all the
 time on many boxes, especially if Squid is not very loaded. IIRC, the
 patch working around that problem has not been submitted for official
 review yet -- no free cycles to finish polishing it at the moment.


 I can also see the connections differ massively between the workers:

 Same thing.


 I'm a little concerned that the more people I migrate to this solution
 the more the first 1 or 2 workers will become saturated. Do the
 workers happen to have some form of source or destination persistence
 for (SSL?) connections or something that might be causing this to
 occur?

 The wiki page provides the best explanation of the phenomenon I know
 about. In short, some kernels (including their TCP stacks) are not very
 good at balancing this kind of server load.


 And is there anything I can do to improve the distribution between
 workers?

 I am not aware of any specific fix, except for the workaround patch
 mentioned on the wiki.


 Alex.


Thank you very much for that, Alex; to be honest, when I read the Wiki
page I had assumed this patch had already been implemented.

In the meantime, I might see if using separate http_ports for each
worker and using the load balancer to even up the spread of traffic
will work.
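
Something along these lines, as a sketch -- assuming the squid.conf
${process_number} conditionals documented on the SmpScale wiki page
(worker count and port numbers are examples):

  workers 2
  if ${process_number} = 1
  http_port 3128
  endif
  if ${process_number} = 2
  http_port 3129
  endif

The load balancer would then be pointed at each port and left to even
out the traffic between them.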


[squid-users] Problem with compile squid 3.4.0.1 on RHEL6 x64

2013-07-30 Thread Kris Glynn
Hi,

I'm using a squid.spec from squid 3.3 to build 3.4.0.1, but it fails with:
/usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32 
against `.rodata' can not be used when making a shared object; recompile with 
-fPIC
../snmplib/libsnmplib.a: could not read symbols: Bad value

libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Werror -pipe -D_REENTRANT -O2 -g -fPIC -fpie -march=native 
-std=c++0x .libs/squidS.o -fPIC -pie -Wl,-z -Wl,relro -Wl,-z -Wl,now -o squid 
AclRegs.o AuthReg.o AccessLogEntry.o AsyncEngine.o YesNoNone.o cache_cf.o 
CacheDigest.o cache_manager.o carp.o cbdata.o ChunkedCodingParser.o client_db.o 
client_side.o client_side_reply.o client_side_request.o BodyPipe.o 
clientStream.o CompletionDispatcher.o ConfigOption.o ConfigParser.o 
CpuAffinity.o CpuAffinityMap.o CpuAffinitySet.o debug.o delay_pools.o DelayId.o 
DelayBucket.o DelayConfig.o DelayPool.o DelaySpec.o DelayTagged.o DelayUser.o 
DelayVector.o NullDelayId.o ClientDelayConfig.o disk.o DiskIO/DiskIOModule.o 
DiskIO/ReadRequest.o DiskIO/WriteRequest.o dlink.o dns_internal.o 
DnsLookupDetails.o errorpage.o ETag.o event.o EventLoop.o external_acl.o 
ExternalACLEntry.o FadingCounter.o fatal.o fd.o fde.o filemap.o fqdncache.o 
ftp.o FwdState.o gopher.o helper.o HelperChildConfig.o HelperReply.o htcp.o 
http.o HttpHdrCc.o HttpHdrRange.o HttpHdrSc.o HttpHdrScTarget.o 
HttpHdrContRange.o HttpHeader.o HttpHeaderTools.o HttpBody.o HttpMsg.o 
HttpParser.o HttpReply.o RequestFlags.o HttpRequest.o HttpRequestMethod.o 
icp_v2.o icp_v3.o int.o internal.o ipc.o ipcache.o SquidList.o main.o 
MasterXaction.o mem.o mem_node.o MemBuf.o MemObject.o mime.o mime_header.o 
multicast.o neighbors.o Notes.o Packer.o Parsing.o pconn.o peer_digest.o 
peer_proxy_negotiate_auth.o peer_select.o peer_sourcehash.o peer_userhash.o 
redirect.o refresh.o RemovalPolicy.o send-announce.o MemBlob.o snmp_core.o 
snmp_agent.o SquidMath.o SquidNew.o stat.o StatCounters.o StatHist.o String.o 
StrList.o stmem.o store.o StoreFileSystem.o store_io.o StoreIOState.o 
store_client.o store_digest.o store_dir.o store_key_md5.o store_log.o 
store_rebuild.o store_swapin.o store_swapmeta.o store_swapout.o StoreMeta.o 
StoreMetaMD5.o StoreMetaSTD.o StoreMetaSTDLFS.o StoreMetaUnpacker.o 
StoreMetaURL.o StoreMetaVary.o StoreStats.o StoreSwapLogData.o Server.o 
SwapDir.o MemStore.o time.o tools.o tunnel.o unlinkd.o url.o URLScheme.o urn.o 
wccp.o wccp2.o whois.o wordlist.o LoadableModule.o LoadableModules.o 
DiskIO/DiskIOModules_gen.o err_type.o err_detail_type.o globals.o hier_code.o 
icp_opcode.o LogTags.o lookup_t.o repl_modules.o swap_log_op.o 
DiskIO/AIO/AIODiskIOModule.o DiskIO/Blocking/BlockingDiskIOModule.o 
DiskIO/DiskDaemon/DiskDaemonDiskIOModule.o 
DiskIO/DiskThreads/DiskThreadsDiskIOModule.o DiskIO/IpcIo/IpcIoDiskIOModule.o 
DiskIO/Mmapped/MmappedDiskIOModule.o -Wl,--export-dynamic  auth/.libs/libacls.a 
ident/.libs/libident.a acl/.libs/libacls.a acl/.libs/libstate.a 
auth/.libs/libauth.a libAIO.a libBlocking.a libDiskDaemon.a libDiskThreads.a 
libIpcIo.a libMmapped.a acl/.libs/libapi.a base/.libs/libbase.a 
./.libs/libsquid.a ip/.libs/libip.a fs/.libs/libfs.a ipc/.libs/libipc.a 
mgr/.libs/libmgr.a anyp/.libs/libanyp.a comm/.libs/libcomm.a eui/.libs/libeui.a 
http/.libs/libsquid-http.a icmp/.libs/libicmp.a icmp/.libs/libicmp-core.a 
log/.libs/liblog.a format/.libs/libformat.a repl/libheap.a repl/liblru.a 
-lpthread -lcrypt adaptation/.libs/libadaptation.a esi/.libs/libesi.a 
../lib/libTrie/libTrie.a -lxml2 -lexpat ssl/.libs/libsslsquid.a 
ssl/.libs/libsslutil.a snmp/.libs/libsnmp.a ../snmplib/libsnmplib.a 
../lib/.libs/libmisccontainers.a ../lib/.libs/libmiscencoding.a 
../lib/.libs/libmiscutil.a -lssl -lcrypto -lgssapi_krb5 -lkrb5 -lk5crypto 
-lcom_err -L/root/rpmbuild/BUILD/squid-3.4.0.1/compat -lcompat-squid -lm -lnsl 
-lresolv -lcap -lrt -ldl -L/root/rpmbuild/BUILD/squid-3.4.0.1 -lltdl
/usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32 
against `.rodata' can not be used when making a shared object; recompile with 
-fPIC
../snmplib/libsnmplib.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
libtool: link: rm -f .libs/squidS.o
make[3]: *** [squid] Error 1
make[3]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
make: *** [all-recursive] Error 1

Any ideas?
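
(For reference, the linker message itself points at the usual remedy: 
the objects in the static snmplib archive were built without -fPIC and 
so cannot be linked into a position-independent executable. A hedged 
sketch of the kind of fix, assuming flags can be forced through the 
spec file's configure stage:

  # force position-independent code into the build flags
  CFLAGS="$CFLAGS -fPIC" CXXFLAGS="$CXXFLAGS -fPIC" ./configure ...

The exact mechanics depend on how the 3.3 spec file sets its flags.)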






Fwd: [squid-users] negotiate_kerberos_auth helpers stay busy

2013-07-30 Thread Alan
On Wed, Jul 31, 2013 at 6:59 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 31/07/2013 9:18 a.m., Klaus Walter wrote:
 strace shows me that these helpers are blocked in a read() call:
 
 read(0, "r", 1) = 1
 read(0, "r", 1) = 1
 read(0, "7", 1) = 1
 read(0, "+", 1) = 1
 read(0, "a", 1) = 1
 read(0, "G", 1) = 1
 read(0, <unfinished ...>

 After this, the process never continues.


 That does not look blocked to me. The value arriving is changing, just
 veeerrryyy ssslloowwwlllyyy, one byte per I/O cycle to be exact. Since
 Kerberos credentials can be up to 32 KB in size, it's easy to see why
 they are stuck in BUSY state for such long times.

I see the helper is reading one byte at a time.
The code for the kerberos helper explicitly disables buffering:

setbuf(stdout, NULL);
setbuf(stdin, NULL);

As a test, I removed setbuf(stdin, NULL), and it works well: the
helper no longer reads one byte at a time, but the whole thing at
once.

I tried to take it further and enable buffering for stdout as well,
but it doesn't work: the helpers don't write anything until the buffer
is full, so it just hangs.  I think glibc only flushes stdout on
newline when stdout is a terminal.  So I added explicit calls to
fflush() after each fprintf(stdout, ...), and now it works like a
charm.
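
For readers following along, here is a minimal sketch of that pattern: 
default buffering left on both streams, plus an explicit fflush() per 
reply. This is illustrative only, not the attached patch, and the ERR 
reply is a placeholder rather than the real helper protocol:

  #include <stdio.h>

  int main(void) {
      /* previously: setbuf(stdin, NULL); setbuf(stdout, NULL);
       * now: both streams keep their default (full) buffering */
      char line[65536];   /* Negotiate tokens can be tens of KB */

      while (fgets(line, sizeof line, stdin) != NULL) {
          /* ... validate the received token here ... */
          fprintf(stdout, "ERR\n");  /* placeholder reply */
          fflush(stdout);            /* explicit flush replaces unbuffered stdout */
      }
      return 0;
  }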

I haven't measured the performance improvement with this, and I'm sure
it doesn't solve Klaus' issue, but I am very interested in improving
performance in every possible way, even if it's a small improvement.

If anybody is interested, please see the attached patch against Squid 3.3.8.

Best regards,

Alan



I'm resending this message because it was rejected due to the attachment.
If you are interested in the patch, please run:

echo 
H4sICJB6+FEAA25lZ290aWF0ZV9rZXJiZXJvc19hdXRoLWJ1ZmZlcmluZy5wYXRjaADtmG1z0zgQgD9ff8XSm7Z246R2nHcuMw3XAJ22KZT05m6g43Ft2TEksrHkQo/jv99KdlLHcQqlMBzDaTqNR7Z2tY92Vyu5gedBNYFqDOxtErhVs2bWOrUwDvy9CZlGJGZ7lPghD2xOLDvhk703JL4kcZjvn3fJD2qOk5d1DzEb1Wr1a07rl7pumFW9XTUNMIxew+wZ3VrXrNe7zY5pQkXv6vpGpVL5StMvqmv1TL1mtHRdb7Zb3Uzd/j5UjW5Xa0FF/LRhf38D5u0jvJsEUwLKjPmWw9+rD2/eueQy8RXFmdgx7KqwucX+gS3Wg+HZ2elZDx/Bs3GsKx5f0U0NjkN/HMyIomrw7Oz0ydngRAMvoQ4PQqrBZeLlpXtRHFDuKYy7YcI12Hz0tERkcXhlMdybJmySjc7LDTxQpqGv3vQUtJE41m6MORw9Pu3BOSMx0JCDAEsoDxzE7RaMWpY4b5mlOAUBum7WNQRfN03kPQfthJRxSCgLfEpcSIGySKzrOHxDKPRhdH58nBnBgr+JxSH3/phQn0/wKx0/2ajKjwhHIAt4YjjOoPAqoDdvUtnZYlcNeNAHJYw4CvUJxwfFjn1HA/x/hXTcIGa9yaaqqvDhxmz2LuDORI5b6ndsRmDH3elJBKYufc3U2wVfEyuD8JeGfoafnV5hcCATF2aEMdsna3wt7wTrHKxMVt6rbvUs0XD+2TrkGy4wD2hCct0fUxZGWzMMhGG0NMNcoYHr9FK/gH4fdl7pO3cFc0iv7GngQkzeJoTxe2BZlXQnKKXmLxnKeDwlVNirwm9Qv6el8HKLXazNOIU8891sfoBGU2cWCatR5fPnON+6DCjpGS1DRkmrXhIlyyP/OpMjYXsbll8cHS1E/mw0MyYLmu2UZtssp3njfH0wv5TXYh/+JuRWpH9lhjRKuMXFdlKbzveTS0zcrYblEid0iZVBqpjZXmZ20kTeLbroOlNi4pDgCpMrv44IbLkwGh+fgNQpKJXvnwqKUWFXUQr7o7o0ZWSUEKiUi5g3kdlDDyifzp7FIQ+dcKreCSGuQAjOlNg0iW5JZiS+ChxiCQJOENnTuRc26h0BrGGWARMjebrbp9v9ihuK9glXHGGJQmiY+BPcwGZhfP2ZSX/dipXJKxL7JLVbyBXoiZb5HKHS5zA2Uypa4YVwxpUSCGswydlMOTeK0T7nPLNfhzHKtnnCYBuevHhhvbB+Px2ND0fnQ2s0HB4MD+6EP60SswizRUGK8UrclRLxTvzH43mVKwl8Y+4+Y1ZM8EtGLIxyj8TK9iygC04abKOORcRlOaDRlIm10SwWc3PUiaicf26PRj1OdC1JaJBnmGatQl+afBd4ZYpttFoleEXbjSAtDm+fwgqJwWNxmMq7lwZigndKh+uW7mD46PxJD5Z0lO2Amd7Yc4y22bEIc+yISE5qUVX5qa1g2tqTG07j1oObRN3WJep254dOGqPBD5A0Oi2Jutv8P2msI3ufpNFtCLxNvfEtk8bmYNDvb36HrDFX/N9IG82GrnUQttwHBWyUugEzO6CicIX01iS1Nb3mETcoLy9wRh++7KZGzl4KFLcDJ4M/rcH5+On49Gg4so6Ho4uHSzc5xlL4CByeTzhLT0lpPZwefKpgYIfQpOYjLzVRbkLNZllmJO8DruhqeTVcFktH2R1lHq1MfRhjLImiME5Jf/IyD/Xg36+EumjW3i48HfwxtDArD54dwu7exr8zzzqnWBYAAA==
| perl -MCompress::Zlib -MMIME::Base64 -e 'print
Compress::Zlib::memGunzip(decode_base64(<STDIN>))' > /tmp/negotiate_kerberos_auth-buffering.patch


Re: [squid-users] Uneven load distribution between SMP Workers

2013-07-30 Thread Alex Rousskov
On 07/30/2013 07:13 PM, Tim Murray wrote:

 In the meantime, I might see if using separate http_ports for each
 worker and using the load balancer to even up the spread of traffic
 will work.

It should work as well as the load balancer can balance the load. You
might be introducing an additional single point of failure (the load
balancer) though.

Alex.