[squid-users] rock storage integrity

2015-12-04 Thread Hussam Al-Tayeb
Hi. I am using squid with rock storage right now to cache computer
updates for my Linux computers. It works well.
Since this is a database, is it possible for part of the database to
get corrupted by a crash or an unclean poweroff?
I know from SQL databases that unclean shutdowns can cause binary
corruption.

I had an unclean poweroff yesterday but cache.log did not list
anything weird.

2015/12/03 01:00:11| Store rebuilding is 0.31% complete
2015/12/03 01:01:00| Finished rebuilding storage from disk.
2015/12/03 01:01:00|31 Entries scanned
2015/12/03 01:01:00| 0 Invalid entries.
2015/12/03 01:01:00| 0 With invalid flags.
2015/12/03 01:01:00| 55523 Objects loaded.
2015/12/03 01:01:00| 0 Objects expired.
2015/12/03 01:01:00| 0 Objects cancelled.
2015/12/03 01:01:00| 0 Duplicate URLs purged.
2015/12/03 01:01:00| 0 Swapfile clashes avoided.
2015/12/03 01:01:00|   Took 49.94 seconds (1111.79 objects/sec).
2015/12/03 01:01:00| Beginning Validation Procedure
2015/12/03 01:01:00|   Completed Validation Procedure
2015/12/03 01:01:00|   Validated 0 Entries
2015/12/03 01:01:00|   store_swap_size = 3187216.00 KB

Nevertheless, what would be the best way to check whether there was some
damage to the database (unusable slots/cells/whatever)?
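
One low-tech check, sketched under the assumption of the default log
location: compare the rebuild counters that squid prints after each
restart.

  grep -E 'Invalid entries|invalid flags|Swapfile clashes' /var/log/squid/cache.log | tail -3

Non-zero counts there are the closest thing the rebuild gives to a damage
report; the log above shows all zeros.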


Re: [squid-users] vary headers

2015-05-04 Thread Hussam Al-Tayeb


 Sent: Monday, May 04, 2015 at 6:32 PM
 From: Amos Jeffries squ...@treenet.co.nz
 To: Hussam Al-Tayeb hussam.ta...@gmx.com
 Cc: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] vary headers

 On 5/05/2015 3:15 a.m., Hussam Al-Tayeb wrote:
  
  
  Sent: Monday, May 04, 2015 at 12:49 PM
  From: Amos Jeffries squ...@treenet.co.nz
  To: squid-users@lists.squid-cache.org
  Subject: Re: [squid-users] vary headers
 
  On 4/05/2015 6:54 a.m., Hussam Al-Tayeb wrote:
  Sent: Sunday, May 03, 2015 at 9:45 PM
  From: Yuri Voinov
 
 
  I understand what you want. But what for?
 
 
  because a wget --server-response "http://someurl" operation that
  replies with a "Vary: user-agent" header always results in a MISS
  even with the same wget version (same user-agent) and computer.
  Instead, multiple copies of the file are stored.
 
  That is not right; the wget being used twice should be MISS then HIT,
  just like any other cacheable traffic.
 
  The nasty thing with Vary:User-Agent is that browsers' UA strings embed
  so much plugin info that they change between each different client
  request. Which defeats the purpose of caching one client's reply for use
  by other clients.
 
  NP: what you had with the store_miss should be working.
   Are you using Squid-3.5 ?
   How are you identifying a fail ?
 
 
  Hello. I am using 3.5.4
  There are new objects on disk that have the Vary: User-Agent HTTP header.
  I can tell, for example, if I type head -n13 /home/squid/04/D1/0004D122
  
 
 That is not a good way to identify it. All it means is that the object used
 a disk file for its transfer. Cache files are also sometimes used as
 on-disk buffers.
 
 If you check store.log for the ID (0004D122) you should expect to see
 that file pushed to disk, then a cache index RELEASED action performed.
 The file part may stay on disk until something else needs to use the
 same filename.
 
 The ways to identify caching activity are:
  * access.log - checking that no HIT or REFRESH occurs on the relevant
 URLs, or
  * store.log - checking that objects with the URLs are all getting that
 RELEASED action, or
  * cache.log - setting debug_options 20,3 and watching for "store_miss
 prohibits caching" messages.
 
 There are unfortunately currently no easy debug messages linking which URL
 store_miss prohibited, to correlate the logs :-(
 
 Amos
 
 
 
 
OK, I tried setting debug_options.
This is part of what I found:
http://pastebin.com/raw.php?i=4HTw9es3

So it looks like it only blocks caching of the Vary header if it is
followed by Accept-Encoding: gzip,deflate?
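
For context, a minimal reconstruction of the configuration under
discussion (the broad "." pattern is confirmed by Amos in the follow-up
below; the debug_options line is an assumption):

  acl hasVary rep_header Vary .
  store_miss deny hasVary
  debug_options ALL,1 20,3

With that regex, any reply carrying a Vary header at all matches, not
just Vary: User-Agent.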


Re: [squid-users] vary headers

2015-05-04 Thread Hussam Al-Tayeb


 Sent: Monday, May 04, 2015 at 9:04 PM
 From: Amos Jeffries squ...@treenet.co.nz
 To: Hussam Al-Tayeb hussam.ta...@gmx.com
 Cc: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] vary headers

 On 5/05/2015 4:38 a.m., Hussam Al-Tayeb wrote:
  
  
  Sent: Monday, May 04, 2015 at 6:32 PM
  From: Amos Jeffries squ...@treenet.co.nz
  To: Hussam Al-Tayeb hussam.ta...@gmx.com
  Cc: squid-users@lists.squid-cache.org
  Subject: Re: [squid-users] vary headers
 
  On 5/05/2015 3:15 a.m., Hussam Al-Tayeb wrote:
 
 
  Sent: Monday, May 04, 2015 at 12:49 PM
  From: Amos Jeffries squ...@treenet.co.nz
  To: squid-users@lists.squid-cache.org
  Subject: Re: [squid-users] vary headers
 
  On 4/05/2015 6:54 a.m., Hussam Al-Tayeb wrote:
  Sent: Sunday, May 03, 2015 at 9:45 PM
  From: Yuri Voinov
 
 
   I understand what you want. But what for?
  
  
   because a wget --server-response "http://someurl" operation that
   replies with a "Vary: user-agent" header always results in a MISS
   even with the same wget version (same user-agent) and computer.
   Instead, multiple copies of the file are stored.
  
   That is not right; the wget being used twice should be MISS then HIT,
   just like any other cacheable traffic.
  
   The nasty thing with Vary:User-Agent is that browsers' UA strings embed
   so much plugin info that they change between each different client
   request. Which defeats the purpose of caching one client's reply for use
   by other clients.
  
   NP: what you had with the store_miss should be working.
    Are you using Squid-3.5 ?
    How are you identifying a fail ?
  
  
   Hello. I am using 3.5.4
   There are new objects on disk that have the Vary: User-Agent HTTP header.
   I can tell, for example, if I type head -n13 /home/squid/04/D1/0004D122
  
  
   That is not a good way to identify it. All it means is that the object used
   a disk file for its transfer. Cache files are also sometimes used as
   on-disk buffers.
  
   If you check store.log for the ID (0004D122) you should expect to see
   that file pushed to disk, then a cache index RELEASED action performed.
   The file part may stay on disk until something else needs to use the
   same filename.
  
   The ways to identify caching activity are:
    * access.log - checking that no HIT or REFRESH occurs on the relevant
   URLs, or
    * store.log - checking that objects with the URLs are all getting that
   RELEASED action, or
    * cache.log - setting debug_options 20,3 and watching for "store_miss
   prohibits caching" messages.
  
   There are unfortunately currently no easy debug messages linking which URL
   store_miss prohibited, to correlate the logs :-(
 
  Amos
 
 
 
 
  OK, I tried setting debug_options.
  This is part of what I found:
  http://pastebin.com/raw.php?i=4HTw9es3
  
  So it looks like it only blocks caching of the Vary header if it is
  followed by Accept-Encoding: gzip,deflate?
  
 
 Ah, your regex pattern was "." so if a Vary header exists at all it will
 block caching of that response.
 
 Since it's working now, use:
 
  acl hasVary rep_header Vary User-Agent
 
 
 Amos
 
Ok, thank you. How would I modify that to include
Vary: somethingelse, User-Agent
and Vary: User-Agent, somethingelse?
Thanks again!
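
Since rep_header matches a regular expression against the header value,
the unanchored pattern already covers those cases; a sketch:

  acl hasVary rep_header Vary User-Agent
  store_miss deny hasVary

The regex User-Agent matches anywhere in the value, so Vary: User-Agent,
Vary: somethingelse, User-Agent and Vary: User-Agent, somethingelse are
all caught.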


Re: [squid-users] vary headers

2015-05-03 Thread Hussam Al-Tayeb


 Sent: Sunday, May 03, 2015 at 8:04 PM
 From: Yuri Voinov yvoi...@gmail.com
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] vary headers

 Headers have their own ACLs.
 For example:
 
 # Strip User-Agent from Vary
 request_header_access Vary deny all
 request_header_replace Vary Accept-Encoding
 
Vary headers are reply headers, not request headers.
Anyway, that one requires enabling the http-violations build option, which I don't
want to do.
I simply don't want to cache them (i.e. I don't want them stored in the cache). How
can I do that?
Thank you.


Re: [squid-users] vary headers

2015-05-03 Thread Hussam Al-Tayeb
 Sent: Sunday, May 03, 2015 at 9:45 PM
 From: Yuri Voinov yvoi...@gmail.com
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] vary headers

 
 I understand what you want. But what for?
 

because a wget --server-response "http://someurl" operation that replies with a
"Vary: user-agent" header always results in a MISS even with the same wget
version (same user-agent) and computer. Instead, multiple copies of the file
are stored. That means those stored entries are redundant, and hence I would
rather not store them.


Re: [squid-users] vary headers

2015-05-03 Thread Hussam Al-Tayeb


 Sent: Sunday, May 03, 2015 at 9:55 PM
 From: Yuri Voinov yvoi...@gmail.com
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] vary headers

 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256
  
 It would be good enough to simply strip user-agent from the Vary header.
 
 http://www.fastly.com/blog/best-practices-for-using-the-vary-header/
 

That works at the destination server level, not at the proxy or client.

Plus, if someone were to try:
reply_header_access Vary deny all
reply_header_replace Vary Accept-Encoding
what are the implications? Would anything affect gzipped/non-gzipped content?

Nevertheless, I would rather simply not cache them than manipulate HTTP headers.
Do you know how to do that?


[squid-users] How do I no-cache the following url?

2015-04-30 Thread Hussam Al-Tayeb
What rule would I have to add to not cache the following url?

http://images.example.com/imageview.gif?anything

Everything up to the ? is an exact match.

So I want to not cache

http://images.example.com/imageview.gif?

http://images.example.com/imageview.gif?anything

http://images.example.com/imageview.gif?anything.gif

etc...

Thank you.
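
A sketch of one way to do this, using the cache deny approach and a
hypothetical ACL name:

  acl noCacheImg url_regex ^http://images\.example\.com/imageview\.gif\?
  cache deny noCacheImg

The regex is anchored at the start and requires the literal ?, so it
covers any query string after it.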





[squid-users] Cannot purge items that are not upstream anymore

2014-11-12 Thread Hussam Al-Tayeb
Hello. I have a problem with 'squidclient -m PURGE' and also the purge 
command.
They won't purge urls from disk that are not available online anymore or 
redirect to other links.


For example, 
http://static.firedrive.com/dynamic/previews/75/27577be2d6d86af20265734b64e8d563.jpg
which corresponds to /home/squid/00/BB/BBC0

Even purge -e static.firedrive.com -c /etc/squid/squid.conf -P  0x01 reads 
it but will not really remove it from disk.
Are such files stuck on disk forever?
What would be the correct way to clear them?


Re: [squid-users] Cannot purge items that are not upstream anymore

2014-11-12 Thread Hussam Al-Tayeb
On Thursday 13 November 2014 01:39:27 Amos Jeffries wrote:
 On 13/11/2014 12:17 a.m., Hussam Al-Tayeb wrote:
  Hello. I have a problem with 'squidclient -m PURGE' and also the
  purge command. They won't purge urls from disk that are not
  available online anymore or redirect to other links.
 
 PURGE was designed for use in HTTP/1.0. It does not handle HTTP/1.1
 negotiated content / variants at all well.
 
  For example,
  http://static.firedrive.com/dynamic/previews/75/27577be2d6d86af20265734b64e8d563.jpg
  which corresponds to /home/squid/00/BB/BBC0
 
  Even purge -e static.firedrive.com -c /etc/squid/squid.conf -P
  0x01 reads it but will not really remove it from disk. Are such
  files stuck on disk forever?
 
 No. Cache is a temporary storage location. Think of it as a buffer in
 the network.
 
 They will exist only until a) the storage space is needed for
 something else, or b) the timestamp in their Expires: header passes, or c) an
 origin server informs Squid they no longer exist.
 
 The PURGE method was a way to fake (c).
 
  What would the correct way to clear them?
 
 By obeying HTTP protocol. HTTP has built-in mechanisms for doing that
 automatically.
 
 Or you can just delete the disk file. Squid will auto-detect the
 removal at some point when it needs to use or delete the object. Until
 then it will just under-count the amount of used disk.
 
 Amos
 


head /home/squid/00/BB/BBC0 -n20

[binary swap metadata]
http://static.firedrive.com/dynamic/previews/75/27577be2d6d86af20265734b64e8d563.jpg
accept-encoding=gzip,%20deflate
HTTP/1.1 200 OK
Date: Tue, 26 Aug 2014 12:25:13 GMT
Content-Type: image/jpeg
Last-Modified: Wed, 31 Oct 2012 06:12:07 GMT
ETag: "5090c137-2efb"
Expires: Fri, 23 Aug 2024 12:25:13 GMT
Cache-Control: public, max-age=31536
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Origin,Referer,Accept-Encoding,Accept-
Language,Accept,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-
Modified-Since,Cache-Control,Content-Type,X-Forwarded-For,X-Forwarded-Proto
CF-Cache-Status: HIT
Vary: Accept-Encoding
Accept-Ranges: bytes
Server: cloudflare-nginx
CF-RAY: 160002c2e4730887-FRA
Content-Length: 12027
Connection: Keep-Alive
Set-Cookie: __cfduid=dfcf568ed46ef827243b3d6c342b3bdc41409055913424; 
expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.firedrive.com; HttpOnly


The Expires header is in the past.
"http://static.firedrive.com/dynamic/previews/75/27577be2d6d86af20265734b64e8d563.jpg"
Sending HTTP request ... done.
HTTP/1.1 404 Not Found
Server: squid
Mime-Version: 1.0
Date: Wed, 12 Nov 2014 12:54:13 GMT
Content-Length: 0
X-Cache: MISS from SERV1
X-Cache-Lookup: NONE from SERV1:3129
Via: 1.1 SERV1 (squid)
Connection: close

Is that why  squidclient -m PURGE -h 127.0.0.1 -p 3129 says not found in 
cache?



Re: [squid-users] Re: warning This cache hit is still fresh and more than 1 day old

2014-02-04 Thread Hussam Al-Tayeb
The same thing happens with
http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg

It generates a warning:

  Warning: 113 SERV1 (squid) This cache hit is still fresh and more than 1 day old

Any way to tell squid not to cache objects that would generate this
warning?

thank you



Re: [squid-users] Re: warning This cache hit is still fresh and more than 1 day old

2014-02-04 Thread Hussam Al-Tayeb
On Wednesday 05 February 2014 11:05:55 Amos Jeffries wrote:
 On 2014-02-05 07:38, Hussam Al-Tayeb wrote:
  same thing happens with
  http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg
  
  it generates a warning:
  
    Warning: 113 SERV1 (squid) This cache hit is still fresh and more than 1 day old
  
  Any way to tell squid not to cache objects that would generate this
  warning?
 
 You want to discard everything over 24hrs old out of your cache?
 
 That URL's headers say:
Date: Tue, 04 Feb 2014 21:57:39 GMT
Expires: Sat, 01 Mar 2014 09:30:32 GMT
Cache-Control: max-age=2592000
Last-Modified: Fri, 27 Sep 2013 00:49:39 GMT
Age: 476436
 
 The object is cacheable up until March, and is currently older than
 24hrs (5.5 days > 1 day). Just as Squid is reporting: still fresh and older
 than 1 day. One of the strange little mandatory behaviors in RFC 2616
 is adding that Warning.
 
 Amos

I suppose I am just having a hard time understanding the warning.
Is it because 5.5 days is more recent than the Last-Modified in 2013?
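
For reference, the arithmetic behind the warning, taking the quoted
headers at face value:

  echo $((476436 / 86400))    # Age in whole days: 5
  echo $((2592000 / 86400))   # max-age in days: 30

The object is both fresh (age < max-age) and more than one day old, which
is exactly the condition the Warning text describes.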




[squid-users] not caching a mime type

2014-01-31 Thread Hussam Al-Tayeb
What is the best way of expiring existing entries of a certain MIME type
and not caching them anymore?

Is this enough?
refresh_pattern application/ocsp-response 0 0% 0


I tried this too
acl ocsp-response rep_mime_type application/ocsp-response
cache deny ocsp-response
but requests of this mime type still get cached.

How do I go about doing this?
thank you.



Re: [squid-users] not caching a mime type

2014-01-31 Thread Hussam Al-Tayeb
On Friday 31 January 2014 13:45:35 Walter H. wrote:
 On 31.01.2014 13:36, Hussam Al-Tayeb wrote:
  I tried this too
  acl ocsp-response rep_mime_type application/ocsp-response
  cache deny ocsp-response
  but requests of this mime type still get cached.
 
 are you talking about OCSP requests or OCSP responses?

Addresses such as this:

http://ocsp.usertrust.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBR8sWZUnKvbRO5iJhat9GV793rVlAQUrb2YejS0Jvf6xCZU7wO94CTLVBoCEAdvEkaBRZwo1UjWl8QOABs%3D

wget says:
  Content-Type: application/ocsp-response

I would like to expire existing entries and not cache those anymore.
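
A hedged sketch for Squid 3.5 or later: cache deny is evaluated at
request time, before the reply (and its MIME type) exists, which would
explain why the rep_mime_type ACL above has no effect there; the
reply-time store_miss directive is meant for this case.

  acl ocsp rep_mime_type application/ocsp-response
  store_miss deny ocsp

Existing entries would still have to expire or be purged separately.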



Re: [squid-users] Re: warning This cache hit is still fresh and more than 1 day old

2014-01-12 Thread Hussam Al-Tayeb
On Sunday 12 January 2014 19:32:01 RW wrote:
 On Sat, 11 Jan 2014 20:28:59 +0200
 
 I wouldn't have thought so.

It was because wget --server-response was printing that warning. Then
when I purged that file from the cache, the unvalidated entry disappeared.



Re: [squid-users] possible typo in source code

2014-01-12 Thread Hussam Al-Tayeb
On Sunday 12 January 2014 21:16:23 Ralf Hildebrandt wrote:
 * Hussam Al-Tayeb hus...@visp.net.lb:
  second line. squid 3.4.2
 
 oops :/ it's the second line in
 squid-3.4.2-20131231-r13067/compat/os/hpux.h
 
 And since SQUID_OS_PHUX_H is not being used anywhere else, I guess
 this is ok:
 
 --- compat/os/hpux.h  2013-12-31 06:31:22.0 +0100
 +++ compat/os/hpux.h  2014-01-12 21:15:24.199343270 +0100
 @@ -1,5 +1,5 @@
  #ifndef SQUID_OS_HPUX_H
 -#define SQUID_OS_PHUX_H
 +#define SQUID_OS_HPUX_H
 
 #if _SQUID_HPUX_

They already fixed it in trunk and the 3.4 branch. Thank you :)



[squid-users] squidclient PURGE is not working

2014-01-11 Thread Hussam Al-Tayeb
This is what I tried:
squidclient PURGE -m http://i.imgur.com/OtvOiYIb.jpg
Sending HTTP request ... done.

relevant squid.conf entry

acl PURGE method PURGE
http_access allow PURGE localhost hussam
http_access deny PURGE


The file is not purged.
Any ideas?



[squid-users] Re: squidclient PURGE is not working

2014-01-11 Thread Hussam Al-Tayeb
On Saturday 11 January 2014 19:12:25 you wrote:
 This is what I tried:
 squidclient PURGE -m http://i.imgur.com/OtvOiYIb.jpg
 Sending HTTP request ... done.
 
 relevant squid.conf entry
 
 acl PURGE method PURGE
 http_access allow PURGE localhost hussam
 http_access deny PURGE
 
 
 The file is not purged.
 Any ideas?

I got it. changing the rule to

acl PURGE method PURGE
http_access allow PURGE localhost
http_access allow PURGE hussam
http_access deny PURGE

fixed it.
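
The underlying rule: ACL names listed on a single http_access line are
ANDed, while separate lines are ORed. So the original line only allowed
PURGE requests that matched both localhost and hussam at once:

  http_access allow PURGE localhost hussam   # PURGE AND localhost AND hussam
  http_access allow PURGE localhost          # PURGE AND localhost ...
  http_access allow PURGE hussam             # ... OR PURGE AND hussam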



[squid-users] warning This cache hit is still fresh and more than 1 day old

2014-01-11 Thread Hussam Al-Tayeb
Sometimes wgetting something that is already cached gets a warning "This
cache hit is still fresh and more than 1 day old".
What causes this?

Also, a while ago I complained about unvalidated objects in cache.log.
The above warning seems to be the reason behind those unvalidated
objects.

For example, if it says 530420 Objects loaded but Validated 530419
Entries, the 1 object that failed validation is the one that gives the "This
cache hit is still fresh and more than 1 day old" warning.
Using squidclient to PURGE that one entry from the cache removes the
unvalidated entry.
I can tell which is the unvalidated entry because when it gets requested
again, a SWAP_FAIL is logged in access.log.

I am using the default refresh patterns.
What causes "This cache hit is still fresh and more than 1 day old"
warnings?

thank you.
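
One way to hunt the entry down, assuming the default access.log location
and format:

  grep TCP_SWAPFAIL /var/log/squid/access.log

TCP_SWAPFAIL_MISS is the tag Squid logs when a supposedly cached object
cannot be read back from disk, so the URL on that line is the culprit.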



Re: [squid-users] squidclient PURGE is not working

2014-01-11 Thread Hussam Al-Tayeb
On Saturday 11 January 2014 21:36:36 Eliezer Croitoru wrote:
 Hey Hussam,
 
 I want to make sure again.
 What version of squid is it?
 
 Did you try to strictly re-validate the object before purging it?
 
 Eliezer
 


3.4.2

but changing from

http_access allow PURGE localhost hussam

to

http_access allow PURGE localhost 
http_access allow PURGE hussam

somehow fixed it.



[squid-users] possible typo in source code

2014-01-11 Thread Hussam Al-Tayeb
in compat/os/hpux.h

#ifndef SQUID_OS_HPUX_H
#define SQUID_OS_PHUX_H

should be

#ifndef SQUID_OS_HPUX_H
#define SQUID_OS_HPUX_H
?

(change PHUX to HPUX)



Re: [squid-users] possible typo in source code

2014-01-11 Thread Hussam Al-Tayeb
On Saturday 11 January 2014 22:34:06 Eliezer Croitoru wrote:
 What line?
 What version of squid?
 
 Eliezer
 
 On 11/01/14 22:05, Hussam Al-Tayeb wrote:
  in compat/os/hpux.h
  
  #ifndef SQUID_OS_HPUX_H
  #define SQUID_OS_PHUX_H
  
  should be
  
  #ifndef SQUID_OS_HPUX_H
  #define SQUID_OS_HPUX_H
  ?
  
  (change HPUX to HPUX)

second line. squid 3.4.2



Re: [squid-users] Re: squidclient PURGE is not working

2014-01-11 Thread Hussam Al-Tayeb
On Sunday 12 January 2014 14:30:35 Amos Jeffries wrote:
 On 12/01/2014 6:42 a.m., Hussam Al-Tayeb wrote:
  On Saturday 11 January 2014 19:12:25 you wrote:
  This is what I tried:
  squidclient PURGE -m http://i.imgur.com/OtvOiYIb.jpg
  Sending HTTP request ... done.
  
  relevant squid.conf entry
  
  acl PURGE method PURGE
  http_access allow PURGE localhost hussam
  http_access deny PURGE
  
  
  The file is not purged. Any ideas?
  
  I got it. Changing the rule to
  
  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access allow PURGE hussam
  http_access deny PURGE
  
  fixed it.
 
 Either your purge requests were not coming from the localhost IPs or
 the localhost ACL is not matching the IP properly.
 
 squidclient should be using ::1 by default. Is this a RHEL build of
 Squid with their patch to stop ::1 being considered as localhost?
 
 Amos

Nope, Arch Linux (3.4.2 release), but I make my own builds because it
can sometimes take them weeks to upgrade.



Re: [squid-users] Wondering Why This Isn't Caching

2013-12-14 Thread Hussam Al-Tayeb
On Friday 13 December 2013 11:51:05 Martin Sperl wrote:
 Here the back-ported patch to 3.4.1, which we will be testing internally -
 until it gets incorporated into the 3.4 release tree...
 
 Initial tests show that it works as expected - the vary-response-header is
 now handled correctly...
 
 Thanks,
   Martin
 
 Here the modified patch based on
 http://bugs.squid-cache.org/show_bug.cgi?id=3806 which applies against the
 clean tar ball of 3.4.1:
 
 === modified file 'src/store.cc'
 --- src/store.cc  2013-02-16 02:26:31 +
 +++ src/store.cc  2013-03-25 10:51:54 +
 @@ -750,7 +750,7 @@
          StoreEntry *pe = storeCreateEntry(mem_obj->url, mem_obj->log_url,
              request->flags, request->method);
          /* We are allowed to do this typecast */
          HttpReply *rep = new HttpReply;
 -        rep->setHeaders(Http::scOkay, "Internal marker object",
              "x-squid-internal/vary", -1, -1, squid_curtime + 10);
 +        rep->setHeaders(Http::scOkay, "Internal marker object",
              "x-squid-internal/vary", 0, -1, squid_curtime + 10);
          vary = mem_obj->getReply()->header.getList(HDR_VARY);
          if (vary.size()) {
 
 Various fixes making Vary caching work better.
 More work is needed to re-enable shared memory caching of Vary responses.
 
 bag5s r12741: Do not start storing the vary marker object until its key
 becomes public.
 bag5s r12742: Log failed (due to Vary object loop or URL mismatch) hits
 as TCP_MISSes.
 bag5s r12743: Refuse to cache Vary-controlled objects in shared memory
 (for now).
 
 === modified file 'src/MemStore.cc'
 --- src/MemStore.cc   2012-10-16 00:18:09 +
 +++ src/MemStore.cc   2013-11-08 22:02:09 +
 @@ -293,40 +293,46 @@ MemStore::considerKeeping(StoreEntry &e)
      }
 
      assert(e.mem_obj);
 
      const int64_t loadedSize = e.mem_obj->endOffset();
      const int64_t expectedSize = e.mem_obj->expectedReplySize();
 
      // objects of unknown size are not allowed into memory cache, for now
      if (expectedSize < 0) {
          debugs(20, 5, HERE << "Unknown expected size: " << e);
          return;
      }
 
      // since we copy everything at once, we can only keep fully loaded entries
      if (loadedSize != expectedSize) {
          debugs(20, 7, HERE << "partially loaded: " << loadedSize << " != " << expectedSize);
          return;
      }
 
 +    if (e.mem_obj->vary_headers) {
 +        // XXX: We must store/load SerialisedMetaData to cache Vary in RAM
 +        debugs(20, 5, "Vary not yet supported: " << e.mem_obj->vary_headers);
 +        return;
 +    }
 +
      keep(e); // may still fail
  }
 
  bool
  MemStore::willFit(int64_t need) const
  {
      return need <= static_cast<int64_t>(Ipc::Mem::PageSize());
  }
 
  /// allocates map slot and calls copyToShm to store the entry in shared memory
  void
  MemStore::keep(StoreEntry &e)
  {
      if (!map) {
          debugs(20, 5, HERE << "No map to mem-cache " << e);
          return;
      }
 
      sfileno index = 0;
      Ipc::StoreMapSlot *slot = map->openForWriting(reinterpret_cast<const cache_key *>(e.key), index);
 
 === modified file 'src/client_side_reply.cc'
 --- src/client_side_reply.cc  2013-10-01 23:21:17 +
 +++ src/client_side_reply.cc  2013-11-08 22:02:09 +
 @@ -470,71 +470,73 @@ clientReplyContext::cacheHit(StoreIOBuff
          /* the store couldn't get enough data from the file for us to id
           * the object
           */
          /* treat as a miss */
          http->logType = LOG_TCP_MISS;
          processMiss();
          return;
      }
 
      assert(!EBIT_TEST(e->flags, ENTRY_ABORTED));
      /* update size of the request */
      reqsize = result.length + reqofs;
 
      /*
       * Got the headers, now grok them
       */
      assert(http->logType == LOG_TCP_HIT);
 
      if (strcmp(e->mem_obj->url, http->request->storeId()) != 0) {
          debugs(33, DBG_IMPORTANT, "clientProcessHit: URL mismatch, '" <<
              e->mem_obj->url << "' != '" << http->request->storeId() << "'");
 +        http->logType = LOG_TCP_MISS; // we lack a more precise LOG_*_MISS code
          processMiss();
          return;
      }
 
      switch (varyEvaluateMatch(e, r)) {
 
      case VARY_NONE:
          /* No variance detected. Continue as normal */
          break;
 
      case VARY_MATCH:
          /* This is the correct entity for this request. Continue */
          debugs(88, 2, "clientProcessHit: Vary MATCH!");
          break;
 
      case VARY_OTHER:
          /* This is not the correct entity for this request. We need
           * to requery the cache.
           */
          removeClientStoreReference(&sc, http);
          e = NULL;
          /* Note: varyEvalyateMatch updates the request with vary information
           * so we only get here once. (it also takes care of cancelling loops) */
          debugs(88, 2, "clientProcessHit: Vary detected!");
          clientGetMoreData(ourNode, http);
          return;
 
      case VARY_CANCEL:
          /* varyEvaluateMatch found a object loop. Process as miss */
          debugs(88, 

[squid-users] unvalidated objects.

2013-12-07 Thread Hussam Al-Tayeb
I have something like this:
2013/12/08 00:18:50| Done reading /home/squid swaplog (293760 entries)
2013/12/08 00:18:50| Finished rebuilding storage from disk.
2013/12/08 00:18:50|293760 Entries scanned
2013/12/08 00:18:50| 0 Invalid entries.
2013/12/08 00:18:50| 0 With invalid flags.
2013/12/08 00:18:50|293760 Objects loaded.
2013/12/08 00:18:50| 0 Objects expired.
2013/12/08 00:18:50| 0 Objects cancelled.
2013/12/08 00:18:50| 0 Duplicate URLs purged.
2013/12/08 00:18:50| 0 Swapfile clashes avoided.
2013/12/08 00:18:50|   Took 1.00 seconds (294266.73 objects/sec).
2013/12/08 00:18:50| Beginning Validation Procedure
2013/12/08 00:18:50|   262144 Entries Validated so far.
2013/12/08 00:18:50|   Completed Validation Procedure
2013/12/08 00:18:50|   Validated 293759 Entries
2013/12/08 00:18:50|   store_swap_size = 18411592.00 KB
2013/12/08 00:18:50| storeLateRelease: released 0 objects

This means 1 object (293760 - 293759 = 1) was not validated.

- Can squid still eventually automatically purge that 1 object from
disk through aging or something?
- Any way to extract through some debug option what that object is?

Yes I know it is just one file but I would like to keep the cache clean.


[squid-users] not caching a certain domain

2013-08-11 Thread Hussam Al-Tayeb
How do I force quick expiration of everything already cached from
*.microsoft.com
and make everything in the future from *.microsoft.com not cached?
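
A sketch of one way, assuming a plain dstdomain ACL is acceptable:

  # do not cache anything from the domain from now on
  acl msdom dstdomain .microsoft.com
  cache deny msdom
  # treat anything already cached from there as immediately stale
  refresh_pattern -i \.microsoft\.com/ 0 0% 0

refresh_pattern only controls staleness, so already-stored objects still
occupy disk until they are revalidated or evicted.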


Re: [squid-users] not caching a certain domain

2013-08-11 Thread Hussam Al-Tayeb
On Sunday 11 August 2013 13:41:26 Antony Stone wrote:
 http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#Can_I_make_Squid_go_direct_for_some_sites.3F


Ok, thank you :)


Re: [squid-users] reading swap.state file

2013-08-07 Thread Hussam Al-Tayeb
On Wednesday 07 August 2013 19:48:33 Amos Jeffries wrote:
 On 7/08/2013 5:51 p.m., Hussam Al-Tayeb wrote:
  this is what I have right now.
  cache.log says
  2013/08/07 08:47:51|222094 Entries scanned
  2013/08/07 08:47:51| 0 Invalid entries.
  2013/08/07 08:47:51| 0 With invalid flags.
  2013/08/07 08:47:51|222094 Objects loaded.
  
  [root@hades squid]# find . -type f | wc -l
  222087
  
  if those objects that are not on disk get requested again, will squid
  fetch
  them from parent server and try again to cache them to disk?
 
 Yes it will.
 
 Amos

Thank you Amos. That is what I was looking for :) Now I don't have to worry :)


Re: [squid-users] reading swap.state file

2013-08-06 Thread Hussam Al-Tayeb
On Tuesday 06 August 2013 14:13:57 Eliezer Croitoru wrote:

 Don't be afraid to lose a couple of cached files...
 Squid recovers from it very fast..
 
 Eliezer

Not very fast. If I leave it like that, I end up with 50 to 200 entries in
swap.state every week that are not actually on disk.

I wish we had some sort of 'squid fsck' option which checked for such 
inconsistencies. 


Re: [squid-users] reading swap.state file

2013-08-06 Thread Hussam Al-Tayeb
On Wednesday 07 August 2013 03:07:39 Eliezer Croitoru wrote:
 On 08/07/2013 12:25 AM, Hussam Al-Tayeb wrote:
  On Tuesday 06 August 2013 14:13:57 Eliezer Croitoru wrote:
  Don't be afraid to lose a couple of cached files...
  Squid recovers from it very fast..
  
  Eliezer
  
  not very fast. if i leave it like that, i end up with 50 to 200 files in
  swap.state every week that are not actually on disk.
  
  I wish we had some sort of 'squid fsck' option which checked for such
  inconsistencies.
 
 Well you can follow on that in the store.log..
 This is why these are there.
 You can verify which files are being used as files and not as cache objects.
 Then you can make sure that all the files are scheduled for removal at
 the right time and place.
 It's kind of a simple tool, but it needs to be designed properly to make
 sure it won't miscalculate whether there is indeed a bug in squid or the OS..
 
 If you are up to the task of sketching the pseudo code for this operation
 it will be nice to see this kind of logs analysed.
 
 Eliezer
This is what I have right now.
cache.log says
2013/08/07 08:47:51|222094 Entries scanned
2013/08/07 08:47:51| 0 Invalid entries.
2013/08/07 08:47:51| 0 With invalid flags.
2013/08/07 08:47:51|222094 Objects loaded.

[root@hades squid]# find . -type f | wc -l
222087

if those objects that are not on disk get requested again, will squid fetch 
them from parent server and try again to cache them to disk?




Re: [squid-users] reading swap.state file

2013-08-05 Thread Hussam Al-Tayeb
On Monday 05 August 2013 10:12:41 Amos Jeffries wrote:
 On 5/08/2013 1:11 a.m., Hussam Al-Tayeb wrote:
  how can I parse the swap.state file for inconsistencies?
  i.e. files referenced in swap.state but not in disk cache.
 
 swap.state is a journal of transactions. It includes references to
 operations that occured on old deleted files as part of its normal
 content. Squid handles any such files without the delete record
 automatically you do not need to worry about it.
 
  Or files on disk but not referenced in swap.state. it seems squid does not
  know how to shut down correctly if one of the users is viewing a youtube
  video.
 What do you mean by that last one? If Squid is shutting down it waits
 shutdown_timeout for clients to finish up (a long video would not do so)
 then terminates all remaining client connections and stops. None of them
 get written to the log, and on restart the corrupted files will be
 overwritten.
 
 Amos

That's not the case here. I have not set shutdown_timeout, so it should be the
default. Now, on kernel updates, I need to restart the server.

Sometimes I get more files on disk than cache.log says (this very rarely
happens).

And sometimes cache.log claims that swap.state has more referenced files than
are actually on disk. This happens almost every single time I run squid -k shutdown
and then restart my server. The access.log file says the last thing being downloaded
was a youtube video.



Re: [squid-users] reading swap.state file

2013-08-05 Thread Hussam Al-Tayeb
On Tuesday 06 August 2013 01:27:02 Amos Jeffries wrote:
 My earlier description is what happens on shutdown. If you have not set
 shutdown_timeout yourself then the default of 30 seconds will be when
 the timeout stages happen that is all. The process *will* happen
 regardless of when the timeout occurs. Before the timeout Squid is
 simply waiting for existing clients to finish and clearing up as much
 disk I/O as possible.
 It is good to hear that the cache garbage is a rare event, that means
 your Squid is getting enough cycles on shutdown to finish the disk I/O
 erase events at least.
 
 I expect that the swap.state claiming more files than exist is something
 to do with your OS caching disk I/O.
 
 Amos

Would manually setting the shutdown_timeout value to over 30 seconds (let's 
say 180 seconds) and typing 'sync' before shutting down squid to restart the 
server help with the OS caching disk i/o issue?
I am expecting the 3.10.5 kernel update anytime now so it would be a good time 
to test things.
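
For concreteness, the test setup being proposed here, with the 180
seconds as a hypothetical value:

  # squid.conf
  shutdown_timeout 180

  # before the reboot
  squid -k shutdown
  sync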


Re: [squid-users] reading swap.state file

2013-08-05 Thread Hussam Al-Tayeb
On Monday 05 August 2013 21:40:40 Eliezer Croitoru wrote:
 On 08/05/2013 04:47 PM, Hussam Al-Tayeb wrote:
  Would manually setting the shutdown_timeout value to over 30 seconds
  (let's
  say 180 seconds) and typing 'sync' before shutting down squid to restart
  the server help with the OS caching disk i/o issue?
  I am expecting the 3.10.5 kernel update anytime now so it would be a good
  time to test things.
 
 The above is kind of not really needed to my understanding.
 The sync is OS level operation which squid can but not directly needs to
 control.
 
 In this level squid actually gives the OS the swapin\swapout results to
 schedule a write and read operations.
 From squid point of view the write operation was done already once the
 connection was ended and in a case that the mem_cache max size is
 exceeded and the cache_dir is the only option for saving the object.
 
 Once you shutdown a running OS in the nice way not just disconnecting
 the cable the OS does a sync as part of the clean shutdown.
 
 The reasons I can think of a file that was yet to be removed is that it
 was not scheduled by the OS or squid.
 What version of squid are you using?
 
 Eliezer
version 3.3.8


Re: [squid-users] reading swap.state file

2013-08-05 Thread Hussam Al-Tayeb
On Monday 05 August 2013 21:40:40 Eliezer Croitoru wrote:
 On 08/05/2013 04:47 PM, Hussam Al-Tayeb wrote:
  Would manually setting the shutdown_timeout value to over 30 seconds
  (let's
  say 180 seconds) and typing 'sync' before shutting down squid to restart
  the server help with the OS caching disk i/o issue?
  I am expecting the 3.10.5 kernel update anytime now so it would be a good
  time to test things.
 
 The above is kind of not really needed to my understanding.
 The sync is OS level operation which squid can but not directly needs to
 control.
 
 In this level squid actually gives the OS the swapin\swapout results to
 schedule a write and read operations.
 From squid point of view the write operation was done already once the
 connection was ended and in a case that the mem_cache max size is
 exceeded and the cache_dir is the only option for saving the object.
 
 Once you shutdown a running OS in the nice way not just disconnecting
 the cable the OS does a sync as part of the clean shutdown.
 
 The reasons I can think of a file that was yet to be removed is that it
 was not scheduled by the OS or squid.
 What version of squid are you using?
 
 Eliezer
squid version 3.3.8
I squid -k shutdown before restarting for kernel updates, so there are no bad
shutdowns/poweroffs.


[squid-users] reading swap.state file

2013-08-04 Thread Hussam Al-Tayeb
How can I parse the swap.state file for inconsistencies?
i.e. files referenced in swap.state but not in the disk cache,
or files on disk but not referenced in swap.state. It seems squid does not know
how to shut down correctly if one of the users is viewing a youtube video.


Re: [squid-users] Problem with compile squid 3.4.0.1 on RHEL6 x64

2013-07-31 Thread Hussam Al-Tayeb
On Wednesday 31 July 2013 01:52:35 Kris Glynn wrote:
 Hi,
 
 I'm using a squid.spec from squid 3.3 to build 3.4.0.1 but it fails with
 /usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32
 against `.rodata' can not be used when making a shared object; recompile
 with -fPIC ../snmplib/libsnmplib.a: could not read symbols: Bad value
 
 libtool: link: g++ -I/usr/include/libxml2 -Wall -Wpointer-arith
 -Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT -O2 -g -fPIC
 -fpie -march=native -std=c++0x .libs/squidS.o -fPIC -pie -Wl,-z -Wl,relro
 -Wl,-z -Wl,now -o squid AclRegs.o AuthReg.o AccessLogEntry.o AsyncEngine.o
 YesNoNone.o cache_cf.o CacheDigest.o cache_manager.o carp.o cbdata.o
 ChunkedCodingParser.o client_db.o client_side.o client_side_reply.o
 client_side_request.o BodyPipe.o clientStream.o CompletionDispatcher.o
 ConfigOption.o ConfigParser.o CpuAffinity.o CpuAffinityMap.o
 CpuAffinitySet.o debug.o delay_pools.o DelayId.o DelayBucket.o
 DelayConfig.o DelayPool.o DelaySpec.o DelayTagged.o DelayUser.o
 DelayVector.o NullDelayId.o ClientDelayConfig.o disk.o
 DiskIO/DiskIOModule.o DiskIO/ReadRequest.o DiskIO/WriteRequest.o dlink.o
 dns_internal.o DnsLookupDetails.o errorpage.o ETag.o event.o EventLoop.o
 external_acl.o ExternalACLEntry.o FadingCounter.o fatal.o fd.o fde.o
 filemap.o fqdncache.o ftp.o FwdState.o gopher.o helper.o
 HelperChildConfig.o HelperReply.o htcp.o http.o HttpHdrCc.o HttpHdrRange.o
 HttpHdrSc.o HttpHdrScTarget.o HttpHdrContRange.o HttpHeader.o
 HttpHeaderTools.o HttpBody.o HttpMsg.o HttpParser.o HttpReply.o
 RequestFlags.o HttpRequest.o HttpRequestMethod.o icp_v2.o icp_v3.o int.o
 internal.o ipc.o ipcache.o SquidList.o main.o MasterXaction.o mem.o
 mem_node.o MemBuf.o MemObject.o mime.o mime_header.o multicast.o
 neighbors.o Notes.o Packer.o Parsing.o pconn.o peer_digest.o
 peer_proxy_negotiate_auth.o peer_select.o peer_sourcehash.o peer_userhash.o
 redirect.o refresh.o RemovalPolicy.o send-announce.o MemBlob.o snmp_core.o
 snmp_agent.o SquidMath.o SquidNew.o stat.o StatCounters.o StatHist.o
 String.o StrList.o stmem.o store.o StoreFileSystem.o store_io.o
 StoreIOState.o store_client.o store_digest.o store_dir.o store_key_md5.o
 store_log.o store_rebuild.o store_swapin.o store_swapmeta.o store_swapout.o
 StoreMeta.o StoreMetaMD5.o StoreMetaSTD.o StoreMetaSTDLFS.o
 StoreMetaUnpacker.o StoreMetaURL.o StoreMetaVary.o StoreStats.o
 StoreSwapLogData.o Server.o SwapDir.o MemStore.o time.o tools.o tunnel.o
 unlinkd.o url.o URLScheme.o urn.o wccp.o wccp2.o whois.o wordlist.o
 LoadableModule.o LoadableModules.o DiskIO/DiskIOModules_gen.o err_type.o
 err_detail_type.o globals.o hier_code.o icp_opcode.o LogTags.o lookup_t.o
 repl_modules.o swap_log_op.o DiskIO/AIO/AIODiskIOModule.o
 DiskIO/Blocking/BlockingDiskIOModule.o
 DiskIO/DiskDaemon/DiskDaemonDiskIOModule.o
 DiskIO/DiskThreads/DiskThreadsDiskIOModule.o
 DiskIO/IpcIo/IpcIoDiskIOModule.o DiskIO/Mmapped/MmappedDiskIOModule.o
 -Wl,--export-dynamic  auth/.libs/libacls.a ident/.libs/libident.a
 acl/.libs/libacls.a acl/.libs/libstate.a auth/.libs/libauth.a libAIO.a
 libBlocking.a libDiskDaemon.a libDiskThreads.a libIpcIo.a libMmapped.a
 acl/.libs/libapi.a base/.libs/libbase.a ./.libs/libsquid.a ip/.libs/libip.a
 fs/.libs/libfs.a ipc/.libs/libipc.a mgr/.libs/libmgr.a anyp/.libs/libanyp.a
 comm/.libs/libcomm.a eui/.libs/libeui.a http/.libs/libsquid-http.a
 icmp/.libs/libicmp.a icmp/.libs/libicmp-core.a log/.libs/liblog.a
 format/.libs/libformat.a repl/libheap.a repl/liblru.a -lpthread -lcrypt
 adaptation/.libs/libadaptation.a esi/.libs/libesi.a
 ../lib/libTrie/libTrie.a -lxml2 -lexpat ssl/.libs/libsslsquid.a
 ssl/.libs/libsslutil.a snmp/.libs/libsnmp.a ../snmplib/libsnmplib.a
 ../lib/.libs/libmisccontainers.a ../lib/.libs/libmiscencoding.a
 ../lib/.libs/libmiscutil.a -lssl -lcrypto -lgssapi_krb5 -lkrb5 -lk5crypto
 -lcom_err -L/root/rpmbuild/BUILD/squid-3.4.0.1/compat -lcompat-squid -lm
 -lnsl -lresolv -lcap -lrt -ldl -L/root/rpmbuild/BUILD/squid-3.4.0.1 -lltdl
 /usr/bin/ld: ../snmplib/libsnmplib.a(snmp_vars.o): relocation R_X86_64_32
 against `.rodata' can not be used when making a shared object; recompile
 with -fPIC ../snmplib/libsnmplib.a: could not read symbols: Bad value
 collect2: ld returned 1 exit status
 libtool: link: rm -f .libs/squidS.o
 make[3]: *** [squid] Error 1
 make[3]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
 make[2]: *** [all-recursive] Error 1
 make[2]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
 make[1]: *** [all] Error 2
 make[1]: Leaving directory `/root/rpmbuild/BUILD/squid-3.4.0.1/src'
 make: *** [all-recursive] Error 1
 
 Any ideas?
 
 
 
 


I don't know much about spec files, but try unsetting CFLAGS and CXXFLAGS before
configuring.
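
A sketch of what that looks like when building by hand (rpmbuild users
would make the equivalent change in the spec's %build section):

  unset CFLAGS CXXFLAGS
  ./configure
  make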


Re: [squid-users] file count mismatch

2013-07-04 Thread Hussam Al-Tayeb
On Thursday 04 July 2013 17:55:32 Amos Jeffries wrote:
 Interesting. If you scan swap.state and pull out the fileno field it
 should map to the directory L1/L2/file structure. Can you identify
 which objects the overlap is about and what swap.state is saying about them?

How can I do that?
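
A sketch of that mapping for ufs/aufs cache_dirs, assuming the default
L1=16, L2=256 layout; it reproduces the 04/D1/0004D122-style paths seen
elsewhere in these threads:

  # fileno -> L1/L2/file path, in bash arithmetic
  filn=$((0x0004D122))
  printf '%02X/%02X/%08X\n' $(( (filn / 256 / 256) % 16 )) $(( (filn / 256) % 256 )) $filn
  # prints 04/D1/0004D122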


[squid-users] file count mismatch

2013-07-02 Thread Hussam Al-Tayeb
this is squid 3.3.6

[root@LARS squid]# tail -n 30 /var/log/squid/cache.log
2013/07/03 06:34:59| Target number of buckets: 40392
2013/07/03 06:34:59| Using 65536 Store buckets
2013/07/03 06:34:59| Max Mem  size: 262144 KB
2013/07/03 06:34:59| Max Swap size: 1024 KB
2013/07/03 06:34:59| Rebuilding storage in /home/squid (clean log)
2013/07/03 06:34:59| Using Least Load store dir selection
2013/07/03 06:34:59| Set Current Directory to /home/squid
2013/07/03 06:34:59| Loaded Icons.
2013/07/03 06:34:59| HTCP Disabled.
2013/07/03 06:34:59| Squid plugin modules loaded: 0
2013/07/03 06:34:59| Accepting NAT intercepted HTTP Socket connections at 
local=0.0.0.0:3128 remote=[::] FD 16 flags=41
2013/07/03 06:34:59| Accepting HTTP Socket connections at local=127.0.0.1:3129 
remote=[::] FD 17 flags=9
2013/07/03 06:34:59| Store rebuilding is 1.33% complete
2013/07/03 06:35:00| Done reading /home/squid swaplog (299709 entries)
2013/07/03 06:35:00| Finished rebuilding storage from disk.
2013/07/03 06:35:00|299709 Entries scanned
2013/07/03 06:35:00| 0 Invalid entries.
2013/07/03 06:35:00| 0 With invalid flags.
2013/07/03 06:35:00|299709 Objects loaded.
2013/07/03 06:35:00| 0 Objects expired.
2013/07/03 06:35:00| 0 Objects cancelled.
2013/07/03 06:35:00| 0 Duplicate URLs purged.
2013/07/03 06:35:00| 0 Swapfile clashes avoided.
2013/07/03 06:35:00|   Took 1.36 seconds (220842.74 objects/sec).
2013/07/03 06:35:00| Beginning Validation Procedure
2013/07/03 06:35:00|   262144 Entries Validated so far.
2013/07/03 06:35:00|   Completed Validation Procedure
2013/07/03 06:35:00|   Validated 299707 Entries
2013/07/03 06:35:00|   store_swap_size = 8047432.00 KB
2013/07/03 06:35:01| storeLateRelease: released 0 objects
[root@LARS squid]# find . -type f | wc -l
299706

This means there are 299706 files in /home/squid including the swap.state file,
so a total of 299705 objects on disk,
but swap.state thinks there are 299709 files.

Another thing I found:
cat /var/log/squid/cache.log | grep WARNING | grep swapin

2013/06/08 23:35:45| WARNING: 1 swapin MD5 mismatches
2013/06/09 00:28:59| WARNING: 10 swapin MD5 mismatches
2013/06/09 12:20:56| WARNING: 1 swapin MD5 mismatches
2013/06/09 12:25:46| WARNING: 10 swapin MD5 mismatches
2013/06/09 14:40:18| WARNING: 1 swapin MD5 mismatches
2013/06/10 02:31:02| WARNING: 1 swapin MD5 mismatches
2013/06/10 13:00:37| WARNING: 1 swapin MD5 mismatches
2013/06/10 22:59:53| WARNING: 1 swapin MD5 mismatches
2013/06/12 14:41:23| WARNING: 1 swapin MD5 mismatches
2013/06/15 04:37:03| WARNING: 1 swapin MD5 mismatches
2013/06/28 07:07:33| WARNING: 1 swapin MD5 mismatches
2013/06/28 21:51:19| WARNING: 1 swapin MD5 mismatches
2013/06/29 23:41:49| WARNING: 1 swapin MD5 mismatches
2013/06/30 04:14:24| WARNING: 10 swapin MD5 mismatches
2013/06/30 19:10:49| WARNING: 1 swapin MD5 mismatches
2013/06/30 19:10:49| WARNING: 10 swapin MD5 mismatches
2013/07/02 13:44:11| WARNING: 1 swapin MD5 mismatches
2013/07/02 13:48:35| WARNING: 10 swapin MD5 mismatches
2013/07/02 23:27:32| WARNING: 100 swapin MD5 mismatches
2013/07/03 06:02:18| WARNING: 1 swapin MD5 mismatches
2013/07/03 06:20:43| WARNING: 1 swapin MD5 mismatches
2013/07/03 06:26:59| WARNING: 10 swapin MD5 mismatches


1) Is any of the above something to worry about?
2) Does squid resolve the file mismatch eventually as I reach the max size of
the cache dir?
3) The swapin MD5 mismatches problem: is it something I can fix? If so, how?

Any other information I can post to help detect where the problem is?


Re: [squid-users] how to block facebook using squid transparent with SSL support?

2013-05-17 Thread Hussam Al-Tayeb
On Friday 17 May 2013 09:21:55 Jose Junior wrote:
 At the company where I work, I need to block facebook. I can,
 but it affects the connection to other sites such as gmail.
 
 thank you very much

acl blockedurls dstdom_regex -i /etc/squid/squid.blockedurls
http_access deny blockedurls

add this to /etc/squid/squid.blockedurls

(^|\.)facebook\.com$

this blocks http://anything.facebook.com

but your users will still be able to access https://anything.facebook.com





Re: [squid-users] how to block facebook using squid transparent with SSL support?

2013-05-17 Thread Hussam Al-Tayeb
On Friday 17 May 2013 13:36:07 Delton wrote:
 Using dstdomain, http://www.facebook.com is blocked and you receive
 the Squid error page, but when accessing https://www.facebook what is
 displayed is the proxy server connection refused, not the Squid error page.
 
 Em 17/05/2013 11:57, Amos Jeffries escreveu:
  On 18/05/2013 1:41 a.m., Hussam Al-Tayeb wrote:
  On Friday 17 May 2013 09:21:55 Jose Junior wrote:
  Personnel, the company where I work she, I need to block facebook, I
  can but it affects the connection with other sites such as gmail
  
  thank you very much
  
  acl blockedurls dstdom_regex -i /etc/squid/squid.blockedurls
  http_access deny blockedurls
  
  add this to /etc/squid/squid.blockedurls
  
  (^|\.)facebook\.com$
  
  this blocks http://anything.facebook.com
  
  but your users will still be able to access
  https://anything.facebook.com
  
  Just doing these is *exactly* equivalent to the above:
   acl blockedurls dstdomain .facebook.com
   http_access deny blockedurls
  
  And both ways of writing it will block HTTPS traffic as well as HTTP.
  
  Amos

It depends on whether you are routing https traffic through squid or not.
Almost no ISP routes https traffic through proxies.


Re: [squid-users] Re: Cache compression

2013-05-13 Thread Hussam Al-Tayeb
On Monday 13 May 2013 02:04:16 babajaga wrote:
 Good idea.
 It should not be too complicated to modify storeUfsWrite/storeUfsRead,
 for example, to include some type of compression.
 However, the question is how effective it would be, as there are many graphic
 file types that are not easily compressible. So it would result in a lot of wasted
 CPU cycles.
 
 A simpler solution might be the usage of a compressing proxy on top of
 squid. Then squid would cache compressed files in any available FS, without any
 patches.

Why? It'll just cost CPU. Storage is cheap nowadays anyway; you can
just buy larger-capacity disks.


Re: [squid-users] ssd hardsik to the operation system , does it make difference ?

2013-05-12 Thread Hussam Al-Tayeb
On Sunday 12 May 2013 01:02:19 Ahmad wrote:
 hi,
 I want to ask about using an SSD disk for the squid operating system.
 
 Does it make a difference to squid performance? I'm asking about the OS
 disk, not the cache disks.

I personally wouldn't use an SSD for caching. SSDs have a large but
limited number of write operations. Use a regular mechanical disk, but choose ext4 for
the partition and a Linux 3.3+ kernel. I have had good experiences with that
combination.


[squid-users] invalid entries

2013-04-19 Thread Hussam Al-Tayeb
Once in a while, squid says:
20 Invalid entries
This means there are stale files on disk that can't be indexed in the swap file.
Any way to clean those files from disk without purging the whole cache?


Re: [squid-users] invalid entries

2013-04-19 Thread Hussam Al-Tayeb
On Friday 19 April 2013 18:23:52 Hussam Al-Tayeb wrote:
 Once in a while, squid says:
 20 Invalid entries
 This means there are stale files on disk that can't be indexed in the swap
 file. Any way to clean those files from disk without purging the whole
 cache?

It just happened again. It happens when I shut down squid while a user is
downloading from a website like youtube.


[squid-users] stale files on disk

2013-03-10 Thread Hussam Al-Tayeb
I am using squid 3.1.23 and not planning on migrating to a higher version for 
another few months.
squid says 8 Duplicate URLs purged.
It will not clear the actual stale files from disk,
so cache.log says there are 8 fewer files than the find command reports.
I fall into this situation once every few weeks, and I end up purging the whole
cache or restoring a backup of the disk cache.
Is there any way to clear the stale files from disk instead?