Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-27 Thread Yuri Voinov

I confirm: 3.5.10 has a normal cache-hit rate, versus 4.0.1, for the
same sites.

Something is broken in 4.x.

27.10.15 1:37, Amos Jeffries wrote:


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Yuri Voinov
I understand perfectly. However, did I change the proxy version and, at
the same time, did every site change its headers? If I now return to
version 3.4, everything will be as it was; I have already done that.

That is why I ask the question: what has changed so much that, with the
same configuration, I get a 10 times smaller cache-hit ratio?


26.10.15 16:29, Eliezer Croitoru wrote:

Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Eliezer Croitoru

Hey Yuri,

What have you tried so far to understand the issue?
From your basic question I was sure that you had run some tests on some
well-defined objects.
To assess the state of squid you would need some static objects and some
changing objects.
You would also need to test both with a simple UA, like a
script/wget/curl, and with a fully fledged UA such as
chrome/firefox/explorer/opera/safari.


I can try to give you a couple of cache-friendly sites which will show
you the basic state of squid.

In the past someone here asked how to cache the site: http://djmaza.info/
which by default is built from lots of static content such as html
pages, pictures and good css, and which also implements cache headers nicely.
I still do not know why wordpress main pages do not have valid cache
headers. What I mean by that is: is it not possible to have a page
cached for 60 seconds? How many updates can occur in 60 seconds? Would
a "must-revalidate" cost that much compared to what there is now?
(Maybe it does.)
Another one would be https://www.yahoo.com/ which is mostly
cache-friendly and serves its main page with "Cache-Control: no-store,
no-cache, private, max-age=0", since it's a news page which updates very
frequently.
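As a rough illustration of how a shared cache reads such a header, here is a simplified sketch of the RFC 7234 directive rules. This is not Squid's actual decision logic, which weighs many more factors:

```python
# Simplified view of how a shared cache interprets Cache-Control,
# roughly following RFC 7234. Illustration only, not Squid's algorithm.

def parse_cache_control(header: str) -> dict:
    """Split a Cache-Control header into a {directive: value} dict."""
    directives = {}
    for part in header.split(","):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name] = value or None
    return directives

def shared_cache_policy(header: str) -> str:
    cc = parse_cache_control(header)
    # no-store forbids storing anywhere; private forbids shared caches.
    if "no-store" in cc or "private" in cc:
        return "not storable by a shared cache"
    # no-cache allows storing, but requires revalidation on every reuse.
    if "no-cache" in cc:
        return "storable, but must revalidate before each reuse"
    return "storable"

# Yahoo's header as quoted above:
print(shared_cache_policy("no-store, no-cache, private, max-age=0"))
# The no-cache-only case this thread is about:
print(shared_cache_policy("no-cache"))
```

Note how `no-cache` alone still permits storing the object; it only forbids serving it without revalidation, which is the distinction discussed later in this thread.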


I was looking for cache testing subjects and I am still missing a couple
of examples/options.


Until now, what I have mainly used for basic tests was redbot.org and a
local copy of it for testing purposes.
While writing to you and looking for test subjects, I noticed that my
own web server poses a question for squid.
I have an example case which demonstrates nicely how to "break" the
squid cache.

So I have my website and the main page:
http://ngtech.co.il/
https://ngtech.co.il/ (self signed certificate)

Both would be cached properly by squid!
If for any reason I removed the Last-Modified header from the page, it
would become uncachable in squid (3.5.10) with default settings.
I accidentally turned ON the apache option to treat html as php, using
the apache configuration line:

AddType application/x-httpd-php .html .htm

which is recommended by the first google result for "html as php".

Once you try the next page (with this setting on):
http://ngtech.co.il/squidblocker/
https://ngtech.co.il/squidblocker/ (self signed certificate)

You will see that it is not being cached at all.
Redbot.org says the cache is allowed to assign its own freshness to the
object, but squid (3.5.10) will not cache it whatsoever, no matter what
I do.
When I remove the "html as php" tweak, the page responds with a
Last-Modified header and can be cached again.
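For context: for plain static files Apache derives Last-Modified from the file's mtime, and once requests are routed through the PHP handler that header disappears unless the script emits it itself. A hypothetical sketch of producing that value in the HTTP-date format a validator needs:

```python
# Sketch: build a Last-Modified value (RFC 7231 HTTP-date) from a file's
# mtime -- the same information Apache uses for plain static files.
# The path argument is hypothetical; any readable file works.
import os
from email.utils import formatdate

def last_modified_header(path: str) -> str:
    mtime = os.path.getmtime(path)
    return formatdate(mtime, usegmt=True)

# The format itself, shown for the Unix epoch:
print(formatdate(0, usegmt=True))  # Thu, 01 Jan 1970 00:00:00 GMT
```

Without such a header (or an ETag), a cache has no validator to revalidate against, which matches the uncachable behaviour described above.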


I am unsure what the culprit is, but I will ask about it in a separate
thread *if* I get no response here. (Sorry for partially top-posting.)


Eliezer


On 25/10/2015 21:29, Yuri Voinov wrote:

In a nutshell, I do not need possible explanations. I want to know: is
this a bug, or is it by design?

26.10.15 1:17, Eliezer Croitoru wrote:

Hey Yuri,

I am not sure whether you think that Squid version 4 having an extremely
low hit ratio is bad or not, but I can understand your view of things.

Usually I am redirecting to this page:

http://wiki.squid-cache.org/Features/StoreID/CollisionRisks#Several_real_world_examples


But this time I can proudly say that the squid project is doing things
the right way, even if that might not be understood by some.

Before you or anyone declares that there is a low hit ratio due to
something that is missing, let me try to put some perspective on how
things look in the real world.

A small thing from a nice day of mine:
I was sitting and talking with a friend of mine, an MD to be exact, and
while we talked I was comforting him about the wonders of computers.

He was complaining about how the software in the office moves so slowly
and how he has to wait for it to respond with results. I hesitated a
bit, but then asked him: "What would happen if some MD here in the
office received the wrong content/results for a patient from the
software?" He answered, terrified by the question: "He could make the
wrong decision!" Then I described to him what a good position he is in,
since he does not need to fear such scenarios.

In this same office Squid is used for many things, and it is crucial
that, besides the option to cache content, the ability to validate the
cache properly is set up right.


I do understand that there is a need for caches, and sometimes a cache
is crucial in order to give the application more CPU cycles or more RAM;
but sometimes the hunger for caching can override the actual requirement
for content integrity, and content must be re-validated from time to time.


I have seen a couple of times how a cache in a DB or at other levels
produces very bad and unwanted results, while I do understand some of
the complexity and caution that programmers apply when building all
sorts of systems with caches in them.


If you do want to understand more about the subject pick 

Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Amos Jeffries
On 27/10/2015 6:22 a.m., Yuri Voinov wrote:
> 
> Ah, ok:
> 
> We see in redbot.org this info in server response:
> 
>  Cache-Control: no-cache
> 

It also says "this content was negotiated but does not have an
appropriate Vary header". Which is marked as a protocol error.

And has a status code of 400 (unspecified error by the client).

And has passed through three non-Squid proxies without being cached
there either.


> 
> 
> So, what? 3.5.10 permits ignoring this; 4.0.x denies it.
> 

Rather a bold statement. Where are the cache.log lines saying that was
the decision Squid made?


> Squid decides?
> 

No, the content owner does.


> Maybe I'll decide what and how to cache in my setup?

You are just the caretaker of the information. It belongs to its
creators. What you can do with their property depends on what they allow
to be done with it.

HTTP is the legal rights granting methodology they chose to distribute
with. The creators have granted you/anyone the license to cache
(redistribute) that object. They did so via the badly named
Cache-Control:no-cache header. Which comes with the license condition
that the content be revalidated before redistribution.

In other words, the content owner(s) retain the right to veto any
recipient receiving their content or to provide alternative content at
any time.

[[ Given that it seems to flip between an error page and an image
depicting the internal design of a nuclear device - depending on where
in the world one views it from. It would seem that the behaviour is
probably intentional. ]]


Within your new right to cache and redistribute you then get to choose
how long for - on that particular item.


BTW: Revalidate does not lead to MISS. It leads to a REFRESH_MODIFIED or
REFRESH_UNMODIFIED.
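That distinction can be sketched as follows. This is a simplified illustration of cache revalidation and the resulting log tags, not Squid's actual code:

```python
# Simplified sketch of revalidation: the cache asks the origin whether
# its stored copy is still valid, and logs the outcome accordingly.

def conditional_headers(etag=None, last_modified=None) -> dict:
    """Headers a cache sends upstream to revalidate a stored response."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

def refresh_result(upstream_status: int) -> str:
    """Map the origin's answer to the corresponding Squid log tag."""
    if upstream_status == 304:
        return "REFRESH_UNMODIFIED"  # stored copy still valid; serve it
    return "REFRESH_MODIFIED"        # new entity replaces the stored one

print(refresh_result(304))  # REFRESH_UNMODIFIED
print(refresh_result(200))  # REFRESH_MODIFIED
```

In the 304 case only headers cross the wire, so a revalidated hit is still far cheaper than a full MISS.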


Amos


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Yuri Voinov



27.10.15 1:37, Amos Jeffries wrote:
> On 27/10/2015 6:22 a.m., Yuri Voinov wrote:
>>
>> Ah, ok:
>>
>> We see in redbot.org this info in server response:
>>
>>  Cache-Control: no-cache
>>
>
> It also says "this content was negotiated but does not have an
> appropriate Vary header". Which is marked as a protocol error.
>
> And has a status code of 400 (unspecified error by the client).
>
> And has passed through three non-Squid proxies without being cached
> there either.
>
>
>>
>>
>> So, what? 3.5.10 permits ignoring this; 4.0.x denies it.
>>
>
> Rather bold statement. Where is the cache.log line(s) saying that was
> the decision Squid made?
>
>
>> Squid decides?
>>
>
> No, the content owner does.
>
>
>> Maybe I'll decide what and how to cache in my setup?
>
> You are just the caretaker of the information. It belongs to its
> creators. What you can do with their property depends on what they allow
> to be done with it.
Sure. But in the real world we have very many unscrupulous webmasters,
such as distributors of advertising, or unscrupulous CEOs who abuse
their rights.

This is why we fight against them with our caches.
>
>
> HTTP is the legal rights granting methodology they chose to distribute
> with. The creators have granted you/anyone the license to cache
> (redistribute) that object. They did so via the badly named
> Cache-Control:no-cache header. Which comes with the license condition
> that the content be revalidated before redistribution.
>
> In other words, the content owner(s) retain the right to veto any
> recipient receiving their content or to provide alternative content at
> any time.
I understand, Amos.
>
>
> [[ Given that it seems to flip between an error page and an image
> depicting the internal design of a nuclear device - depending on where
> in the world one views it from. It would seem that the behaviour is
> probably intentional. ]]
>
>
> Within your new right to cache and redistribute you then get to choose
> how long for - on that particular item.
>
>
> BTW: Revalidate does not lead to MISS. It leads to a REFRESH_MODIFIED or
> REFRESH_UNMODIFIED.
Absolutely sure. But that is not what happens.

This problem shows up at the same time on different squids: 4.0.1
produces TCP_MISS, but 3.5.10 on _the_same_ content gets TCP_HIT or
TCP_MEM_HIT.
>
>
>
> Amos




Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Yuri Voinov

Here is one day of Squid 4 working statistics (by Calamaris):

http://i.imgur.com/XeYRWbY.png

It is next to nothing. Squid 3 on bad days easily achieves 35%.

26.10.15 23:34, Alex Rousskov wrote:




Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Alex Rousskov
On 10/26/2015 11:19 AM, Yuri Voinov wrote:

> 4.0.1 has a mem_cache more than 4 times bigger, 1 GB; the first
> example, 3.5.10, has only 256 MB. Is this the reason for the miss??

Please see my previous email if you want to improve your chances of
getting a correct answer to that and other related questions.

In summary: Squid logs why it made an immediate-hit, immediate-miss, or
revalidate-and-then-decide decision. There are approximately 100 factors
that affect that decision. If you want folks to speculate, keep posting
questions. If you want an answer, collect and share those logs.


And, yes, Squid needs a better interface for triaging caching decisions
than studying detailed cache.logs. You already know what to do to add
that very useful feature:
http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


Alex.


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Yuri Voinov

This should be understood as "rescuing drowning people is the drowning
people's own job". Or: "We changed something, but you sort out how to
fix it; it's Open Source, baby" ;)

So, finally, there is no answer.

PS. If the developers do not know, then we certainly do not and cannot
know.

26.10.15 23:34, Alex Rousskov wrote:




Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Yuri Voinov

The answer is simple.

Look at this line from the 3.5.10 access.log:

1445879345.827 48 127.0.0.1 TCP_MEM_HIT/200 24425 GET
https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Teller-Ulam_device.png/200px-Teller-Ulam_device.png
- HIER_NONE/- image/png

We see a reasonably large image from Wikipedia. A memory hit, yes?

With the next request we get TCP_HIT, as the image is served from the
disk cache after swap-out.

The refresh_pattern from 3.5.10 that matches this URL:

# Other long-lived items
refresh_pattern -i \.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|svg|webp|flv|f4f|mp4|ttf|eot|woff2?)(\?.*)?$ 14400 99% 518400 override-expire override-lastmod reload-into-ims ignore-private ignore-no-store

Then we look at the 4.0.1 access.log for the same image:

1445885497.432 48 127.0.0.1 TCP_MISS/200 24425 GET
https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Teller-Ulam_device.png/200px-Teller-Ulam_device.png
- HIER_NONE/- image/png

# Other long-lived items
refresh_pattern -i \.(jp(e?g|e|2)|gif|png|bmp|ico|svg|web(p|m)|flv|f4f|mp(3|4)|ttf|eot|woff2?)(\?.*)?$ 14400 100% 518400 override-expire override-lastmod reload-into-ims ignore-private ignore-no-store

Do you see the principal difference? Ah, yes: 4.0.1 has a mem_cache more
than 4 times bigger, 1 GB; the first example, 3.5.10, has only 256 MB.
Is this the reason for the miss??

Both servers have similar squid.conf files. Both images are static.
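The two refresh_pattern regexes can be sanity-checked against the image URL from the logs above with a quick ad-hoc script (using Python's re as a stand-in for Squid's regex engine; spacing in the patterns normalized):

```python
# Quick check that both refresh_pattern regexes match the image URL
# seen in the access.log excerpts above.
import re

url = ("https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/"
       "Teller-Ulam_device.png/200px-Teller-Ulam_device.png")

# The 3.5.10 and 4.0.1 patterns, respectively:
pat_3510 = r"\.(jp(e?g|e|2)|gif|png|tiff?|bmp|ico|svg|webp|flv|f4f|mp4|ttf|eot|woff2?)(\?.*)?$"
pat_401  = r"\.(jp(e?g|e|2)|gif|png|bmp|ico|svg|web(p|m)|flv|f4f|mp(3|4)|ttf|eot|woff2?)(\?.*)?$"

for name, pat in (("3.5.10", pat_3510), ("4.0.1", pat_401)):
    hit = re.search(pat, url, re.IGNORECASE) is not None
    print(name, "pattern matches:", hit)
```

If either pattern failed to compile or to match (for example, because of a stray parenthesis introduced while editing the config), that line's options, including ignore-no-store, would simply never apply to this URL.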

Well, now we go to redbot.org and check this URL:

http://i.imgur.com/KiydfT0.png

What do we see?


  Caching

  * This response allows all caches to store it.
  * This response cannot be served from cache without validation.

So, what is wrong with Squid NOW?

Interesting, isn't it?

26.10.15 23:01, Alex Rousskov wrote:




Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-26 Thread Alex Rousskov
On 10/26/2015 04:41 AM, Yuri Voinov wrote:

> what has changed so much that with the same
> configuration I get a 10 times smaller cache hit ratio.

You are asking a good question. I do not think anybody knows the exact
answer -- too many things have changed in general to either identify the
changes that have affected your [complicated] setup or to exclude all of
the changes and blame some yet-unknown v4 bug.


However, the following procedure is almost guaranteed to lead you to the
answer:

1. Find a URL/resource that was served from the cache in v3 but became a
miss in v4. You probably have access.logs that can be used for that. If
not, enable them and run more experiments. Since your drop in hit ratio
is so drastic, it should not take long to find a URL that was usually a
hit before and is usually a miss now (or that becomes a hit/miss as soon
as you switch to v3/4).

2. Reproduce the v4 miss with an ALL,9 cache.log and share that log. Do
this using single-transaction command-line tools, with no other traffic
going through Squid. Others on this list can guide you as to what
logging options to use if you do not want to run with ALL,9.
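Step 1 can be semi-automated by diffing the two access.logs. A rough sketch follows; the field positions match the default access.log format shown elsewhere in this thread, while the file contents here are purely illustrative:

```python
# Sketch: find URLs that were hits under v3 but misses under v4 by
# comparing two access.log files. Default log field layout assumed:
# time, duration, client, tag/status, bytes, method, URL, ...

def url_tags(log_lines):
    """Map URL -> set of result tags (TCP_HIT, TCP_MISS, ...) seen."""
    tags = {}
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue
        tag = fields[3].split("/")[0]
        url = fields[6]
        tags.setdefault(url, set()).add(tag)
    return tags

def flipped_to_miss(v3_lines, v4_lines):
    """URLs that got hits in the v3 log but only misses in the v4 log."""
    v3, v4 = url_tags(v3_lines), url_tags(v4_lines)
    hits = {"TCP_HIT", "TCP_MEM_HIT"}
    return sorted(u for u in v3
                  if v3[u] & hits and v4.get(u, set()) == {"TCP_MISS"})

# Illustrative one-line logs (in practice: open("access.log-v3") etc.):
v3_log = ["1445879345.827 48 127.0.0.1 TCP_MEM_HIT/200 24425 GET http://example.test/a.png - HIER_NONE/- image/png"]
v4_log = ["1445885497.432 48 127.0.0.1 TCP_MISS/200 24425 GET http://example.test/a.png - HIER_NONE/- image/png"]
print(flipped_to_miss(v3_log, v4_log))  # ['http://example.test/a.png']
```

Any URL this reports is a good candidate for the single-transaction ALL,9 reproduction in step 2.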

The above requires some work on your part. It is a good idea to run
these tests in a non-production environment (which may require even more
work from you). Needless to say, you are not required to do that extra
work. However, some extra work is most likely required if you want to
get the answer to your question because there is currently not enough
information to answer it.


HTH,

Alex.



Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru
I cannot speak for the Squid project; you may ask squid-dev for more
about it, and also see the release notes.
What I can say is that the phrase "it's not a bug, it's a feature" can
work the other way around: "it's not a feature, it's a bug". And, as you
have mentioned, "it worked yesterday", so yes, some will look at this as
a bug from a caching point of view.


Eliezer

On 25/10/2015 22:53, Yuri Voinov wrote:


I cannot understand why caching of HTTPS was cut back so much. This is
very critical in modern conditions, for obvious reasons. In older
versions this worked. And I do not have the slightest desire to write or
use third-party services or crutches, nor to search for a workaround for
functionality that worked yesterday.

26.10.15 2:15, Eliezer Croitoru wrote:

On 25/10/2015 21:28, Yuri Voinov wrote:

It's not about that. It's about the fact that, with exactly the same
caching parameters and cache maintenance, on the same URLs where I used
to get an 85% cache hit ratio, I now get 0% with Squid 4. That's all.


OK then: if it is that important for you, and worth money for the
business you are running or working for, think about writing an eCAP
module or an ICAP service that will do this same thing, and sometimes
more than you are asking for.


I didn't mention this before, but if you are using a non-TPROXY
environment you can use two squid instances to get the same effect.

You would be able to assess the network stress you have and decide
which of the solutions is for you.

Maybe you already know the article I wrote at:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
which you can use to do a similar thing.

From my side of the picture, I think you are over-simplifying the issue.
I cannot speak for everyone, and I know that there are other opinions
about this and similar subjects, but I can say for sure that, from what
I have seen, squid has had many issues which resulted from the basic
fact that it was something like a "genie in a lamp" project, of which
many people just asked for whatever they needed.

If you do not know, some people literally *hate* squid.
One of the reasons is that it has a huge list of open bugs which are
waiting for someone to find them attractive enough to write a patch for
them.


And yes, with exactly the same parameters which previously resulted in
an 85% cache hit ratio, you are now getting 0%, as you should be.

I am not sure how many users are happy with this change, and I
encourage others to write their opinions and ideas about it.


I am staying with my suggestions for a set of solutions to this
specific issue.


I am not the greatest squid programmer, but if someone funded my time I
might be able to write a module that does just what you, and maybe
others, want. And if I may add: it is like any other software, you have
an API and you can use it. If you think it's important, file a bug and
send your question to the squid-dev list, in the hope that you will get
some answers, even if they are not to your liking.


All The Bests,
Eliezer

* Somebody told me on squid once something like "I am sharing your

sorrow" while I was very happy with it.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJWLUE2AAoJENNXIZxhPexGW9QH/2MsuMAC/LKhrwnh23grQ20a
2aOvJhvx8Pl8umxjrk0JJf+J9jLlRYQ8SIXcpGe8ETv/1whchqo/Dh2hz0Ib79Qv
dK5Vm+vFKbosL7foElSQgPClhF/cDuXrJonSvUsZ68CeZA8VIy5zUx+KtGAsPTEJ
3H4fbQVX6DF5HViCHln400g0YFTXYAx3VOC4K8EBKIjwLG8RZdBio8aCA2uoJ7Fx
vFY98rpyYS44pKEXfs0QoQzyuu3tQLosCJjc01aOqtF1iI8plWWN4lJlViyzBr4p
IH5rDRkBeldJw/0Irs9nwApwUGumWilLCR1k5c196LiibrD1rMqRNRya0eRFHbI=
=ZrNZ
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Amos Jeffries
On 26/10/2015 9:53 a.m., Yuri Voinov wrote:
> 
> I can not understand why so much dropped for caching of https.

I don't understand what you mean by that.

You started caching HTTPS and ratio dropped?


HTTPS is less than 30% of total traffic. But:

* it has a higher proportion of Cache-Control: private or no-store
messages, and

* the store entries for URIs with http:// and https:// are different
objects even if the rest of the URI is identical, and

* it has a larger number of Chrome 'sdch' encoding requests.

Any one of the above can cause more MISSes by increasing the churn of
non-cacheable content. The wider object space is also trying to fit into
the same cache space/capacity, reducing the time any http:// or https://
object stays cached.

Don't expect HIT rates for HTTP+HTTPS caching to be the same as for
HTTP-only caching. You likely need to re-calculate all your tuning.
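As a quick way to check that traffic split on your own box, the URL scheme can be tallied from access.log. A rough sketch, assuming Squid's default native log format where the URL is the seventh whitespace-separated field (adjust the index if you use a custom logformat; the log lines below are fabricated):

```python
from collections import Counter

def scheme_share(log_lines):
    """Tally requests per URL scheme from Squid native access.log lines."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip blank/malformed lines
        url = fields[6]
        scheme = url.split("://", 1)[0] if "://" in url else "other"
        counts[scheme] += 1
    return counts

# Two fabricated example lines in the default native format:
sample = [
    "1445770000.123 56 192.0.2.1 TCP_MISS/200 1024 GET http://example.com/a - HIER_DIRECT/203.0.113.5 text/html",
    "1445770001.456 80 192.0.2.1 TCP_MISS/200 2048 GET https://example.com/b - HIER_DIRECT/203.0.113.5 text/html",
]
print(scheme_share(sample))  # one http and one https request counted
```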

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Amos Jeffries
On 26/10/2015 8:29 a.m., Yuri Voinov wrote:
> 
> In a nutshell: I don't need possible explanations. I want to know: is
> this a bug, or is it by design?

Well, I don't think it is what you think.

For starters, ignore-no-cache was removed back in 3.2, so your 3.4
version working okay shows that it's not that parameter.

Secondly, what ignore-no-cache did when it was supported was *prevent*
things marked with Cache-Control:no-cache by servers from being cached.
Quite the opposite of what most proxy admins seemed to think.


What has been removed in 4.x is:
1) ignore-auth which again was preventing things being cached,

2) ignore-must-revalidate which was causing auth credentials, Cookies,
and per-user payload things to be delivered from cache to the wrong
users in some/many proxies.

As a result, ignore-private is now relatively safe to use. Before, it
utterly wiped out cache integrity when combined with behaviour (2).

Also ignore-expires is now safe to use. Since Squid should be acting
like a proper HTTP/1.1 cache with revalidations of stale content.
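For context, in HTTP/1.1 terms "no-cache" means "store, but revalidate before reuse", not "never store". A toy decision function sketching the RFC 7234 freshness rules (illustrative only; Squid's actual logic is far richer):

```python
def may_serve_from_cache(cache_control, age_seconds):
    """Decide whether a stored response may be served without contacting
    the origin. Sketch of RFC 7234 semantics, not Squid's real code."""
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().lower().partition("=")
        directives[name] = value
    if "no-store" in directives:
        return False  # must not have been stored at all
    if "no-cache" in directives:
        return False  # stored, but must revalidate before reuse
    max_age = int(directives.get("max-age") or -1)
    if max_age >= 0 and age_seconds > max_age:
        return False  # stale: revalidate before reuse
    return True

print(may_serve_from_cache("no-cache", 0))              # False: revalidate first
print(may_serve_from_cache("public, max-age=600", 30))  # True: still fresh
```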


60%-ish sounds about right for the proportion of traffic using
Cache-Control: with any of must-revalidate, proxy-revalidate, no-cache,
private, or authentication.

If you are only looking at *_HIT you will see a massive decline. But
that is an illusion. In 4.x you need to count REFRESH_UNMODIFIED as a
HIT, and look at the cache ratio statistics for near-HITs as well as HITs.
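For example, a rough way to recount hits from access.log, treating TCP_REFRESH_UNMODIFIED as a HIT alongside the classic *_HIT codes. This assumes the default native log format, where the result code (e.g. TCP_HIT/200) is the fourth field; adjust if you use a custom logformat:

```python
def hit_ratio(log_lines):
    """Fraction of requests served from cache, counting revalidated
    304s (TCP_REFRESH_UNMODIFIED) as hits alongside *_HIT codes."""
    hits = total = 0
    for line in log_lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blank/malformed lines
        code = fields[3].split("/", 1)[0]  # "TCP_HIT" from "TCP_HIT/200"
        total += 1
        if code.endswith("_HIT") or code == "TCP_REFRESH_UNMODIFIED":
            hits += 1
    return hits / total if total else 0.0

# Three fabricated example lines:
sample_lines = [
    "1445770000.123 5 192.0.2.1 TCP_HIT/200 1024 GET http://example.com/a - HIER_NONE/- text/html",
    "1445770001.456 90 192.0.2.1 TCP_REFRESH_UNMODIFIED/304 256 GET http://example.com/b - HIER_DIRECT/203.0.113.5 -",
    "1445770002.789 120 192.0.2.1 TCP_MISS/200 2048 GET https://example.com/c - HIER_DIRECT/203.0.113.5 text/html",
]
print(hit_ratio(sample_lines))  # 2 of 3 requests counted as hits
```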



Right after the upgrade from an older Squid it could be a case of your
cache having bad content in it. The revalidations would cause a burst of
replacements until that old content is updated. You would see that as a
sudden low point in the rate, increasing roughly exponentially back up
towards some new "normal" rate.

If you are having the huge decline even after revalidations are taken
into account and the new normal rate is reached, that is not expected.
You would need to analyse your traffic headers to find out what the
actual situation is.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru

On 25/10/2015 21:28, Yuri Voinov wrote:

It's not about that. It's about the fact that, with exactly the same
caching parameters and the same URLs, where I used to get an 85% cache
hit ratio, with Squid 4 I now get 0%. That's all.


OK then, if it's that important to you and worth money to the business
you are running or working for, think about writing an eCAP module or
an ICAP service that will do the same thing, and perhaps more than you
are asking for.


I didn't mention this before, but if you are using a non-TPROXY
environment you can use two Squid instances to get the same effect.
You would then be able to assess the network load you have and decide
which of the solutions is right for you.

Maybe you already know the article I wrote at:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
which you can use to set up a similar thing.

From where I stand, I think you are over-simplifying the issue.
I cannot speak for everyone, and I know there are other opinions on
this and similar subjects, but I can say for sure that, from what I
have seen, Squid has had many issues which resulted from the basic
fact that it was treated like a "genie in a lamp" project: many people
just asked for whatever they needed.

In case you do not know, some people literally *hate* Squid.
One of the reasons is that it has a huge list of open bugs which are
waiting for someone to find them attractive enough to write a patch
for them.


And yes: with exactly the same parameters which used to give you an 85%
cache hit ratio, you are now getting 0%, as you should be.
I am not sure how many users are happy with this change, and I
encourage others to write in with their opinions and ideas about it.


I am staying with my suggestions for a set of solutions for the specific 
issue.


I am not the greatest Squid programmer, but if someone funds my time I
might be able to write a module that does just what you, and maybe
others, want. And if I might add: it's like any other software, there
is an API and you can use it. If you think it's important, file a bug
and send your question to the squid-dev list, in the hope that you will
get some answers, even if they are not to your liking.


All the best,
Eliezer

* Somebody on the Squid list once told me something like "I am sharing
your sorrow", while I was actually very happy with it.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
I cannot understand why caching of HTTPS dropped so much. This is
very critical under modern conditions, for obvious reasons. In older
versions this used to work. And I have not the slightest desire to
write or use third-party services or crutches, nor to search for a
workaround for functionality that worked yesterday.

26.10.15 2:15, Eliezer Croitoru wrote:
> On 25/10/2015 21:28, Yuri Voinov wrote:
>> It's not about that. It's about the fact that, with exactly the same
>> caching parameters and the same URLs, where I used to get an 85% cache
>> hit ratio, with Squid 4 I now get 0%. That's all.
>
> OK then, if it's that important to you and worth money to the business
> you are running or working for, think about writing an eCAP module or
> an ICAP service that will do the same thing, and perhaps more than you
> are asking for.
>
> I didn't mention this before, but if you are using a non-TPROXY
> environment you can use two Squid instances to get the same effect.
> You would then be able to assess the network load you have and decide
> which of the solutions is right for you.
> Maybe you already know the article I wrote at:
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
> which you can use to set up a similar thing.
>
> From where I stand, I think you are over-simplifying the issue.
> I cannot speak for everyone, and I know there are other opinions on
> this and similar subjects, but I can say for sure that, from what I
> have seen, Squid has had many issues which resulted from the basic
> fact that it was treated like a "genie in a lamp" project: many people
> just asked for whatever they needed.
> In case you do not know, some people literally *hate* Squid.
> One of the reasons is that it has a huge list of open bugs which are
> waiting for someone to find them attractive enough to write a patch
> for them.
>
> And yes: with exactly the same parameters which used to give you an 85%
> cache hit ratio, you are now getting 0%, as you should be.
> I am not sure how many users are happy with this change, and I
> encourage others to write in with their opinions and ideas about it.
>
> I am staying with my suggestions for a set of solutions for the
> specific issue.
>
> I am not the greatest Squid programmer, but if someone funds my time I
> might be able to write a module that does just what you, and maybe
> others, want. And if I might add: it's like any other software, there
> is an API and you can use it. If you think it's important, file a bug
> and send your question to the squid-dev list, in the hope that you will
> get some answers, even if they are not to your liking.
>
> All the best,
> Eliezer
>
> * Somebody on the Squid list once told me something like "I am sharing
> your sorrow", while I was actually very happy with it.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-BEGIN PGP SIGNATURE-
Version: GnuPG v2
 
iQEcBAEBCAAGBQJWLUE2AAoJENNXIZxhPexGW9QH/2MsuMAC/LKhrwnh23grQ20a
2aOvJhvx8Pl8umxjrk0JJf+J9jLlRYQ8SIXcpGe8ETv/1whchqo/Dh2hz0Ib79Qv
dK5Vm+vFKbosL7foElSQgPClhF/cDuXrJonSvUsZ68CeZA8VIy5zUx+KtGAsPTEJ
3H4fbQVX6DF5HViCHln400g0YFTXYAx3VOC4K8EBKIjwLG8RZdBio8aCA2uoJ7Fx
vFY98rpyYS44pKEXfs0QoQzyuu3tQLosCJjc01aOqtF1iI8plWWN4lJlViyzBr4p
IH5rDRkBeldJw/0Irs9nwApwUGumWilLCR1k5c196LiibrD1rMqRNRya0eRFHbI=
=ZrNZ
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Hi gents,

Has anyone else testing Squid 4 noticed an extremely low rate of cache
hits with the new version? Particularly for HTTPS sites sending the
"no-cache" directive? After replacing Squid 3.4 with Squid 4, the cache
hit ratio collapsed from 85 percent or more down to the level of 5-15
percent. I believe this is due to the removal of support for the
ignore-no-cache directive, which eliminates the possibility of
aggressive caching and reduces the value of a caching proxy to almost
zero.

HTTP still caches normally. However, with the widespread trend towards
HTTPS, caching has dramatically decreased to unacceptable levels.

Has anyone else noticed this effect? And how is caching working out for
you now?

-BEGIN PGP SIGNATURE-
Version: GnuPG v2
 
iQEcBAEBCAAGBQJWLPKOAAoJENNXIZxhPexGCx4H/j0R2aAxOPp5K1kYwHPgkBF1
oH/7nqKRWLbRJ32tqkRtQIE4zbyqqNjmGamRoa59UCK/xs6H3Z8t8Y2Bbkx6umDH
lwUWjlksVxATVAxbjIWowkmjU4FVc20dM0p6quvz1A9LqdcZHu5x4AzLGLs2re4b
Dy7urAjn8dA5jgvQ05rTBLkqgOeDUlakyBaMlHaK8VUJ829H3YreSWpbobjCKAIz
/Bu5pLSRXDvdPqEzOa4MRwSirggntKHET1ThxwVN9xDa1wCc3SW4cRoKmqobmSv/
F7ryEkTFC05AcCiGb7ArEjGQf7R7zi4PXybOoUIypEyhipvd5hv2PKdw3Dha4OY=
=m/MT
-END PGP SIGNATURE-


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru

Hey Yuri,

I am not sure whether you think that Squid 4's extremely low hit ratio
is bad or not, but I can understand your view of things.
Usually I redirect people to this page:
http://wiki.squid-cache.org/Features/StoreID/CollisionRisks#Several_real_world_examples


But this time I can proudly say that the Squid project is doing things
the right way, even if that might not be understood by some.
Before you or anyone declares that the low hit ratio is due to
something that is missing, I will try to give some sense of how things
look in the real world.

A small thing from a nice day of mine:
I was sitting and talking with a friend of mine, an MD to be exact, and
while we were talking I was comforting him about the wonders of computers.
He was complaining about how slowly the software in his office runs and
how he has to wait for it to respond with results. I hesitated a bit,
but then asked him: "What would happen if some MD here in the office
received the wrong content or results for a patient from the software?"
Terrified by the question, he replied: "He could make the wrong
decision!" I then described to him what a good position he is in, not
needing to fear such scenarios.
In that same office Squid is used for many things, and it is crucial
that, besides the option to cache content, the ability to validate the
cache properly is set up right.


I do understand that there is a need for caches, and sometimes they are
crucial in order to give an application more CPU cycles or more RAM, but
sometimes the hunger for caching can override the actual requirement for
content integrity: content must be revalidated from time to time.


I have seen a couple of times how a cache at the DB or another level
produces very bad and unwanted results, and I do understand some of the
complexity and caution that programmers apply when building all sorts
of systems with caches in them.


If you want to understand more about the subject, pick your favorite
scripting language and just try to implement simple object caching.
You will then see how complex the task can be, and maybe then understand
why caches are not such a simple thing, and especially why
ignore-no-cache should not be used in any environment if at all possible.


While I advise you not to use it, I will hint to you and others at
another approach to the subject.
If you are greedy, hungry to cache specific sites or traffic, and you
would like to benefit from over-caching, there is a solution for that!

- You can alter or hack the Squid code to meet your needs.
- You can write an ICAP service that alters the response headers so
Squid considers the content cachable by default.
- You can write an eCAP module that alters the response headers the
same way.
- You can write your own cache service with your own algorithms in it.
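The core of such an ICAP service or eCAP module, stripped of all the protocol plumbing, is just a response-header rewrite. A minimal sketch of that logic (the header names are standard HTTP; the function and its max_age default are illustrative, and doing this deliberately discards the origin's caching intent, with all the integrity risks discussed in this thread):

```python
def force_cacheable(headers, max_age=3600):
    """Return a copy of the response headers rewritten so that a
    downstream cache treats the object as cacheable. Illustrative only:
    this overrides the origin server's stated caching policy."""
    cleaned = {k: v for k, v in headers.items()
               if k.lower() not in ("cache-control", "pragma", "expires")}
    cleaned["Cache-Control"] = "public, max-age=%d" % max_age
    return cleaned

resp = {"Content-Type": "text/html",
        "Cache-Control": "no-cache",
        "Pragma": "no-cache"}
print(force_cacheable(resp))  # Cache-Control replaced, Pragma dropped
```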

Take into account that the Squid project tries to be as fault-tolerant
as possible, since Squid is a very sensitive piece of software in very
big production systems.
Squid doesn't try to meet a requirement of "maximum cache", and it is
not Squid, as a caching proxy, that reduces your cache percentage!
The reason the content is not cachable is that all these applications
describe their content as not cachable!
For a moment of sanity from the Squid project's side: try to contact
google/youtube admins, support, operators, forces, whatever, to
understand how you could benefit from a local cache.
If and when you do manage to contact them, let them know I was looking
for a contact and never managed to find one available to me by phone or
email. You cannot say anything like that about the Squid project: it can
be contacted by email, and if required you can get hold of the man
behind the software (and he is human).


And I will try to write it in a geeky way:
deny_info 302:https://support.google.com/youtube/ 
big_system_that_doesnt_want_to_be_cached
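Spelled out as a fuller squid.conf fragment (the acl name is taken from the line above; the dstdomain pattern and the access rule are an illustrative guess at how it would be wired up):

```
# Hypothetical: redirect matching requests to the support page with a
# 302, instead of serving the uncachable traffic through the proxy.
acl big_system_that_doesnt_want_to_be_cached dstdomain .youtube.com
deny_info 302:https://support.google.com/youtube/ big_system_that_doesnt_want_to_be_cached
http_access deny big_system_that_doesnt_want_to_be_cached
```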


Eliezer

* P.S. If you do want to write an ICAP service or an eCAP module to
replace "ignore-no-cache", I can give you some code that might help you
as a starter.



On 25/10/2015 17:17, Yuri Voinov wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi gents,

Has anyone else testing Squid 4 noticed an extremely low rate of cache
hits with the new version? Particularly for HTTPS sites sending the
"no-cache" directive? After replacing Squid 3.4 with Squid 4, the cache
hit ratio collapsed from 85 percent or more down to the level of 5-15
percent. I believe this is due to the removal of support for the
ignore-no-cache directive, which eliminates the possibility of
aggressive caching and reduces the value of a caching proxy to almost
zero.

HTTP still caches normally. However, with the widespread trend towards
HTTPS, caching has dramatically decreased to unacceptable levels.

Noticed there anyone else this effect? And what is now with 

Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
In a nutshell: I don't need possible explanations. I want to know: is
this a bug, or is it by design?

26.10.15 1:17, Eliezer Croitoru wrote:
> Hey Yuri,
>
> I am not sure whether you think that Squid 4's extremely low hit
> ratio is bad or not, but I can understand your view of things.
> Usually I redirect people to this page:
> http://wiki.squid-cache.org/Features/StoreID/CollisionRisks#Several_real_world_examples
>
> But this time I can proudly say that the Squid project is doing things
> the right way, even if that might not be understood by some.
> Before you or anyone declares that the low hit ratio is due to
> something that is missing, I will try to give some sense of how things
> look in the real world.
> A small thing from a nice day of mine:
> I was sitting and talking with a friend of mine, an MD to be exact,
> and while we were talking I was comforting him about the wonders of
> computers.
> He was complaining about how slowly the software in his office runs
> and how he has to wait for it to respond with results. I hesitated a
> bit, but then asked him: "What would happen if some MD here in the
> office received the wrong content or results for a patient from the
> software?" Terrified by the question, he replied: "He could make the
> wrong decision!" I then described to him what a good position he is
> in, not needing to fear such scenarios.
> In that same office Squid is used for many things, and it is crucial
> that, besides the option to cache content, the ability to validate the
> cache properly is set up right.
>
> I do understand that there is a need for caches, and sometimes they
> are crucial in order to give an application more CPU cycles or more
> RAM, but sometimes the hunger for caching can override the actual
> requirement for content integrity: content must be revalidated from
> time to time.
>
> I have seen a couple of times how a cache at the DB or another level
> produces very bad and unwanted results, and I do understand some of
> the complexity and caution that programmers apply when building all
> sorts of systems with caches in them.
>
> If you want to understand more about the subject, pick your favorite
> scripting language and just try to implement simple object caching.
> You will then see how complex the task can be, and maybe then
> understand why caches are not such a simple thing, and especially why
> ignore-no-cache should not be used in any environment if at all
> possible.
>
> While I advise you not to use it, I will hint to you and others at
> another approach to the subject.
> If you are greedy, hungry to cache specific sites or traffic, and you
> would like to benefit from over-caching, there is a solution for that!
> - You can alter or hack the Squid code to meet your needs.
> - You can write an ICAP service that alters the response headers so
> Squid considers the content cachable by default.
> - You can write an eCAP module that alters the response headers the
> same way.
> - You can write your own cache service with your own algorithms in it.
>
> Take into account that the Squid project tries to be as fault-tolerant
> as possible, since Squid is a very sensitive piece of software in very
> big production systems.
> Squid doesn't try to meet a requirement of "maximum cache", and it is
> not Squid, as a caching proxy, that reduces your cache percentage!
> The reason the content is not cachable is that all these applications
> describe their content as not cachable!
> For a moment of sanity from the Squid project's side: try to contact
> google/youtube admins, support, operators, forces, whatever, to
> understand how you could benefit from a local cache.
> If and when you do manage to contact them, let them know I was looking
> for a contact and never managed to find one available to me by phone
> or email. You cannot say anything like that about the Squid project:
> it can be contacted by email, and if required you can get hold of the
> man behind the software (and he is human).
>
> And I will try to write it in a geeky way:
> deny_info 302:https://support.google.com/youtube/
> big_system_that_doesnt_want_to_be_cached
>
> Eliezer
>
> * P.S. If you do want to write an ICAP service or an eCAP module to
> replace "ignore-no-cache", I can give you some code that might help
> you as a starter.
>
>
> On 25/10/2015 17:17, Yuri Voinov wrote:
>>
> Hi gents,
>
> Has anyone else testing Squid 4 noticed an extremely low rate of cache
> hits with the new version? Particularly for HTTPS sites sending the
> "no-cache" directive? After replacing Squid 3.4 with Squid 4, the
> cache hit ratio collapsed from 85 percent or more down to the level of
> 5-15 percent. I believe this is due to the removal of support for the
> ignore-no-cache directive, which eliminates the possibility of
> aggressive caching and reduces the value of a caching proxy to almost
> zero.
>
> HTTP still caches normally.