Re: [squid-users] [Squid 4.x]: Truncated accounts when there is spaces in usernames

2015-10-25 Thread Amos Jeffries
On 25/10/2015 5:47 a.m., David Touzeau wrote:
> 
> auth_param ntlm program /usr/bin/ntlm_auth  --domain=TOUZEAU.BIZ
> --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 20 startup=5 idle=3
> auth_param ntlm keep_alive on
> authenticate_ttl 14400 seconds
> authenticate_cache_garbage_interval 18000 seconds
> authenticate_ip_ttl 14400 seconds
> 
> auth_param basic program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-basic
> auth_param basic children 10 startup=5 idle=1
> auth_param basic realm Basic Identification
> auth_param basic credentialsttl 4 hours
> 
> here a debug log with an account logged as "david touzeau"
> 
> 
> Proxy-Authorization: NTLM
> TlRMTVNTUAADGAAYAJAYABgAqA4ADgBYGgAaAGYQABAAgADABYKIogYBsR0PudyEOYFjFhMW+qrJNxLkdlQATwBVAFoARQBBAFUAZABhAHYAaQBkACAAdABvAHUAegBlAGEAdQBXAEkATgA3AFUAUwAtADEAkZrVyKTcrdAA/wlnYT2Q+F
> 
> 2015/10/24 12:34:58.089 kid1| 84,5| helper.cc(1384)
> helperStatefulDispatch: helperStatefulDispatch: Request sent to
> ntlmauthenticator #Hlpr65, 260 bytes
> 2015/10/24 12:34:58.092 kid1| 84,5| helper.cc(1000)
> helperStatefulHandleRead: helperStatefulHandleRead: 17 bytes from
> ntlmauthenticator #Hlpr65
> 2015/10/24 12:34:58.092 kid1| 29,6| UserRequest.cc(171)
> releaseAuthServer: releasing NTLM auth server '0x1d91cd8'
> 2015/10/24 12:34:58.092 kid1| 29,4| UserRequest.cc(327) HandleReply:
> Successfully validated user via NTLM. Username 'touzeau'
> 

Okay. I think there is nothing we can do about it except to say you
can't have whitespace in usernames when using the old-style helpers.
That currently still includes ntlm_auth from Samba.

It is not a new problem. The NTLM/Negotiate helper response lines have
an optional token field before the username, and the line is whitespace
delimited. If the username has whitespace inside it, then its first part
is parsed as being that token field. The helper should be %-encoding the
username, which seems not to be happening.
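The failure mode can be sketched like so (a toy illustration in Python, not Squid's actual C++ parser; the "AF" reply form is the old squid-2.5 style helper protocol):

```python
# Toy illustration of the whitespace-splitting problem in the old
# (squid-2.5 style) helper reply "AF [token] username".  This is NOT
# Squid's actual parser, just a sketch of the failure mode.

def parse_af_reply(line: str) -> str:
    fields = line.strip().split()
    assert fields[0] == "AF"
    # When the username itself contains a space, its first word is
    # mistaken for the optional token field and only the last word
    # survives as the "username".
    return fields[-1]

print(parse_af_reply("AF touzeau"))        # touzeau  (no space: fine)
print(parse_af_reply("AF david touzeau"))  # touzeau  (space: truncated)
```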

We moved to the key=value protocol as the solution to avoid this in
future. But it requires the helper(s) to use the new protocol, and this
one is not doing that either.
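For contrast, here is a sketch of why a key=value style reply avoids the truncation. The "OK user=..." shape follows the newer helper reply protocol; the %-encoded space is an assumption about how a conforming helper would escape embedded whitespace:

```python
# Sketch of parsing a key=value style helper reply ("OK user=...").
# With the username carried as a single %-encoded value, embedded
# whitespace survives intact.  Illustrative only.

from urllib.parse import unquote

def parse_kv_reply(line: str) -> dict:
    fields = line.strip().split()
    reply = {"result": fields[0]}          # OK / ERR / BH
    for kv in fields[1:]:
        key, _, value = kv.partition("=")
        reply[key] = unquote(value)
    return reply

print(parse_kv_reply("OK user=david%20touzeau")["user"])  # david touzeau
```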

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [Squid 4.x]: Truncated accounts when there is spaces in usernames

2015-10-25 Thread Amos Jeffries
On 25/10/2015 9:01 p.m., Amos Jeffries wrote:
> On 25/10/2015 5:47 a.m., David Touzeau wrote:
>>
>> auth_param ntlm program /usr/bin/ntlm_auth  --domain=TOUZEAU.BIZ
>> --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 20 startup=5 idle=3
>> auth_param ntlm keep_alive on
>> authenticate_ttl 14400 seconds
>> authenticate_cache_garbage_interval 18000 seconds
>> authenticate_ip_ttl 14400 seconds
>>
>> auth_param basic program /usr/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>> auth_param basic children 10 startup=5 idle=1
>> auth_param basic realm Basic Identification
>> auth_param basic credentialsttl 4 hours
>>
>> here a debug log with an account logged as "david touzeau"
>>
>>
>> Proxy-Authorization: NTLM
>> TlRMTVNTUAADGAAYAJAYABgAqA4ADgBYGgAaAGYQABAAgADABYKIogYBsR0PudyEOYFjFhMW+qrJNxLkdlQATwBVAFoARQBBAFUAZABhAHYAaQBkACAAdABvAHUAegBlAGEAdQBXAEkATgA3AFUAUwAtADEAkZrVyKTcrdAA/wlnYT2Q+F
>>
>> 2015/10/24 12:34:58.089 kid1| 84,5| helper.cc(1384)
>> helperStatefulDispatch: helperStatefulDispatch: Request sent to
>> ntlmauthenticator #Hlpr65, 260 bytes
>> 2015/10/24 12:34:58.092 kid1| 84,5| helper.cc(1000)
>> helperStatefulHandleRead: helperStatefulHandleRead: 17 bytes from
>> ntlmauthenticator #Hlpr65
>> 2015/10/24 12:34:58.092 kid1| 29,6| UserRequest.cc(171)
>> releaseAuthServer: releasing NTLM auth server '0x1d91cd8'
>> 2015/10/24 12:34:58.092 kid1| 29,4| UserRequest.cc(327) HandleReply:
>> Successfully validated user via NTLM. Username 'touzeau'
>>
> 
> Okay. I think there is nothing we can do about it except to say you
> can't have whitespace in usernames when using the old-style helpers.
> That currently still includes ntlm_auth from Samba.
> 
> It is not a new problem. The NTLM/Negotiate helper response lines have
> an optional token field before the username and the line is whitespace
> delimited. If the username has whitespace inside it, then the first part
> is parsed as being that field. It should be %-encoding the username,
> which seems not to be happening.
> 
> We moved to the key=value protocol as the solution to avoid that in
> future. But it requires the helper(s) to be using the new protocol. And
> this one is not doing that either.

This is being tracked at:
 

Amos


[squid-users] delay pools question

2015-10-25 Thread Alex Samad
Hi,

I have had a look at http://wiki.squid-cache.org/Features/DelayPools

I am wondering if somebody could explain how it rate-limits downloads.

I can understand that it would be able to limit proxy-to-client traffic,
since squid is the sender and can control how fast it sends.

But if I want to limit the speed from, say, microsoft.com to the
organisation, how does it arrange that?

My limited understanding is that you make a request to the MS web servers
and then they send the response as fast as they can.

The only ways I can think of it happening are slowing the TCP ACKs, or
squid making requests for partial ranges of files so as to fit within
the speed requirements.
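The restore/max byte buckets the wiki page describes suggest a token-bucket mental model: the proxy simply refrains from reading more from the server socket than the bucket allows, and TCP flow control slows the sender. This is my toy sketch of that idea, not Squid's implementation:

```python
# Toy token bucket in the style of delay_parameters restore/max values.
# A reader would only read() as many bytes as allowance() grants, so
# the upstream sender is throttled by TCP backpressure.

class Bucket:
    def __init__(self, restore_per_sec: int, maximum: int):
        self.restore = restore_per_sec
        self.max = maximum
        self.level = maximum           # buckets start full

    def tick(self, seconds: float = 1.0) -> None:
        # Replenish once per interval, capped at the bucket maximum.
        self.level = min(self.max, self.level + int(self.restore * seconds))

    def allowance(self, want: int) -> int:
        # How many bytes may be read from the server right now.
        take = min(want, self.level)
        self.level -= take
        return take

b = Bucket(restore_per_sec=7000, maximum=12000)
print(b.allowance(20000))  # full bucket: 12000 bytes allowed
print(b.allowance(20000))  # empty now: 0 bytes, reader must wait
b.tick()
print(b.allowance(20000))  # after one second: 7000 bytes
```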

A


Re: [squid-users] Access regulation with multiple outgoing IPs

2015-10-25 Thread Amos Jeffries
On 26/10/2015 4:28 a.m., rudi wrote:
> Hey Amos,
> 
> thank you very much for your very helpful information. Now i have an access
> control an SSL is fixed too but i had to add port 80 as SSL port to use the
> proxies as https proxy in proxifier.
> 
> One more question. Now i can use the proxies from
> 193.xxx.xxx.x1/255.255.255.xxx. But if i want to use the proxies from a
> virtual machine i can not get access to them. I tried different Ips. Do you
> know what IPs or information i have to add to the acl on top  to get the VMs
> working? Thank you so much!
> 

These do not seem right:

The clients are on network 193
> acl localnet src 193.xxx.xxx.x1/255.255.255.xxx

But Squid is listening and sending with network 178.

> http_port 178.xxx.xxx.x3:3129 name=3129
> acl vm3129 myportname 3129
> tcp_outgoing_address 178.xxx.xxx.x3 vm3129
> 

Hopefully that is enough to resolve your issue. I can't help any further
without the numbers which you are eliding and a lot more details about
the network topology configuration.


PS. Squid uses modern CIDR subnet masks, not the 1970s netmask format.
If the xxx mask bits do not exactly match a CIDR mask, it will be mapped
by dropping/zeroing the rightmost mask bits until it does.
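A sketch of that mapping (illustrative only, not Squid's actual code):

```python
# Map a dotted netmask to a CIDR prefix length by counting leading
# 1-bits and dropping everything after the first 0-bit, as the PS
# describes.  Illustrative, not Squid's implementation.

import ipaddress

def to_cidr_prefix(netmask: str) -> int:
    bits = int(ipaddress.IPv4Address(netmask))
    prefix = 0
    for i in range(31, -1, -1):
        if bits & (1 << i):
            prefix += 1
        else:
            break                      # rightmost bits are dropped
    return prefix

print(to_cidr_prefix("255.255.255.0"))    # 24
print(to_cidr_prefix("255.255.255.192"))  # 26
print(to_cidr_prefix("255.255.0.255"))    # non-contiguous mask -> 16
```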

Amos



Re: [squid-users] Delay pool with large negative numbers

2015-10-25 Thread Amos Jeffries
On 26/10/2015 5:23 a.m., Chico Venancio wrote:
> Is everyone still having this issue?
> We tried messing arround with the .conf and used a 32bit debian as well to
> no avail.
> 

No. Just those of you having it.

I suspect that the problem is related to CONNECT tunnels. HTTPS traffic
is on the rise, and the tunnel handling did not used to apply delay
pools properly (it counted upload traffic but did not delay it); I am
not sure whether that has been fixed yet.

Amos



Re: [squid-users] [Squid 4.x]: Truncated accounts when there is spaces in usernames

2015-10-25 Thread Amos Jeffries
On 26/10/2015 6:28 a.m., David Touzeau wrote:
> 
> I think you are right Amos, but could you explain why in 3.2x, 3.4x
> branchs (exactly 3.4.6 ) there is no issue.
> And samba was the same version...

I'm not sure why 3.4 would work and not 4.x. The code has not changed
since 3.4.2, and not in a way that would affect this since some time
back in 3.3.

Amos



Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru
I cannot speak for the Squid project; you may ask squid-dev more about
it and also see the release notes.
What I can say is that the phrase "it's not a bug, it's a feature" can
work the other way around ("it's not a feature, it's a bug"), and, as
you have mentioned, "it worked yesterday"... so yes, some will look at
this as a bug from a caching point of view.


Eliezer

On 25/10/2015 22:53, Yuri Voinov wrote:
> I cannot understand why caching of HTTPS has dropped so much. This is
> very critical under modern conditions, for obvious reasons. In older
> versions the same configuration worked. And I have not the slightest
> desire to write or use third-party services or crutches, nor to search
> for workarounds for functionality that worked yesterday.





Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Amos Jeffries
On 26/10/2015 9:53 a.m., Yuri Voinov wrote:
> 
> I can not understand why so much dropped for caching of https.

I don't understand what you mean by that.

You started caching HTTPS and ratio dropped?


HTTPS is less than 30% of total traffic. But:

* has a higher proportion of Cache-Control: private or no-store
messages, and

* the store entries for URI with http:// and https:// are different
objects even if the rest of the URI is identical.

* has a larger amount of Chrome 'sdch' encoding requests.

Any one of the above can cause more MISSes by increasing the churn of
non-cacheable content. The wider object space is also trying to fit into
the same cache space/capacity, reducing the time any http:// or https://
object will stay cached.

Don't expect HIT rates for HTTP+HTTPS caching to be the same as for
HTTP-only caching. You likely need to re-calculate all your tuning.

Amos


Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Amos Jeffries
On 26/10/2015 8:29 a.m., Yuri Voinov wrote:
> 
> In a nutshell - I need no possible explanation. I want to know - it's a
> bug or so conceived?

Well, I don't think it is what you think.

For starters, ignore-no-cache was removed back in 3.2, so your 3.4
version working okay shows that it's not that parameter.

Secondly, what ignore-no-cache did when it was supported was *prevent*
things marked by servers with Cache-Control:no-cache from being cached.
Quite the opposite of what most proxy admins seemed to think.


What has been removed in 4.x is:
1) ignore-auth which again was preventing things being cached,

2) ignore-must-revalidate which was causing auth credentials, Cookies,
and per-user payload things to be delivered from cache to the wrong
users in some/many proxies.

As a result ignore-private is now relatively safe to use. Before it
utterly wiped out cache integrity when combined with the (2) behaviours.

Also ignore-expires is now safe to use. Since Squid should be acting
like a proper HTTP/1.1 cache with revalidations of stale content.


60%-ish sounds about right for the proportion of traffic using
Cache-Control: with any of must-revalidate, proxy-revalidate, no-cache,
private, or authentication.

If you are only looking at *_HIT you will see a massive decline. But
that is an illusion. In 4.x you need to count REFRESH_UNMODIFIED as a
HIT, and look at the cache ratio statistics for near-HITs as well as HITs.
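One way to recount, sketched in Python against the default access.log layout (the result code is field 4 in the native format; the set of codes treated as hits here is my choice, adjust to taste):

```python
# Recompute hit ratio from an access.log, counting revalidated-but-
# unchanged replies (TCP_REFRESH_UNMODIFIED) as hits alongside the
# *_HIT codes.  Assumes the default "squid" native log format.

HIT_CODES = {"TCP_HIT", "TCP_MEM_HIT", "TCP_IMS_HIT", "TCP_REFRESH_UNMODIFIED"}

def hit_ratio(lines):
    hits = total = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        code = fields[3].split("/")[0]   # e.g. "TCP_MISS/200" -> "TCP_MISS"
        total += 1
        if code in HIT_CODES:
            hits += 1
    return hits / total if total else 0.0

sample = [
    "1 0 10.0.0.1 TCP_MISS/200 1024 GET http://example.com/a - DIRECT/x -",
    "2 0 10.0.0.1 TCP_REFRESH_UNMODIFIED/304 300 GET http://example.com/a - DIRECT/x -",
    "3 0 10.0.0.1 TCP_MEM_HIT/200 1024 GET http://example.com/a - NONE/- -",
]
print(hit_ratio(sample))  # 2 of the 3 requests counted as hits
```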



Right after the upgrade from an older Squid it could be a case of your
cache having bad content in it. The revalidations would cause a burst of
replacements until that old content is updated. You would see that as a
sudden low point in the rate, increasing roughly exponentially back up
towards some new "normal" rate.

If you are having the huge decline even after revalidations are taken
into account and the new normal rate is reached, that is not expected.
You would need to analyse your traffic headers to find out what the
actual situation is.

Amos


Re: [squid-users] Access regulation with multiple outgoing IPs

2015-10-25 Thread rudi
Hey Amos,

thank you very much for your very helpful information. Now I have access
control working and SSL is fixed too, but I had to add port 80 as an SSL
port to use the proxies as HTTPS proxies in Proxifier.

One more question: I can now use the proxies from
193.xxx.xxx.x1/255.255.255.xxx, but if I want to use the proxies from a
virtual machine I cannot get access to them. I tried different IPs. Do you
know what IPs or information I have to add to the acl on top to get the
VMs working? Thank you so much!


This is my file with working access regulation and SSL fix:

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl localnet src 193.xxx.xxx.x1/255.255.255.xxx


acl SSL_ports port 443
acl SSL_ports port 80
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports


# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

http_port 178.xxx.xxx.x3:3129 name=3129
acl vm3129 myportname 3129
tcp_outgoing_address 178.xxx.xxx.x3 vm3129



# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
#http_port 3128


Best regards
Manuel



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Access-regulation-with-multiple-outgoing-IPs-tp4673900p4673906.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Delay pool with large negative numbers

2015-10-25 Thread Chico Venancio
Is everyone still having this issue?
We tried messing around with the .conf and used a 32-bit Debian as well,
to no avail.

Chico Venancio

> 2015-10-14 0:03 GMT-03:00 Amos Jeffries :
>>
>> On 14/10/2015 11:46 a.m., Chico Venancio wrote:
>> > I have configured delay pools for a client that delays access to a few
>> > sites, including youtube and facebook.
>> > It seems to work for some clients, and has significantly reduced link
>> > congestion. However, some clients seem to be unaffected by the delay pools.
>> >
>> > The output to squidclient mgr:delay is as follows:
>> >
>> > Sending HTTP request ... done.
>> > HTTP/1.1 200 OK
>> > Server: squid/3.4.8
>> > Mime-Version: 1.0
>> > Date: Tue, 13 Oct 2015 22:43:28 GMT
>> > Content-Type: text/plain
>> > Expires: Tue, 13 Oct 2015 22:43:28 GMT
>> > Last-Modified: Tue, 13 Oct 2015 22:43:28 GMT
>> > X-Cache: MISS from proxy-server
>> > X-Cache-Lookup: MISS from proxy-server:3128
>> > Via: 1.1 proxy-server (squid/3.4.8)
>> > Connection: close
>> >
>> > Delay pools configured: 1
>> >
>> > Pool: 1
>> > Class: 2
>> >
>> > Aggregate:
>> > Max: 2
>> > Restore: 1
>> > Current: -108514139
>> >
>> > Individual:
>> > Max: 12000
>> > Restore: 7000
>> > Current: 87:12000 56:12000 92:12000 123:12000 94:-58135034
>> > 89:12000 223:12000 55:12000 93:12000
>> >
>> > Memory Used: 1496 bytes
>> >
>> >
>> > I have searched for answers and some do mention that sometimes the current
>> > bytes in the pool should be negative, but a small negative like -1 or -6.
>> > To me it seems that the delay pools are being ignored...
>>
>>
>> Sounds like it might be a side effect of
>> 
>>
>> Or it could be the fact that delay pools are still 32-bit functionality.
>>
>> Amos
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
>

Chico Venancio
CEO e Diretor de Criação
VM TECH - (98)8800-2743


Re: [squid-users] [Squid 4.x]: Truncated accounts when there is spaces in usernames

2015-10-25 Thread David Touzeau



On 25/10/2015 09:01, Amos Jeffries wrote:
> We moved to the key=value protocol as the solution to avoid that in
> future. But it requires the helper(s) to be using the new protocol. And
> this one is not doing that either.


I think you are right, Amos, but could you explain why there is no issue
in the 3.2.x and 3.4.x branches (3.4.6 exactly)?

And Samba was the same version...








Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru

On 25/10/2015 21:28, Yuri Voinov wrote:

> It's not about that. It's about the fact that, with exactly the same
> caching parameters and maintaining the cache at the same URLs, where I
> used to get an 85% cache hit ratio, I now, with Squid 4, get 0%.
> That's all.


OK then: if it's that important for you, and it is worth money for the
business you are running/working for, think about writing an eCAP module
or an ICAP service that will do this same thing, and sometimes will do
more than you are asking for.


I didn't mention this before, but if you are using a non-TPROXY
environment you can use two squid instances to get the same effect.
You would be able to assess the network stress you have and decide which
of the solutions is for you.

Maybe you already know the article I wrote at:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
which you can use to do a similar thing.

From my side of the picture, I think you are over-simplifying the issue.
I cannot speak for everyone, and I know that there are other opinions
about this and similar subjects, but I can say for sure that, from what
I have seen, squid has had many issues which resulted from the basic
fact that it was something like a "genie in a lamp" project from which
many just asked for whatever they needed.

If you do not know, some literally *hate* squid.
One of the reasons is that it has a huge list of open bugs which are 
waiting for someone to find them attractive enough to write a patch for 
them.


And yes, with exactly the same parameters which resulted in an 85% cache
hit ratio, you are now getting 0%, as you should be.
I am not sure how many users are happy with this change, and I encourage
others to write their opinions and ideas about it.


I am staying with my suggestions for a set of solutions to this specific
issue.


I am not the greatest squid programmer, but if someone funds my time I
might be able to write a module that does just what you and maybe others
want. And, if I might add, it's like any other software: you have an API
and you can use it. If you think it's important, file a bug and send
your question to the squid-dev list, in the hope that you will get some
answers even if they are not to your liking.


All the best,
Eliezer

* Somebody told me on squid once something like "I am sharing your 
sorrow" while I was very happy with it.



Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

I cannot understand why caching of HTTPS has dropped so much. This is
very critical under modern conditions, for obvious reasons. In older
versions the same configuration worked. And I have not the slightest
desire to write or use third-party services or crutches, nor to search
for workarounds for functionality that worked yesterday.

26.10.15 2:15, Eliezer Croitoru wrote:
> OK then, if it's that important for you and it is worth money for the
> business you are running/working for, think about writing an eCAP
> module or an ICAP service that will do this same thing and sometimes
> more than you are asking for.




[squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

Hi gents,

Has anyone testing Squid 4 noticed extremely low cache hit ratios with
the new version? Particularly with respect to HTTPS sites sending the
"no-cache" directive? After replacing Squid 3.4 with Squid 4, the cache
hit ratio collapsed from 85 percent or more to the level of 5-15
percent. I believe this is due to the removal of support for the
ignore-no-cache directive, which eliminates the possibility of
aggressive caching and reduces the value of a caching proxy to almost
zero.

HTTP caches normally. However, due to the widespread trend towards
HTTPS, caching has dramatically decreased to unacceptable levels.

Has anyone else noticed this effect? And what is the state of caching now?





Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Eliezer Croitoru

Hey Yuri,

I am not sure whether you think that Squid version 4 having an extremely
low hit ratio is bad or not, but I can understand your view of things.
Usually I redirect people to this page:
http://wiki.squid-cache.org/Features/StoreID/CollisionRisks#Several_real_world_examples


But this time I can proudly say that the squid project is doing things
the right way, even if that might not be understood by some.
Before you or anyone declares that there is a low hit ratio due to
something that is missing, I will try to put some sense into how things
look in the real world.

A small story from a nice day of mine:
I was sitting and talking with a friend of mine, an MD to be exact, and
while we were talking I was comforting him about the wonders of
computers. He was complaining about how the software in the office moves
so slowly and how he needs to wait for it to respond with results. So I
hesitated a bit, but then I asked him: "What would happen if some MD
here in the office received the wrong content/results on a patient from
the software?" He answered, terrified by the question: "He could make
the wrong decision!" And then I described to him how good a position he
is in when he does not need to fear such scenarios.
In this same office Squid is used for many things, and it is crucial
that, besides the option to cache content, the ability to validate the
cache properly is set up right.


I do understand that caches are needed, and are sometimes crucial to 
give the application more CPU cycles or more RAM, but sometimes the 
hunger for cache hits overrides the actual requirement for content 
integrity: content must be re-validated from time to time.


I have seen a couple of times how a cache in a DB, or at some other 
level, produces very bad and unwanted results, even though I understand 
some of the complexity and caution that programmers apply when building 
all sorts of systems with caches in them.


If you want to understand more about the subject, pick your favorite 
scripting language and try to implement simple object caching.
You will then see how complex the task can be, and perhaps understand 
why caches are not such a simple thing, and especially why 
ignore-no-cache should not be used in any environment if at all possible.
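The exercise suggested above takes only a few lines, and even this naive TTL cache already shows the integrity problem being described: once an object is cached, readers keep getting the old value until the TTL expires, no matter how the origin changes. A minimal sketch:

```python
import time

class NaiveCache:
    """A deliberately simplistic object cache with a fixed TTL.
    It never re-validates: within the TTL, stale data wins."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.monotonic() - stored_at < self.ttl:
                return value  # served from cache, possibly stale
        value = fetch()  # miss or expired: go to the origin
        self.store[key] = (value, time.monotonic())
        return value

origin = {"page": "v1"}
cache = NaiveCache(ttl_seconds=60)
print(cache.get("page", lambda: origin["page"]))  # "v1" (miss, fetched)
origin["page"] = "v2"                             # origin changes...
print(cache.get("page", lambda: origin["page"]))  # still "v1" (stale hit)
```

This is exactly the trade-off ignore-no-cache forced on every response: the origin explicitly said "re-validate me", and the cache answered from stale data anyway.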


While I advise you not to use it, let me point you and others towards 
another approach to the subject.
If you are greedy for cache hits on specific sites or traffic and want 
to benefit from over-caching, there are solutions:

- You can alter or hack the Squid code to meet your needs.
- You can write an ICAP service that alters the response headers so 
that Squid considers the response cachable.
- You can write an eCAP module that alters the response headers in the 
same way.
- You can write your own cache service with your own algorithms in it.
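For the ICAP/eCAP route, the core of such a service is a header transformation like the one below. This is only a sketch of the rewriting logic; the ICAP protocol plumbing is not shown, and the max-age value is an arbitrary example, not a recommendation:

```python
def force_cachable(headers, max_age=3600):
    """Rewrite response headers so a proxy treats the object as cachable.
    `headers` is a list of (name, value) tuples; returns a new list.
    This reproduces the kind of override ignore-no-cache used to do,
    with all the same content-integrity risks."""
    dropped = {"cache-control", "pragma", "expires"}
    out = [(n, v) for (n, v) in headers if n.lower() not in dropped]
    out.append(("Cache-Control", "public, max-age=%d" % max_age))
    return out

resp = [("Content-Type", "text/html"),
        ("Cache-Control", "no-cache, no-store"),
        ("Pragma", "no-cache")]
print(force_cachable(resp))
# [('Content-Type', 'text/html'), ('Cache-Control', 'public, max-age=3600')]
```

An ICAP RESPMOD service would apply this to each response body-less header set before handing it back to Squid; the same caveat applies as above, since the origin said "do not cache" for a reason.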

Take into account that the Squid project tries to be as fault-tolerant 
as possible, because it is a very sensitive piece of software in very 
big production systems.
Squid does not try to meet a requirement of "maximum cache", and it is 
not Squid, as a caching proxy, that reduces your cache percentage!
The content is not cachable because all these applications describe 
their content as not cachable!
For a moment of sanity on the Squid project's behalf, try to contact 
Google/YouTube admins, support, operators, whoever, to understand how 
you could benefit from a local cache.
If and when you manage to contact them, let them know I was looking for 
a contact and never managed to find one available by phone or email. 
You cannot say anything like that about the Squid project: it can be 
contacted by email, and if required you can even get hold of the human 
being behind the software.


And I will try to put it in a geeky way:

deny_info 302:https://support.google.com/youtube/ big_system_that_doesnt_want_to_be_cached
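For context, a deny_info line like the one above only fires together with a matching ACL and http_access rule. A sketch of the surrounding squid.conf, where the dstdomain list is illustrative and not from any real configuration:

```
# Match the heavy, uncachable sites (illustrative domain list)
acl big_system_that_doesnt_want_to_be_cached dstdomain .youtube.com .googlevideo.com

# Deny them, and send the client a 302 redirect instead of an error page
http_access deny big_system_that_doesnt_want_to_be_cached
deny_info 302:https://support.google.com/youtube/ big_system_that_doesnt_want_to_be_cached
```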


Eliezer

* P.S. If you do want to write an ICAP service or an eCAP module to 
replace "ignore-no-cache", I can give you some starter code that might 
help.



On 25/10/2015 17:17, Yuri Voinov wrote:


[quoted message trimmed]

Re: [squid-users] Squid4 has extremely low hit ratio due to lacks of ignore-no-cache

2015-10-25 Thread Yuri Voinov

In a nutshell: I don't need possible explanations. I want to know 
whether this is a bug or by design.

On 26.10.15 1:17, Eliezer Croitoru wrote:
> [quoted message trimmed]