Re: [squid-users] How to limit upload bandwidth in squid proxy?

2015-08-23 Thread Brandon Elliott
Hi Nicolaas,

I read the article you pointed to, but it isn't applicable. I need to be
able to limit upload bandwidth PER USER, just as Squid does for download
bandwidth.

QUOTE >> I don’t really know if squid supports this or not, but I can’t
imagine why it would need to, with traffic control built into the linux
kernel in a number of ways.

Somebody saw fit to put download bandwidth limiting into squid, so it makes
sense to support upload bandwidth limiting as well.

I am already using ncsa_auth to manage squid users. I need a solution that
doesn't involve using a second authentication just to limit upload
bandwidth per user.
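
The closest thing I have found in the docs so far is client_delay_pools
(Squid 3.2+), which throttle how fast Squid reads request bodies from
clients. A sketch of what I think that looks like (the "authed" ACL name is
mine; the buckets appear to be tracked per client IP, so this may only
approximate per-login limits):

  # ~64 KB/s sustained upload per client, 128 KB burst bucket
  acl authed proxy_auth REQUIRED
  client_delay_pools 1
  client_delay_initial_bucket_level 50
  client_delay_access 1 allow authed
  client_delay_parameters 1 65536 131072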

Any ideas?

On Sun, Aug 23, 2015 at 5:35 PM, Nicolaas Hyatt 
wrote:

> I don’t really know if squid supports this or not, but I can’t imagine why
> it would need to, with traffic control built into the Linux kernel in a
> number of ways. For example, the Linux Advanced Routing & Traffic Control
> HOWTO (Chapter 9) (http://www.lartc.org/howto/lartc.qdisc.html) can easily
> help you achieve what you want.
>
>
>
>
>
> *From:* squid-users [mailto:squid-users-boun...@lists.squid-cache.org] *On
> Behalf Of *Brandon Elliott
> *Sent:* Sunday, August 23, 2015 4:27 PM
> *To:* squid-users@lists.squid-cache.org
> *Subject:* [squid-users] How to limit upload bandwidth in squid proxy?
>
>
>
> Hello all,
>
> I ran into another issue for which I could not find any answers online. In
> testing the proxies, I found that download bandwidth was properly limited
> using delay pools, but upload bandwidth was allowed unlimited access. So a
> single user could potentially use up all the bandwidth with a large, fast
> upload.
>
> How can I prevent this?
>
> Thanks,
>
> Brandon
>


-- 
*Brandon Elliott*
CEO
-

Office:  888.763.6797
http://www.neosys.net
2774 N Cobb Pkwy NW #244
Suite 109
Kennesaw, GA 30152


[squid-users] external_acl_type not working on Squid Cache: Version 3.5.5

2015-08-23 Thread hs tan
I have been trying to test Squid, but it doesn't seem to be working. The
closest examples I studied are:

http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+12.+Authentication+Helpers/12.5+External+ACLs/
http://www.stress-free.co.nz/transparent_squid_authentication_to_edirectory
but none of them work.

From the simple test I did:

The print "ERR" is supposed to produce output in cache.log, but I didn't
see anything appearing. Whether I set "ERR" or "OK", there is no effect on
access. I just want a simple test: if the helper prints "ERR", stop the
user from proceeding; if it prints "OK", let them proceed.

The error message in cache.log is:
2015/07/28 11:45:56 kid1| helperHandleRead: unexpected reply on channel 0
from mysql_log #Hlpr17 ''

squid.conf is:

auth_param basic program /usr/lib64/squid/basic_ldap_auth -v 3 -b
"dc=xxx,dc=edu.xx" -D "cn=Manager,dc=xxx,dc=edu.xx"  -w passwd -f uid=%s
ldap.xxx.edu.xx:389

acl ldap-auth proxy_auth REQUIRED
auth_param basic children 5
auth_param basic realm Web Proxy Server
auth_param basic credentialsttl 1 minute

external_acl_type mysql_log %SRC %LOGIN %{Host} /home/squid/quota_helper.pl
acl ex_log external mysql_log
http_access allow ex_log

http_access allow ldap-auth
http_access allow localnet
http_access allow localhost
http_access deny all

quota_helper.pl is:

#!/usr/bin/perl -wl

$|=1;
while (<STDIN>) {
print "ERR";
}
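
For reference, the "unexpected reply on channel 0" error generally means
the helper's reply line did not match the protocol Squid expected. If a
concurrency=N option were added to the external_acl_type line, each request
line would carry a channel ID that the reply must echo back; a sketch of
that variant (untested here, and the field handling is mine):

#!/usr/bin/perl
use strict;
use warnings;
$| = 1;   # unbuffered, so Squid sees each reply immediately
# With "external_acl_type ... concurrency=10 ..." each request line is
# "<channel-id> %SRC %LOGIN %{Host}" and the reply must echo the ID.
while (my $line = <STDIN>) {
    chomp $line;
    my ($channel) = split ' ', $line;
    print "$channel ERR\n";   # deny; print "$channel OK\n" to allow
}
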
[root@localhost ~]# squid -v shows:

Squid Cache: Version 3.5.5
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--exec_prefix=/usr'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-follow-x-forwarded-for'
'--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP'
'--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp'
'--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2' '--enable-esi'
'--enable-ssl-crtd' '--enable-icmp' '--with-aio'
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl'
'--with-openssl' '--with-pthreads' '--with-included-ltdl'
'--disable-arch-native' '--without-nettle'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro ' 'CXXFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic
-fPIC'
'PKG_CONFIG_PATH=%{_PKG_CONFIG_PATH}:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
--enable-ltdl-convenience
[root@localhost ~]#


Re: [squid-users] ssl_bump updates coming in 3.5.8

2015-08-23 Thread Alex Rousskov
On 08/21/2015 01:28 AM, Amos Jeffries wrote:

> Christos has managed (we think) to resolve a fairly major design issue
> that has been plaguing the 3.5 series peek-and-splice feature so far.
> ()


Clarification: No major design issue has been resolved. The design has
not changed. We fixed the implementation to match the documented design.

I cannot come up with a specific previously-working configuration
example that our fix would break, but that does not mean such
configurations do not exist. If your ssl_bump peek or stare rule could
match at step #3, then you were in a danger zone: Our buggy code used to
incorrectly splice or bump (depending on various complex factors) when
such a match happened at step #3. After the fix, such a match can never
happen: peek and stare rules are now correctly ignored during step #3.

Here is an example of a configuration that was _not_ working reliably
before the fix (under certain atypical but realistic conditions such as
IE on Windows XP):

  ssl_bump peek all
  ssl_bump splice all

The above configuration should work as expected after the fix.
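
As an aside, a configuration can avoid relying on step-matching behavior
entirely by pinning each rule to a handshake step with at_step ACLs; a
sketch using stock 3.5 syntax:

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  ssl_bump peek step1
  ssl_bump stare step2
  ssl_bump bump all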


The change is not meant to resolve any assertions. However, since it
affects when/whether Squid splices or bumps, the change may affect the
asserting code as well.


Hope this clarifies,

Alex.



Re: [squid-users] How to limit upload bandwidth in squid proxy?

2015-08-23 Thread Nicolaas Hyatt
I don’t really know if squid supports this or not, but I can’t imagine why it
would need to, with traffic control built into the Linux kernel in a number of
ways. For example, the Linux Advanced Routing & Traffic Control HOWTO
(Chapter 9) (http://www.lartc.org/howto/lartc.qdisc.html) can easily help you
achieve what you want.
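
A minimal sketch of that kernel-level approach, assuming eth1 is the
proxy's WAN-facing interface (the interface name and rates are examples;
tc classifies by IP address, so per-login classing would need connection
marking or ingress policing, as the HOWTO describes):

  # Cap total upload (egress) on the WAN side at 1 Mbit with HTB
  tc qdisc add dev eth1 root handle 1: htb default 10
  tc class add dev eth1 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit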


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Brandon Elliott
Sent: Sunday, August 23, 2015 4:27 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] How to limit upload bandwidth in squid proxy?


Hello all,

I ran into another issue for which I could not find any answers online. In
testing the proxies, I found that download bandwidth was properly limited
using delay pools, but upload bandwidth was allowed unlimited access. So a
single user could potentially use up all the bandwidth with a large, fast
upload.

How can I prevent this?

Thanks,

Brandon



[squid-users] How to limit upload bandwidth in squid proxy?

2015-08-23 Thread Brandon Elliott
Hello all,

I ran into another issue for which I could not find any answers online. In
testing the proxies, I found that download bandwidth was properly limited
using delay pools, but upload bandwidth was allowed unlimited access. So a
single user could potentially use up all the bandwidth with a large, fast
upload.

How can I prevent this?

Thanks,

Brandon


Re: [squid-users] peek and splice content inspection question

2015-08-23 Thread wmunny william

> Looking for one single thing that does everything DG or e2guardian do,
> or wraps them completely is the wrong approach. They are almost
> full-blown proxies like Squid.
> 
> The *CAP design is to leave all the transfer proxying and caching duties
> to software like Squid and only perform the actual content adaptation
> policies in the service/module.
> 
> You need to look at what DG is doing for you now and how much of that is
> available from squid.conf capabilities. Most of it usually is. Only the
> remaining "fiddling with payloads" bit actually needs a third-party service.
> 
> 

Yes, I know that Squid is very powerful, but DG or E2 seem, to me, easier
to use with complex rules. I'm working with multiple user groups, regexes
on HTML, rules with exceptions (a site allowed only under certain
conditions), etc.

I suspect that if I reproduce my configuration in squid.conf it will be
harder to maintain afterwards.

Also, these programs are massively multi-threaded; in my usage Squid + DG
uses less CPU than Squid alone, I mean the load is shared across the cores.
I also tried Squid with SMP, but there are some restrictions (delay pools,
identification, if I remember right).


> GreasySpoon or qlproxy seems to be the high profile ones. c-icap and
> Traffic Spicer seem to offer frameworks rather than pre-made filters
> 
> 
> The interfaces GreasySpoon or qlproxy describe say easily scriptable to
> do filtering. Though filtering is a huge topic, so "easily" is up for
> interpretation.

Thanks, I will take a look 

> 
> And fiddling around with your customers content is very site-specific
> about what can or can't be done. Thus the frameworks and script engines
> being most high profile.
> 

Yes, you are right. I wish E2 would make that change; for me the ideal
situation is Squid (cache and identification) and a pool of E2 instances
speaking ICAP. I guess there is no hope for SquidGuard because it's very
different: a redirector tied to Squid.


Re: [squid-users] peek and splice content inspection question

2015-08-23 Thread Amos Jeffries
On 23/08/2015 10:36 p.m., wmunny william wrote:
> 
>>> Sorry to jump on a late thread - it is also possible to use ICAP/eCAP 
>>> server to filter the actual contents of the stream.
>>>
>>> C-ICAP comes to mind first, then eCap samples from 
>>> http://www.e-cap.org/Downloads
>>>
>>
>> And the *CAP services are a better solution than either URL-rewriting or
>> chaining proxies, since the HTTPS only gets MITM'd once, not twice or more.
>>
>> Amos
> 
> 
> Hello All,
> 
> I know DansGuardian, e2guardian, and squidGuard, but no free solution with
> ICAP. Do you have any advice on this?
> 
> Maybe there are some "hubs": Squid > ICAP > DansGuardian?

Looking for one single thing that does everything DG or e2guardian do,
or wraps them completely is the wrong approach. They are almost
full-blown proxies like Squid.

The *CAP design is to leave all the transfer proxying and caching duties
to software like Squid and only perform the actual content adaptation
policies in the service/module.

You need to look at what DG is doing for you now and how much of that is
available from squid.conf capabilities. Most of it usually is. Only the
remaining "fiddling with payloads" bit actually needs a third-party service.
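
For instance, much of the plain domain/URL blocking DG handles can be
expressed natively in squid.conf; a sketch (the list file names are
placeholders):

  acl blockedsites dstdomain "/etc/squid/blocked_domains.txt"
  acl badurls url_regex -i "/etc/squid/url_patterns.txt"
  http_access deny blockedsites
  http_access deny badurls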


GreasySpoon and qlproxy seem to be the high-profile ones. c-icap and
Traffic Spicer seem to offer frameworks rather than pre-made filters.


The interfaces GreasySpoon and qlproxy describe are said to be easily
scriptable for filtering. Though filtering is a huge topic, so "easily" is
up for interpretation.

And fiddling around with your customers' content is very site-specific in
terms of what can or can't be done. Thus the frameworks and script engines
being the most high profile.

HTH
Amos



Re: [squid-users] Lots of "Vary object loop!"

2015-08-23 Thread Amos Jeffries
On 22/08/2015 4:20 a.m., Sebastian Goicochea wrote:
> Hello everyone, I'm having a strange problem:
> 
> Several servers, same hardware, using same version of squid (3.5.4)
> compiled using the same configure options, same configuration files. But
> in two of them I get LOTS of these Vary object loop! lines in cache.log
> 
> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://resources.mlstatic.com/frontend/vip-fend-webserver/assets/bundles/photoswipe-6301b943e5586fe729e5d6480120a893.js'
> 'accept-encoding="gzip"'
> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt, 'http://www.google.com/afs/ads/i/iframe.html'
> 'accept-encoding="gzip,%20deflate"'
> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
> 2015/08/21 13:08:01 kid1| varyEvaluateMatch: Oops. Not a Vary match on
> second attempt,
> 'http://minicuotas.ribeiro.com.ar/images/products/large/035039335000.jpg' 
> 'accept-encoding="gzip,%20deflate"'
> 
> 2015/08/21 13:08:01 kid1| clientProcessHit: Vary object loop!
> 
> I've read what I could find on forums but could not solve it. Is this
> something to worry about?

The short answer:

Yes and no. Squid is signalling that it is completely unable to perform
its caching duty for these URLs. The proxying duty continues; the only
effect visible to the client is higher latency.

It is up to you whether that latency cost is urgent or not. It is
certainly of high enough importance that you need to be told each time (no
rate limiting) when you have asked to receive important notices.


> If that is not the case, how can I disable the
> excessive logging?

You can reduce your logging level to show only critical problems,
instead of showing all details rated 'important'.

  debug_options ALL,0

NOTE: important (ALL,1) includes a lot of things like this that do
really need to be fixed to get better service out of either your proxy
or the underlying network. But they can be put on your todo list if you
don't have time right now.


> What is the condition that generates this?


The long answer:


The "what's happening" is:

Your cache contains an object which was delivered by the server along
with headers stating that behind the URL is a large set of possible
responses. *All* requests for that URL use a certain set of headers
(listed in Vary) to determine which binary-level object is applicable
(or not) on a per-client / per-request basis.
 In order to cache the object Squid has to follow that same selection
criteria *exactly*.

The most common example is gzip vs non-gzip encoded copies of things,
which you can see those messages relate to.

Squid stores this information in a "Vary object" associated with only
the URL. That Vary object is used to perform a secondary cache index
lookup to see if the particular variant needed is stored.

The expectation is that there would be 3+ objects stored for this URL; a
gzip data object, various non-gzip data objects, and a metadata object
("Vary object") telling Squid that it needs to look at the
accept-encoding header to find which of those data objects to send
the client.
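
To make the mechanism concrete, a typical Vary-driven exchange looks like
this (the hostname and values here are illustrative only):

  GET /script.js HTTP/1.1
  Host: example.com
  Accept-Encoding: gzip

  HTTP/1.1 200 OK
  Vary: Accept-Encoding
  Content-Encoding: gzip

The cached copy must then be indexed under both the URL and the request's
Accept-Encoding value, which is exactly the secondary lookup described above.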


The messages themselves mean:

"Oops. Not a Vary match on second attempt"

 - that the Vary object saying "look at headers X+Y+Z" is pointing at
itself or another Vary metadata object saying look at some other
headers. A URL cannot have two different Vary header values
simultaneously (Vary is a single list "value").
Something really weird is going on in your cache. Squid should handle
this by abandoning the cache lookups and going to the origin for fresh copies.

You could be causing it by using url-rewrite or store-id helpers wrongly
to pass requests for a URL to servers which produce different responses.
So that is well worth looking into.

IMPORTANT: It is mandatory that any re-writing only be done to
'collapse' URLs that are *actually* producing identical objects and
producing them in (outwardly) identical ways. This Vary looping is just
the tip of an iceberg of truly horrible failures that occur "silently"
with re-writing.
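
As a sketch of the safe pattern: a Store-ID rule should only merge URLs
that are known to serve byte-identical content, e.g. numbered CDN mirrors
(the hostnames and file paths here are hypothetical):

  # squid.conf, using the storeid_file_rewrite helper shipped with Squid
  store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db

  # /etc/squid/storeid.db (regex <TAB> replacement)
  ^http://cdn[0-9]+\.example\.com/(.*)  http://cdn.example.com.squid.internal/$1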



There is another similar message that can be mixed in the long list:

"Oops. Not a Vary object on second attempt," (note the 1-word difference)
 - this is almost but not quite so bad, and is usually seen with broken
origin servers. All you can do about the problem itself then is fire off
bug reports to people and hope it gets fixed by the sysadmin in charge.


Both situations are very bad for HTTP performance, and bad for churning
your cache as well. But Squid can cope easily enough by just fetching a
new object and dropping what is in the cache. That "Vary object loop!"
message is telling you Squid is doing exactly that.


A quick test with the tool at redbot.org shows that the
resources.mlstatic.com server is utterly borked. It is not even sending
correct ETag ids for the objects it's outputting. That's a sign to me that
the admin is trying to be smart with headers

Re: [squid-users] peek and splice content inspection question

2015-08-23 Thread wmunny william

> > Sorry to jump on a late thread - it is also possible to use ICAP/eCAP 
> > server to filter the actual contents of the stream.
> > 
> > C-ICAP comes to mind first, then eCap samples from 
> > http://www.e-cap.org/Downloads
> > 
> 
> And the *CAP services are a better solution than either URL-rewriting or
> chaining proxies, since the HTTPS only gets MITM'd once, not twice or more.
> 
> Amos


Hello All,

I know DansGuardian, e2guardian, and squidGuard, but no free solution with
ICAP. Do you have any advice on this?

Maybe there are some "hubs": Squid > ICAP > DansGuardian?

William 