Re: [squid-users] TPROXY Squid Error.

2014-07-10 Thread Eliezer Croitoru
Well, about the MikroTik rules: you already know that NAT is not the right
direction.

In any case about the basic_data.sh script.
I had a typo, but...
What terminal are you using?
In most color terminals you won't see the special markings.

Thanks,
Eliezer

On 07/10/2014 03:28 AM, Info OoDoO wrote:

Hi,
I'm using a MikroTik 1100 AH X2 router,

here is my Basic Data from your latest script.

http://pastebin.com/GHkD5yYx

Thanks,
Ganesh J




[squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread israelsilva1
Hi, thanks for your clarification.

On another note, do you have a guide to install squid 3.4 as you suggested?
I guess there is no rpm and I'll have to compile?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-rock-caching-memory-RAM-tp476p4666788.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread Antony Stone
On Thursday 10 July 2014 at 08:47:37, israelsilva1 wrote:

 Hi, thanks for your clarification.
 
 On another note, do you have a guide to install squid 3.4 as you suggested?
 I guess there is no rpm and I'll have to compile?

You could try http://wiki.squid-cache.org/SquidFaq/BinaryPackages


Regards,

Antony.

-- 
In fact I wanted to be John Cleese and it took me some time to realise that 
the job was already taken.

 - Douglas Adams

   Please reply to the list;
 please *don't* CC me.


[squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread israelsilva1
hmm, that's how I installed squid.

weird:

[root@dxb-squid34 ~]# yum install squid
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * base: centos.fastbull.org
 * epel: epel.mirror.srv.co.ge
 * extras: centos.fastbull.org
 * updates: centos.fastbull.org
Setting up Install Process
Package 7:*squid-3.5.0.001*-1.el6.x86_64 already installed and latest
version
Nothing to do


[root@dxb-squid34 ~]#
[root@dxb-squid34 ~]# squid -v
Squid Cache: Version *3.HEAD-20140127-r13248*
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share'
'--includedir=/usr/include' '--libdir=/usr/lib64'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-eui'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2'
'--enable-esi' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=16384' '--with-dl' '--with-openssl'
'--with-pthreads' '--disable-arch-native'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-rock-caching-memory-RAM-tp476p4666790.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Blocking specific URL

2014-07-10 Thread Andreas Westvik
So this is driving me crazy. Some of my users are playing Battlefield 4, and 
Battlefield has this server-browsing page that has a webm background.
Turns out this video downloads every few seconds, and that adds up to about 
8 GB every day. 
Here is the URL: 
http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

Now, I don't want to block http://eaassets-a.akamaihd.net/ since updates and 
such come from this CDN, and I don't want to block the webm file type.
And I can't for the life of me figure out how to block this specific URL? 
Google gives me only what I don't want to do.

Any pointers?

-Andreas

Re: [squid-users] Re: Unable to get HULU ads to cache (not partial content)

2014-07-10 Thread Eliezer Croitoru

More relevant to us:
http://bugs.squid-cache.org/show_bug.cgi?id=3830

I am not sure what exactly the bug causes, i.e. whether it will use a 4 MB 
default maximum object size on the cache_dir.


Eliezer

On 07/09/2014 11:16 PM, Antony Stone wrote:

I see you have maximum_object_size in your squid.conf after cache_dir.

Try specifying maximum_object_size before cache_dir and you shouldn't need the
max_size option to cache_dir.

See also https://bugzilla.redhat.com/show_bug.cgi?id=951224
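For reference, the intended ordering can be sketched as follows (the size and path are example values, not taken from this thread):

```
# set the global object-size limit first...
maximum_object_size 64 MB
# ...then define the cache_dir, which picks up the limit above;
# no explicit max-size option should be needed on the cache_dir line
cache_dir rock /var/spool/squid 10000
```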


Regards,


Antony.




Re: [squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread Eliezer Croitoru

You can try my repository at:
http://www1.ngtech.co.il/rpm/centos/6/

There is also a head repository at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/

I would still suggest the 3.4 branch.

Eliezer

On 07/10/2014 10:15 AM, israelsilva1 wrote:

hmm, that's how I installed squid.

weird:

[root@dxb-squid34 ~]# yum install squid
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
  * base: centos.fastbull.org
  * epel: epel.mirror.srv.co.ge
  * extras: centos.fastbull.org
  * updates: centos.fastbull.org
Setting up Install Process
Package 7:*squid-3.5.0.001*-1.el6.x86_64 already installed and latest
version
Nothing to do


[root@dxb-squid34 ~]#
[root@dxb-squid34 ~]# squid -v
Squid Cache: Version *3.HEAD-20140127-r13248*
Service Name: squid
configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share'
'--includedir=/usr/include' '--libdir=/usr/lib64'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-eui'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-wccpv2'
'--enable-esi' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=16384' '--with-dl' '--with-openssl'
'--with-pthreads' '--disable-arch-native'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'




Re: [squid-users] Blocking specific URL

2014-07-10 Thread Eliezer Croitoru

Why don't you cache it?
Take a look at:
https://redbot.org/?uri=http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

Eliezer

On 07/10/2014 10:21 AM, Andreas Westvik wrote:

So this is driving me crazy. Some of my users are playing Battlefield 4, and 
Battlefield has this server-browsing page that has a webm background.
Turns out this video downloads every few seconds, and that adds up to about 
8 GB every day.
Here is the URL: 
http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

Now, I don't want to block http://eaassets-a.akamaihd.net/ since updates and 
such come from this CDN, and I don't want to block the webm file type.
And I can't for the life of me figure out how to block this specific URL? 
Google gives me only what I don't want to do.

Any pointers?

-Andreas




[squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread israelsilva1
ok, but which version do I have in the end?
3.head 20140127 or 3.5.0?

please see my post above



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-rock-caching-memory-RAM-tp476p4666795.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread Eliezer Croitoru
You have 3.HEAD, but 3.5.0.00X is based on the 3.HEAD snapshots from the 
dates between 3.4 and 3.5 stable.
So you have squid-3.5.0.001, which is my first release of 3.HEAD after 
3.4 became stable.


Eliezer

On 07/10/2014 10:42 AM, israelsilva1 wrote:

ok, but which version do I have in the end?
3.head 20140127 or 3.5.0?

please see my post above




[squid-users] Re: [squid-users] Create and download cache digest to create proxy siblings

2014-07-10 Thread Klaus Reithmaier
 Now I've read that the right URL for the cache digest is

  http://test.lut.ac.uk:3128/squid-internal-periodic/store_digest (see item #8 here: 
http://www.squid-cache.org/CacheDigest/cache-digest-v5.txt)

But it seems that my squid doesn't create this store_digest file, because wget 
reports a 404 error. The second problem is that the squid sibling doesn't try 
to download this file. It only tries to download the netdb file.

I hope that somebody can help me. Thanks


-Klaus Reithmaier klaus.reithma...@lindner-group.com wrote: -
To: squid-users@squid-cache.org
From: Klaus Reithmaier klaus.reithma...@lindner-group.com
Date: 09.07.2014 16:51
Subject: [squid-users] Create and download cache digest to create proxy siblings

Hello,

I want to make a combined cache by defining two proxies in a cluster as 
siblings. To minimize traffic and latency, I want to do this by using cache 
digests and not by using ICP.

Squid is compiled with --enable-cache-digest, squid version is 3.3.12

cache_peer configuration in squid.conf on SIBLING_1:
cache_peer [IP_OF_SIBLING_2] sibling 8080 0 proxy-only no-query

A few minutes after restarting SIBLING_1 I see in the access.log of SIBLING_2 
that SIBLING_1 is making the following request:

1404912746.105   1 [IP_OF_SIBLING_1] TCP_MISS/200 271 GET  
http://[NAME_OF_SIBLING_2]:8080/squid-internal-dynamic/netdb -  HIER_NONE/- -

Is the netdb file the cache digest? It is way too small: the access.log says 
the size is 271 bytes, and if I download this file with wget, the file size 
of netdb is 0 bytes.

I've found somewhere else the following path for getting the cache digest: 
/squid-internal-periodic/store_digest, but here I get a 404 error code. What 
is the right path for the cache digest, and what do I need to create the digest?

Thanks




This e-mail contains confidential and/or privileged information from the 
Lindner Group. If you are not the intended recipient or have received this 
e-mail by fault, please notify the sender immediately and destroy this e-mail. 
Any unauthorized copying and/or distribution of this e-mail is strictly not 
allowed.






[squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread israelsilva1
ok, so should I uninstall squid and install using the latest 3.4 version from
www1.ngtech.co.il/rpm/centos/6/x86_64?

thanks



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-rock-caching-memory-RAM-tp476p4666799.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread Eliezer Croitoru

Depends on how adventurous you are.
The 3.4 branch is the official stable version and should work fine for most admins.
If you want to try the latest code and leave yourself open (basically) 
to unknown bugs and unknown situations, feel free.


Anyway, it mostly depends on your needs.
I have used 3.HEAD on a couple of occasions and it was stable for production.

From the topic subject I understand that you use rock storage; it is 
present in my RPMs, and therefore I think they are good enough for you.


Eliezer

P.S. Any testing of 3.HEAD code is more than welcome by the squid project.

On 07/10/2014 12:20 PM, israelsilva1 wrote:

ok, so should I uninstall squid and install using the latest 3.4 version from
www1.ngtech.co.il/rpm/centos/6/x86_64?

thanks




Re: [squid-users] Re: [squid-users] Create and download cache digest to create proxy siblings

2014-07-10 Thread Eliezer Croitoru

What OS are you using?
Is it a self-compiled squid?
If so, can you run the basic_data.sh script from here:
http://www1.ngtech.co.il/squid/basic_data.sh

It outputs a lot of data, so feel free to filter most of it.
In your case a basic squid -v should give the basic info.

Eliezer

On 07/10/2014 11:26 AM, Klaus Reithmaier wrote:

  Now I've read that the right URL for the cache digest is

   http://test.lut.ac.uk:3128/squid-internal-periodic/store_digest (see here 
#8: http://www.squid-cache.org/CacheDigest/cache-digest-v5.txt)

But it seems that my squid doesn't create this store_digest file, because wget 
reports a 404 error. The second problem is that the squid sibling doesn't try 
to download this file. It only tries to download the netdb file.

I hope that somebody can help me. Thanks




[squid-users] Passing Information up to the eCap adapter

2014-07-10 Thread Jatin Bhasin
Hello,

As I understand it, squid can currently send the client IP address up to the
eCap adapter using the squid configuration directive
*adaptation_send_client_ip*.

I needed more information in my eCap adapter, so I changed the squid source
code to be able to send the *client port, destination address and destination
port* to the eCap adapter.

But now my requirement is to be able to pass the *source MAC address and
destination MAC address* to the eCap adapter as well, and I am not able to
understand how I can do it.

Can someone please guide me on where I should start looking in the squid
source code so that the MAC addresses can be passed up to the eCap adapter?


Thanks,
Jatin


[squid-users] Re: squid rock caching memory RAM

2014-07-10 Thread israelsilva1
Thanks for the clarifications!
Cheers


Eliezer Croitoru-2 wrote
 Depends on how adventurous you are.
 The 3.4 branch is the official stable version and should work fine for most
 admins.
 If you want to try the latest code and leave yourself open (basically) 
 to unknown bugs and unknown situations, feel free.
 
 Anyway, it mostly depends on your needs.
 I have used 3.HEAD on a couple of occasions and it was stable for production.
 
 From the topic subject I understand that you use rock storage; it is 
 present in my RPMs, and therefore I think they are good enough for you.
 
 Eliezer





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-rock-caching-memory-RAM-tp476p4666804.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Passing Information up to the eCap adapter

2014-07-10 Thread Antony Stone
On Thursday 10 July 2014 at 12:34:37, Jatin Bhasin wrote:

 Hello,
 
 As I understand currently squid can send client IP address up to the eCap
 adapter using squid configuration directive *adaptation_send_client_ip.*
 
 I needed more information in my eCap adapter so I changed the squid source
 code to be able to send *Client Port, Destination Address and Destination
 port* to the eCap adapter.
 
 But now my requirement is to be able to pass *source MAC address and
 destination MAC address* as well to the eCap adapter. But I am not able to
 understand how I can do it.

What do you mean by "destination MAC address"?

So long as you're aware that this will be the MAC address of the Squid proxy, 
and not the MAC address of the server with the destination IP address, okay, 
but there's no way for a machine to find out the MAC address of another machine 
which is not on its own local subnet.

That said, I'd be slightly surprised if Squid even knows the MAC addresses 
(they're likely to be stripped off by the networking stack shortly before it 
passes the IP packet to Squid); however, I'm happy to be corrected on this by 
someone more familiar with its internals than I am.


Regards,


Antony.

-- 
Normal people think "If it ain't broke, don't fix it."
Engineers think "If it ain't broke, it doesn't have enough features yet."

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] Re: [squid-users] Create and download cache digest to create proxy siblings

2014-07-10 Thread Klaus Reithmaier
 Thanks for your help, but I just found the problem myself. It was a layer 8 
problem:

I copy-pasted the configure parameter from the wrong documentation. It was 
documented as --enable-cache-digest, but the right parameter is 
--enable-cache-digests.

Now it's working.

Thanks anyway...
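For anyone replicating this, a minimal sketch of the resulting setup (the IP and ports below are placeholders, not values from this thread):

```
# build squid with: ./configure --enable-cache-digests ...
# (note the trailing "s")

# squid.conf on each sibling: peer without ICP queries, so the
# digest is what drives sibling cache lookups
cache_peer 192.0.2.2 sibling 8080 0 proxy-only no-query

# the locally generated digest is then served at:
#   http://<this-proxy>:<http_port>/squid-internal-periodic/store_digest
```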

-Eliezer Croitoru elie...@ngtech.co.il wrote: -
To: squid-users@squid-cache.org
From: Eliezer Croitoru elie...@ngtech.co.il
Date: 10.07.2014 12:08
Subject: Re: [squid-users] Re: [squid-users] Create and download cache 
digest to create proxy siblings

What OS are you using?
Is it a self-compiled squid?
If so, can you run the basic_data.sh script from here:
http://www1.ngtech.co.il/squid/basic_data.sh

It outputs a lot of data, so feel free to filter most of it.
In your case a basic squid -v should give the basic info.

Eliezer

On 07/10/2014 11:26 AM, Klaus Reithmaier wrote:
   Now I've read that the right URL for the cache digest is

    http://test.lut.ac.uk:3128/squid-internal-periodic/store_digest (see here 
 #8: http://www.squid-cache.org/CacheDigest/cache-digest-v5.txt)

 But it seems that my squid doesn't create this store_digest file, because 
 wget reports a 404 error. The second problem is that the squid sibling 
 doesn't try to download this file. It only tries to download the netdb file.

 I hope that somebody can help me. Thanks







Re: [squid-users] Passing Information up to the eCap adapter

2014-07-10 Thread Jatin Bhasin
Hi Antony,

Yes, I need the source and destination MAC address of the packet which
is received by squid (I am happy with that).
Also, I did think at first that squid would not have access to the
source and destination MAC of the packet, as you said it would
have been stripped off by the networking stack, but then I saw that
squid has ACLs based on MAC addresses.

Please visit below link:
http://wiki.squid-cache.org/SquidFaq/SquidAcl

* ACL TYPES AVAILABLE *
arp: Ethernet (MAC) address matching


Seeing this, I hope that we do have the MAC address of the packet, so that
I can push that information up to the eCap adapter.
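For context, a hedged sketch of what that ACL type looks like in squid.conf (the MAC value is a placeholder); note that arp ACLs can only see the MAC of clients on squid's directly connected subnet:

```
# match requests coming from one client NIC by its Ethernet (MAC) address
acl known_client arp 00:11:22:33:44:55
http_access allow known_client
```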

Thanks,
Jatin

On Thu, Jul 10, 2014 at 8:46 PM, Antony Stone
antony.st...@squid.open.source.it wrote:
 On Thursday 10 July 2014 at 12:34:37, Jatin Bhasin wrote:

 Hello,

 As I understand currently squid can send client IP address up to the eCap
 adapter using squid configuration directive *adaptation_send_client_ip.*

 I needed more information in my eCap adapter so I changed the squid source
 code to be able to send *Client Port, Destination Address and Destination
 port* to the eCap adapter.

 But now my requirement is to be able to pass *source MAC address and
 destination MAC address* as well to the eCap adapter. But I am not able to
 understand how I can do it.

 What do you mean by "destination MAC address"?

 So long as you're aware that this will be the MAC address of the Squid proxy,
 and not the MAC address of the server with the destination IP address, okay,
 but there's no way for a machine to find out the MAC address of another 
 machine
 which is not on its own local subnet.

 That said, I'd be slightly surprised if Squid even knows the MAC addresses
 (they're likely to be stripped off by the networking stack shortly before it
 passes the IP packet to Squid); however, I'm happy to be corrected on this by
 someone more familiar with its internals than I am.


 Regards,


 Antony.

 --
 Normal people think "If it ain't broke, don't fix it."
 Engineers think "If it ain't broke, it doesn't have enough features yet."

Please reply to the list;
  please *don't* CC me.


Re: [squid-users] Blocking specific URL

2014-07-10 Thread Alexandre
I imagine it is not cached because you either don't have caching enabled
or the size of the video is larger than the maximum object cache size.
This is defined by maximum_object_size (the default is 4 MB). Increasing
this for everything will obviously have some impact.

I don't know if you can force squid to cache particular content (?)

Concerning blocking the specific URL: someone correct me if I am wrong,
but I don't believe you can do this with only squid.
The squid ACL system can apparently only block per domain:
http://wiki.squid-cache.org/SquidFaq/SquidAcl

What I recommend is to look into URL rewriting (i.e. filtering).
SquidGuard is the one I use, and it is quite popular.
Essentially you install squidGuard and set up its config file to do the
filtering according to your blacklist / whitelist.

* http://www.squidguard.org/

Then you need to define squidGuard in your squid config as the URL rewriter:

url_rewrite_program /usr/bin/squidGuard


Obviously this is a bit of work for just one URL, but if you think you
will need to block more URLs in the future, it is the way to go IMO.
SquidGuard has some performance overhead, but I believe it is small even
with fairly large lists.
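A minimal squidGuard configuration along those lines might look like this (all paths and names are illustrative, not from this thread):

```
# /etc/squidguard/squidGuard.conf (illustrative paths)
dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest blocked {
    urllist blocked/urls     # one URL per line, without the scheme
}

acl {
    default {
        pass !blocked all
        redirect http://localhost/blocked.html
    }
}
```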

Alexandre


On 10/07/14 09:27, Eliezer Croitoru wrote:
 Why don't you cache it?
 Take a look at:
 https://redbot.org/?uri=http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm


 Eliezer

 On 07/10/2014 10:21 AM, Andreas Westvik wrote:
 So this is driving me crazy. Some of my users are playing Battlefield
 4, and Battlefield has this server-browsing page that has a webm
 background.
 Turns out this video downloads every few seconds, and that adds up to
 about 8 GB every day.
 Here is the URL:
 http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

 Now, I don't want to block http://eaassets-a.akamaihd.net/ since
 updates and such come from this CDN, and I don't want to block the
 webm file type.
 And I can't for the life of me figure out how to block this specific URL?
 Google gives me only what I don't want to do.

 Any pointers?

 -Andreas




Re: [squid-users] Blocking specific URL

2014-07-10 Thread Leonardo Rodrigues

On 10/07/14 09:04, Alexandre wrote:

Concerning blocking the specific URL: someone correct me if I am wrong,
but I don't believe you can do this with only squid.
The squid ACL system can apparently only block per domain:
http://wiki.squid-cache.org/SquidFaq/SquidAcl



Of course you can block specific URLs using only squid ACL options!

#   acl aclname url_regex [-i] ^http:// ...         # regex matching on whole URL
#   acl aclname urlpath_regex [-i] \.gif$ ...       # regex matching on URL path


if the URL is:

http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

then something like:

acl blockedurl url_regex -i akamaihd\.net\/battlelog\/background-videos\/
http_access deny blockedurl

That should do it! And I did not even include the filename which, I 
imagine, can change between different stages.




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it





[squid-users] Transparent proxying and forwarding loop detected

2014-07-10 Thread Peter Smith
Hi list,

I'm running Squid 3.3 on Linux as part of a wireless hotspot solution.

The box has two network interfaces: one to the outside world, the
other a private LAN with IP 10.0.0.1. On the LAN I'm using CoovaChilli
as an active portal.

I'd like to transparently intercept and cache web traffic from wifi
clients. Coova has a configuration option for the IP and port of an
optional proxy; all web traffic from wireless clients will be routed
through this. I've set it to 10.0.0.1:3128.

Here's my squid config:

acl localnet src 10.0.0.0/255.0.0.0   # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localnet
http_access deny all

http_port 10.0.0.1:3128 transparent
http_port 10.0.0.1:3127

coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$  0    20%     2880
refresh_pattern .               0       20%     4320

Unfortunately this throws "WARNING: Forwarding loop detected" warnings
(and, in the client's browser, an "Access Denied" error from Squid), and
I can't figure out why.

Running Squid in debugging mode (level 2), here's what I see when one
of the clients generates some Windows-related traffic


2014/07/10 13:43:57.438| client_side.cc(2316) parseHttpRequest: HTTP
Client local=10.0.0.1:3128 remote=10.0.0.4:60976 FD 8 flags=33
2014/07/10 13:43:57.438| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET /ncsi.txt HTTP/1.1
Connection: Close
User-Agent: Microsoft NCSI
Host: www.msftncsi.com


--
2014/07/10 13:43:57.449| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.449| client_side_request.cc(760)
clientAccessCheck2: No adapted_http_access configuration. default:
ALLOW
2014/07/10 13:43:57.449| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.450| forward.cc(121) FwdState: Forwarding client
request local=10.0.0.1:3128 remote=10.0.0.4:60976 FD 8 flags=33,
url=http://www.msftncsi.com/ncsi.txt
2014/07/10 13:43:57.451| peer_select.cc(289) peerSelectDnsPaths: Found
sources for 'http://www.msftncsi.com/ncsi.txt'
2014/07/10 13:43:57.451| peer_select.cc(290) peerSelectDnsPaths:  
always_direct = DENIED
2014/07/10 13:43:57.451| peer_select.cc(291) peerSelectDnsPaths:   
never_direct = DENIED
2014/07/10 13:43:57.451| peer_select.cc(295) peerSelectDnsPaths:  
   DIRECT = local=0.0.0.0 remote=10.0.0.1:3128 flags=1
2014/07/10 13:43:57.451| peer_select.cc(304) peerSelectDnsPaths:  
 timedout = 0
2014/07/10 13:43:57.454| http.cc(2204) sendRequest: HTTP Server
local=10.0.0.1:35439 remote=10.0.0.1:3128 FD 11 flags=1
2014/07/10 13:43:57.455| http.cc(2205) sendRequest: HTTP Server REQUEST:
-
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


--
2014/07/10 13:43:57.456| client_side.cc(2316) parseHttpRequest: HTTP
Client local=10.0.0.1:3128 remote=10.0.0.1:35439 FD 13 flags=33
2014/07/10 13:43:57.456| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


--
2014/07/10 13:43:57.459| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.459| client_side_request.cc(760)
clientAccessCheck2: No adapted_http_access configuration. default:
ALLOW
2014/07/10 13:43:57.459| client_side_request.cc(786)
clientAccessCheckDone: The request GET
http://www.msftncsi.com/ncsi.txt is ALLOWED, because it matched
'localnet'
2014/07/10 13:43:57.459| WARNING: Forwarding loop detected for:
GET /ncsi.txt HTTP/1.1
User-Agent: Microsoft NCSI
Host: www.msftncsi.com
Via: 1.1 c3me-pete (squid/3.3.8)
X-Forwarded-For: 10.0.0.4
Cache-Control: max-age=259200
Connection: keep-alive


2014/07/10 13:43:57.460| errorpage.cc(1281) BuildContent: No existing
error page language negotiated for ERR_ACCESS_DENIED. Using default
error file.
2014/07/10 13:43:57.463| client_side_reply.cc(1974)

[squid-users] Re: Transparent proxying and forwarding loop detected

2014-07-10 Thread babajaga
Having a very similar config to yours up and running (squid+chilli), you had
better ask in a chilli forum OR in the chilli group on LinkedIn. (I am
there, too :-)
There are several issues to be considered with such a setup:
- Proper config of iptables, as chilli also modifies them. And for
transparent squid you need special rule(s).
- Do not mix up transparent and non-transparent in regards to squid/chilli:
"Coova has a configuration option for the IP and port of an
optional proxy - all web traffic from wireless clients will be routed
through this. I've set it to 10.0.0.1:3128"

So you most likely set up chilli for a NON-transparent proxy,
since in chilli's config you did NOT enable HS_POSTAUTH_PROXYPORT for
TRANSPARENT!

"http_port 10.0.0.1:3128 transparent" - shouldn't it be "intercept" for
squid 3.3?


Again, please switch to one of the chilli forums to be better served.
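For reference, a common shape for an intercepting squid 3.3 setup is sketched below (the interface name and ports are placeholders; this is an untested sketch, not a verified chilli integration):

```
# squid.conf: 'intercept' replaces the older 'transparent' flag
http_port 10.0.0.1:3128 intercept
http_port 10.0.0.1:3127          # plain forward-proxy port

# iptables: redirect LAN web traffic to the intercept port; traffic
# originated by squid itself must NOT match this rule, since re-routing
# squid's own outgoing requests is a classic cause of forwarding loops
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```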



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Transparent-proxying-and-forwarding-loop-detected-tp4666810p4666811.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Blocking specific URL

2014-07-10 Thread Alexandre
My bad. I need to check squid ACLs in more detail.

I guess squidguard's main advantage is speed when dealing with large lists
of URLs, then.

Alexandre

On 10/07/14 14:31, Leonardo Rodrigues wrote:
 On 10/07/14 09:04, Alexandre wrote:
 Concerning blocking the specific URL: someone correct me if I am wrong,
 but I don't believe you can do this with only squid.
 The squid ACL system can apparently only block per domain:
 http://wiki.squid-cache.org/SquidFaq/SquidAcl


 Of course you can block specific URLs using only squid ACL options!

 #   acl aclname url_regex [-i] ^http:// ...       # regex matching on whole URL
 #   acl aclname urlpath_regex [-i] \.gif$ ...     # regex matching on URL path

 if the URL is:

 http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm

 then something like:

 acl blockedurl url_regex -i akamaihd\.net\/battlelog\/background-videos\/
 http_access deny blockedurl

 should do it! And I didn't even include the filename which, I
 imagine, can change between different stages.
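The regex can be sanity-checked outside squid before deploying it; a quick Python sketch (the pattern is copied from the ACL above, with squid's `-i` flag mapped to `re.IGNORECASE`):

```python
import re

# Pattern from the url_regex ACL above; -i corresponds to re.IGNORECASE
pattern = re.compile(r"akamaihd\.net/battlelog/background-videos/", re.IGNORECASE)

url = "http://eaassets-a.akamaihd.net/battlelog/background-videos/naval-mov.webm"
print(bool(pattern.search(url)))  # True: the ACL would match this URL
```

Note that squid's regex does not require the escaped slashes (`\/`) either; they are harmless but unnecessary.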






Re: [squid-users] Squid v3.4.6 SMP errors

2014-07-10 Thread Alex Rousskov
On 07/09/2014 02:42 PM, Mike wrote:

 (squid-coord-8): Ipc::Mem::Segment::attach failed to
 mmap(/squid-squid-page-pool.shm): (22) Invalid argument

If there are no other errors before that, try stracing Squid and its
kids (one strace file per kid). The problem may be happening _before_
the mmap() system call mentioned above, and the error code may be more
specific at that point. Post the relevant strace tail if you can.

BTW, cpu_affinity_map and the exact number of workers are most likely
unrelated to this issue. Something in your environment screws up shared
memory operations.


Cheers,

Alex.



[squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-07-10 Thread Vadim Rogoziansky

Hello All.

Do you have any ideas how we can resolve it? I have the same issue.



[squid-users] Re: Waiting for www...

2014-07-10 Thread m3tatr0n
Cool... thank you very much babajaga!




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Waiting-for-www-tp4666774p4666815.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] access log request size x google drive

2014-07-10 Thread fernando

Hi there,

I configured my squid.conf to generate a second access log but using 
the client request size (%>st) in place of the response size (%<st):


logformat upload %ts.%03tu %6tr %>a %Ss/%03>Hs %>st %rm %ru %[un 
%Sh/%<a %mt

access_log stdio:/var/log/squid/upload.log logformat=upload
access_log stdio:/var/log/squid/access.log


My goal was to use sarg to generate a report for upload sizes alongside 
the standard report which contains only download sizes.


The reports look ok for regular web browsing (download sizes much 
larger than upload sizes), but after I uploaded some big files to Google 
Drive the reports still don't show a significant increase in upload 
sizes.


I also ran darkstat on the server and it shows the expected increase 
in "Out" traffic.


So, why isn't my upload.log showing uploads to Google Drive? Is this 
supposed to work at all, or do I need some trick for squid?




[]s, Fernando Lozano




[squid-users] how to implement access control using connetcing hostname and port

2014-07-10 Thread freefall12
Some HTTP proxy service providers here just assign a unique proxy address
and port to each user, and the user just needs to enter that proxy
address and port to get access. I think this method is superior to username
and password authentication; it also makes it possible to proxy a lot
of mobile apps on iOS and Android devices which don't support traditional
proxy authentication. I found they are using squid for caching and proxying.
Can squid alone achieve this? Thank you



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-implement-access-control-using-connetcing-hostname-and-port-tp4666818.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid: Memory utilization higher than expected since moving from 3.3 to 3.4 and Vary: working

2014-07-10 Thread Amos Jeffries
On 8/07/2014 10:20 p.m., Martin Sperl wrote:
 The problem is that it is a slow leak - it takes some time (months) to find 
 it...
 Also it only happens on real live traffic with high volume plus high 
 utilization of Vary:
 Moving our prod environment to head would be quite a political issue inside 
 our organization.
 Arguing to go to the latest stable version 3.4.6 would be possible, but I 
 doubt it would change a thing
 
 In the meantime we have not restarted the squids yet, so we still got a bit 
 of information available if needed.
 But we cannot keep it up in this state much longer.
 
 I created a core-dump, but analyzing that is hard.
 
 Here are the top strings from that 10GB core-file - taken via: strings corefile | 
 sort | uniq -c | sort -rn | head -20.
 This may give you some idea:
 2071897 =0.7
 1353960 Keep-Alive
 1343528 image/gif
  877129 HTTP/1.1 200 OK
  855949  GMT
  852122 Content-Type
  851706 HTTP/
  851371 Date
  850485 Server
  848027 IEND
  821956 Content-Length
  776359 Content-Type: image/gif
  768935 Cache-Control
  760741 ETag
  743341 live
  720255 Connection
  677920 Connection: Keep-Alive
  676108 Last-Modified
  662765 Expires
  585139 X-Powered-By: Servlet/2.4 JSP/2.0
 
 Another thing I thought we could do is:
 * restart squids
 * run mgr:mem every day and compare the daily changes for all the values 
 (maybe others?)
 
 Any other ideas how to find the issue?
 

Possibly a list of the mgr:filedescriptors open will show if there are
any hung connections/transactions, or long-polling connections holding
onto state.


Do you have the mgr:mem reports over the last few days? I can start
analysing to see if anything else pops out at me.
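Comparing successive mgr:mem snapshots can also be scripted; a minimal sketch, assuming a simplified two-column `pool-name total-KB` report format (the real mgr:mem report has more columns and header lines, so the parser would need adapting):

```python
def parse_mem_report(text):
    """Parse 'pool-name kilobytes' lines into a dict (simplified format)."""
    pools = {}
    for line in text.strip().splitlines():
        name, kb = line.rsplit(None, 1)
        pools[name] = int(kb)
    return pools

def mem_delta(old_text, new_text):
    """Return per-pool KB growth between two snapshots, largest first."""
    old, new = parse_mem_report(old_text), parse_mem_report(new_text)
    deltas = {p: new[p] - old.get(p, 0) for p in new}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

# Example with made-up snapshot data for two consecutive days
day1 = "mem_node 1024\nHttpHeaderEntry 512\n"
day2 = "mem_node 1024\nHttpHeaderEntry 2048\n"
print(mem_delta(day1, day2)[0])  # ('HttpHeaderEntry', 1536)
```

Run daily against saved snapshots, the top of the list points at the pool that keeps growing.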

Amos



[squid-users] sorry, i updated my email mode, and i have a question about wccp

2014-07-10 Thread johnzeng
Hello Dear Everyone:

 I configured WCCP mode recently, but I found HTTP requests don't succeed
 in being sent via the GRE tunnel in WCCP mode.

 This is my config; if possible, give me some advice. Thanks again.



 19:36:58.728514 IP 192.168.5.66.37225 > 180.149.132.165.http: Flags
 [F.], seq 0, ack 1, win 108, length 0
 19:37:00.304327 IP 192.168.5.66.41485 >
 rev.opentransfer.com.28.147.130.98.in-addr.arpa.http: Flags [S], seq
 2204475760, win 5840, options [mss 1460,sackOK,TS val 3757970 ecr
 0,nop,wscale 6], length 0
 19:37:00.976403 IP 192.168.5.66.40789 > 202.104.237.103.http: Flags
 [S], seq 2214840108, win 5840, options [mss 1460,sackOK,TS val 3758139
 ecr 0,nop,wscale 6], length 0
 19:37:03.597139 IP 192.168.5.66.58461 > 101.226.142.33.http: Flags
 [.], ack 2180972149, win 227, options [nop,nop,TS val 3758794 ecr
 2556809136], length 0
 19:37:03.806973 IP 192.168.5.66.58461 > 101.226.142.33.http: Flags
 [.], ack 1, win 227, options [nop,nop,TS val 3758846 ecr
 2556809198,nop,nop,sack 1 {0:1}], length 0
 19:37:03.976184 IP 192.168.5.66.40789 > 202.104.237.103.http: Flags
 [S], seq 2214840108, win 5840, options [mss 1460,sackOK,TS val 3758889
 ecr 0,nop,wscale 6],


 19:06:33.356333 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:33.388306 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:33.388565 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:33.604188 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:38.187049 IP 192.168.5.1 > 192.168.2.2: GREv0, length 60:
 gre-proto-0x883e
 19:06:41.931862 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:42.434829 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e
 19:06:55.047736 IP 192.168.5.1 > 192.168.2.2: GREv0, length 48:
 gre-proto-0x883e



 *Mar 8 12:48:05.300: WCCP-EVNT:S00: Here_I_Am packet from 192.168.2.2
 w/bad rcv_id 
 *Mar 8 12:48:05.300: WCCP-PKT:S00: Sending I_See_You packet to
 192.168.2.2 w/ rcv_id 2378
 *Mar 8 12:48:05.300: IP: tableid=0, s=192.168.2.1 (local),
 d=192.168.2.2 (Ethernet1/0), routed via FIB
 *Mar 8 12:48:05.304: IP: s=192.168.2.1 (local), d=192.168.2.2
 (Ethernet1/0), len 168, sending
 *Mar 8 12:48:05.580: IP: tableid=0, s=192.168.5.1 (local),
 d=192.168.5.66 (FastEthernet0/1), routed via FIB
 *Mar 8 12:48:05.584: IP: tableid=0, s=192.168.5.1 (local),
 d=192.168.5.66 (FastEthernet0/1), routed via FIB

 *Mar 8 12:48:15.119: IP: tableid=0, s=192.168.2.2 (Ethernet1/0),
 d=192.168.2.1 (Ethernet1/0), routed via RIB
 *Mar 8 12:48:15.119: IP: s=192.168.2.2 (Ethernet1/0), d=192.168.2.1
 (Ethernet1/0), len 172, rcvd 3
 *Mar 8 12:48:15.123: WCCP-PKT:S00: Received valid Here_I_Am packet
 from 192.168.2.2 w/rcv_id 2378
 *Mar 8 12:48:15.123: WCCP-PKT:S00: Sending I_See_You packet to
 192.168.2.2 w/ rcv_id 2379
 *Mar 8 12:48:15.123: IP: tableid=0, s=192.168.2.1 (local),
 d=192.168.2.2 (Ethernet1/0), routed via FIB
 *Mar 8 12:48:15.123: IP: s=192.168.2.1 (local), d=192.168.2.2
 (Ethernet1/0), len 168, sending
 *Mar 8 12:48:15.299: IP: tableid=0, s=192.168.2.2 (Ethernet1/0),
 d=192.168.5.1 (FastEthernet0/1), routed via RIB
 *Mar 8 12:48:15.299: IP: s=192.168.2.2 (Ethernet1/0), d=192.168.5.1,
 len 172, rcvd 4
 *Mar 8 12:48:15.299: WCCP-EVNT:S00: Here_I_Am packet from 192.168.2.2
 w/bad rcv_id 
 *Mar 8 12:48:15.299: WCCP-PKT:S00: Sending I_See_You packet to
 192.168.2.2 w/ rcv_id 237A






 
 squid config
 

 wccp2_router 192.168.2.2

 wccp2_address 192.168.0.1 #interface ip address

 wccp_version 4

 wccp2_forwarding_method 1 # GRE = 1, L2 rewriting = 2

 wccp2_return_method 1 # GRE = 1, L2 rewriting = 2

 wccp2_assignment_method 1 # hash = 1, mask = 2

 wccp2_weight 5

 *
 other environment ( ip tunnel  iptables )
 *

 first step

 modprobe ip_gre

 ip tunnel add wccp0 mode gre remote 192.168.5.1 local 192.168.2.2 dev eth1


 second step

 ip addr add 10.1.1.2/24 dev wccp0
 ip route add 10.1.1.0/24 dev wccp0
 ip link set wccp0 up

 Or

 ifconfig wccp0 10.1.1.2 netmask 255.255.255.0 up
 route add -net 10.1.1.0 netmask 255.255.255.0 dev wccp0


 third step

 echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
 echo 1 > /proc/sys/net/ipv4/ip_forward

 fourth step

 iptables -P INPUT ACCEPT
 iptables -P OUTPUT ACCEPT
 iptables -P FORWARD ACCEPT
 iptables -A INPUT -i lo -j ACCEPT
 iptables -A OUTPUT -o lo -j ACCEPT
 iptables -A INPUT -i wccp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
 iptables -A FORWARD -i wccp0 -j ACCEPT
 iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j
 REDIRECT --to-ports 3128
 iptables -t nat -A POSTROUTING -o eth1 -j SNAT 

Re: [squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-07-10 Thread Alex Rousskov
 On 04/11/2014 11:01 PM, Amm wrote:

 I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)
 
 Now there is this (BROKEN) bank site:
 
 https://www.mahaconnect.in
 
 This site closes connection if you try TLS1.2 or TLS1.1
 
 When squid tries to connect, it says:
 
 Failed to establish a secure connection to 125.16.24.200
 
 The system returned: (71) Protocol error (TLS code:
 SQUID_ERR_SSL_HANDSHAKE) Handshake with SSL server failed:
 error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake
 failure
 
 The site works, if I specify:
 sslproxy_options NO_TLSv1_1
 
 
 But then it stops using TLS1.2 for sites supporting it.
 
 When I try in Chrome or Firefox without proxy settings, they auto detect
 this and fallback to TLS1.0/SSLv3.
 
 So my question is shouldn't squid fallback to TLS1.0 when TLS1.2/1.1
 fails? Just like Chrome/Firefox does?
 
 (PS: I can not tell bank to upgrade)
 
 Amm.


On 07/10/2014 09:27 AM, Vadim Rogoziansky wrote:

 Do you have any ideas how we can resolve it? I have the same issue.


I believe a proper support for secure version fallback requires some
development. I do not know of anybody working on this feature right now,
and there may be no formal feature requests on bugzilla, but it has been
informally requested before.

In addition to a TLS v1.2->v1.0 fallback, there are also servers that do
not support SSL Hellos that advertise TLS, so there is a need for a
TLS->SSL fallback. Furthermore, some admins want Squid to talk TLS with
the client even if the server does not support TLS. Simply propagating
from-server "I want SSL" errors to the TLS-speaking client does not work
in such an environment, and a proper to-server fallback is needed.


Cheers,

Alex.



Re: [squid-users] how to implement access control using connetcing hostname and port

2014-07-10 Thread Amos Jeffries
On 11/07/2014 2:34 p.m., freefall12 wrote:
 Some HTTP proxy service providers here just assign a unique proxy address
 and port to each user, and the user just needs to enter that proxy
 address and port to get access. I think this method is superior to username
 and password authentication; it also makes it possible to proxy a lot
 of mobile apps on iOS and Android devices which don't support traditional
 proxy authentication. I found they are using squid for caching and proxying.
 Can squid alone achieve this? Thank you
 

The myportname type ACL is used to match the Squid listening http_port.

 * be aware that there is zero security verification that the client
accessing the port is the one you believe it to be. It is far inferior
to authentication, and this type of proxy protection can leave your
Squid as an open proxy / open relay.
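A minimal squid.conf sketch of the myportname approach (the port numbers and names here are hypothetical, and the open-proxy caveat above still applies in full):

```
# One dedicated listening port per customer; no authentication involved
http_port 3129 name=customerA
http_port 3130 name=customerB

acl portA myportname customerA
acl portB myportname customerB

http_access allow portA
http_access allow portB
http_access deny all
```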


For matching remote client IP:port details it is not possible, because
the source port is randomised by TCP on every connection. Beyond that
killer problem, all modern clients have between 2 and 8 IP addresses, and
the IPv6 so-called "privacy" address changes its value randomly every
few minutes.


On the subject of superiority, allowing unverified access is inferior
to allowing verified access. Authentication is simply the name for
*the* process of verifying that some details are from the source they claim
to be (whether that detail be an IP:port or a user:password).
 So by definition, authorizing access to an IP:port without
authenticating the IP:port values first is inferior security.


Yes allowing based on IP:port (or just IP as usually done) allows a lot
of applications that are not compliant with HTTP through the proxy. It
also allows a lot of attack types to happen far more easily. Your choice.

Amos



Re: [squid-users] access log request size x google drive

2014-07-10 Thread Amos Jeffries
On 11/07/2014 7:18 a.m., fernando wrote:
 Hi there,
 
 I configured my squid.conf to generate a second access log but using the
 client request size (%>st) in place of the response size (%<st):
 
 logformat upload %ts.%03tu %6tr %>a %Ss/%03>Hs %>st %rm %ru %[un %Sh/%<a
 %mt
 access_log stdio:/var/log/squid/upload.log logformat=upload
 access_log stdio:/var/log/squid/access.log
 
 
 My goal was to use sarg to generate a report for upload sizes alongside
 the standard report which contains only download sizes.
 
 The reports look ok for regular web browsing (download sizes much
 larger than upload sizes), but after I uploaded some big files to Google
 Drive the reports still don't show a significant increase in upload
 sizes.
 
 I also ran darkstat on the server and it shows the expected increase in
 "Out" traffic.

Is that "out" to the client?
 or "out" to the server?
 or both (when "out" means servicing clients over the same NIC)?

 
 So, why isn't my upload.log showing uploads to Google Drive? Is this
 supposed to work at all, or do I need some trick for squid?

What type of requests are being logged?
 IIRC there is an issue with CONNECT traffic only logging one direction.

Amos



Re: [squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-07-10 Thread Amm



On 07/11/2014 09:45 AM, Alex Rousskov wrote:

On 04/11/2014 11:01 PM, Amm wrote:



I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)

Now there is this (BROKEN) bank site:

https://www.mahaconnect.in

This site closes connection if you try TLS1.2 or TLS1.1



<snip>


When I try in Chrome or Firefox without proxy settings, they auto detect
this and fallback to TLS1.0/SSLv3.

So my question is shouldn't squid fallback to TLS1.0 when TLS1.2/1.1
fails? Just like Chrome/Firefox does?

(PS: I can not tell bank to upgrade)

Amm.




On 07/10/2014 09:27 AM, Vadim Rogoziansky wrote:


Do you have any ideas how we can resolve it? I have the same issue.





I believe a proper support for secure version fallback requires some
development. I do not know of anybody working on this feature right now,
and there may be no formal feature requests on bugzilla, but it has been
informally requested before.

In addition to a TLS v1.2->v1.0 fallback, there are also servers that do
not support SSL Hellos that advertise TLS, so there is a need for a
TLS->SSL fallback. Furthermore, some admins want Squid to talk TLS with
the client even if the server does not support TLS. Simply propagating
from-server "I want SSL" errors to the TLS-speaking client does not work
in such an environment, and a proper to-server fallback is needed.


Cheers,

Alex.



A similar discussion took place in Firefox's Bugzilla.

All are now FIXED.

Possibly we can simply look at what they did and follow?

https://bugzilla.mozilla.org/show_bug.cgi?id=901718
https://bugzilla.mozilla.org/show_bug.cgi?id=969479
https://bugzilla.mozilla.org/show_bug.cgi?id=839310

My current workaround is to put such sites in a "nosslbump" ACL, i.e. NO SSL 
bumping for sites which support only SSL. Then (latest) Firefox 
automatically detects the SSL-only site and does a proper fallback.
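That workaround might look like this in squid.conf for squid 3.4 (the ACL name and the exact ssl_bump actions are assumptions about the local bumping setup):

```
# Skip bumping for sites known to break on TLS 1.1/1.2
acl nosslbump dstdomain .mahaconnect.in
ssl_bump none nosslbump
ssl_bump server-first all
```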


Amm