Re: [squid-users] squid-3.4.8 sslbump breaks facebook

2014-10-16 Thread Amm


On 10/16/2014 02:35 PM, Jason Haar wrote:

On 16/10/14 20:54, Jason Haar wrote:

I also checked the ssl_db/certs dir and
removed the facebook certs and restarted - didn't help

let me rephrase that. I deleted the dirtree and re-ran ssl_crtd -s
/usr/local/squid/var/lib/ssl_db -c - ie restarted with an empty cache.
It didn't help. It created a new fake facebook cert - but the cert
doesn't fully match the characteristics of the real cert


http://bugs.squid-cache.org/show_bug.cgi?id=4102

Please add weight to bug report :)

Amm.



Re: [squid-users] SSL/SSH/SFTP/FTPS to alternate ports

2014-10-11 Thread Amm


On 10/12/2014 05:18 AM, Timothy Spear wrote:

Hello,

Here is the issue:
I can proxy through Squid just fine to HTTP and HTTPS. I can also run 
SSH via Corkscrew to an SSH server running on port 443 and it works fine.

What I cannot do is access HTTPS or SSH on any port other than 443.


Look at the SSL_ports and Safe_ports ACLs in your squid.conf (unless you 
rewrote it completely).
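
For example, to allow CONNECT to an additional port, the default ACLs can be 
extended along these lines (a sketch only; 8443 is just an illustrative port):

acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 8443
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports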


Amm.


Re: [squid-users] Squid-cache.org won't redirect to www.squid-cache.org?

2014-09-30 Thread Amm
I had pointed this out a few months back, but I suppose it was not 
corrected or not considered necessary.


Amm.

On 09/30/2014 02:15 PM, Дмитрий Шиленко wrote:
without www.*  -- Forbidden You don't have permission to access / 
on this server.


Visolve Squid писал 30.09.2014 11:42:

Hi,

The http://www.squid-cache.org/ domain web site is working fine.

We have accessed the site a min ago.

Regards,
ViSolve Squid

On 9/30/2014 1:47 PM, Neddy, NH. Nam wrote:

Hi,

I accidentally accessed squid-cache.org and got a 403 Forbidden error,
and am wondering why it does NOT redirect to WWW.squid-cache.org
automatically?

I'm sorry if it's intentional.
~Ned


Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amm


On 08/20/2014 10:52 AM, Jatin Bhasin wrote:

And when I browse to https://weather.yahoo.com then it goes in
redirect loop. I am using Chrome browser and I get a message at
the end saying 'This webpage has a redirect loop'.


Happens in 3.4 series too.

I added these in squid.conf as a solution:

via off
forwarded_for delete

Amm


Re: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Amm


On 08/20/2014 04:06 PM, Jatin Bhasin wrote:

Hi,

Thanks for that. It solved it for me as well. But does anyone know why this 
loop happens, and how do these squid directives resolve the issue?

I think only Yahoo can answer that. They seem to send a redirect when they 
find Via and/or X-Forwarded-For headers.


I was too lazy to find out which header exactly, so I disabled both anyway.

Amm.



[squid-users] security.use_mozillapkix_verification and squid ssl bump

2014-08-02 Thread Amm

Hello,

Recent versions of Firefox made some changes to certificate verification.

See here:
https://wiki.mozilla.org/SecurityEngineering/Certificate_Verification

After this, many SSL-bumped sites show a verification error:

An error occurred during a connection to s-static.ak.facebook.com.
Certificate extension value is invalid.
(Error code: sec_error_extension_value_invalid)

Examples:
Facebook = https://s-static.ak.facebook.com/
Hotmail = https://sc.imp.live.com

Those sites work without SSL bumping.

Currently it can be fixed by changing:
security.use_mozillapkix_verification to false in Firefox.

As per Mozilla, this will always be true from Firefox 33 onwards.

There is a bug report at Mozilla:
https://bugzilla.mozilla.org/show_bug.cgi?id=1045973

But I doubt this is actually a bug; it looks like an intentional future security feature.

Can anything be done in Squid to allow the above,
i.e. allow it to work regardless of the value of mozillapkix?

Thanks and regards,

Amm


Re: [squid-users] Re: YouTube Resolution Locker

2014-07-26 Thread Amm


On 07/26/2014 12:05 PM, Stakres wrote:

Hi All,

Feel free to modify the script (client side) to do not send all requests.
As Cassiano said, only the YouTube urls need to be rewritten...


My point here is that you have not mentioned anywhere that your script 
collects information.


Script is made by Unveiltech and it sends all data to Unveiltech servers.

Your server can very easily send a redirect to its own server and 
fetch the username or password for any site (if the end user is not 
technically sound).


For example, your server can easily redirect http://login.google.com to 
http://storeid.unveiltech.com/login.google.com/ (which looks exactly the 
same as the Google login page). The end user will not even know what is happening.


I am not sure whether you did this on purpose, or whether you are new enough 
to programming that you did not realize the huge security and privacy implications.


Additionally, your script is just an EXAMPLE redirector script with one small 
function modified. A real script would include the full logic of the YouTube 
resolution locker (which your storeid server currently implements).


No offence meant, please. I am just warning other users that if they try to 
use this PHP script, there is a huge security risk.


Regards,

PS: Sorry for being off-topic on squid mailing list.

AMM


Re: [squid-users] Re: YouTube Resolution Locker

2014-07-26 Thread Amm


On 07/26/2014 02:36 PM, Amos Jeffries wrote:

On 26/07/2014 8:36 p.m., Stakres wrote:

HI Amm,

Everyone is free to modify the script (client side) by sending YouTube urls
only, no need to send all the Squid traffic.
...
Bye Fred


It would be better practice to publish a script which is pre-restricted
to the YT URLs which your server is useful for and your initial
advertisement stated its purpose was.

That would protect your servers from excessive bandwidth from naive
administrators, help to offer better security by default, and protect
your company from this type of complaint and any future legal
accusations that may arise from naive use of the script.

Amos


Yes, and also mention at the top of the script that it sends URL data to 
your servers, giving a link to the privacy policy and stating if / how you use 
the URL data.


Otherwise you may really have legal issues for capturing data without 
permission. (Even if you throw it straight in the dustbin you can still be 
sued - just my two cents.)


Amm.


Re: [squid-users] YouTube Resolution Locker

2014-07-25 Thread Amm

On 07/25/2014 09:03 PM, Stakres wrote:

Hi All,

Free API to lock resolution in YouTube players via your prefered Squid
Cache.
https://sourceforge.net/projects/youtuberesolutionlocker/


BIG WARNING:

I looked at the script out of curiosity. It sends all queries to 
storeid.unveiltech.com in background.


Amm


Re: [squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-07-10 Thread Amm



On 07/11/2014 09:45 AM, Alex Rousskov wrote:

On 04/11/2014 11:01 PM, Amm wrote:



I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)

Now there is this (BROKEN) bank site:

https://www.mahaconnect.in

This site closes connection if you try TLS1.2 or TLS1.1



snip


When I try in Chrome or Firefox without proxy settings, they auto detect
this and fallback to TLS1.0/SSLv3.

So my question is shouldn't squid fallback to TLS1.0 when TLS1.2/1.1
fails? Just like Chrome/Firefox does?

(PS: I can not tell bank to upgrade)

Amm.




On 07/10/2014 09:27 AM, Vadim Rogoziansky wrote:


Do you have any ideas how we can resolve it? I have the same issue.





I believe a proper support for secure version fallback requires some
development. I do not know of anybody working on this feature right now,
and there may be no formal feature requests on bugzilla, but it has been
informally requested before.

In addition to TLS v1.2-1.0 fallback, there are also servers that do
not support SSL Hellos that advertise TLS, so there is a need for
TLS-SSL fallback. Furthermore, some admins want Squid to talk TLS with
the client even if the server does not support TLS. Simply propagating
from-server "I want SSL" errors to the TLS-speaking client does not work
in such an environment, and a proper to-server fallback is needed.


Cheers,

Alex.



A similar discussion took place in the Firefox Bugzilla.

All of those bugs are now FIXED.

Possibly we can simply look at what they did and follow?

https://bugzilla.mozilla.org/show_bug.cgi?id=901718
https://bugzilla.mozilla.org/show_bug.cgi?id=969479
https://bugzilla.mozilla.org/show_bug.cgi?id=839310

My current workaround is to put such sites in a nosslbump ACL, i.e. NO SSL 
bumping for sites which support only SSL. Then (the latest) Firefox 
automatically detects the SSL-only site and does a proper fallback.
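
The bypass looks roughly like this in 3.4 syntax (a sketch; the acl name is 
illustrative and the domain is the bank site above):

acl nosslbump dstdomain .mahaconnect.in
ssl_bump none nosslbump
ssl_bump server-first all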


Amm


[squid-users] Squid 3.4.5 crashes when adaptation_access is used

2014-05-06 Thread Amm

Hello,

I have already filed this bug on the squid Bugzilla.

But I have noticed that responses on the mailing list are much faster 
(often the same day), whereas responses on Bugzilla have taken weeks for 
me in the past!


So I am just bringing this bug to your attention.
http://bugs.squid-cache.org/show_bug.cgi?id=4057

Summary is, I have this line in squid.conf

adaptation_access service_avi allow !novirusscan

(scan for viruses if the site is not listed in novirusscan)
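
For context, a rough sketch of the surrounding adaptation setup assumed here 
(the ICAP service URL and the contents of novirusscan are illustrative):

acl novirusscan dstdomain .example-trusted.com
icap_enable on
icap_service service_avi respmod_precache icap://127.0.0.1:1344/avscan
adaptation_access service_avi allow !novirusscan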


On squid 3.4.4.2 this works perfectly, but squid 3.4.5 crashes.

If I remove that line then squid 3.4.5 works fine but then I lose virus 
scanning.


Please see bug report for details.

Thanks and regards,

Amm.


Re: [squid-users] Squid 3.4.5 is available

2014-05-05 Thread Amm

I agree with Martin.

Squid is very widely used software and this move may break lots of 
things for many administrators.


autoconf, automake and gcc may depend on other software, so even that 
other software may also require updating. Updating it may in turn break 
other software which depends on the older versions.


So it is going to be a huge task for administrators.

I am also surprised that this change (a major one, in my opinion) was 
made in a minor-minor release.


As Martin suggested, maybe this change should be pushed to the Squid 4.x 
major version.


So please consider the request.

Thanks and regards,

Amm.


On 05/05/2014 07:39 PM, Martin Sperl wrote:

Hi Amos!
...



So I wonder if it is really a wise move to potentially cut off
people from security patches because they can no longer
compile squid on the system they want to use it on just
due to the build-tool dependencies.

Is there maybe a plan not to change build-tool versions
within a minor version (3.4, 3.5, ...) to somewhat avoid
such issues?

Thanks,
Martin


Re: [squid-users] Squid 3.4.4 and SSL Bump not working (error (92) Protocol not available)

2014-04-16 Thread Amm



On 04/16/2014 07:45 PM, Ict Security wrote:

  Hello to everybody,

we use Squid for transparent HTTP proxying and everything is all right.


http_port 3127 intercept  ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem

  -A PREROUTING -p tcp -s 192.168.10.8 --dport 443 -j DNAT
--to-destination 192.168.10.254:3127


For port 443 interception, use https_port, not http_port.
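
A minimal sketch of what that listener could look like, on a separate port with 
its own DNAT rule (3129 is just an illustrative port; the other options are 
reused from your http_port line):

https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem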

Amm.


Re: [squid-users] squid sslbump server-first local loops?

2014-04-13 Thread Amm

On 04/13/2014 04:27 PM, Amos Jeffries wrote:

On 12/04/2014 5:23 p.m., Amm wrote:


So I ran this command:
openssl s_client -connect 192.168.1.2:8081

where 8081 is https_port on which squid runs. (with sslbump)

And BOOM, squid went in to infinite loop! And started running out of
file descriptors.





Is this happening with via on ?
It is an expected vulnerability with via off.

Amos



I don't have any via line, so that means the default is on.

I tested it again. It is very easy to crash squid; it takes just 2 seconds for 
squid to report:


WARNING! Your cache is running out of filedescriptors

And takes away 100% CPU too.

Regards,

Amm


Re: [squid-users] squid sslbump server-first local loops?

2014-04-13 Thread Amm



On 04/13/2014 08:35 PM, Eliezer Croitoru wrote:

Why https_port? and why ssl_bump on https_port ?

it should run ontop of http_port as far as I can understand and know.


https_port is needed when you intercept port 443 traffic.

http_port intercepts port 80 and https_port intercepts port 443.


There was an issue which I reported about and which is similar and I
have used couple acls to block the access and the loop from the port to
itself.


Can you share the ACLs? Because there is already a default ACL called 
Safe_ports, and it does not list port 8081.


Only ports listed in Safe_ports should be allowed, but this sslbump request 
still continues and causes an infinite loop.
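
For reference, something along these lines is what I would expect such 
blocking ACLs to look like (a sketch only; 192.168.1.2:8081 is the proxy's own 
address and port from my test):

acl to_self dst 192.168.1.2/32
acl to_self_port port 8081
http_access deny to_self to_self_port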




Eliezer


Amm.


[squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
Hello,

Yesterday I upgraded my OpenSSL version. (I was using OpenSSL 1.0.0, which is not 
affected by Heartbleed, but I upgraded nonetheless.)


I am using sslbump (squid 3.4.4). Using Firefox 28.0 (latest 64bit tar.bz2)

After this upgrade, i.e. from 1.0.0 to 1.0.1, Firefox started giving a certificate 
error stating sec_error_inadequate_key_usage.

This does not happen for all domains, but seems to happen ONLY for Google 
servers, i.e. youtube, news.google.com.

Certificate is issued for *.google.com with lots of alternate names.

I also recompiled squid (with new OpenSSL) just to be sure.

I also cleared certificate store.

But error still occurs.


A Google search gave me a patch for this for 3.3.9. But I just wanted to make sure 
whether there is any other way to resolve this issue (like some squid configuration 
directive).

So please let me know whether the patch is the only way, OR whether this has been resolved.

Is it Firefox bug or squid bug?


Thanks in advance,


Amm.



Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 4:46 PM, Amos wrote:


 On 11/04/2014 10:16 p.m., Amm wrote:
 After this upgrade i.e. from 1.0.0 to 1.0.1, Firefox started giving
 certificate error stating sec_error_inadequate_key_usage.
 
 This does not happen for all domains but looks like happening ONLY
 for google servers. i.e. youtube, news.google.com
 
 Certificate is issued for *.google.com with lots of alternate names.
 
 Is it Firefox bug or squid bug?



 Hard to say.

 key_usage is an explicit restriction on what circumstances and
 actions the certificate can be used for.

 What the message you are seeing indicates one of two things:
 Either, the website owner has placed some limitations on how their
 website certificate can be used and your SSL-bumping is violating those
 restrictions.


As I said, it's Google domains. You can check
https://news.google.com OR https://www.youtube.com

Both have the same certificate. *.google.com is the primary name and
youtube.com is one of the many alternate names.

It worked before I upgraded to OpenSSL 1.0.1.

The sslbump configuration was working till yesterday. Today
too it works for all other domains (Yahoo, hotmail etc.)

In fact https://www.google.com also works, because it has a
specific certificate and not the same *.google.com certificate.


 Or, the creator of the certificate you are using to sign the generated
 SSL-bump certificates has restricted your signing certificate
 capabilities. (ie the main Trusted Authorities prohibit using certs they
 sign as secondary CA to generate fake certs like SSL-bump does).

 Either case is just as likely.

Did OpenSSL 1.0.0 not support key_usage, and hence squid did not
use it either?

I wonder why other Firefox+sslbump users are not complaining about this?

I see only a few people complaining, and that too was in November 2013.

I used the patch here:
http://www.squid-cache.org/mail-archive/squid-users/201311/att-0310/squid-3.3.9-remove-key-usage.patch

And it fixes the issue.

But I would prefer to do it without patch.

If I am the only one facing this, then what could be wrong?

Amm.


Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 5:19 PM


 I also use this patch and would like if it is possible to somehow go on 
 without it.
 
 May it be due to the fact squid caches the generated SSL certificates in the 
 ssl_crtd store?
 So we need to clear the store when root CA certificate for SSL bump is 
 regenerated?

 Raf

I had cleared the ssl cert store but the issue still occurred (without the patch).

So finally I gave up trying different things and used the patch.

Here is exact same issue discussed earlier in mailing list:
http://www.squid-cache.org/mail-archive/squid-users/201311/0310.html

Amm



Re: [squid-users] sslbump - firefox sec_error_inadequate_key_usage

2014-04-11 Thread Amm
On Friday, 11 April 2014 6:29 PM, Amos wrote:


 It seems something in Firefox was buggy and they have a workaround
 coming out in version 29.0; whether that will fix the warning display or
 just allow people to ignore/bypass it like other cert issues I'm not
 certain.

 Amos

Ok, but then how come Firefox did not throw warning display just yesterday?

Squid version and configuration were exact same yesterday.

Unfortunately I cannot switch OpenSSL back to the older version, else I
would have checked whether squid mimicked key_usage in that version
as well or not.

Amm



[squid-users] fallback to TLS1.0 if server closes TLS1.2?

2014-04-11 Thread Amm

Hello,

I recently upgraded OpenSSL from 1.0.0 to 1.0.1 (which supports TLS1.2)

I also recompiled squid against new OpenSSL.

Now there is this (BROKEN) bank site:

https://www.mahaconnect.in

This site closes connection if you try TLS1.2 or TLS1.1

When squid tries to connect, it says:

Failed to establish a secure connection to 125.16.24.200

The system returned: (71) Protocol error (TLS code: 
SQUID_ERR_SSL_HANDSHAKE) Handshake with SSL server failed: 
error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake 
failure


The site works, if I specify:
sslproxy_options NO_TLSv1_1


But then it stops using TLS1.2 for sites supporting it.

When I try in Chrome or Firefox without proxy settings, they auto detect 
this and fallback to TLS1.0/SSLv3.


So my question is: shouldn't squid fall back to TLS1.0 when TLS1.2/1.1 
fails, just like Chrome/Firefox do?


(PS: I can not tell bank to upgrade)

Amm.


[squid-users] squid sslbump server-first local loops?

2014-04-11 Thread Amm

Hello,

I accidentally came across this. I was trying to test what TLS version 
my squid reports.


So I ran this command:
openssl s_client -connect 192.168.1.2:8081

where 8081 is https_port on which squid runs. (with sslbump)

And BOOM, squid went in to infinite loop! And started running out of 
file descriptors.


It continued the loop even after I ctrl-c'ed the openssl.

I suppose this happens due to server-first in sslbump, where squid keeps 
trying to connect to itself in an infinite loop.


Port 8081 is NOT listed in Safe_ports. So shouldn't squid be blocking it 
before trying server-first?


Or shouldn't squid check something like this?

If (destIP == selfIP and destPort == selfPort) then break?

I am also not sure if this can be used to DoS. So just reporting,

Amm.


Re: [squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?

2014-03-16 Thread Amm



On 03/16/2014 03:02 AM, Andrew Beverley wrote:

I used (and created) the patch to get the value from the remote server.
However, I can't remember whether it does it the other way as well (at
the time I thought I'd written the documentation so clearly, but coming
back to it now it's not clear...)

 From memory, however, you do need to configure qos_flows to *something*,
to trigger its operation. I think you can simply state qos_flows mark.


Yes, it needs qos_flows mark; without specifying qos_flows, it is not 
working. But ...




My question however was to pass on mark from client side to server side.
i.e. reverse of what above paragraph says.



As above, it's primarily server to client. Get that working first so you
know everything is in order, and then try it the other way.


... it works only from server to client. If I CONNMARK the server (to squid) 
packet, I can see it appearing in the log.

If I CONNMARK the client (to server) packet, it does not show in the LOG.



Let me know what you find out and I will update the documentation! (I
don't have time to look through the source code right now)


So the documentation is right, but the placement of the statement is possibly 
wrong. It is not highlighted right up front, i.e. qos_flows applies only to 
packets from server to client (squid), NOT from client to server.


Is it possible to do the reverse too? Or at least have an ACL where I can 
check the incoming MARK on a packet? Then I can make use of tcp_outgoing_mark.


I just noticed that the same discussion took place on the list previously as 
well (in 2013); here is the link:


http://www.squid-cache.org/mail-archive/squid-users/201303/0421.html

Regards

Amm


Re: [squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?

2014-03-15 Thread Amm


On 03/15/2014 05:11 PM, Amos Jeffries wrote:


On 15/03/2014 6:46 p.m., Amm wrote:

I would like to mark outgoing packet (on server side) with SAME MARK as on 
incoming (NATed or CONNECTed) packet.




http://www.squid-cache.org/Doc/config/qos_flows/

Squid default action is to pass the netfilter MARK value from client
through to the server. All you should need to do is *omit*
tcp_outgoing_mark directives from changing it to something else.

Amos



Oh that's great, thanks, I did not know this.

However, I tried this but somehow I am not able to get it working

Please let me know what could be wrong.

First I thought it may be because netfilter-conntrack-devel was not 
installed. So I installed the same.


Then I recompiled squid with these:
--with-netfilter-conntrack and --with-libcap


configure: ZPH QOS enabled: yes
configure: QOS netfilter mark preservation enabled: yes
...
checking for operational libcap2 headers... yes
configure: libcap support enabled: yes
configure: libcap2 headers are ok: yes
...
configure: Linux Netfilter support requested: yes
configure: Linux Netfilter Conntrack support requested: yes
checking for library containing nfct_query... -lnetfilter_conntrack
(4-5 more lines with header check with answer yes)


Installed new squid and restarted squid.

Ran following iptables command for debugging:

# CMD 1- mark all packets coming from 192.168.1.45
$ iptables -t mangle -I PREROUTING -s 192.168.1.45 -j MARK --set-mark 0x112

# CMD 2 - count packets/bytes going OUT on port 80 and marked 0x112
$ iptables -t mangle -I POSTROUTING -m mark --mark 0x112 -p tcp --dport 80

# CMD 3 - NAT settings (intercept)
$ iptables -t nat -nvL

Chain PREROUTING (policy ACCEPT 22610 packets, 2251K bytes)
 pkts bytes target    prot opt in    out  source      destination
  347 21371 REDIRECT  tcp  --  eth0  *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80 redir ports 3128




Some settings in /etc/squid/squid.conf:

http_port 3128 intercept

# log for nfmark logging
logformat nfmark %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %nfmark %nfmark


access_log daemon:/var/log/squid/access.log squid all
access_log daemon:/var/log/squid/nfmark.log nfmark all

(Do I need to put anything else in squid.conf for marking?)
(There is no tcp_outgoing_mark)


Now I accessed Google from 192.168.1.45

$ tail /var/log/squid/nfmark.log

1394891128.585    403 192.168.1.45 TCP_MISS/200 21137 GET http://www.google.co.in/?xxx - HIER_DIRECT/173.194.36.56 text/html 0x0 0x0
1394891128.793     92 192.168.1.45 TCP_MISS/304 393 GET http://www.google.co.in/images/srpr/mlogo2x_3.png - HIER_DIRECT/173.194.36.56 - 0x0 0x0
1394891128.851    115 192.168.1.45 TCP_MISS/304 393 GET http://www.google.co.in/images/logo_mobile_srp_3.png - HIER_DIRECT/173.194.36.56 - 0x0 0x0



The nfmark is logged as 0x0 both in and out, whereas I was expecting at least 
one of them to be 0x112.



$ iptables -t mangle -nvL PREROUTING

Chain PREROUTING (policy ACCEPT 1590 packets, 604K bytes)
 pkts bytes target  prot opt in  out  source        destination
  135 22042 MARK    all  --  *   *    192.168.1.45  0.0.0.0/0    MARK set 0x112



$ iptables -t mangle -nvL POSTROUTING

Chain POSTROUTING (policy ACCEPT 1653 packets, 372K bytes)
 pkts bytes target  prot opt in  out  source      destination
    0     0         tcp  --  *   *    0.0.0.0/0   0.0.0.0/0    mark match 0x112 multiport dports 80,443



PREROUTING shows 135 packets MARKed as 0x112 but POSTROUTING shows no 
packets marked.


What could be wrong?

Thanks in advance.

Amm


Re: [squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?

2014-03-15 Thread Amm



On 03/15/2014 08:03 PM, Amm wrote:

On 03/15/2014 05:11 PM, Amos Jeffries wrote:



On 15/03/2014 6:46 p.m., Amm wrote:

I would like to mark outgoing packet (on server side) with SAME MARK
as on incoming (NATed or CONNECTed) packet.




http://www.squid-cache.org/Doc/config/qos_flows/

Squid default action is to pass the netfilter MARK value from client
through to the server. All you should need to do is *omit*
tcp_outgoing_mark directives from changing it to something else.

Amos




Oh that's great, thanks, I did not know this.

However, I tried this but somehow I am not able to get it working

Please let me know what could be wrong.



Ok I read further on that link itself, somewhere it says:

disable-preserve-miss
This option disables the preservation of the TOS or netfilter
mark. By default, the existing TOS or netfilter mark value of
the response coming from the remote server will be retained
and masked with miss-mark.
NOTE: in the case of a netfilter mark, the mark must be set on
the connection (using the CONNMARK target) not on the packet
(MARK target).

First, it says to use CONNMARK and not MARK. I tried with CONNMARK as 
well but it did not work.
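
For reference, the CONNMARK-based variant of CMD 1 from my earlier mail would 
look something like this (a sketch only; same source address and mark value):

$ iptables -t mangle -I PREROUTING -s 192.168.1.45 -j CONNMARK --set-mark 0x112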


Second, it says its for response coming from the remote server.

My question however was to pass on the mark from the client side to the server 
side, i.e. the reverse of what the above paragraph says.

(But your earlier reply said client to server - so there is confusion)

Any idea?

Regards

Amm


[squid-users] Is it possible to mark tcp_outgoing_mark (server side) with SAME MARK as incoming packet (client side)?

2014-03-14 Thread Amm
Hello,

I would like to mark outgoing packet (on server side) with SAME MARK as on 
incoming (NATed or CONNECTed) packet.

There is option tcp_outgoing_mark with which I can mark packets.

But there is no ACL option to check incoming mark.


If there is already a way to do this then please guide.


Otherwise I would like to suggest:

Option 1)
---


Syntax: tcp_outgoing_mark SAMEMARK [!]aclname

where SAMEMARK is a special (literal) word; packets matching the acl are given 
the same mark as the incoming packet.

For e.g I can do:

tcp_outgoing_mark SAMEMARK all

And all packets will be applied same mark as incoming packet mark.


Option 2)
---


Have an acl:

Syntax: acl aclname nfmark mark-value


Then I can do something like this:

acl mark101 nfmark 0x101
tcp_outgoing_mark 0x101 mark101


If both above options can be combined then it would be even better.

Thanks in advance,

Amm.



[squid-users] Re: squid -z for SMP does not create worker's directories

2013-08-27 Thread Amm
Resending the mail. As always, my mails DON'T always reach the group.
Sometimes they reach it, sometimes not, for the past 1-2 months.

I reported this earlier too. There is some issue with the secondary MX record.

Also maybe there is an issue with connectivity to Yahoo, as after
1-2 days the mail bounces back with a timeout error.

Anyway, my bug report follows below.

-

From: Amm ammdispose-sq...@yahoo.com
To: squid-users@squid-cache.org squid-users@squid-cache.org 
Sent: Monday, 26 August 2013 4:18 PM
Subject: squid -z for SMP does not create worker's directories


Hello all,

I have following configuration: (For SMP)


workers 2
cache_dir ufs /var/spool/squid/worker${process_number} 1000 16 256

when I run squid -z to create directories, it creates workers0 directory.

i.e. /var/spool/squid/worker0

Apparently this is the main process (process 0), which is supposed to
call 2 workers, and the workers should in turn create their own directories
(or the parent should smartly create the worker directories).

i.e. /var/spool/squid/worker1 and /var/spool/squid/worker2

But in this case, parent itself creates directory which will never be used!

Possibly this needs to be fixed. (Or am I missing something?!)

Currently workaround I use is simple copy:
cp -a  /var/spool/squid/worker0  /var/spool/squid/worker1
cp -a  /var/spool/squid/worker0  /var/spool/squid/worker2

Hopefully that is right workaround.

Amm



Re: [squid-users] Re: can we know the ip of transparent proxy ??

2013-08-19 Thread Amm

 From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org 
Sent: Monday, 19 August 2013 12:11 PM
Subject: Re: [squid-users] Re: can we know the ip of transparent proxy ??

On 19/08/2013 5:50 p.m., Ahmad wrote:
 is there a trick to know the ip of transparent squid ?


Not from the client end. Even NAT interception transparency the proxy IP 
is hidden from the client. The server knows, but not the client.

Amos


Unless I misunderstood the original question, a combination of curl,
www.whatismyip.com and grep should work, shouldn't it?
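
Roughly something like this (a sketch; it assumes the page returns the address 
in plain text, so the exact pattern may need adjusting):

curl -s http://www.whatismyip.com/ | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'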

Amm.



Re: [squid-users] Re: squid and url_regex. Not working

2013-08-19 Thread Amm
- Original Message -

 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org

 On 20/08/2013 2:20 a.m., ranmanh wrote:
  Apologies
 
  I corrected the original post a few minutes after posting it
  Now details included in the initial message.

 
 This is an email mailing list. You cannot correct initial posts like that.
 Please post the details.
 
 Amos



He is actually posting in some forum and all his posts are sent to squid 
mailing list.


http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-and-url-regex-Not-working-td4661633.html

@ranmanh

Use http_access; always_direct is not for access restriction.
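
Roughly like this, placed before any http_access allow rules (a sketch; the acl 
name and pattern are only illustrative):

acl blockedsites url_regex -i some-pattern
http_access deny blockedsites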

Amm.


Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-30 Thread Amm




- Original Message -
 From: csn233 csn...@gmail.com
 To: Amm ammdispose-sq...@yahoo.com
 Cc: 
 Sent: Tuesday, 30 July 2013 2:03 PM
 Subject: Re: [squid-users] Basic questions on transparent/intercept proxy

Thanks to all who replied. Looks like the ssl_bump none all is
 required to stop those pop-up warnings about self-signed certificates.
 
 Another related question: what do people do about ftp://... that no
 longer works in an intercepted proxy?


Please use reply all instead of reply!

For an intercepted proxy, you only intercept HTTP/HTTPS. So the browser
will access the FTP site directly (unless you have blocked/redirected the FTP port).

Amm.



Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-29 Thread Amm
- Original Message -

 From: csn233 csn...@gmail.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org

To intercept HTTPS traffic, is SSL-bump a must? Even when I only want
 to record the CONNECT traffic in access.log just like a normal forward
 proxy without decrypting anything?

No. But it will log only IPs not the host name or URL.

Amm



[squid-users] MX issues? (was Re: Basic questions on transparent/intercept proxy)

2013-07-29 Thread Amm


Is there some issue with mailing list? (I am assuming Yahoo! mail would not 
have issue)


My past two-three e-mails were delivered to the list much later. One even bounced 
back (which I resent).

The one below was delivered after more than 24hrs or so.


It appears only one MX is working.


squid-cache.org mail exchanger = 10 squid-cache.org.
squid-cache.org mail exchanger = 90 mx2.squid-cache.org.

mx2 does not seem to be working.


Regards,


Amm.




- Original Message -
 From: Amm ammdispose-sq...@yahoo.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Sunday, 28 July 2013 6:41 PM
 Subject: Re: [squid-users] Basic questions on transparent/intercept proxy

 - Original Message -
  From: csn233 csn...@gmail.com
  To: squid-users@squid-cache.org 
 squid-users@squid-cache.org
 
 To intercept HTTPS traffic, is SSL-bump a must? Even when I only want
  to record the CONNECT traffic in access.log just like a normal forward
  proxy without decrypting anything?
 
 No. But it will log only IPs not the host name or URL.
 
 Amm



Re: [squid-users] Basic questions on transparent/intercept proxy

2013-07-29 Thread Amm
 From: csn233 csn...@gmail.com
Sent: Monday, 29 July 2013 10:40 PM
Subject: Re: [squid-users] Basic questions on transparent/intercept proxy



On Sun, Jul 28, 2013 at 9:11 PM, Amm ammdispose-sq...@yahoo.com wrote:
 - Original Message -

 From: csn233 csn...@gmail.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org

To intercept HTTPS traffic, is SSL-bump a must? Even when I only want
 to record the CONNECT traffic in access.log just like a normal forward
 proxy without decrypting anything?

 No. But it will log only IPs not the host name or URL.

 Amm



No, as in ssl-bump is not a requirement for HTTPS traffic to be
logged? Your answer seems to be different from other replies. Can you
provide examples of how?



I am not sure if I understood your previous question right. I think what others 
said is right.


Here is what I have done. (simplified version)

https_port 8081 intercept ssl-bump generate-host-certificates=on 
cert=/etc/squid/ssl_cert/squid.pem
#ssl_bump none all #--- this line is not required


So ssl-bump as a keyword is required on https_port, but you don't need an ssl_bump 
ACL line (by default it bumps nothing).


Traffic will be logged just as IP. (Not actual hostname)


Regards,


Amm.



Re: [squid-users] strip_query_terms by acl?

2013-07-22 Thread Amm


My previous e-mail bounced back.

squid-users@squid-cache.org: Mail server for squid-cache.org unreachable 
for too long

So reposting, sorry if already it had reached the group.

- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz

 On 20/07/2013 2:04 p.m., Amm wrote:

  Hello,
 
  Squid already has option to log FULL query. i.e strip_query_terms off.
 
  I would like to know is there any way to log FULL query only for particular 
 acl?


 Not in the existing Squid.
 
 It could be added fairly easily, but the utility of doing it is very 
 small. The major gain from stripping such terms is to protect stupid 
 security systems which do things like place credentials or users private 
 details in the query-string portion of URLs.


Yes, that is why I am asking. I do not want to log everything, just the search
queries made. So basically I do not want to violate anyone's privacy.

If it is easy to add, can you provide some hints on which files or what
functions to change?


  I am asking this because, I do not want log file to get full by recording 
 everything, just wanted queries recorded for few cases.
 
 If you are worried about query-string filling logs then you have bigger 
 problems. A simple flood of rejected requests could dump far more 
 content into your logs than query-strings on normal traffic do.

No I am not worried about someone trying to flood. But why
record unwanted things?

With acl based recording, only selected stuff can be recorded. It
will also save disk I/Os and disk space.

There will be hardly 100 search requests in average 10 lines
of log.

So logging everything just for 100 search queries is excessive.

That is why I was looking for this feature.


 If this is an actual problem I suggest looking at making yourself a 
 daemon helper, you can do anything you like with the log lines in the 
 daemon. Our squid-3.3 daemon does some basic checks on file size and 
 rotates the logs if they get too big, in addition to the squid-requested 
 rotations.
   Or one of the other network I/O logging modules can send logs to a 
 machine with more space available.

Writing a helper etc. for a small thing is a big ask. But if simple acl-based
filtering can be implemented it would be great, and in my opinion this is the
best place to do it.


 Amos

Thanks for your replies.

Amm


[squid-users] strip_query_terms by acl?

2013-07-19 Thread Amm
Hello,

Squid already has option to log FULL query. i.e strip_query_terms off.

I would like to know whether there is any way to log the FULL query only for a particular acl.

I am asking this because I do not want the log file to fill up by recording 
everything; I just want queries recorded for a few cases.

For example:

acl searchterms url_regex -i search
acl logdomains dstdom_regex -i google bing yahoo

acl fullrecord src 192.168.0.25


strip_query_terms off searchterms logdomains
strip_query_terms off fullrecord

The first one will log the FULL query if the URL contains search and the domain is 
google, bing or yahoo.

The second one will log the FULL query for a particular IP.


Thanks in advance,

Amm



Re: [squid-users] Re: TPROXY

2013-05-28 Thread Amm

 From: alvarogp alvarix...@gmail.com
To: squid-users@squid-cache.org 
Sent: Tuesday, 28 May 2013 1:28 PM
Subject: [squid-users] Re: TPROXY
 

alvarogp wrote
 Hello,
 
 I have the next configuration:
 - Ubuntu 12.04 with 2 interfaces eth0 (local) and eth1 (internet access)
 - IPtables 1.4.12
 - Squid 3.3.4 with Tproxy
  
 With Iptables I have configured the proxy to forward the traffic from the
 local LAN (eth0) to the outside world (eth1). The configuration is:
 
 iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
 iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED
 -j ACCEPT
 iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
 echo 1  /proc/sys/net/ipv4/ip_forward
 
 To configure and install Tproxy I have followed the tutorial described in
 the wiki:
 
 ./configure --enable-linux-netfilter
 
 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0
 
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 3129
 
 For squid.conf, I have maintained the configuration my default adding to
 it:
 
 http_port 3128
 http_port 3129 tproxy
 
 If Squid is running, the packets from the local LAN are routed correctly
 and the web pages are showed perfectly. The problem I have is that this
 accesses are not reflected in the access.log and cache.log, so could be
 possible that squid is not caching any cacheable content?



I have had the exact same problem when I was trying TPROXY with a similar
configuration.

Squid would route packets but not LOG anything in access log.

If I stop squid then clients can't access any website (this indicates that
packets are indeed routing through squid).

I gave up later on. I might give it a try again after few days.


Amm.



Re: [squid-users] Re: TPROXY

2013-05-28 Thread Amm




 From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org 
Sent: Tuesday, 28 May 2013 4:15 PM
Subject: Re: [squid-users] Re: TPROXY
 

On 28/05/2013 8:11 p.m., Amm wrote:


 
 From: alvarogp alvarix...@gmail.com
 To: squid-users@squid-cache.org
 Sent: Tuesday, 28 May 2013 1:28 PM
 Subject: [squid-users] Re: TPROXY


 alvarogp wrote:

 If Squid is running, the packets from the local LAN are routed correctly
 and the web pages are showed perfectly. The problem I have is that this
 accesses are not reflected in the access.log and cache.log, so could be
 possible that squid is not caching any cacheable content?




 I have had exact same problem when I was trying TPROXY with similar
 configuration.

 Squid would route packets but not LOG anything in access log.

 If I stop squid then clients cant access any website. (this indicates that
 packets are indeed routing through squid).

access.log would indicate that none of them are actually making it to 
the Squid process.


Perhaps the Ubuntu kernel version has a bug which makes the packets 
work when *some* process is listening on the required port, but the 
packets are actually not getting there.


Actually I had tried on Fedora 16; the kernel version is 3.6.X.
So now this bug is in Ubuntu as well as Fedora?


I don't remember the squid version, but it was the 3.2 series.


Or perhaps the TCP packets are sending the HTTP request through Squid and 
Squid is relaying it, but the response is not going back to Squid (it goes 
direct back to the client). In that event Squid would wait for some time 
(read/write timeouts are 15 minutes long) before logging the failed HTTP 
transaction. That could be caused by some bad configuration on a router 
outside of the Squid machine.


Maybe; I don't know what was happening, as I didn't give it much thought at the 
time.


I will try again this weekend and report back. This time I will wait for 15 
minutes.


Thanks

Amm.


Re: [squid-users] Looking for squid spec file

2013-05-13 Thread Amm




- Original Message -
 From: Eliezer Croitoru elie...@ngtech.co.il
 To: Alex Domoradov alex@gmail.com
 Cc: squid-users@squid-cache.org squid-users@squid-cache.org
 Sent: Monday, 13 May 2013 6:05 PM
 Subject: Re: [squid-users] Looking for squid spec file

 On 5/13/2013 3:30 PM, Alex Domoradov wrote:
  For which version of squid do you need spec file?

 3.2
 3.3
 3.head
 
 any of the above ^^
 I had 3.2 but now 3.3 is stable so I don't really care which one of them 
 I will customize it again.

See if this helps in any way; it's from the Fedora tree and for 3.3.4:
http://pkgs.fedoraproject.org/cgit/squid.git/tree/

Amm

 
 Eliezer


Re: [squid-users] Looking for squid spec file

2013-05-13 Thread Amm




- Original Message -
 From: Alex Domoradov alex@gmail.com
 To: Amm ammdispose-sq...@yahoo.com
 Cc: squid-users@squid-cache.org squid-users@squid-cache.org
 Sent: Monday, 13 May 2013 6:22 PM
 Subject: Re: [squid-users] Looking for squid spec file
 
 On Mon, May 13, 2013 at 3:45 PM, Amm ammdispose-sq...@yahoo.com wrote:
 
 
 
 
  - Original Message -
  From: Eliezer Croitoru elie...@ngtech.co.il
  To: Alex Domoradov alex@gmail.com
  Cc: squid-users@squid-cache.org 
 squid-users@squid-cache.org
  Sent: Monday, 13 May 2013 6:05 PM
  Subject: Re: [squid-users] Looking for squid spec file
 


  I had 3.2 but now 3.3 is stable so I don't really care which one of 
 them
  I will customize it again.

 
  See if this helps in anyway, its from Fedora tree and for 3.3.4
  http://pkgs.fedoraproject.org/cgit/squid.git/tree/
 

 It's require systemd. CentOS doesn't have it

Well, one can modify it to require init.d instead (or whatever that package is 
called).

Or even pick up spec file from previous Fedora releases.

Amm



[squid-users] vary object loop when activating SMP

2013-05-13 Thread Amm
Hello all,


I am trying out squid with SMP. I am using squid version 3.3.4.

Squid works fine without workers directive i.e. without SMP.

For SMP, all I do is add these two lines at the top of squid.conf; the rest of 
squid.conf is exactly the same.

workers 2
cpu_affinity_map process_numbers=1,2 cores=1,2

After this I see these lines in cache.log every 12-15 seconds. (sometimes kid1 
sometimes kid2)

2013/05/13 20:36:21 kid2| varyEvaluateMatch: Oops. Not a Vary object on  second 
attempt, 'http://www.espncricinfo.com/netstorage/598060.html'  
'accept-encoding=gzip,%20deflate'
2013/05/13 20:36:21 kid2| clientProcessHit: Vary object loop!

Squid works fine though. (from just 5-10minutes testing)

Any idea what the issue is? Can it make squid unstable? Or is it just a warning 
of some sort which can be ignored safely?

Thanks and regards,


Amm.



Re: [squid-users] Squid 3.3.4 is available

2013-05-07 Thread Amm
Hi Amos,

This patch (to 3.3.2) is still missing (the one you had sent
for the wrong logging of an IPv6 address instead of IPv4):


--- squid-3.3.2/src/forward.cc  2013-02-25 03:42:35 +
+++ squid-3.3.2/src/forward.cc  2013-03-07 07:38:16 +
@@ -984,6 +984,7 @@
 serverConn->peerType = HIER_DIRECT;
 #endif
 ++n_tries;
+    request->hier.note(serverConn, request->GetHost());
 request->flags.pinned = 1;
 if (pinned_connection->pinnedAuth())
 request->flags.auth = 1;

Regards

Amm.



 From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org squid-users@squid-cache.org 
Sent: Saturday, 27 April 2013 7:40 PM
Subject: [squid-users] Squid 3.3.4 is available
 

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-3.3.4 release!

...

Amos Jeffries



Re: [squid-users] assertion failed: Checklist.cc:287: !needsAsync !matchFinished after upgrade from squid 3.2.7 to 3.3.3

2013-04-10 Thread Amm


- Original Message -
 From: Dieter Bloms sq...@bloms.de
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Wednesday, 10 April 2013 3:03 PM
 Subject: [squid-users] assertion failed: Checklist.cc:287: !needsAsync  
 !matchFinished after upgrade from squid 3.2.7 to 3.3.3
 
 Hi,
 
 I run 3.2.7 squid successfully for some weeks now.
 Yesterday I tried to upgrade to squid 3.3.3 and after a few minutes
 squid exits and I get the following messages in my cache.log:
 
 --snip--
 2013/04/09 08:46:50| assertion failed: Checklist.cc:287: !needsAsync 
  !matchFinished
 2013/04/09 08:46:52| Starting Squid Cache version 3.3.3 for 
 x86_64-suse-linux-gnu...
 --snip--


This is a known bug in the 3.3 series. I faced it too.

You can use the backported patch I have added at:

http://bugs.squid-cache.org/show_bug.cgi?id=3717


Note that the patch does not solve the actual bug. The patch just adds a -n acl
option with which you can disable the DNS checks (which cause
the crash).
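
With that patch applied, the option goes directly on the ACL definition, 
roughly like this (a sketch; the acl name and domain are illustrative):

acl blockeddomains dstdomain -n .example.com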


Amm.



Re: [squid-users] blocking ads/sites not working anymore?

2013-03-09 Thread Amm


- Original Message -
 From: Andreas Westvik andr...@spbk.no
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Saturday, 9 March 2013 6:24 PM
 Subject: [squid-users] blocking ads/sites not working anymore?
 
 Hi everyone
 
 Over the time I have collected a lot of sites to block. ads/malware/porn etc. 
 This has been working like a charm. I have even created a
 custom errorpage for this.
 But since I don't know when, this has stopped working. And according to the 
 googling I have done, my syntax in squid.conf are correct. 
 So what can be wrong here?
 
 This is my setup:
 
 
 cat /etc/squid3/squid.conf 
 http_port 192.168.0.1:3128 transparent
 acl LAN src 192.168.0.0/24
 http_access allow LAN
 http_access deny all
 cache_dir ufs /var/spool/squid3 5000 16 256
 
 
 #Block
 acl ads dstdom_regex -i /etc/squid3/adservers
 http_access deny ads

Don't know how it worked earlier, but you need to put
http_access deny ads
before
http_access allow LAN
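
i.e. the access rules reordered like this (a sketch keeping the original lines 
as they were):

acl LAN src 192.168.0.0/24
acl ads dstdom_regex -i /etc/squid3/adservers
http_access deny ads
http_access allow LAN
http_access deny all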

Amm



Re: [squid-users] Squid 3.3.2 and SMP

2013-03-08 Thread Amm


- Original Message -

 From: Alex Rousskov rouss...@measurement-factory.com
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Saturday, 9 March 2013 6:38 AM
 Subject: Re: [squid-users] Squid 3.3.2 and SMP
 
 On 03/08/2013 08:11 AM, Adam W. Dace wrote:
  Does anyone have a simple example configuration for running Squid
  3.3.2 with multiple workers?
 
 You can just add workers 2 to squid.conf.default.
 
 Alex.

This may be off-topic, I do not know.

I may also sound stupid, but is it possible that both workers will occupy the 
same CPU?

Or is it that the operating system takes care of it? Or does squid take care of it?

If the OS, how does the OS (at least Linux) do it? Does it alternate CPUs for 
every new process?

Let's say I have two cores and I use workers 2 in squid.conf.

Squid forks the 1st worker and it lands on the 1st core.
But in between some other process starts (or forks) and lands on the 2nd core.
Now squid forks the 2nd worker and it lands on the 1st again?!

Is this possible? (a kind of race condition)

Thanks and regards,

Amm.



Re: [squid-users] Squid 3.3.2 and SMP

2013-03-08 Thread Amm


- Original Message -
 From: Alex Rousskov rouss...@measurement-factory.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Saturday, 9 March 2013 11:54 AM
 Subject: Re: [squid-users] Squid 3.3.2 and SMP
 
 On 03/08/2013 07:40 PM, Amm wrote:
 
 
  Lets say I have two cores and I use workers 2 in squid.conf.
 
  Squid forks 1st worker and it lands on 1st core.
  But in between some other process starts (or forks) which lands on 2nd core
  Now squid forks 2nd worker and lands on 1st again?!
 
  Is this possible? (kind of race condition)
 
 Without CPU affinity, processes such as workers change their cores
 often. It is normal. You can see that for yourself if you add and watch
 last CPU used column in top on a busy system.

Thanks for your replies. I was under the impression that once the main squid process
starts two workers, it depends on the OS which core they will run on
(assuming no cpu_affinity given).

Once the OS assigns them a core, they always run on that same core. I did not know
that cores keep changing.

In short, for best results, and to make sure that each worker uses a separate core
and they don't end up using the same core, one must use cpu_affinity_map as well?

Am I correct?
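
e.g. something like this for a two-core box (a sketch mirroring the documented 
syntax):

workers 2
cpu_affinity_map process_numbers=1,2 cores=1,2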

 
 HTH,

Yes helped a lot. Thanks again.

 Alex

Amm.



Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-07 Thread Amm
- Original Message -

 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Thursday, 7 March 2013 1:11 PM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 7/03/2013 7:22 p.m., Amm wrote:
 
 snip


  For testing, URL was accessed with curl (curl -k https://www.google.com/)
 
  Here is debug output: (Google IP has changed but in same subnet, 1.2.3.4 is 
 my public IP replaced)
 
  2013/03/07 11:40:46.326 kid1| client_side.cc(2325) parseHttpRequest: HTTP 
 Client local=173.194.36.18:443 remote=1.2.3.4:50145 FD 21 flags=33
  2013/03/07 11:40:46.326 kid1| client_side.cc(2326) parseHttpRequest: HTTP 
 Client REQUEST:
  -
  GET / HTTP/1.1
  User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 
 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
  Host: www.google.com
  Accept: */*
 
 
  --
  2013/03/07 11:40:46.326 kid1| http.cc(2177) sendRequest: HTTP Server 
 local=1.2.3.4:50146 remote=173.194.36.18:443 FD 23 flags=1
  2013/03/07 11:40:46.326 kid1| http.cc(2178) sendRequest: HTTP Server 
 REQUEST:
  -
  GET / HTTP/1.1
  User-Agent: curl/7.21.7 (x86_64-redhat-linux-gnu) libcurl/7.21.7 
 NSS/3.13.5.0 zlib/1.2.5 libidn/1.22 libssh2/1.2.7
  Host: www.google.com
  Accept: */*
  Via: 1.1 a.b.c (squid/3.3.2)
  X-Forwarded-For: 1.2.3.4
  Cache-Control: max-age=259200
  Connection: keep-alive
 
 
  HTTP server REQUEST shows 173.194.36.18, but access.logs show IPv6 address:
  1362636646.416     90 1.2.3.4 TCP_MISS/302 1138 GET https://www.google.com/ 
 - PINNED/2404:6800:4009:802::1011 text/html


 This is a really *really* strange outcome. It is indeed looking like a 
 code bug somewhere.
 
 The cache.log showed the TCP level details being apparently correct. So 
 I think we can ignore everyting up to the point of logging.
 Just to cofirm that can you add the server response trace from that 
 server request? It will be a short while later with identical local= 
 remote= and FD values.
 If there is anything else on that FD 23 it would be useful to know as well.


2013/03/07 11:40:46.416 kid1| ctx: enter level  0: 'https://www.google.com/'
2013/03/07 11:40:46.416 kid1| http.cc(746) processReplyHeader: HTTP Server 
local=1.2.3.4:50146 remote=173.194.36.18:443 FD 23 flags=1
2013/03/07 11:40:46.416 kid1| http.cc(747) processReplyHeader: HTTP Server 
REPLY:
-
HTTP/1.1 302 Found
Location: https://www.google.co.in/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Set-Cookie: X
Set-Cookie: X
P3P: CP="This is not a P3P policy! See 
http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for 
more info."
Date: Thu, 07 Mar 2013 06:10:46 GMT
Server: gws
Content-Length: 222
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="https://www.google.co.in/">here</A>.
</BODY></HTML>
--
2013/03/07 11:40:46.416 kid1| ctx: exit level  0
2013/03/07 11:40:46.416 kid1| client_side.cc(1386) sendStartOfMessage: HTTP 
Client local=173.194.36.18:443 remote=1.2.3.4:50145 FD 21 flags=33
2013/03/07 11:40:46.416 kid1| client_side.cc(1387) sendStartOfMessage: HTTP 
Client REPLY:
-
HTTP/1.1 302 Moved Temporarily
Location: https://www.google.co.in/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Set-Cookie: X
Set-Cookie: X
P3P: CP="This is not a P3P policy! See 
http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for 
more info."
Date: Thu, 07 Mar 2013 06:10:46 GMT
Server: gws
Content-Length: 222
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Cache: MISS from a.b.c
X-Cache-Lookup: MISS from a.b.c:8080
Via: 1.1 a.b.c (squid/3.3.2)
Connection: keep-alive



 If we assume there is someting terribly broken with where the access.log 
 data is being generate from.
 Can you create a custom log format please which outputs:
 
   logformat test %A/%a:%p - %la:%lp (%la:%lp) 
 %la:%lp - 
 %A/%a:%p [%h{Host}]
 
 and use it for a secondary access_log line. Lets see what gets logged by 
 that.

[%h{Host}] was giving error, so i changed it to [%{Host}h]

Here is output:
ABCD.net.in/1.2.3.4:33307 - 173.194.36.16:443 (-:8081) :::0 - 
www.google.com/2404:6800:4009:802::1011:443 [www.google.com]

Notice :::0 - somewhere it thinks its IPv6??

If domain has just IPv4 address and no IPv6 address:

ABCD.net.in/1.2.3.4:58347 - 174.122.92.66:443 (-:8081) 0.0.0.0:0 - 
www.bigrock.com/174.122.92.65:443 [www.bigrock.com]


If i use dns_v4_first, it logs IPv4 address.

ABCD.mtnl.net.in/1.2.3.4:33559
 - 74.125.236.147:443 (-:8081) 0.0.0.0:0 - 
www.google.com/74.125.236.146:443 [www.google.com]

Notice the change in IP address though. But may be that is expected as squid 
does its own DNS lookup and picks other IP.


 If it is still logging the IPv6. I have an experimental patch here:
 http://master.squid-cache.org/~amosjeffries/patches

Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-07 Thread Amm




- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Friday, 8 March 2013 2:47 AM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 7/03/2013 10:54 p.m., Amm wrote:
  - Original Message -

  [%h{Host}] was giving error, so i changed it to [%{Host}h]
 
  Here is output:
  ABCD.net.in/1.2.3.4:33307 - 173.194.36.16:443 (-:8081) :::0 - 
 www.google.com/2404:6800:4009:802::1011:443 [www.google.com]
 
  Notice :::0 - somewhere it thinks its IPv6??
 
  If domain has just IPv4 address and no IPv6 address:
 
  ABCD.net.in/1.2.3.4:58347 - 174.122.92.66:443 (-:8081) 0.0.0.0:0 - 
 www.bigrock.com/174.122.92.65:443 [www.bigrock.com]
 
 
  If i use dns_v4_first, it logs IPv4 address.
 
  ABCD.mtnl.net.in/1.2.3.4:33559
    - 74.125.236.147:443 (-:8081) 0.0.0.0:0 -
  www.google.com/74.125.236.146:443 [www.google.com]
 
  Notice the change in IP address though. But may be that is expected as 
 squid does its own DNS lookup and picks other IP.


 Okay that zero IP:port on the outbound confirmed my suspicion about what 
 the code was doing. When using a pinned connection it is not setting the 
 real connection details into the log.


  Applying and trying patch will take about a day. Will let you know once I 
 do.

 
 Thanks.
 The above log entry implies it should be the fix, but I will still need 
 confirmation of that.
 
 Amos


I just applied the patch and it now logs the IPv4 address correctly.

But earlier it was showing the word PINNED; now it shows HIER_DIRECT. I am not 
sure if that is right or wrong.

1362709553.045    172 1.2.3.4 TCP_MISS/302 1138 GET https://www.google.com/ - 
HIER_DIRECT/74.125.236.146 text/html

test.log file just incase you want to have a look.
ABCD.net.in/1.2.3.4:33007 - 74.125.236.145:443 (-:8081) 1.2.3.4:33008 - 
www.google.com/74.125.236.145:443 [www.google.com]

Thanks for the patch.

Regards

Amm.



[squid-users] WARNING: (B) '::/0' is a subnetwork of (A) '::/0'

2013-03-07 Thread Amm
Hello all,

I am using squid-3.3.2

I keep getting these messages in cache.log (on squid start or reload)


2013/03/06 18:36:12 kid1| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
2013/03/06 18:36:12 kid1| WARNING: because of this '::/0' is ignored to keep 
splay tree searching predictable
2013/03/06 18:36:12 kid1| WARNING: You should probably remove '::/0' from the 
ACL named 'all'
2013/03/06 18:36:12 kid1| WARNING: (B) '127.0.0.1' is a subnetwork of (A) 
'127.0.0.1'
2013/03/06 18:36:12 kid1| WARNING: because of this '127.0.0.1' is ignored to 
keep splay tree searching predictable
2013/03/06 18:36:12 kid1| WARNING: You should probably remove '127.0.0.1' from 
the ACL named 'localhost'
2013/03/06 18:36:12 kid1| WARNING: (B) '127.0.0.0/8' is a subnetwork of (A) 
'127.0.0.0/8'
2013/03/06 18:36:12 kid1| WARNING: because of this '127.0.0.0/8' is ignored to 
keep splay tree searching predictable
2013/03/06 18:36:12 kid1| WARNING: You should probably remove '127.0.0.0/8' 
from the ACL named 'to_localhost'

all, localhost, to_localhost are internal ACLs and I have not specified them 
anywhere in squid.conf


This ALSO appears for every ACL where I have used IP address.

2013/03/06 18:36:12 kid1| WARNING: (B) '127.0.0.1' is a subnetwork of (A) 
'127.0.0.1'
2013/03/06 18:36:12 kid1| WARNING: because of this '127.0.0.1' is ignored to 
keep splay tree searching predictable
2013/03/06 18:36:12 kid1| WARNING: You should probably remove '127.0.0.1' from 
the ACL named 'allowed_hosts'
2013/03/06 18:36:12 kid1| WARNING: (B) '10.25.1.165' is a subnetwork of (A) 
'10.25.1.165'
2013/03/06 18:36:12 kid1| WARNING: because of this '10.25.1.165' is ignored to 
keep splay tree searching predictable
2013/03/06 18:36:12 kid1| WARNING: You should probably remove '10.25.1.165' 
from the ACL named 'nopromo_ips'


I don't know if it is just a spurious warning or something is really wrong, because 
ignoring 127.0.0.1 from localhost can cause many side effects.
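
For comparison, the warning itself is legitimate when an ACL really does contain 
overlapping entries, for example (a purely hypothetical ACL):

acl allowed_hosts src 10.0.0.0/24
acl allowed_hosts src 10.0.0.5    # already covered by 10.0.0.0/24, so squid warns and ignores it

What looks wrong in my case is that each entry is reported as a subnetwork of itself.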

It did not happen in 3.3.1.

Just for the info, I am using the patch for the -n acl option (to avoid DoS or 
crashes) at:

http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12620.patch

But I doubt that has any relation to this.


Regards,


Amm.



Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-06 Thread Amm




- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Wednesday, 6 March 2013 11:36 AM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 6/03/2013 1:40 p.m., Alex Rousskov wrote:
  On 03/05/2013 03:09 AM, Amos Jeffries wrote:
 
 
  Squid tunnel functionality requires a CONNECT wrapper to generate
  outgoing connections.
  It is not yet setup to do the raw-TCP type of bypass the intercepted
  traffic would require.
  Are you sure? IIRC, ssl_bump none tunneling code works for 
 intercepted
  connections, and that is what we claim in squid.conf:
 
 Hmm. Yes I see the code now.
 
 Looks like it should work form IPv4 but IPv6 intercepted HTTPS might be 
 missing the [] around the IP.
 
 Amos


I just tried 443 port interception with sslbump and it is working perfectly.

If ssl_bump none applies to the request then it passes the request as is:
Log shows something like this:

1362574305.069  90590 192.168.1.1 TCP_MISS/200 3600 CONNECT 23.63.101.48:443 - 
HIER_DIRECT/23.63.101.48 -


If ssl_bump server-first applies to the request then the log shows:
1362574001.569    294 192.168.1.1 TCP_MISS/200 515 GET 
https://mail.google.com/mail/images/c.gif? - PINNED/2404:6800:4009:801::1015 
image/gif

(Note: the URL may not be the same in both cases; these are just examples.)

I don't have IPv6, so why is it showing an IPv6 address in the 2nd case?

Using squid 3.3.2.

Regards

Amm



Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-06 Thread Amm




- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Thursday, 7 March 2013 4:11 AM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 7/03/2013 2:03 a.m., Amm wrote:
 
  I just tried 443 port interception with sslbump and is working perfectly.
 
  If sslbump none applies for request then it passes requests as is:
  Log shows something like this:
 
  1362574305.069  90590 192.168.1.1 TCP_MISS/200 3600 CONNECT 
 23.63.101.48:443 - HIER_DIRECT/23.63.101.48 -
 
 
  if sslbump server-first applied for request then log shows:
  1362574001.569    294 192.168.1.1 TCP_MISS/200 515 GET 
 https://mail.google.com/mail/images/c.gif? - PINNED/2404:6800:4009:801::1015 
 image/gif
 
  (Note: URL may not be same in both cases, these are just example)
 
  I dont have IPv6, why is it showing IPv6 address, in 2nd case?
 
 Because you *do* have IPv6, or at least the Squid box does. And Squid is 
 using it successfully to contact the upstream web server.
 
 Amos


Nope I do not have IPv6. I have been begging my ISP to give IPv6.

squid is running on the very same machine.

Rule used is:
iptables -t nat -A OUTPUT -m owner ! --uid-owner squid -p tcp --dport 443 -j 
REDIRECT --to-ports 8081

URL accessed is https://www.google.com

nslookup -q=a www.google.com = 173.194.36.48 (one of many IPs in result)
nslookup -q=aaaa www.google.com = 2404:6800:4009:803::1014 (only 1 IPv6 in 
result)

access.log:
1362629583.956    532 192.168.1.1 TCP_MISS/200 28088 GET 
https://www.google.com/ - PINNED/2404:6800:4009:803::1014 text/html

I used wireshark to monitor the traffic. Result is:

0.00 192.168.1.1 -> 173.194.36.48 TLSv1 775 Application Data
0.017809 173.194.36.48 -> 192.168.1.1 TCP 68 443 > 40400 [ACK] Seq=1 Ack=708 
Win=1002 Len=0 TSval= TSecr=

Clearly its using IPv4 and not IPv6.

Note: I have replaced my public IP with 192.168.1.1

I have a feeling that since I am using REDIRECT, squid receives the redirected 
packets on a local (loopback) IPv6 address, so it assumes that the connection is IPv6 
and logs the IPv6 address instead (even though it connects to the IPv4 address).

So I tried to change iptables rule to:
iptables -t nat -A OUTPUT -m owner ! --uid-owner squid -p tcp --dport 443 -j 
DNAT --to 127.0.0.1:8081

It still logs the IPv6 address in access.log, so I do not know why it assumes IPv6.

So maybe there is a bug somewhere (either logical or in the code).

Regards,

Amm.



Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-06 Thread Amm


- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Thursday, 7 March 2013 11:19 AM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 7/03/2013 5:30 p.m., Amm wrote:
  - Original Message -
  From: Amos Jeffries
 
  On 7/03/2013 2:03 a.m., Amm wrote:
    I just tried 443 port interception with sslbump and is working 
 perfectly.
 
    If sslbump none applies for request then it passes requests as 
 is:
    Log shows something like this:
 
    1362574305.069  90590 192.168.1.1 TCP_MISS/200 3600 CONNECT
  23.63.101.48:443 - HIER_DIRECT/23.63.101.48 -
 
    if sslbump server-first applied for request then log shows:
    1362574001.569    294 192.168.1.1 TCP_MISS/200 515 GET
  https://mail.google.com/mail/images/c.gif? - 
 PINNED/2404:6800:4009:801::1015
  image/gif
    (Note: URL may not be same in both cases, these are just example)
 
    I dont have IPv6, why is it showing IPv6 address, in 2nd case?
  Because you *do* have IPv6, or at least the Squid box does. And Squid 
 is
  using it successfully to contact the upstream web server.
 
  Amos
 
  Nope I do not have IPv6. I have been begging my ISP to give IPv6.


 
 I hear what you are saying. Yet your logs are showing successful IPv6 traffic.
 Maybe they enabled it on the router without informing you. Or maybe someone 
 else 
 on the network setup a IPv6 gateway router (PC running 6to4 and emitting 
 RAs?). 
 I don't know.
 
 Somehow Squid detected that global IPv6 connectivity was available and is 
 doing 
 full TCP connection setup and HTTP transactions resulting in over 28KB of 
 data 
 transferred over IPv6 so far.
 
 Try these three tests:
 ping6 mail.google.com
 netstat -antup
 mtr -n6 mail.google.com


Please trust me. I have been a network engineer for about 15 years now (not trying 
to brag).

I also appreciate your efforts a lot for always replying.

But I do not have IPv6. Squid is running on my standalone laptop (there is 
no LAN).

# ping6 mail.google.com
connect: Network is unreachable

# mtr -n6 mail.google.com
gives an EMPTY screen, i.e. it shows nothing except the headers.

# list IPv6 addresses
# ip -f inet6 addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 
    inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever


Hence, no interface has IPv6 except lo (loopback)


# list IPv4 addresses
# ip -f inet addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    inet 127.0.0.1/8 scope host lo
7: ppp1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc htb state 
UNKNOWN qlen 3
    inet 1.2.3.4 peer 1.2.3.4/32 scope global ppp1

The ppp interface is the ADSL connection and has only an IPv4 address. The ADSL 
router is in bridge mode.


  squid is running on the very same machine.
 
  Rule used is:
  iptables -t nat -A OUTPUT -m owner ! --uid-owner squid -p tcp --dport 443 
 -j REDIRECT --to-ports 8081
 
  URL accessed is https://www.google.com
 
  nslookup -q=a www.google.com = 173.194.36.48 (one of many IPs in result)
  nslookup -q=aaaa www.google.com = 2404:6800:4009:803::1014 (only 1 IPv6 in 
 result)
 
  access.log:
  1362629583.956    532 192.168.1.1 TCP_MISS/200 28088 GET 
 https://www.google.com/ - PINNED/2404:6800:4009:803::1014 text/html
 
  I used wireshark to monitor the traffic. Result is:
 
  0.00 192.168.1.1 -> 173.194.36.48 TLSv1 775 Application Data
  0.017809 173.194.36.48 -> 192.168.1.1 TCP 68 443 > 40400 [ACK] Seq=1 
 Ack=708 Win=1002 Len=0 TSval= TSecr=
 
 Your log states the client-Squid connection as being IPv4, this trace 
 confirms _that_.

The wireshark output I gave is not client-squid. Wireshark was run on ppp1 
interface i.e. squid-internet.

# command run was
# tshark -plnippp1 port 443.


 
  Clearly its using IPv4 and not IPv6.
 
  Note: I have replaced my public IP with 192.168.1.1
 
  I have a feeling that since I am using REDIRECT, squid receives redirect 
 packets on local (loopback) IPv6 address, so it assumes that connection is 
 IPv6 
 and logs IPv6 address instead. (even though it connects to IPv4 address)


 Notice that:
 * the client-Squid connection and the Squid-server connection are 
 independent TCP connections


Agree.


 * IPv6 is on the Squid-Internet connection side of things

But as shown above squid-internet is also IPv4


 * IPv4 is happening on the client-Squid connection
 * REDIRECT is happening on the client-173.194.36.48 packets

Agree.

 
 NAT happening *into* Squid does not require IPv4 outbound. In cases like 
 these 
 where the HTTP Host: header can be 100% validated as belonging to the 
 destination IP address Squid will use DNS to locate the upstream server. In 
 this 
 case it locates the AAAA and uses it.
 
 You can enable debug_options 11,2 to see the client and server HTTP 
 transaction 
 IP addressing details.
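
(For reference, that corresponds to a squid.conf line roughly like the following, 
keeping ALL,1 as the usual baseline:)

debug_options ALL,1 11,2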

I enabled debug.

For testing, URL was accessed with curl (curl -k https://www.google.com/)

Here is debug output: (Google IP has changed

Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-05 Thread Amm



- Original Message -
 From: Alex Rousskov rouss...@measurement-factory.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Wednesday, 6 March 2013 6:20 AM
 Subject: Re: [squid-users] Bypassing SSL Bump for dstdomain
 
 On 03/04/2013 10:11 PM, Amm wrote:
 
  # Let user specify domains to avoid decrypting, such as internet 
 banking
  acl bump-bypass dstdomain .commbank.com.au 
  ssl_bump none bump-bypass
  ssl_bump server-first all 
 
 
  This will not work for intercepting traffic. Because domain is known
  only after SSL connection is established. So certificate stage etc
  has already passed.
 
 It will work but only if the reverse DNS lookup for the intercepted IP
 address works: ssl_bump supports slow ACLs, and dstdomain is a slow ACL
 if given an IP address.

As per http://www.squid-cache.org/Doc/config/acl/  it's a fast ACL.

acl aclname dstdomain   .foo.com ...
    # Destination server from URL [fast]

Also, depending on a reverse lookup to bypass ssl_bump can be insecure w.r.t. 
policy. Rare, but still somewhat insecure.


  I am also assuming that squid checks IP based ACLs for ssl_bump
  before establishing connection with client.
 
 Squid checks all ssl_bump ACLs before establishing a TCP connection with
 the server. The TCP connection from the client is already accepted (or
 intercepted) by the time ssl_bump ACL is checked.

What I would like to know is: does squid check the ssl_bump ACL before starting the 
SSL connection with the client, or after? (for interception on https_port)

Otherwise the ssl_bump server-first or none feature does not help much.

Regards,

Amm.



Re: [squid-users] Bypassing SSL Bump for dstdomain

2013-03-04 Thread Amm

 From: Dan Charlesworth d...@getbusi.com
To: squid-users@squid-cache.org 
Sent: Tuesday, 5 March 2013 10:21 AM
Subject: [squid-users] Bypassing SSL Bump for dstdomain
 
Hi

I've recently set up a very simple Squid 3.3.1 deployment to test out Server 
First bumping and Mimicking in a REDIRECT type intercept configuration.

It's working quite nicely, but I'm trying to accommodate a scenario where an 
admin would like to disable bumping for certain websites, for example internet 
banking ones.

I basically have the exact same ssl_bump parameters from the config example 
and yet requests matching the ACL are still being bumped as evidenced by:
- The full HTTPS URLs being recorded in the access log.
- My client browser continuing to show that the certificate is signed by the 
squid-signed CA when accessing the dstdomain.

I feel like I'm making some obvious mistake here, but can't see the forest 
right now.

...

# Let user specify domains to avoid decrypting, such as internet banking
acl bump-bypass dstdomain .commbank.com.au 

 ...
 

ssl_bump none bump-bypass
ssl_bump server-first all



This will not work for intercepted traffic, because the domain is known only after 
the SSL connection is established, so the certificate stage etc. has already passed.


You should try an ACL check based on the real IP or IP range. Of course this assumes 
that the IP will never change for those banks.

I am also assuming that squid checks IP-based ACLs for ssl_bump before 
establishing the connection with the client. (I have personally not tried this setup, 
so I cannot tell for sure.)
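
A minimal sketch of what that could look like (the address range below is purely 
illustrative; you would have to use the bank's real, stable ranges):

acl bank_ips dst 203.0.113.0/24
ssl_bump none bank_ips
ssl_bump server-first all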


Or you need to create rules at the firewall level which will *not* divert traffic 
for those sites to squid.

Amm.



Re: [squid-users] squid running out of filedescriptors

2013-02-20 Thread Amm




- Original Message -
 From: Sandrini Christian (xsnd) x...@zhaw.ch
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Wednesday, 20 February 2013 3:29 PM
 Subject: [squid-users] squid running out of filedescriptors
 
 Hi
 
 
 Today squid was suddenly running at 100% CPU and a lot of running out of 
 filedescriptors messages in the cache.log. But if I look with squidclient 
 it only had 989 of 65k filedescriptors open.
 Is there something else I need to look at? I am using squid-3.2.6 on Centos 
 6.3

I recently had the same problem. I figured out that the 65k figure shown by squid
is not what it actually gets from the OS, but what it expects from the OS.

In your case too the OS limit is 1024 (just like in my case).

The solution to the problem is here (should work on CentOS):
http://www.mail-archive.com/squid-users@squid-cache.org/msg88082.html
OR

http://www.squid-cache.org/mail-archive/squid-users/201302/0142.html
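
In short, the fix is to raise the OS limit before squid starts. A sketch for CentOS, 
assuming the stock init script reads /etc/sysconfig/squid and applies SQUID_MAXFD 
(check your /etc/init.d/squid to be sure):

# /etc/sysconfig/squid
SQUID_MAXFD=16384

plus max_filedescriptors 16384 (or your chosen value) in squid.conf, then restart squid.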


Re: [squid-users] Squid does not respond to TCP SYN when there are thousands of connection

2013-02-15 Thread Amm
 

 ulimit -n must be run as the same user that the proxy is running.
 
 In debian/ubuntu that user is proxy, and if you type ulimit as root you 
 will get a different answer that if you type ulimit logged in as proxy user.
 
 Be sure  to check the ulimit for the right user

Or you can check current limits using:

/proc/SQUIDPID/limits
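
For example (a quick sketch; pgrep -o picks the oldest, i.e. the master, squid process):

cat /proc/$(pgrep -o squid)/limits | grep -i 'open files'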



Re: [squid-users] Redirect Youtube out second ISP

2013-02-15 Thread Amm




- Original Message -
 From: Stinn, Ryan ryan.st...@htcsd.ca
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Saturday, 16 February 2013 4:13 AM
 Subject: [squid-users] Redirect Youtube out second ISP
 
 I'm wondering if it's possible to use squid to redirect youtube out a 
 second ISP line. We have two connections and I'd like to push all youtube 
 out the second connection. 

Try this:

acl yt dstdom_regex -i youtube
tcp_outgoing_address yt 1.2.3.4

1.2.3.4 is the IP address of the 2nd line (it should be on the same machine as squid).
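
Note that tcp_outgoing_address only selects the source address; for the traffic to 
actually leave via the second ISP the OS usually also needs source-based routing, 
something along these lines (gateway, interface and routing-table number are placeholders):

ip rule add from 1.2.3.4 lookup 100
ip route add default via 192.0.2.1 dev eth1 table 100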

Amm.



[squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
Hello,

I have a query about how --with-filedescriptors and ulimit interact.

Every 2-3 days I keep getting a WARNING that the system is running out of file descriptors.


I compiled squid using --with-filedescriptors=16384.

So do I still need to set ulimit before starting squid?

Or does squid automatically set ulimit? (as it starts as root)


I am using Fedora 16 with systemd squid.service (standard fedora file, no 
change)

Cache.log says:

2013/02/14 10:28:52 kid1| With 16384 file descriptors available


which is as expected.


squidclient gives this:

[root@localhost ]# squidclient -h 127.0.0.1 mgr:info |grep -i desc
File descriptor usage for squid:
    Maximum number of file descriptors:   16384
    Largest file desc currently in use:    888
    Number of file desc currently in use:  774
    Available number of file descriptors: 15610
    Reserved number of file descriptors:   100

ulimit -H -n gives 4096
ulimit -n gives 1024

These are standard Fedora settings, I have not made any changes.


So back to my question:
If I am compiling squid with --with-filedescriptors=16384
do I need to set ulimit before starting squid?

Or does squid automatically set ulimit?


Thanks 


Amm.



Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
Umm your reply confused me further! :)

Please see below inline.




- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 
 On 14/02/2013 10:12 p.m., Amm wrote:
 
  I compiled squid using --with-filedescriptors=16384.
 
  So do I still need to set ulimit before starting squid?

 Yes. Squid obeys both limits. The smaller of the two will determine how 
 many are available for active use.

So in my case, what is the max limit (for squid): 4096 or 1024?


  squidclient gives this:
 
  [root@localhost ]# squidclient -h 127.0.0.1 mgr:info |grep -i desc
  File descriptor usage for squid:
           Maximum number of file descriptors:   16384
           Largest file desc currently in use:    888
           Number of file desc currently in use:  774
           Available number of file descriptors: 15610
           Reserved number of file descriptors:   100
 
  ulimit -H -n gives 4096
  ulimit -n gives 1024
 
  These are standard Fedora settings, I have not made any changes.

If squid obeys the smaller limit, shouldn't it report "Available number of file 
descriptors" as at most 4096?
Why is it reporting 15610?

 ... when this proxy reaches the limit for Squid, you will get a message 
 about socket errors and FD reserved will jump from 100 to something just 
 below that limit to prevent running out of FD in future.

I have SELinux disabled.

I just got this:

2013/02/14 15:07:08 kid1| Attempt to open socket for EUI retrieval failed: (24) 
Too many open files
2013/02/14 15:07:08 kid1| comm_open: socket failure: (24) Too many open files
2013/02/14 15:07:08 kid1| Reserved FD adjusted from 100 to 15391 due to failures
2013/02/14 15:07:08 kid1| '/usr/share/squid/errors/en-us/ERR_CONNECT_FAIL': 
(24) Too many open files
2013/02/14 15:07:08 kid1| WARNING: Error Pages Missing Language: en-us
2013/02/14 15:07:08 kid1| WARNING! Your cache is running out of filedescriptors

How can I know the number of FDs open when this error occurred? I want to know if it 
was 1024 or 4096.

Did squid automatically handle it? Why does it say 15391 instead of something 
below 4096?
Or is 15391 right and expected, so I do not have to set ulimit before squid 
starts?


 
  So back to my question:
  If I am compiling squid with --with-filedescriptors=16384
  do I need to set ulimit before starting squid?
 
  Or does squid automatically set ulimit?
 
 Yes.

Was the Yes for "I have to set ulimit before starting squid",
OR
for "squid automatically sets ulimit and I do not have to do anything"?

 Amos

Thanks for your quick response.

Regards

Amm



Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Amm
OK, I am answering my own question, just in case someone else faces the same issue.

The compile-time option --with-filedescriptors is just a suggestion to squid (as 
clarified by Amos).


Earlier I was assuming that it is enough and there is no need to set ulimit.

But after a few commands and Amos's reply, I realised we must set ulimit.
Even after the WARNING, squid was not actually increasing the limit.


Before ulimit (1024/4096) and --with-filedescriptors=16384:

cat /proc/SQUIDPID/limits
Max open files    1024 4096 files 



After ulimit (16384/16384) and --with-filedescriptors=16384:

cat /proc/SQUIDPID/limits
Max open files    16384    16384    files 


In short, you still need to set ulimit.


Here is how to do it on Fedora:

1) Create file /etc/systemd/system/squid.service
2) Add following 3 lines in it.

.include /lib/systemd/system/squid.service
[Service]
LimitNOFILE=16384

3) systemctl daemon-reload
4) systemctl restart squid.service
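5) Optionally verify the new limit took effect, for example (a quick sketch):

systemctl show -p LimitNOFILE squid.service
cat /proc/$(pgrep -o squid)/limits | grep -i 'open files'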

Hope it helps

Amm


- Original Message -
 From: Amm ammdispose-sq...@yahoo.com
 To: squid-users@squid-cache.org squid-users@squid-cache.org
 Cc: 
 Sent: Thursday, 14 February 2013 3:53 PM
 Subject: Re: [squid-users] query about  --with-filedescriptors and ulimit
 

   I compiled squid using --with-filedescriptors=16384.
 
   So do I still need to set ulimit before starting squid?


[squid-users] Is squid 3.3.1 released as stable?

2013-02-12 Thread Amm


Hello,

I see squid 3.3.1, released on 9th Feb 2013, mentioned under Stable 
versions at:
http://www.squid-cache.org/Versions/

Where it is also mentioned that:
Current versions suitable for production use.

But when I see release notes for 3.3.1, its written that:
While this release is not deemed ready for production use, we believe it is 
ready for wider testing by the community.

Also I have not seen any official announcement here on the mailing list. Sorry if I 
missed it.

So please clarify whether squid 3.3.1 is released as stable and suitable for production 
use, or not.

Thank you,

Amm.



[squid-users] squid 3.3.1 - assertion failed with dstdom_regex with IP based URL

2013-02-12 Thread Amm


I had reported this bug earlier, in Dec 2012, but it probably went unnoticed in the 
squid-dev group:

http://www.squid-cache.org/mail-archive/squid-dev/201212/0099.html

So I am just re-posting, as it still exists in the stable branch 3.3.1.


Hello,

I get the following when using squid 3.3.1:
2013/02/13 08:57:33 kid1| assertion failed: Checklist.cc:287: !needsAsync && 
!matchFinished

Squid restarts after this.


The culprit acl line seems to be this:
acl noaccess dstdom_regex -i /etc/squid/noaccess

This happens only when the URL is IP-based instead of domain-based,
i.e. http://1.2.3.4

Squid acl reference has this note for dstdom_regex:

# For dstdomain and dstdom_regex a reverse lookup is tried if a IP
# based URL is used and no match is found. The name none is used
# if the reverse lookup fails

So I suppose 3.3.1 is trying to do reverse lookup and some kind of assertion
fails.

This bug does not exist in 3.2, as I did not notice it happening in 3.2.

So please fix it.


Regards,

AMM


- Forwarded Message -
 From: Amm ammdispose-sq...@yahoo.com
 To:  squid-...@squid-cache.org
 Cc: 
 Sent: Thursday, 13 December 2012 1:28 PM
 Subject: assertion failed with dstdom_regex with IP based URL atleast for 
 3.3.0.2



Re: [squid-users] Squid 3.2.5 wants to use IPv6 address?

2012-12-19 Thread Amm
Amos,

This is great info about how squid handles DNS.

I used to get confused when, in some rare cases, the squid log was showing IPv6 
addresses.

This e-mail clears it all up. Also a good note about the timeouts used.

I think you must add the content of your reply at:

Features/IPv6 - http://wiki.squid-cache.org/Features/IPv6


As that is the first place people try to find issues related to IPv4/IPv6.

OR put a link to this e-mail in above article:
Link - http://marc.info/?l=squid-users&m=13559704304&w=2
(could not find e-mail at squid-cache mail archives [will happen tonight I 
suppose])

Thanks and regards,

Amm.

- Original Message -
 From: Amos Jeffries squ...@treenet.co.nz
 To: squid-users@squid-cache.org
 Cc: 
 Sent: Thursday, 20 December 2012 7:56 AM
 Subject: Re: [squid-users] Squid 3.2.5 wants to use IPv6 address?
 
 
 For the record squid-3.2 tries all the destination IPs it can find, the above 
 method only means that all attempts failed and the given IPv6 address was the 
 *latest* tried. Squid could very well have tried a bunch of IPv4 addresses 
 earlier which failed, or scheduled them for connecting to later but 
 forward_timeout and connect_timeout prevented reaching them.
 
 Also, Squid by default only tries to connect 10 times then gives up. Looking at 
 the website address list I notice that it is on a primarily IPv6 network.
 
 # host www.vkontakte.ru
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:908
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:909
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:900
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:901
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:902
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:903
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:904
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:905
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:906
 www.vkontakte.ru has IPv6 address 2a00:bdc0:3:103:1:0:403:907
 www.vkontakte.ru has address 87.240.188.252
 www.vkontakte.ru has address 87.240.188.254
 
 
 Squid will do all 10 connection attempts before reaching any of the IPv4 
 addresses.
 
 You can use the dns_v4_first sort order option, or you can extend the number 
 of 
 connection attempts Squid performs with forward_max_tries.
 http://www.squid-cache.org/Doc/config/forward_max_tries/
 http://www.squid-cache.org/Doc/config/dns_v4_first/
 
 
 Some other things to be aware of in 3.2:
 * connect_timeout controls each individual TCP connection setup, ensure this 
 is 
 small to avoid broken IPs quickly but long enough to use slow links.
 * forward_timeout controls *total* time locating a working connection. For 
 example, N connection attempts with their connect_timeout on each one all fit 
 within forward_timeout, but the N+1 attempt would take longer so is cut short 
 or 
 never tried.
 
 http://www.squid-cache.org/Doc/config/connect_timeout/
 http://www.squid-cache.org/Doc/config/forward_timeout/
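
Put together, a sketch of the relevant squid.conf knobs (the values are illustrative, 
not recommendations):

dns_v4_first on
forward_max_tries 25
connect_timeout 15 seconds
forward_timeout 2 minutes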


Re: [squid-users] Bypassing Proxy or SSL bump for specific IPs

2012-11-06 Thread Amm
http://www.squid-cache.org/Doc/config/ssl_bump/




- Original Message -
 From: Sharon Sahar sharon.sa...@gmail.com
 For such connections, is there an option to:
 
 1. Disable SSL Bump for certain domains / IPs?
 2. Disable squid  for certain domains / IPs?


Re: [squid-users] add DENIED tag by redirector for easy identification in logfile

2012-11-02 Thread Amm


- Original Message -

 From: Alex Rousskov rouss...@measurement-factory.com

 Hi Amm,
 
    There is a solution, but it requires switching from a url_rewriter
 script to an eCAP adapter. Adapters can set annotations (name:value
 tags) that Squid can log via %adapt::last_h logformat code. 

Thanks for the suggestion, but writing an eCAP adapter is difficult for me.

For now I have figured out a way to identify the blocks made by the url_rewrite_program.

Since the redirection is to a static page, the size of that page is always the same.

So squid always logs the same size and also mostly picks it up from the cache,
hence it also shows REFRESH_UNMODIFIED.

Of course, this is not exactly the right way to identify them.

Regards,

Amm.



[squid-users] add DENIED tag by redirector for easy identification in logfile

2012-10-31 Thread Amm
Hi

I wanted to know if url_rewrite_program can add a TAG for logging.

I have a redirector which blocks certain sites, but in the squid logs
there is no way to tell whether the redirector blocked a request.

As per this, there is already a tag called DENIED when a request is
rejected by an acl:
http://wiki.squid-cache.org/SquidFaq/SquidLogs#access.log

I would like the redirector to also have the ability to add a tag,
say the same one, DENIED.

So that it is easy to identify blocked requests (whether blocked by an acl
or by the redirector).


Similar feature already exists for external_acl_type:
http://www.squid-cache.org/Doc/config/external_acl_type/


which says: tag= Apply a tag to a request (for both ERR and OK results)

So can redirector do the same?
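
(For reference, the existing external_acl_type route looks roughly like this; the 
helper path, page URL and TTL are hypothetical:)

external_acl_type blockcheck ttl=60 %URI /usr/local/bin/check_url.pl
acl blocked external blockcheck
deny_info http://proxy.example/blocked.html blocked
http_access deny blocked
# the helper answers "OK tag=DENIED" or "ERR" per URL; the tag can be logged via %et in logformat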

Thanks in advance,

Amm



Re: [squid-users] add DENIED tag by redirector for easy identification in logfile

2012-10-31 Thread Amm


- Original Message -

 From: Amos Jeffries squ...@treenet.co.nz

   If you are interested in sponsoring any code development towards that 
 please 
 contact me off-list about payment details.

Hi Amos,

First of all, thanks for replying immediately. But sorry to say that it is a
very small company; the bosses will not approve.

 NOTE: redirectors do not block anything. They redirect. Possibly to 
 a location which does not exist, or a page containing the word 
 blocked.

Yes, you are right if you consider the literal meaning and what it actually
does. But I suppose most people use a redirector only for blocking,
hence I used the word block.

But technically you are right.

 Um, REDIRECT tag is documented 6 lines above DENIED. Please upgrade to 
 Squid-3.2 
 where this logging is available by default already. Or re-build your Squid 
 with 
 the -DLOG_TCP_REDIRECTS compiler flag.

I am already using 3.2.

 In all Squid whether they use that tag or not Squid will log a 301, 302, 303, 
 or 
 307 status code along with NONE/- as the server contacted if 
 url_rewrite_program redirected the request.  If there is anything else in the 
 upstream server field it means the 3xx status logged was generated by that 
 server, not by Squid.

I am doing a URL rewrite instead of a redirect.

The reason I am doing a rewrite instead of a redirect is to avoid an additional
lookup by the client. It also keeps the original URL of the page in the browser.

A redirect otherwise changes the URL in the location bar of the browser, and
people get confused.

And if I recall right, I have also seen some browsers complaining
about XSS or something, because the URL domains do not match.
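
(To make the distinction concrete, this is roughly what the two reply styles from a 
url_rewrite_program look like in the 3.2-era helper protocol; the target URL is a 
placeholder.)

A rewrite reply (the browser's location bar is unchanged; squid fetches the new URL itself):
http://proxy.example/blocked.html

A redirect reply (the client gets a 302 and the location bar changes):
302:http://proxy.example/blocked.html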

I suppose as of now there is no solution. But thanks again.

Regards,

Amm



Re:: [squid-users] Squid and SSL interception (ssl-bump)

2012-10-31 Thread Amm





--
On Wed 31 Oct, 2012 9:03 PM IST Heinrich Hirtzel wrote:


http_port 10.0.1.1.:3128 intercept
https_port 10.0.1.1.:443 ssl-bump cert=/user/local/squid3/ssl_cert/myCA.pm

 
you have forgotten intercept on the https_port line.
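
i.e. something like the following (keeping the paths from the quoted config; note that 
myCA.pm and /user/local there also look like typos for myCA.pem and /usr/local):

https_port 10.0.1.1:443 intercept ssl-bump cert=/usr/local/squid3/ssl_cert/myCA.pem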

Amm


Re: [squid-users] 3.3.0.1 warning on reload - max_filedescriptors disabled

2012-10-24 Thread Amm
Further to this, on running squidclient mgr:info:

I always get:
Maximum number of file descriptors:   16384


be it after start or after reload, or even if I set max_filedescriptors 1024 
or 4096.


It looks like this number 16384 is hard-coded somewhere in 3.3.0.1.

Amm



- Original Message -
 Hello all,
 
 
 I am trying out 3.3.0.1 beta on Fedora 16 64 bit.(kernel 3.4.11-1.fc16.x86_64 
 #1 
 SMP)
 
 I have created RPM file using same spec file and patches as 3.2.1 (which I 
 have 
 been using from a month without any issues).
 
 In squid.conf, I have max_filedescriptors 4096
 
 When I start squid (3.3.0.1) using systemctl start squid.service
 
 I see this in log file:
 2012/10/23 12:52:05 kid1| With 16384 file descriptors available
 
 So I am not sure why it is showing 16384 instead of 4096
 
 In 3.2.1 with exactly same squid.conf, it was showing:
 2012/10/23 08:36:29 kid1| With 4096 file descriptors available
 
 
 Secondly when i reload squid (3.3.0.1) using systemctl reload 
 squid.service
 
 Log file shows this:
 2012/10/23 11:09:01 kid1| WARNING: max_filedescriptors disabled. Operating 
 System setrlimit(RLIMIT_NOFILE) is missing.
 
 I want to make sure that even after squid reloads, it atleast maintain 4096 
 as 
 max and does not reduce to 1024 or so.
 
 
 
 Thirdly, in an unrelated log entry, just now I noticed this:
 2012/10/23 12:51:59 kid1| assertion failed: forward.cc:217: err
 2012/10/23 12:52:05 kid1| Starting Squid Cache version 3.3.0.1 for 
 x86_64-unknown-linux-gnu...
 
 
 It appears that squid crashed and restarted. But there is not much 
 information 
 on why? May be something in forward.cc:217
 
 So just reporting - please check.
 
 Thank you,
 
 Amm.


Re: [squid-users] 3.3.0.1 warning on reload - max_filedescriptors disabled

2012-10-24 Thread Amm
Ah yes, you are right. I just checked the spec file:


   --with-filedescriptors=16384 


Earlier I didn't check the spec file because EXACTLY the same spec file was used for 
3.2.1, and that build was honouring the max_filedescriptors setting even when the 
compile-time 16384 was specified. It's strange though.


Anyway, at least the root cause has been found.

Apparently it's hardcoded in the Fedora repositories (currently line 156):

http://pkgs.fedoraproject.org/cgit/squid.git/tree/squid.spec?h=f18


I suppose I will leave that as default as I do not want to change the spec file.



 From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org 
Sent: Wednesday, 24 October 2012 6:44 PM
Subject: Re: [squid-users] 3.3.0.1 warning on reload - max_filedescriptors 
disabled
 
On 24/10/2012 8:00 p.m., Amm wrote:


 looks like somewhere this number 16384 is hard-coded in 3.3.0.1

It is only hard-coded in your particular build because of

Amos



[squid-users] 3.3.0.1 warning on reload - max_filedescriptors disabled

2012-10-23 Thread Amm
Hello all,


I am trying out 3.3.0.1 beta on Fedora 16 64 bit.(kernel 3.4.11-1.fc16.x86_64 
#1 SMP)

I have created RPM file using same spec file and patches as 3.2.1 (which I have 
been using from a month without any issues).

In squid.conf, I have max_filedescriptors 4096

When I start squid (3.3.0.1) using systemctl start squid.service

I see this in log file:
2012/10/23 12:52:05 kid1| With 16384 file descriptors available

So I am not sure why it is showing 16384 instead of 4096

In 3.2.1 with exactly same squid.conf, it was showing:
2012/10/23 08:36:29 kid1| With 4096 file descriptors available


Secondly, when I reload squid (3.3.0.1) using systemctl reload squid.service

Log file shows this:
2012/10/23 11:09:01 kid1| WARNING: max_filedescriptors disabled. Operating 
System setrlimit(RLIMIT_NOFILE) is missing.

I want to make sure that even after squid reloads, it at least maintains 4096 as 
the max and does not reduce it to 1024 or so.



Thirdly, in an unrelated log entry, just now I noticed this:
2012/10/23 12:51:59 kid1| assertion failed: forward.cc:217: err
2012/10/23 12:52:05 kid1| Starting Squid Cache version 3.3.0.1 for 
x86_64-unknown-linux-gnu...


It appears that squid crashed and restarted. But there is not much information 
on why. Maybe something in forward.cc:217?

So just reporting - please check.

Thank you,

Amm.