[squid-users] ssl-bump strange behaviour with incomplete config

2023-09-13 Thread squid
Hi all

I was trying to configure the ssl-bump feature. I forgot to allow the
initial CONNECT (or the fake CONNECT, in case of intercepting proxy). This
led me to some strange results which I'd like to point out. I am using
CentOS 8 with squid 6.13 recompiled from the Fedora RPM.
First case, forward proxy. The configuration is:

debug_options 83,8
logformat timereadable %tl %6tr %>a %Ss/%03Hs ...

[The rest of the logformat line and the access.log excerpt were mangled in
the archive. Per the discussion below, the excerpt showed the CONNECT logged
as TCP_DENIED/200 and two GET lines, the second one ending:]
... GET https://www.nytimes.com/ - HIER_NONE/- text/html

See the second GET. It seems that Squid here is kind of bumping, instead of
splicing, since it did read what should be the encrypted request.
Also the TCP_DENIED/200 is not clear to me: shouldn't it be TCP_DENIED/403?
I checked with tcpdump, and the CONNECT is really allowed with a 200 code; a
client hello and server hello are exchanged, then the connection is closed
after some transactions.

I started debugging the ssl-bump thing with debug_options 83,8 and in
cache.log I see:

2023/09/13 17:14:00.283 kid1| 83,7| LogTags.cc(57) update: TAG_NONE to
TCP_DENIED
2023/09/13 17:14:00.283 kid1| 83,3| client_side_request.cc(1501)
sslBumpNeed: sslBump required: bump
2023/09/13 17:14:00.283 kid1| 83,3| client_side_request.cc(1501)
sslBumpNeed: sslBump required: client-first

Why " sslBump required: bump" and not splice? Even "worse", why does it do
client-first then? Is it right it defaults to this if the CONNECT is
refused?
Then in cache.log I see a lot of messages where it seems that Squid is
talking TLS with the client, probably sending the access denied page. 

If I simply add in the config
acl lux src 192.168.1.179
http_access allow lux

then it works, with 
2023-09-13T17:28:21.877+0200 108353 192.168.1.179 TCP_TUNNEL/200 4853
CONNECT vp.nyt.com:443 - HIER_DIRECT/vp.nyt.com -
2023-09-13T17:28:21.878+0200 108441 192.168.1.179 TCP_TUNNEL/200 7894
CONNECT mwcm.nytimes.com:443 - HIER_DIRECT/mwcm.nytimes.com -
2023-09-13T17:28:21.879+0200 110154 192.168.1.179 TCP_TUNNEL/200 6337
CONNECT a.nytimes.com:443 - HIER_DIRECT/a.nytimes.com -
2023-09-13T17:28:21.879+0200 111702 192.168.1.179 TCP_TUNNEL/200 514229
CONNECT g1.nyt.com:443 - HIER_DIRECT/g1.nyt.com -
2023-09-13T17:28:21.880+0200 111702 192.168.1.179 TCP_TUNNEL/200 648915
CONNECT static01.nyt.com:443 - HIER_DIRECT/static01.nyt.com -
2023-09-13T17:28:21.880+0200 111890 192.168.1.179 TCP_TUNNEL/200 10986
CONNECT samizdat-graphql.nytimes.com:443 -
HIER_DIRECT/samizdat-graphql.nytimes.com -
2023-09-13T17:28:21.881+0200 112017 192.168.1.179 TCP_TUNNEL/200 72448
CONNECT static01.nytimes.com:443 - HIER_DIRECT/static01.nytimes.com -
2023-09-13T17:28:21.881+0200 112018 192.168.1.179 TCP_TUNNEL/200 286645
CONNECT static01.nytimes.com:443 - HIER_DIRECT/static01.nytimes.com -
2023-09-13T17:28:21.882+0200 112043 192.168.1.179 TCP_TUNNEL/200 15345
CONNECT g1.nyt.com:443 - HIER_DIRECT/g1.nyt.com -
2023-09-13T17:28:21.883+0200 112333 192.168.1.179 TCP_TUNNEL/200 2390864
CONNECT www.nytimes.com:443 - HIER_DIRECT/www.nytimes.com -

And in cache.log, no more client-first mentions:
2023/09/13 17:26:33.523 kid1| 83,3| client_side_request.cc(1748) doCallouts:
Doing clientInterpretRequestHeaders()
2023/09/13 17:26:33.523 kid1| 83,3| client_side_request.cc(1501)
sslBumpNeed: sslBump required: peek
2023/09/13 17:26:33.523 kid1| 83,3| client_side_request.cc(1842) doCallouts:
calling processRequest()
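
For reference, the minimal shape of a working peek/splice forward-proxy
config is something like the sketch below (the subnet and cert path are
placeholders, not the full config); the point is that the CONNECT has to
pass http_access before any ssl_bump decision applies:

http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/ssl_cert/myCA.pem
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all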


I also tried with an interception proxy (transparent proxy) config. The
results are similar but not identical.
I added to the original config (the one which prohibits the CONNECT):
http_port 3128

and changed the port 3130 line to:
https_port 3130 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=6MB cert=/etc/squid/ssl_cert/myCA.pem

Now I get:
2023-09-13T17:52:17.109+0200  6 192.168.1.179 TCP_DENIED/000 0 CONNECT
151.101.241.164:443 - HIER_NONE/- -
2023-09-13T17:52:17.109+0200  0 192.168.1.179 NONE_NONE/403 3732 GET
https://www.nytimes.com/ - HIER_NONE/- text/html

TCP_DENIED/000 is clearer than TCP_DENIED/200. But the GET, which makes me
think some bumping code is involved, is still there.

In cache.log, I find:

2023/09/13 17:53:06.539 kid1| 83,7| LogTags.cc(57) update: TAG_NONE to
TCP_DENIED
2023/09/13 17:53:06.539 kid1| 83,3| client_side_request.cc(1501)
sslBumpNeed: sslBump required: peek
2023/09/13 17:53:06.539 kid1| 83,3| client_side_request.cc(1501)
sslBumpNeed: sslBump required: client-first

So this time it does peek first (which matches the config), but then it
falls back to client-first, which could be why the encrypted HTTPS traffic
is decrypted and the HTTP GET shows up in the log.

Is this whole behavior correct? In any case, when I whitelist the CONNECT,
everything seems to work right.

Thank you, 
Luigi

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Reverse Proxy Redirect - Stops in Browser

2023-02-15 Thread squid
I have a reverse proxy that does the following:

acl example_www url_regex -i ^https?:\/\/example-www?.example.com.*
http_access allow internal_IPs example_www
deny_info https://other-www.other.com%R example_www
http_access deny example_www

When a tool or a browser goes to http://example-www.example.com, it is
immediately redirected to https://other-www.other.com as expected.

When a tool or a browser goes to https://example-www.example.com, Chrome
brings up the "Your connection is not private" page, and when you hit
Advanced and allow it to proceed, it is then redirected to the site.

This is causing us some compliance issues, because the tool decides we are
running a non-compliant HTTPS page, since user interaction is required to
get to the other page.

Is there a way to send clients to the other page earlier, so the tool or
user doesn't even see the "Your connection is not private" page? I just
want to allow only the internal IPs and cut everyone else off.

I've tried taking out the deny_info, but that sends the user and tool to a
squid error page, which basically fails the test as well since it's on the
same site.
I've also tried doing a TCP_RESET instead, but for some reason squid
actually sends the word "reset" back to the client the first time, which
again fails the test.
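
One idea, as a sketch only (it assumes a certificate covering
example-www.example.com can be installed): terminate TLS with a matching
certificate so the deny_info redirect is delivered before the browser has
anything to warn about. Without a trusted certificate for the requested
name, no proxy can answer the HTTPS request, redirect or otherwise, before
the warning appears.

https_port 443 accel defaultsite=example-www.example.com cert=/etc/squid/example-www.pem key=/etc/squid/example-www.key
deny_info https://other-www.other.com%R example_www
http_access deny example_www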
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Advice on Cache Peer ACLs

2019-08-30 Thread squid
Thank you for the reply.  It appears that I had an IP address typo in one of
the cache_peer lines that allowed the requests with /tst/map1 or /tst/map2
to slip by.  It appears to be working now.  I think you confirmed what I'm
trying to do should work.

One question about your last statement concerning inconsistent domain names.
All requests will always start with www.example.com/ or
origin-www.example.com/, even the ones that I'm trying to send to specific
backends using the "limited" acl.

Are you saying I should have the following for .4 and .5 instead of what I'm 
currently using?  

cache_peer 192.168.1.5 parent 80 0 no-query no-digest connect-fail-limit=10 weight=1 originserver round-robin
cache_peer_access 192.168.1.5 allow limited
cache_peer_access 192.168.1.5 allow all_requests
cache_peer_access 192.168.1.5 deny all

I was trying to limit the requests to .4 and .5 to only those that contained
/tst/map1 or /tst/map2.  I thought that if I included the "allow
all_requests" line for .4 and .5, it would also send them requests that did
not include /tst/map1 or /tst/map2.  For example,
"origin-www.example.com/hello/test/etc" could possibly be sent to .4 and .5
as well.

How do I ensure that www.example.com/tst/map1/... and /tst/map2 only go to
.4 and .5, while still being consistent with the domain as you suggested?
Thanks.
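
For comparison, a domain-consistent version of the .4/.5 rules might look
like this sketch; ACLs listed on a single allow line are ANDed, so this
still restricts .4 and .5 to the /tst/map1 and /tst/map2 requests:

cache_peer_access 192.168.1.4 allow all_requests limited
cache_peer_access 192.168.1.4 deny all
cache_peer_access 192.168.1.5 allow all_requests limited
cache_peer_access 192.168.1.5 deny all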

On Fri, Aug 30, 2019, at 11:41 AM, Alex Rousskov wrote:
> On 8/30/19 11:44 AM, cred...@eml.cc wrote:
> > We use several squid servers in accelerator mode for load balancing to send 
> > public requests to backend servers.   The squids don't do any caching, they 
> > just forward requests to the backend. 
> > 
> > We have cache_peer directives to send the incoming requests to the backend 
> > Apache servers.  What I need to do is send requests to a certain page to a 
> > specific backend server and all others to the  other backends.  The site 
> > has many pages, subpages etc.  
> > 
> > What I want to do is if someone requests:
> > https://www.example.com/anything/anything/script.php   or 
> > https://origin-www.example.com/anything/anything/etc/etc/script.php
> > 
> > Send the request to only .1, .2,.3.
> > 
> > If someone requests :
> > https://www.example.com/anything/tst/map2/script.php   or 
> > https://origin-www.example.com/anything/anything/tst/map1/etc/script.php
> > 
> > Send that request only to .4 and .5.
> > 
> > It seems to work most of the time, but tailing the access logs on the 
> > servers I sometimes see one of the requests for ../tst/map2/... or map1 
> > show up on .1,.2, or .3.  
> 
> 
> Do Squid access logs have the corresponding records as well? What cache
> peer selection algorithm does Squid record for those misdirected
> transactions?
> 
> 
> > Is there something I'm missing?
> 
> Could Squid go direct to one of those origin servers (e.g., when all
> eligible cache peers were down)?
> 
> BTW, please note that your cache_peer_access rules look inconsistent:
> Your cache_peer_access .1-3 rules require certain domain names but .4-5
> rules do not. This does not explain the discrepancy you are describing
> above, but you may want to adjust your rules for consistency's sake
> (either to ignore dstdomain completely or to require correct domains for
> all cache peers).
> 
> 
> HTH,
> 
> Alex.
> 
> 
> > acl all_requests dstdomain -n www.example.com origin-www.example.com
> > acl limited  url_regex -i /tst/map1|/tst/map2
> > 
> > 
> > cache_peer 192.168.1.1 parent 80 0 no-query no-digest connect-fail-limit=10 
> > weight=1 originserver round-robin
> > cache_peer_access 192.168.1.1 deny limited
> > cache_peer_access 192.168.1.1 allow all_requests
> > cache_peer_access 192.168.1.1 deny all
> > 
> > cache_peer 192.168.1.2 parent 80 0 no-query no-digest connect-fail-limit=10 
> > weight=1 originserver round-robin
> > cache_peer_access 192.168.1.2 deny limited
> > cache_peer_access 192.168.1.2 allow all_requests
> > cache_peer_access 192.168.1.2 deny all
> > 
> > cache_peer 192.168.1.3 parent 80 0 no-query no-digest connect-fail-limit=10 
> > weight=1 originserver round-robin
> > cache_peer_access 192.168.1.3 deny limited
> > cache_peer_access 192.168.1.3 allow all_requests
> > cache_peer_access 192.168.1.3 deny all
> > 
> > cache_peer 192.168.1.4 parent 80 0 no-query no-digest connect-fail-limit=10 
> > weight=1 originserver round-robin
> > cache_peer_access 192.168.1.4 allow limited
> > cache_peer_access 192.168.1.4 deny all
> > 
> > cache_peer 192.168.1.5 parent 80 0 no-query no-digest connect-fail-limit=10 
> > weight=1 originserver round-robin
> > cache_peer_access 192.168.1.5 allow limited
> > cache_peer_access 192.168.1.5 deny all
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Deny_Info TCP_RESET

2019-03-27 Thread squid
Operating in reverse proxy mode.   I'm trying to send a TCP reset in response 
to the acl below:

acl example_url url_regex -i [^:]+://[^0-9]*.example.com.*
deny_info TCP_RESET example_url
http_access deny example_url

Looking at the packets I see the following response:

HTTP/1.0 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Wed, 27 Mar 2019 20:36:20 GMT
Content-Type: text/html
Content-Length: 5
X-Squid-Error: TCP_RESET 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from www.example.com
X-Cache-Lookup: NONE from www.example.com:80
Via: 1.0 www.example.com (squid)
Connection: keep-alive

reset

Squid sends the headers and the word "reset".  Future requests then seem to
work as expected: no headers are sent, the word "reset" isn't sent, and
squid ultimately sends a RST and ACK.

Then, after some time or after squid gets reloaded, the headers are sent
again, and then things go back to working as I would expect.

I'm not sure if it will help, but I wanted to try the following to see if it
gets rid of that initial header being sent.

acl example_url url_regex -i [^:]+://[^0-9]*.example.com.*
deny_info TCP_RESET example_url
http_reply_access deny example_url

Do I still need the http_access deny example_url in addition to the
http_reply_access deny example_url statement, or does the http_reply_access
statement take the place of the http_access statement?


acl example_url url_regex -i [^:]+://[^0-9]*.example.com.*
deny_info TCP_RESET example_url
http_reply_access deny example_url
http_access deny example_url
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL / TLS

2018-12-20 Thread Squid users
Slightly off topic but am I correct in thinking TLS supersedes SSL?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Advice - Squid Proxy

2018-12-19 Thread Squid users
> So, Squid is installed on an Ubuntu VM, which runs on your laptop?
Correct

> So, the phone is either - direct connection via mobile Internet access, or 
> via Squid and your home Internet connection - no way for the phone to use the 
> Internet connection without going via Squid?
Yeah - however I use bitdefender on top of squid. Once the phone detects and 
connects to my laptop it then uses the proxy server

> Configured it in Squid, so users have to authenticate there to get access?
Yeah - I have an ACL running in Squid

> So, where do any other devices (phone, TV, the three VMs) get their IP 
> addresses from?  They must have them, otherwise they couldn't communicate 
> with Squid...  What do these devices have as a gateway address?
I use dhcp allocated from ubuntu, the gateway address that’s broadcast is my 
Ubuntu address.


I'm writing this and thinking I've gone a bit Orwellian. Still, I think I've
covered the bases. I was toying with the idea of running Asterisk off my
laptop too, but I figured I'd start with this project.

-Original Message-
From: squid-users  On Behalf Of 
Antony Stone
Sent: 19 December 2018 16:17
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Advice - Squid Proxy

On Wednesday 19 December 2018 at 16:04:36, Squid users wrote:

> Hi,
> 
> Re network diagram - Mish Mash / blended / spaghetti  I think :p
> 
> Squid is installed on the Ubuntu virtual machine. Sorry forgot to draw 
> that on.

So, Squid is installed on an Ubuntu VM, which runs on your laptop?

> The phone connects to mobile internet when out of the house, then 
> reverts back to going via squid proxy when my laptop wifi is turned 
> on. The phone detects my laptop and connects accordingly. The phone 
> reconfigures to go via proxy when it connects to my laptop.

So, the phone is either - direct connection via mobile Internet access, or via 
Squid and your home Internet connection - no way for the phone to use the 
Internet connection without going via Squid?

> As for the TV - yeah my laptop needs to be in the house for that to work.

Okay.

> Internet Use - I'm happy to record websites called by 'user' so for
> example: Tv=user1
> Phone=user2
> Laptop user=user3
> Then each family member with their own user id /password.
> I've configured this bit already

Configured it in Squid, so users have to authenticate there to get access?

> I have set my home internet router to only allocate my laptop mac a 
> DHCP address

So, where do any other devices (phone, TV, the three VMs) get their IP 
addresses from?  They must have them, otherwise they couldn't communicate with 
Squid...  What do these devices have as a gateway address?

> I'll draw a better diagram later today.

Okay.

> I may have gone a bit overboard with the control and monitoring :s

Yes, maybe :)


Antony.

--
Software development can be quick, high quality, or low cost.

The customer gets to pick any two out of three.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Advice - Squid Proxy

2018-12-19 Thread Squid users
Hi,

Re network diagram - Mish Mash / blended / spaghetti  I think :p

Squid is installed on the Ubuntu virtual machine. Sorry forgot to draw that on.

The phone connects to mobile internet when out of the house, then reverts back 
to going via squid proxy when my laptop wifi is turned on. The phone detects my 
laptop and connects accordingly. The phone reconfigures to go via proxy when it 
connects to my laptop.

As for the TV - yeah my laptop needs to be in the house for that to work.

Internet Use - I'm happy to record websites called by 'user' so for example:
Tv=user1
Phone=user2
Laptop user=user3
Then each family member with their own user id /password.
I've configured this bit already

I have set my home internet router to only allocate my laptop mac a DHCP 
address

I'll draw a better diagram later today. 
I may have gone a bit overboard with the control and monitoring :s

Thanks

-Original Message-
From: squid-users  On Behalf Of 
Antony Stone
Sent: 19 December 2018 13:19
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Advice - Squid Proxy

On Wednesday 19 December 2018 at 13:22:57, Squid users wrote:

> The attached configuration is currently in use on my computer.

It isn't a network diagram; I'm not quite sure what to describe it as, but I 
don't even see where Squid is on there.

> My aim is to use my laptop while I'm out and about (libraries, work 
> etc) and when I'm at home have my TV and Phone connect into the proxy server.
> This would allow caching by any device to my laptop so I'm minimising 
> my connections outbound.

So, Squid runs on your laptop?

What are the phone and TV supposed to do when the laptop isn't there?

> I also want it to record use by other people so I can monitor my 
> internet use at home.

Define "use".  What level of detail do you want to record?

> As you can see I run bitdefender parental control on my computer. 
> Would it be possible for someone to manipulate the proxy server to bypass 
> this?
> Could the proxy server be used to hide / obscure actual sites visited?

Show us a rather more conventional network diagram, which shows how packets get 
to & from the Internet, and what filters / firewalls are in place between 
different bits of equipment, and we might be able to answer this.


Antony.

--
"Can you keep a secret?"
"Well, I shouldn't really tell you this, but... no."


   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Advice - Squid Proxy

2018-12-19 Thread Squid users
The attached configuration is currently in use on my computer. 

My aim is to use my laptop while I'm out and about (libraries, work etc) and 
when I'm at home have my TV and Phone connect into the proxy server.  This 
would allow caching by any device to my laptop so I'm minimising my connections 
outbound.

I also want it to record use by other people so I can monitor my internet use 
at home. 

As you can see I run bitdefender parental control on my computer. Would it be 
possible for someone to manipulate the proxy server to bypass this? Could the 
proxy server be used to hide / obscure actual sites visited?

Can anyone point out any flaws or issues.

Thanks
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Unable to Disable sslv3

2018-09-12 Thread squid
I asked this some time ago and am bringing it up again to see if there are any 
suggestions since we haven't been able to fix it.

We are using squid as reverse proxy and we have disabled SSLv3 :

https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost 
cert=/etc/cert.pem key=/etc/privkey.pem 
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE 
cipher=ECDHE-ECDSA... dhparams=/etc/...dhparams.pem

We have tried the sslproxy_options directive as well.

Using the Nessus scanning tool, it reports that SSLv3 is enabled, but not SSLv2.

The Squid version is 3.1.23, the stock RH6 package, which I know is old, but 
for now we need to use it.

The only thing we have been able to do so far is add NO_TLSv1 to the https_port 
section.  Then the scan comes back clean.   Not sure what to look at next.  Any 
suggestions? 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Possible access via v6 when no interfaces present, fixable with dns_v4_first

2018-05-18 Thread squid-users
Hello squid users,

I'm trying to understand a strange problem with requests to edge.apple.com,
which I think may be related to IPv6 DNS resolution.

To set the scene - we operate a large (1,000+) fleet of Squid 3.5.25 caches.
Each runs on a separate LAN, connected to the internet via another upstream
proxy, accessed over a wide-area network.  Each local cache runs on a CentOS
6 box, which also runs BIND for name resolution; BIND is configured to
resolve against a local Microsoft DNS server, which in turn resolves
internet queries using a whole-of-WAN BIND service operated by the carrier.
The WAN does not support IPv6, and CentOS does not have any v6 network
interfaces configured.
Recently we became aware of a fault on a single cache serving requests for
edge.icloud.com.  Requests would time out with a TAG_NONE/503 written to the
log.  The error could be replicated with cURL at the CLI using this URL:
https://edge.icloud.com/perf.css.  This was a strange error, because at the
time it happened, it was possible to connect to edge.icloud.com on port 443.
The error was happening in just one site.

To isolate the fault we stripped the Squid config at the affected site right
back to the following:

# Skeleton Squid 3.5.25 config
shutdown_lifetime 2 seconds
max_filedesc 16384
coredump_dir /var/spool/squid
dns_timeout 5 seconds
error_directory /var/www/squid-errors
logfile_rotate 0
http_port 3128
cache_dir ufs /var/spool/squid 8192 16 256
maximum_object_size 536870912 bytes
cache_replacement_policy heap LFUDA
http_access allow localhost
debug_options ALL,5

Here's the messages written to the log when fetching
https://edge.icloud.com/perf.css with curl:

2018/05/08 16:25:46.321 kid1| 14,3| ipcache.cc(362) ipcacheParse: 18 answers
for 'edge.icloud.com'
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #0 [2403:300:a50:105::f]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #1 [2403:300:a50:105::9]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #2 [2403:300:a50:100::e]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #3 [2403:300:a50:101::5]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #4 [2403:300:a50:104::e]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #5 [2403:300:a50:104::9]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #6 [2403:300:a50:104::5]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(431) ipcacheParse:
edge.icloud.com #7 [2403:300:a50:101::6]
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #8 17.248.155.107
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #9 17.248.155.142
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #10 17.248.155.110
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #11 17.248.155.80
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #12 17.248.155.114
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #13 17.248.155.77
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #14 17.248.155.145
2018/05/08 16:25:46.322 kid1| 14,3| ipcache.cc(420) ipcacheParse:
edge.icloud.com #15 17.248.155.89
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths:
Found sources for 'edge.icloud.com:443'
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(281) peerSelectDnsPaths:
always_direct = DENIED
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(282) peerSelectDnsPaths:
never_direct = DENIED
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:105::f]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:105::9]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:100::e]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:101::5]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:104::e]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:104::9]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:104::5]:443 flags=1
2018/05/08 16:25:46.322 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=[::] remote=[2403:300:a50:101::6]:443 flags=1
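
The thread is truncated here, but the subject line records the workaround:
with no usable IPv6 path on the WAN, make Squid prefer the A records over
the AAAA ones. In squid.conf terms:

dns_v4_first on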

[squid-users] Disable SSLv3 Not working

2018-03-30 Thread squid
We are using squid as reverse proxy and we have disabled SSLv3 :

https_port XXX.XXX.XXX.XXX:443 accel defaultsite=www.example.com vhost 
cert=/etc/cert.pem key=/etc/privkey.pem 
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE 
cipher=ECDHE-ECDSA... dhparams=/etc/...dhparams.pem

Using the Nessus scanning tool, it reports that SSLv3 is enabled, but not
SSLv2.  Looking at the SSL handshake's client hello and server hello, it
does seem that SSLv3 is being used.  Is there something that we are missing?

The Squid version (3.1) is stock RH6, which I know is old, but for now we
need to use it.  We will be upgrading to RH7, but it may be a little while,
so I'd like to get this solved.

Secure Sockets Layer
    SSLv3 Record Layer: Handshake Protocol: Server Hello
        Content Type: Handshake (22)
        Version: SSL 3.0 (0x0300)
        Length: 74
        Handshake Protocol: Server Hello
            Handshake Type: Server Hello (2)
            Length: 70
            Version: SSL 3.0 (0x0300)
            Random: 5aa83ae26555f6dcc7042c341d090c6715a243a7be05d69b...
            Session ID Length: 32
            Session ID: 44bb10e985c067cc987bf2e698d458dd37d2b3c469ce9fe7...
            Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
            Compression Method: null (0)
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] No DNS records?

2017-08-11 Thread Squid Help
Hi, I have just installed squid on Windows 10, opened port 3128 in the
firewall, and configured FF to use the proxy on localhost:3128 for all
requests, but every request ends with the following page:
-
ERROR
The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL: 
http://www.google.com/search?

Unable to determine IP address from host name "www.google.com"

The DNS server returned:

No DNS records

This means that the cache was not able to resolve the hostname presented in the 
URL. Check if the address is correct.
-
Without the proxy I can browse without problems.  I also saw in the
configuration that squid uses the Google DNS server 8.8.8.8, so I configured
my network adapter to use the Google DNS servers (8.8.8.8) and I can browse
the internet without problems.
At this point I have also checked the squid log files for errors, but there
were none.
Summing up: it seems that squid can connect to the Google DNS server 8.8.8.8,
but that DNS server doesn't know the IP of "www.google.com", which is
impossible.
Now I'm confused and I don't know how to solve this problem.
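
(One thing worth trying, as a sketch - dns_nameservers is a standard
squid.conf directive - is to point squid explicitly at resolvers known to
work:)

dns_nameservers 8.8.8.8 8.8.4.4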
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Cache peer selection with duplicate host names

2017-04-23 Thread squid-users
Hi Squid users,

I'm having some trouble understanding Squid's peer selection algorithms, in
a configuration where multiple cache_peer lines reference the same host.

The background to this is that we wish to present cache service using
multiple accounts at an upstream provider, with account selection taking
place based on the local TCP port (8080, 8181, 8282) the request arrived on.

First we define the cache peers:

cache_peer proxy.myisp.net parent 8080 0 login=staffuser:abc123 no-query
no-digest no-netdb-exchange connect-timeout=1 connect-fail-limit=2
name=Staff
cache_peer proxy.myisp.net parent 8080 0 login=guestuser:abc123 no-query
no-digest no-netdb-exchange connect-timeout=1 connect-fail-limit=2
name=Guest
cache_peer proxy.myisp.net parent 8080 0 login=PASS no-query no-digest
no-netdb-exchange connect-timeout=1 connect-fail-limit=2 name=Student

Then lock access down:

acl localport_Staff localport 8282
acl localport_Guest localport 8181
acl localport_Student localport 8080
cache_peer_access Staff allow localport_Staff !localport_Guest
!localport_Student
cache_peer_access Guest allow localport_Guest !localport_Staff
!localport_Student
cache_peer_access Student allow localport_Student !localport_Guest
!localport_Staff

To reproduce the error, first a connection is made with wget to tcp port
8282:

  http_proxy=http://10.159.192.24:8282/ wget www.monash.edu --delete-after

Squid selects the Staff profile as expected:

  1492999376.993811 10.159.192.26 TCP_MISS/200 780195 GET
http://www.monash.edu/ - FIRSTUP_PARENT/Staff text/html "EDU%20%20%20en"
"Wget/1.12 (linux-gnu)"

Then another connection is made, this time to port 8080:

  http_proxy=http://10.159.192.24:8080/ wget www.monash.edu --delete-after

But instead of the desired Student profile being selected, the Staff profile
is still used instead:

  1492999405.953338 10.159.192.26 TCP_MISS/200 780195 GET
http://www.monash.edu/ - FIRSTUP_PARENT/Staff text/html "EDU%20%20%20en"
"Wget/1.12 (linux-gnu)"

I had a look in the cache.log with debug_options 44,6 enabled.  None of the
messages reference the contents of the name= parameter in the cache_peer
lines; only hostnames and IP addresses are mentioned.  I suspect that the
peer selection algorithms have changed since Squid 3.1, whereby peers are
now selected based on hostname (or IP address) rather than the name defined
in the cache_peer line.  Is this correct?  If so, is there any other way to
achieve the functionality outlined above (hit different usernames on an
upstream peer based on which localport the request arrived on?)

Cheers
Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Will squid core dump with worker threads? Investigating squid crash, 3.5.23

2017-01-19 Thread squid

>>
>> assertion failed: MemBuf.cc:216: "0 <= tailSize && tailSize <= cSize"
>>
> 
> This is <http://bugs.squid-cache.org/show_bug.cgi?id=4606>. We have


Is there a workaround for this - something that I can put in the config
perhaps?  I'm getting the same issue a few times a day.  I suspect it's
mainly due to clients accessing Windows Updates, but difficult to tell.

I am automatically restarting squid, but the delays for other users
while all this is happening can generate a poor browsing experience.

Thanks
Mark




_______
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] [SOLVED] Re: TCP Outgoing Address ACL Problem

2016-11-13 Thread jarrett+squid-users
Thanks Garry and Amos!  My problem is solved.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP Outgoing Address ACL Problem

2016-11-11 Thread jarrett+squid-users
Can anyone point out what I'm doing wrong in my config?

Squid config:
https://bpaste.net/show/796dda70860d

I'm trying to use ACLs to direct incoming traffic on assigned ports to
assigned outgoing addresses.  But, squid uses the first IP address
assigned to the interface not listed in the config instead.

IP/Ethernet Interface Assignment:
https://bpaste.net/show/5cf068a4ce9a

Thanks!

P.S. Sorry for that last message.
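
For reference, the usual shape of this port-to-address mapping (a sketch
with placeholder addresses, since the actual config was behind the paste
links above) is:

http_port 3128 name=port3128
http_port 3129 name=port3129
acl via3128 myportname port3128
acl via3129 myportname port3129
tcp_outgoing_address 192.0.2.1 via3128
tcp_outgoing_address 192.0.2.2 via3129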


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Custom User Agent Per ACL

2016-10-27 Thread jarrett+squid-users
Is it possible to have a custom "request_header_replace User-Agent" assigned 
per mapped acl / listening port / tcp_outgoing_address?

Examples:

acl ipv4-1 myportname 3128 src xxx.xxx.xxx.xxx/24
  -> http_access allow ipv4-1
  -> request_header_replace User-Agent "Firefox x" ipv4-1
  -> tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-1

acl ipv4-2 myportname 3129 src xxx.xxx.xxx.xxx/24
  -> http_access allow ipv4-2
  -> request_header_replace User-Agent "Internet Explorer x" ipv4-2
  -> tcp_outgoing_address xxx.xxx.xxx.xxx ipv4-2

Thanks!

_______
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-05 Thread squid-users
Alex,

> However, there is a difference between my August tests and this thread.
> My tests were for a request parsing error response. Access denials do not
> reach the same http_reply_access checks! See "early return"
> statements in clientReplyContext::processReplyAccess(), including:
> 
> > /** Don't block our own responses or HTTP status messages */
> > if (http->logType.oldType == LOG_TCP_DENIED ||
> > http->logType.oldType == LOG_TCP_DENIED_REPLY ||
> > alwaysAllowResponse(reply->sline.status())) {
> > headers_sz = reply->hdr_sz;
> > processReplyAccessResult(ACCESS_ALLOWED);
> > return;
> > }
> 
> I am not sure whether avoiding http_reply_access in such cases is a
> bug/misfeature or the right behavior. As any exception, it certainly
> creates problems for those who want to [ab]use http_reply_access as a
> delay hook. FWIW, Squid had this exception since 2007:

Thanks, makes sense.  It would be great if there was a way to slow down 407 
responses; at the moment the only workaround I can think of is to write a 
log-watching script to maintain a list of offending IP/domain pairs, then write 
a helper to use that data to introduce delay when the request is first received 
(via http_access and the !all trick).  If anyone has a better option, I'm all 
ears.
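
A sketch of that http_access variant, reusing the helper path from earlier
in this thread (the helper just sleeps before answering, and the trailing
!all keeps the rule from ever actually denying anything):

external_acl_type delay ttl=0 negative_ttl=0 cache=0 %SRC /tmp/delay.pl
acl delay external delay
http_access deny delay !all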

Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
> > I set this up as you suggested, then triggered a 407 response from the
> cache.  It seems that way; I couldn't see aclMatchHTTPStatus or http-
> response-407 in the log:
> >
> 
> Strange. I was sure Alex did some tests recently and proved that even
> internally generated responses get http_reply_access applied to them.
> Yet no sign of that in your log.
> 
> Is this a very old Squid version?

It's a recent Squid version - 3.5.20 on CentOS 6, built from the SRPM kindly 
provided by Eliezer.

> Or are the "checking http_reply_access" lines just later in the log than
> your snippet covered?

There was nothing more in the log previously posted at the point the 407 
response was returned to the client.

That log did have a lot of other stuff in it though.  Using a much simpler 
squid.conf (attached), I tested for differences in authenticated vs 
unauthenticated requests, when "http_reply_access deny all" is in place.  When 
credentials are supplied, a http/403 (forbidden) response is provided, as you 
would expect.  But when credentials are not supplied, a http/407 response is 
provided.  The divergence seems to start around line 31 in cache_noauth.log:

Checklist.cc(63) markFinished: 0x331e4a8 answer AUTH_REQUIRED for 
AuthenticateAcl exception

Perhaps when answer=AUTH_REQUIRED (line 35), http_reply_access is not
checked?  Another difference is that Acl.cc(158) reports "async" when an
authenticated request is in place, but not otherwise.  If someone could give
me some pointers on where to look in the source, I can start digging to see
if I can find out more.

Luke



cache_auth.log
Description: Binary data


cache_noauth.log
Description: Binary data


squid.conf
Description: Binary data
_______
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
id1| 28,3| Acl.cc(158) matches: checked: snmp_access#1 
= 1
2016/10/04 22:37:17.698 kid1| 28,3| Acl.cc(158) matches: checked: snmp_access = 
1
2016/10/04 22:37:17.698 kid1| 28,3| Checklist.cc(63) markFinished: 
0x7ffcaaa6a540 answer ALLOWED for match
2016/10/04 22:37:17.698 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7ffcaaa6a540
2016/10/04 22:37:17.698 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a540
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(178) lookup: id=0x1e34884 query 
ARP table
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(222) lookup: id=0x1e34884 query 
ARP on each interface (160 found)
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(228) lookup: id=0x1e34884 found 
interface lo
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(228) lookup: id=0x1e34884 found 
interface eth0
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(237) lookup: id=0x1e34884 looking 
up ARP address for 10.159.192.19 on eth0
2016/10/04 22:37:18.149 kid1| 28,4| Eui48.cc(280) lookup: id=0x1e34884 got 
address 00:15:5d:c0:11:3f on eth0
2016/10/04 22:37:18.150 kid1| 28,4| FilledChecklist.cc(66) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7ffcaaa6a390
2016/10/04 22:37:18.150 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x7ffcaaa6a390
2016/10/04 22:37:18.150 kid1| 28,3| Checklist.cc(70) preCheck: 0x22e7f98 
checking slow rules
2016/10/04 22:37:18.150 kid1| 28,5| Acl.cc(138) matches: checking http_access
2016/10/04 22:37:18.150 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:18.150 kid1| 28,5| Acl.cc(138) matches: checking http_access#1
2016/10/04 22:37:18.150 kid1| 28,5| Acl.cc(138) matches: checking to_self
2016/10/04 22:37:18.150 kid1| 28,3| DestinationIp.cc(70) match: aclMatchAcl: 
Can't yet compare 'to_self' ACL for 'www.theage.com.au'
2016/10/04 22:37:18.150 kid1| 28,3| Acl.cc(158) matches: checked: to_self = -1 
async
2016/10/04 22:37:18.150 kid1| 28,3| Acl.cc(158) matches: checked: http_access#1 
= -1 async
2016/10/04 22:37:18.150 kid1| 28,3| Acl.cc(158) matches: checked: http_access = 
-1 async
2016/10/04 22:37:18.160 kid1| 28,5| InnerNode.cc(94) resumeMatchingAt: checking 
http_access at 0
2016/10/04 22:37:18.160 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:18.160 kid1| 28,5| InnerNode.cc(94) resumeMatchingAt: checking 
http_access#1 at 0
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking to_self
2016/10/04 22:37:18.160 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'150.101.161.17' NOT found
2016/10/04 22:37:18.160 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'150.101.161.26' NOT found
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: to_self = 0
2016/10/04 22:37:18.160 kid1| 28,3| InnerNode.cc(97) resumeMatchingAt: checked: 
http_access#1 = 0
2016/10/04 22:37:18.160 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking http_access#2
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:18.160 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'10.159.192.19:36466' NOT found
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 0
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: http_access#2 
= 0
2016/10/04 22:37:18.160 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking http_access#3
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking localhost
2016/10/04 22:37:18.160 kid1| 28,3| Ip.cc(539) match: aclIpMatchIp: 
'10.159.192.19:36466' NOT found
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: localhost = 0
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: http_access#3 
= 0
2016/10/04 22:37:18.160 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'DENIED/0' is not banned
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking http_access#4
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking manager
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(51) match: 
aclRegexData::match: checking 'http://www.theage.com.au/'
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(62) match: 
aclRegexData::match: looking for '(^cache_object://)'
2016/10/04 22:37:18.160 kid1| 28,3| RegexData.cc(62) match: 
aclRegexData::match: looking for '(^https?://[^/]+/squid-internal-mgr/)'
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: manager = 0
2016/10/04 22:37:18.160 kid1| 28,3| Acl.cc(158) matches: checked: http_access#4 
= 0
2016/10/04 22:37:18.160 kid1| 28,5| Checklist.cc(400) bannedAction: Action 
'ALLOWED/0' is not banned
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checking http_access#5
2016/10/04 22:37:18.160 kid1| 28,5| Acl.cc(138) matches: checki

Re: [squid-users] Introducing delay to HTTP 407 responses

2016-10-04 Thread squid-users
Eliezer,

Thankyou for your reply, I tried the following:

> Hey Luke,
> 
> Try to use the next line instead:
> external_acl_type delay ttl=1 negative_ttl=0 cache=0 %SRC %SRCPORT %URI 
> /tmp/delay.pl
> 
> And see what happens.

But it's not introducing a delay into the response.  Running strace across the 
pid of each child helper doesn't show any activity across those processes 
either.

I also tried the approach suggested by Amos:

> The outcome of that was a 'ext_delayer_acl helper in Squid-3.5
> 
> <http://www.squid-cache.org/Versions/v3/3.5/manuals/ext_delayer_acl.html>
> 
> It works slightly differently to what was being discussed in the thread.
> see the man page for details on how to configure it.

Using the following config:

external_acl_type delay concurrency=10 children-max=2 children-startup=1 
children-idle=1 cache=10 %URI /tmp/ext_delayer_acl -w 1000 -d
acl http-response-407 http_status 407
acl delay-1sec external delay
http_reply_access deny http-response-407 delay-1sec !all

Debug information from ext_delayer_acl is written to the cache log; I see the 
processes start up but they are not hit with any requests by Squid.  I also 
added %SRC %SRCPORT into the configuration, but that didn't seem to help either.

Would the developers be open to adding a configuration-based throttle to 
authentication responses, avoiding the need for an external helper?  Or 
alternatively, is there another way to slow down auth responses?  It's 
comprising about 90% of the log volume (450,000 requests/hr) in badly affected 
sites at the moment.

Luke


_______
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about the url rewrite before proxy out

2016-09-22 Thread squid-users
> 
> 
> > If you input http://www.yahoo.com/page.html, this will be transformed
> > to http://192.168.1.1/www.google.com/page.html.
> 
> I got the impression that the OP wanted the rewrite to work the other way
> around.

My apologies, that does seem to be the case.

> Squid sees http://192.168.1.1/www.google.com and  re-writes it to
> http://www.google.com
> 
> > The helper just needs to print that out prepended by "OK rewrite-
> url=xxx".
> > More info at
> > http://www.squid-cache.org/Doc/config/url_rewrite_program/
> >
> > Of course, you will need something listening on 192.168.1.1 (Apache,
> > nginx,
> > whatever) that can deal with those rewritten requests.
> 
> I got the impression that the OP wanted Squid to be listening on this
> address, doing the rewrites, and then fetching from standard origin
> servers.

Then not only the request needs to be rewritten, but probably the page content 
too.  Eg, assets in the page will all be pointing at 
http://www.yahoo.com/image.png and also need transforming to 
http://192.168.1.1/www.yahoo.com/image.png.

If that is the case, then Squid doesn't seem like the right tool for the job.  
I think CGIproxy can do this (https://www.jmarshall.com/tools/cgiproxy/) or 
perhaps Apache's mod_proxy 
(https://httpd.apache.org/docs/current/mod/mod_proxy.html) would work.

Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Question about the url rewrite before proxy out

2016-09-21 Thread squid-users
> I am looking for a proxy which can "bounce" the request, which is not a
> classic proxy.
>
> I want it to work in this way.
> 
> e.g. a proxy is running at 192.168.1.1,
> and when I want to open http://www.yahoo.com, I just need to call
> http://192.168.1.1/www.yahoo.com
> The proxy can pick up the host "http://www.yahoo.com" from the URI and
> retrieve the info for me,
> so it needs to get the new $host from $location, and remove the $host from
> the $location before proxying it.
> Is this doable via squid?

Yes it is doable (but unusual).  First you need to tell Squid which requests 
should be rewritten, then send them to a rewrite program to be transformed.  
Identify the domains like this:

acl rewrite-domains dstdomain .yahoo.com .google.com (etc.)

Then set up a URL rewriting program, and only allow it to process requests 
matching the rewrite-domains ACL, like this:

url_rewrite_program /tmp/rewrite-program.pl
url_rewrite_extras "%>ru"
url_rewrite_access allow rewrite-domains
url_rewrite_access deny all

The program itself can be anything.  A very simple example in Perl might look 
like this:

#!/usr/bin/perl
use strict;
$| = 1;  # unbuffered output; the helper protocol needs answers immediately

# Read one request per line from Squid, rewrite the URL, answer with OK
while (my $thisline = <>) {
    my @parts = split(/\s+/, $thisline);
    my $url = $parts[0];  # the first field is the URL
    # Move the original host into the path under 192.168.1.1
    $url =~ s/http:\/\/(.*)/http:\/\/192.168.1.1\/$1/g;
    print "OK rewrite-url=\"$url\"\n";
}

If you input http://www.yahoo.com/page.html, this will be transformed to 
http://192.168.1.1/www.yahoo.com/page.html.  The helper just needs to print 
that out prepended by "OK rewrite-url=xxx".  More info at 
http://www.squid-cache.org/Doc/config/url_rewrite_program/

Of course, you will need something listening on 192.168.1.1 (Apache, nginx, 
whatever) that can deal with those rewritten requests.  That is an unusual way 
of getting requests to 192.168.1.1 though, because you are effectively putting 
the hostname component into the URL then sending it to a web service and 
expecting it to deal with that.

Another note.  If you have a cache_peer defined, you might need some config to 
force rewritten requests to be sent to 192.168.1.1 and not your cache peer.  In 
that case this should do the trick:

acl rewrite-host dst 192.168.1.1
always_direct allow rewrite-host

HtH.

Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Introducing delay to HTTP 407 responses

2016-09-13 Thread squid-users
Hi Squid users,

Seeking advice on how to slow down 407 responses to broken Apple & MS
clients, which seem to retry at very short intervals and quickly fill the
access.log with garbage.  The problem is very similar to this:

http://www.squid-cache.org/mail-archive/squid-users/201404/0326.html

However the config below doesn't seem to slow down the response:

acl delaydomains dstdomain .live.net .apple.com
acl authresponse http_status 407
external_acl_type delay ttl=0 negative_ttl=0 cache=0 %SRC /tmp/delay.pl
acl delay external delay
http_reply_access deny delaydomains authresponse delay
http_reply_access allow all

The helper is never asked by Squid to process the request.  Just wondering
if http_status ACLs can be used in http_reply_access?

My other thinking, if this isn't possible, was to mark 407 responses with
clientside_tos so they could be delayed/throttled with tc or iptables.  Ie,

acl authresponse http_status 407
clientside_tos 0x20 authresponse

However, auth response packets don't get the desired tos markings.  Instead
the following message appears in cache.log:

2016/09/13 11:35:43 kid1| WARNING: authresponse ACL is used in context
without an HTTP response. Assuming mismatch.

After reviewing
http://lists.squid-cache.org/pipermail/squid-users/2016-May/010630.html it
seems like this has cropped up before.  The suggestion in that thread was to
exclude 407 responses from the access log.  Fortunately this works.  But I'm
wondering if there is a way to introduce delay into the 407 response itself?
Partly to minimise load associated with serving broken clients, and also to
maintain logging of actual intrusion attempts.  Any suggestions?

Luke


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Subject: Bandwidth Ceiling

2016-06-29 Thread squid-cache
Thanks for the tip Amos.  I tried compiling my own version with BUFSIZ set
to 32KB, but it didn't seem to help.  The TCP buffer size on my system is
212992 bytes; I tried 64KB too, but that also didn't improve my situation.
Aside from adjusting the read_ahead_gap, is there anything else I'm missing?
I'm sort of out of my league here, so I may just quit and wait for v4. ;)

Thanks,
Jamie

>Sadly, that is kind of expected at present for any single client
>connection. We have some evidence that Squid is artificially lowering
>packet sizes in a few annoying ways. That used to make sense on slower
>networks, but not nowadays.
>
>Nathan Hoad has been putting a lot of work into this recently to figure
>out what can be done and has a performance fix in Squid-4. That is not
>going to make it into 3.5 because it relies on some major restructuring
>done only in Squid-4 code.
>
>
>But, if you are okay with playing around in the code his initial patch
>submission shows the key value to change:
><http://lists.squid-cache.org/pipermail/squid-dev/2016-March/005518.html>
>which should be the same in Squid-3. The 64KB bump in that patch leads
>to some pain so don't just apply that. In the end we went with 16KB to
>avoid huge per-connection memory requirements. It should really be tuned
>to about 1/2 or 1/4 the TCP buffer size on your system.
>After bumping up that read_ahead_gap directive also needs to be bumped
>up to a minimum of whatever value you choose there.
>
>HTH
>Amos
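
(Following that advice, the squid.conf side of the change would just be, for
example with a 16KB buffer chosen in the code:)

read_ahead_gap 16 KB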


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Subject: Bandwidth Ceiling

2016-06-28 Thread squid-cache
My squid server has 1Gbps connectivity to the internet and it routinely gets 
600 Mbps up/down to speedtest.net.

When a client computer on the same network has a direct connection to the 
internet it, too, gets 600 Mbps up/down.

However, when that client computer connects through the squid server, it can't 
seem to do any better than 120 Mbps down, 60 Mbps up. 

I've tried things like disabling disk cache, increasing maximum_object_size*, 
etc. Nothing I change in the config seems to increase or decrease my clients' 
bandwidth.

Any tips for getting better bandwidth to clients in a proxy-only setup?

Thanks,
Jamie

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] can I turn a http transparent proxy into a https(self cert)?

2016-06-05 Thread squid . support
Hi,

Trying to use Squid 3.5 to filter against a whitelist on a wifi hotspot. Got
http support working without issue.

Tried lots of things to get https to work, but everything I try kills http:
all the http requests time out.

So I am starting to think that maybe

http_port 3128 transparent

is not compatible with ssl_bump.

Is that true?

Squid 3.5 built from source with

./configure --prefix=/usr \
--localstatedir=/var \
--libexecdir=${prefix}/lib/squid3 \
--datadir=${prefix}/share/squid3 \
--sysconfdir=/etc/squid3 \
--with-default-user=proxy \
--with-logdir=/var/log/squid3 \
--with-pidfile=/var/run/squid3.pid \
--enable-ssl \
--with-openssl \
--enable-ssl-crtd \
--with-open-ssl=/etc/ssl/openssl.cnf 
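
For what it's worth, a sketch of the usual Squid 3.5 shape for this setup
("transparent" is the old spelling of "intercept", and intercepted HTTPS
needs its own https_port; the cert path is a placeholder):

http_port 3128
http_port 3129 intercept
https_port 3130 intercept ssl-bump generate-host-certificates=on cert=/etc/squid3/myCA.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all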



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Slowly rising CPU load (eventually hits 100)

2016-04-18 Thread squid

> Thanks.  The current maximum_object_size_in_memory is 19 MB.
> 
>>
>> In summary, dealing with in-RAM objects significantly larger than 1MB is
>> expensive: the bigger the object, the longer Squid takes to scan its
>> nodes.
>>
>> Short term, try limiting the size of in-RAM objects using
>> maximum_object_size_in_memory first. If that solves the problem, then,
>> most likely, only cached objects are affected.
> 

This seems to have fixed (or rather worked around) the problem.  I've set
maximum_object_size_in_memory down to 1 MB, and I haven't had the problem in
more than a week.
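
For reference, the directive as now set:

maximum_object_size_in_memory 1 MB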




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Slowly rising CPU load (eventually hits 100)

2016-04-04 Thread squid
On 2016-03-31 16:21, sq...@peralex.com wrote:
> On 2016-03-31 16:07, Yuri Voinov wrote:
>>
>> Looks like permanently running clients, which have exhausted network
>> resources and are then initiating connection aborts.
>>
>> Try to add
>>
>> client_persistent_connections off

This option didn't fix the problem.  The CPU usage went wild again after
about a day.

I've changed the maximum_object_size_in_memory setting as suggested by
Alex, and I'll report back on that.

Mark


_______
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Slowly rising CPU load (eventually hits 100)

2016-03-31 Thread squid
On 2016-03-31 18:44, Alex Rousskov wrote:
> 
> My working theory is that the longer you let your Squid run, the bigger
> objects it might store in RAM, increasing the severity of the linear
> search delays mentioned below. A similar pattern may also be caused by
> larger objects becoming more popular during certain days of the week.
> 

Thanks.  The current maximum_object_size_in_memory is 19 MB.

> 
> In summary, dealing with in-RAM objects significantly larger than 1MB is
> expensive: the bigger the object, the longer Squid takes to scan its
> nodes.
> 
> Short term, try limiting the size of in-RAM objects using
> maximum_object_size_in_memory first. If that solves the problem, then,
> most likely, only cached objects are affected.

I'll try this next, after I've given Yuri's suggestion a while to yield
results (or not).

> Also, consider forcing a shared memory cache (even if you are not using
> SMP Squid): the shared memory code itself does not have the above linear
> search, while SMP Squid still uses local memory code and might hit the
> same linear search.

I'll look at that too.
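In squid.conf terms, forcing the shared cache is one directive; a sketch,
assuming a Squid version (3.2 or later) where the directive exists:

memory_cache_shared on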

> guess less. Please point to, or copy, this email exchange in your bug
> report.

I've filed bug http://bugs.squid-cache.org/show_bug.cgi?id=4477 and
included a reference to this thread at the end of the bug report.

Thanks for the assistance.
Mark






Re: [squid-users] Slowly rising CPU load (eventually hits 100)

2016-03-31 Thread squid
On 2016-03-31 16:07, Yuri Voinov wrote:
> 
> Looks like permanently running clients, which exhaust network
> resources and then initiate connection aborts.
> 
> Try to add
> 
> client_persistent_connections off
> 
> to squid.conf.
> 
> Then observe.

Thanks.

I added it and ran squid -k reconfigure, which didn't change the CPU
usage.  I then restarted squid, which reset the CPU usage, but I'll have
to wait a week or so to tell whether it's been successful.

I'll report back.



[squid-users] Slowly rising CPU load (eventually hits 100)

2016-03-31 Thread squid
Hi,

I'm running:

Squid Cache: Version 3.5.15  (including patches up to revision 14000)

on FreeBSD 9.3-STABLE (recently updated)

Every week or so I run into a problem where squid's CPU usage starts
growing slowly, reaching 100% over the course of a day or so.  When
running normally its CPU usage is usually less than 5%.  Restarting
squid fixes the problem.

Memory usage is about 2 GBytes (on a system with 8 GBytes of RAM).

The number of socket connections (from clients and to servers) is about
the same (roughly 500) when I have the problem as when I don't have the
problem.

Attaching GDB and getting a stack trace while squid is stuck at 100%
generally gives me this:

#0  0x005deef4 in mem_node::end ()
#1  0x005df076 in mem_node::dataRange ()
#2  0x00625d34 in mem_hdr::NodeCompare ()
#3  0x00628ad1 in SplayNode<mem_node*>::splay<mem_node*> ()
#4  0x00628b85 in Splay<mem_node*>::find<mem_node*> ()
#5  0x00625f8e in mem_hdr::getBlockContainingLocation ()
#6  0x00625ff8 in mem_hdr::hasContigousContentRange ()
#7  0x005e00fe in MemObject::isContiguous ()
#8  0x00649d05 in StoreEntry::mayStartSwapOut ()
#9  0x00648b96 in StoreEntry::swapOut ()
#10 0x00639e87 in StoreEntry::invokeHandlers ()
#11 0x00633e09 in StoreEntry::write ()
#12 0x0079caa1 in Client::storeReplyBody ()
#13 0x0059c0bf in HttpStateData::writeReplyBody ()
#14 0x005a18fd in HttpStateData::processReplyBody ()
#15 0x005a41ce in HttpStateData::processReply ()
#16 0x005a4408 in HttpStateData::readReply ()
#17 0x005ab6df in JobDialer::dial ()
#18 0x006fd81a in AsyncCall::make ()
#19 0x00701bc6 in AsyncCallQueue::fireNext ()
#20 0x00701ecf in AsyncCallQueue::fire ()
#21 0x00566621 in EventLoop::dispatchCalls ()
#22 0x00566930 in EventLoop::runOnce ()
#23 0x00566b18 in EventLoop::run ()
#24 0x005dbb73 in SquidMain ()
#25 0x005dc0fd in SquidMainSafe ()
#26 0x004cf401 in _start ()
#27 0x000800ae4000 in ?? ()
#28 0x in ?? ()


The cache.log file gets a few lines looking like this:

2016/03/31 11:51:04 kid1| local=192.168.1.15:3128
remote=192.168.1.164:49540 FD 339 flags=1: read/write failure: (60)
Operation timed out

and some others looking like this:

2016/03/31 14:40:05 kid1|  FD 16, 192.168.1.15 [Stopped, reason:Listener
socket closed job3132772]: (53) Software caused connection abort


Does anybody have any suggestions on how to fix/improve this?  Currently
I have cron restarting squid every morning.
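For anyone copying that stopgap, the cron entry might look like this (a
sketch; the rc script path is an assumption for a FreeBSD ports install):

# /etc/crontab: restart squid at 05:00 every day
0  5  *  *  *  root  /usr/local/etc/rc.d/squid restart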

Should I file a bug?

Thanks
Mark



Re: [squid-users] Core dump / xassert on FwdState::unregister (3.5.15)

2016-03-15 Thread squid
On 2016-03-15 09:40, sq...@peralex.com wrote:
> On 2016-03-15 09:05, Amos Jeffries wrote:
>> On 15/03/2016 7:34 p.m., squid wrote:
>>
>> This is bug 4447. Please update to a build from the 3.5 snapshot.
>>
> 
> Thanks.  I'll give that a try.
> 

Looks like it's working correctly now - been running for 4 hours without
any problems.  Thanks for the assistance.




[squid-users] Landing- Disclaimer-Page for an Exchange 2013 Reverse Proxy

2016-03-15 Thread Squid Users
Hi,

I've installed a Squid reverse proxy for an MS-Exchange test installation to 
reach OWA from the outside.

My current environment is as follows:

Squid Version 3.4.8 with ssl on a Debian Jessie (self compiled)
The Squid and the exchange system are in the internal network with private 
ip-addresses (same network segment)
The access to the squid system is realized by port forwarding (tcp/80, tcp/443, 
tcp/22) from a public ip-address
Used certificate is from letsencrypt (san-certificate, used by both servers)

Current Status:

Pre-Login works
Outlook access to OWA works (other protocols not tested yet)
https://portal.xxx.de doesn't work (Forwarding denied)
(which is quite normal because there is no acl for it)

How can I achieve the following:

1) Access to https://portal.xxx.de ends up on a kind of "landing page" with 
instructions on how to use the Exchange test installation
(the web server can be the IIS on the Exchange system, an Apache on the squid 
system, or a third system)

2) Is there a way to integrate the initial password dialog in that web page? 

Kind regards
Bob


Squid configuration:

# Hostname
visible_hostname portal.xxx.de

# External access
https_port 192.168.xxx.21:443 accel 
cert=/root/letsencrypt/certs/xxx.de/cert.pem 
key=/root/letsencrypt/certs/xxx.de/privkey.pem 
cafile=/root/letsencrypt/certs/xxx.de/fullchain.pem defaultsite=portal.xxx.de

# Internal server
cache_peer 192.168.xxx.20 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER sslcert=/root/letsencrypt/certs/xxx.de/cert.pem 
sslkey=/root/letsencrypt/certs/xxx.de/privkey.pem name=ExchangeServer

# Access to the following addresses is allowed
acl EXCH url_regex -i ^https://portal.xxx.de$
acl EXCH url_regex -i ^https://portal.xxx.de/owa.*$
acl EXCH url_regex -i ^https://portal.xxx.de/Microsoft-Server-ActiveSync.*$
acl EXCH url_regex -i ^https://portal.xxx.de/ews.*$
acl EXCH url_regex -i ^https://portal.xxx.de/autodiscover.*$
acl EXCH url_regex -i ^https://portal.xxx.de/rpc/.*$

# Auth
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

# Rules
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
cache_peer_access ExchangeServer allow EXCH
never_direct allow EXCH
http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all

# Logging
access_log /var/log/squid3/access.log squid
debug_options ALL,9

cache_mgr x...@xxx.de
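One way to get the landing page of question 1 (a sketch only; the landing
URL and the rule ordering relative to the auth rules are assumptions to
adapt): let requests for the bare portal root hit a deny rule whose
deny_info redirects to a page served by IIS, Apache, or a third system.

# redirect requests for the portal root to a landing page (Squid 3.2+)
acl portal_root url_regex -i ^https://portal\.xxx\.de/?$
deny_info 302:https://portal.xxx.de/landing/ portal_root
http_access deny portal_root

For question 2: the initial password dialog is the browser's HTTP auth
challenge, so it cannot be embedded in a plain web page; the landing page
can only link to a URL that triggers it.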





[squid-users] Core dump / xassert on FwdState::unregister (3.5.15)

2016-03-15 Thread squid

I'm running FreeBSD 9.3-STABLE and Squid 3.5.15 and I'm getting regular
core dumps with the following stack.  Note that I have disabled caching.
 Any suggestions?  I've logged a bug (4467):

#0  0x000801b8c96c in thr_kill () from /lib/libc.so.7
#1  0x000801c55fcb in abort () from /lib/libc.so.7
#2  0x005d2545 in xassert (msg=0x8b8816 "serverConnection() ==
conn", file=0x8b8513 "FwdState.cc", line=447) at debug.cc:544
#3  0x005fb184 in FwdState::unregister (this=0x80be1e258,
conn=@0x80be164c8) at FwdState.cc:447
#4  0x0061fcec in HttpStateData::processReplyBody
(this=0x80be16418) at http.cc:1447
#5  0x00627e8c in HttpStateData::processReply (this=0x80be16418)
at http.cc:1241
#6  0x006284c8 in HttpStateData::readReply (this=0x80be16418,
io=@0x80bff3008) at http.cc:1213
#7  0x006291da in CommCbMemFunT<HttpStateData,
CommIoCbParams>::doDial (this=0x80bff2ff0) at CommCalls.h:205
#8  0x00629b9c in JobDialer::dial
(this=0x80bff2ff0, call=@0x80bff2fc0) at AsyncJobCalls.h:174
#9  0x00629ddd in AsyncCallT<CommCbMemFunT<HttpStateData,
CommIoCbParams> >::fire (this=0x80bff2fc0) at AsyncCall.h:145
#10 0x00792345 in AsyncCall::make (this=0x80bff2fc0) at
AsyncCall.cc:40
#11 0x0079704b in AsyncCallQueue::fireNext (this=0x809fffa70) at
AsyncCallQueue.cc:56
#12 0x0079722f in AsyncCallQueue::fire (this=0x809fffa70) at
AsyncCallQueue.cc:42
#13 0x005e8509 in EventLoop::dispatchCalls (this=0x7fffe9c0)
at EventLoop.cc:143
#14 0x005e889a in EventLoop::runOnce (this=0x7fffe9c0) at
EventLoop.cc:120
#15 0x005e8a59 in EventLoop::run (this=0x7fffe9c0) at
EventLoop.cc:82
#16 0x00660af4 in SquidMain (argc=3, argv=0x7fffebb0) at
main.cc:1539
#17 0x00660c2c in SquidMainSafe (argc=3, argv=0x7fffebb0) at
main.cc:1263
#18 0x00660ebb in main (argc=3, argv=0x7fffebb0) at main.cc:1256



[squid-users] Debugging http_access and http_reply_access

2016-02-02 Thread squid-users
Hi Squid users,

I'm seeking some guidance regarding the best way to debug the http_access
and http_reply_access configuration statements on a moderately busy Squid
3.5 cache.  In cases where a number (say, 5 or more) of http_access lines
are present, the goal is to find which configuration statement (if any) was
found to match for a given request, then write this information to a log for
further processing.  Example:

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny out_working_hours
http_access allow working_hours whitelist
http_access allow network
http_access deny all

Let's assume each of those lines has an index (0 through 8 in the
example above).  Is there any way to find which one matched?

Explored so far: using debug_options to look at sections 33 (Client Side
Routines), 88 (Client-side Reply Routines) and 85 (Client Side Request
Routines) returns useful information, but it's hard to use it to identify
(programmatically) which log entries relate to which request on a busy
cache.  Activating debug logging on a busy cache also doesn't seem like the
right approach.
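One further section that may be worth a try, with the same busy-cache
caveat (a suggestion, not tested here): section 28 covers the ACL matching
code itself, so

debug_options ALL,1 28,3

logs each ACL check as it is evaluated, though the entries still have to be
correlated with requests by hand.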

Also explored: creating a pair of logformat and access_log statements
corresponding to each http_access and http_reply_access statement, with the
same ACL conditions as their policy counterparts.  The idea being to create
a log entry for each http_access and http_reply_access statement, to which
Squid will write matching requests.  This approach only partially achieves
the goal, because although it collects matching requests, it doesn't take
into account the sequential nature of policy rule processing.  E.g., in the
example above, even though a request to manager may be denied by rule 3, it
might still have matched the conditions associated with rule 7, and thus be
written to that log, even though it never hit that policy rule.

Are there any other debug sections which would be more appropriate to the
task?  If not, is there another more suitable approach?

Luke




[squid-users] V3.5.12 SSL Bumping Issue with one Website

2016-01-13 Thread squid


Hello everyone,

I am using Squid 3.5.12 with Kerberos Authentication only and ClamAV  
on Debian Jessie.


My proxy is working very nicely, but now I've found an issue with just  
one SSL website.


It would be nice to know if others can reproduce this Issue.

Target website is: https://www.shop-fonic-mobile.de/

While trying to access this website, a blank page is displayed without  
any source code in it.


Cache Log says on each attempt:
Squid 2016/01/13 17:43:43 kid1| Error negotiating SSL on FD 22:  
error:14090086:SSL routines:ssl3_get_server_certificate:certificate  
verify failed (1/-1/0)


Access Log for each attempt:
1452703599.547  0 10.0.0.4 TCP_DENIED/407 4189 CONNECT  
www.shop-fonic-mobile.de:443 - HIER_NONE/- text/html
1452703599.832272 10.0.0.4 TAG_NONE/200 0 CONNECT  
www.shop-fonic-mobile.de:443 MYUSER HIER_NONE/- -
1452703599.888 52 10.0.0.4 TCP_MISS/503 402 GET  
https://www.shop-fonic-mobile.de/ MYUSER HIER_DIRECT/85.158.6.195  
text/html


SSL Bumping generated a valid certificate for this site using my internal CA.

I can reproduce the error only on this website; everything else is
working nicely, and if Squid can't validate an external SSL certificate
it displays an error, of course.


For now, I have fixed it by adding the site to my SSL_TrustedSites ACL.


This is my Bump config:

http_port 8080 ssl-bump generate-host-certificates=on  
dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/myca.pem

ssl_bump splice localhost
ssl_bump bump all
sslproxy_cert_error allow SSL_TrustedSites
sslproxy_cert_error deny all


Expected behavior of Squid: if Squid can't validate an SSL certificate
then an error should be displayed, as happens on all other sites with
invalid certificates.
But it seems that Squid's first check recognizes the certificate as
valid (otherwise it would display an error), Squid generates a valid
cert for the client, and then Squid seems to be unable to validate the
certificate again at that point.


The target website's SSL chain is as follows:
CA               <- part of the ca-certificates bundle
-- Intermediate  <- not part of the ca-certificates bundle
---- website

So I believe that on the initial request squid can somehow validate the
full chain, but as soon as the client receives the generated cert it
can't look up the whole chain, because it tries to validate against the
intermediate CA only, loses the path to the root CA, and fails, of
course. Again, only the root CA is known by the system (ca-certificates).
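If a missing intermediate is indeed the culprit, one candidate fix (a
sketch; verify the directive exists in your 3.5 build, and the bundle path
is an assumption) is to hand Squid the foreign intermediate certificate
directly, so validation no longer depends on the server sending its full
chain:

# PEM file with the intermediate CA(s) that origin servers omit
sslproxy_foreign_intermediate_certs /etc/squid/foreign_intermediates.pem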


Please let me know if someone can reproduce this Issue.

BTW:
Found another issue in Squid 3.5.12 regarding error messages: the
"mailto:" links which generate an error mail do not work anymore. Maybe
this is related to Kerberos authentication, which may make the
URL-encoded string longer than before. I've found that the error is
somewhere in the last part of the URL-encoded link, but I couldn't
pinpoint it.


Best regards,

Enrico








Re: [squid-users] POST upload splits tcp stream in many small 39byte sized pakets

2015-10-21 Thread Squid admin

 Dear Alex,

unfortunately not really fixed.

The upload speed using squid 4.0.1 with this patch has improved
significantly, but it is still far from squid 3.4.x performance.

The test client can reach a maximum upload speed of 115 Mbit/s if the
Apache server is directly reachable.
With a squid 3.4.x proxy in between, the speed is also 115 Mbit/s, but
only 16 Mbit/s when using squid 4.0.1.

TCP segmentation offload (TSO) has been turned off for this dump.
(Note: with TSO turned off to see the real packet sizes, the measured
speeds are nearly the same.)
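For anyone reproducing this, TSO can typically be toggled like so (the
interface name is an assumption):

# disable TCP segmentation offload and generic segmentation offload
ethtool -K eth0 tso off gso off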

11:28:24.917866 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [S], seq
3288613551, win 29200, options [mss 1460,sackOK,TS val 104477831 ecr
0,nop,wscale 7], length 0
11:28:24.918225 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [S.], seq
2608168273, ack 3288613552, win 14480, options [mss 1460,sackOK,TS val
1398719113 ecr 104477831,nop,wscale 7], length 0
11:28:24.918256 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], ack 1, win
229, options [nop,nop,TS val 104477831 ecr 1398719113], length 0
11:28:24.922831 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [P.], seq 1:583,
ack 1, win 229, options [nop,nop,TS val 104477832 ecr 1398719113], length
582
11:28:24.923118 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 583, win
123, options [nop,nop,TS val 1398719114 ecr 104477832], length 0
11:28:24.924689 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
583:2031, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924694 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
2031:3479, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924699 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
3479:4927, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924701 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
4927:6375, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924703 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
6375:7823, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924719 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
7823:9271, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924720 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
9271:10719, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924722 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
10719:12167, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924724 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
12167:13615, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924726 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [P.], seq
13615:15063, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719114], length 1448
11:28:24.924930 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 7823,
win 236, options [nop,nop,TS val 1398719115 ecr 104477833], length 0
11:28:24.924949 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
15063:16511, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719115], length 1448
11:28:24.924955 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [P.], seq
16511:17477, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719115], length 966
11:28:24.924971 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 15063,
win 275, options [nop,nop,TS val 1398719115 ecr 104477833], length 0
11:28:24.925125 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 17477,
win 261, options [nop,nop,TS val 1398719115 ecr 104477833], length 0
11:28:24.926496 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [P.], seq
17477:17516, ack 1, win 229, options [nop,nop,TS val 104477833 ecr
1398719115], length 39
11:28:24.926586 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 17516,
win 331, options [nop,nop,TS val 1398719115 ecr 104477833], length 0
11:28:24.928261 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
17516:18964, ack 1, win 229, options [nop,nop,TS val 104477834 ecr
1398719115], length 1448
11:28:24.928266 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
18964:20412, ack 1, win 229, options [nop,nop,TS val 104477834 ecr
1398719115], length 1448
11:28:24.928274 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [P.], seq
20412:21611, ack 1, win 229, options [nop,nop,TS val 104477834 ecr
1398719115], length 1199
11:28:24.928481 IP 10.1.1.19.81 > 10.1.1.210.49321: Flags [.], ack 21611,
win 321, options [nop,nop,TS val 1398719116 ecr 104477834], length 0
11:28:24.930037 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
21611:23059, ack 1, win 229, options [nop,nop,TS val 104477834 ecr
1398719116], length 1448
11:28:24.930041 IP 10.1.1.210.49321 > 10.1.1.19.81: Flags [.], seq
23059:24507, ack 1, win 229, options [nop,nop,TS val 104477834 ecr
1398719116], length 1448
11:28:24.930048 IP 10.1.1.210.49

Re: [squid-users] POST upload splits tcp stream in many small 39byte sized pakets

2015-10-21 Thread Squid admin

Dear Alex,

Using squid 3.5.10 with the patch, the upload speed problem seems to be fixed.
Now I get 112 Mbit/s upload speed out of a possible maximum of 115 Mbit/s.
Squid 4.0.1 still has a performance problem on unencrypted POST uploads ...

BR, Toni

(TSO off)

12:10:16.343559 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [S], seq  
1106586391, win 29200, options [mss 1460,sackOK,TS val 105105687 ecr  
0,nop,wscale 7], length 0
12:10:16.343928 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [S.], seq  
2709051093, ack 1106586392, win 14480, options [mss 1460,sackOK,TS val  
1399346969 ecr 105105687,nop,wscale 7], length 0
12:10:16.343948 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], ack 1,  
win 229, options [nop,nop,TS val 105105687 ecr 1399346969], length 0
12:10:16.344092 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
1:585, ack 1, win 229, options [nop,nop,TS val 105105687 ecr  
1399346969], length 584
12:10:16.344174 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
585:2033, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 1448
12:10:16.344179 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
2033:3481, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 1448
12:10:16.344183 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
3481:4929, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 1448
12:10:16.344185 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
4929:6377, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 1448
12:10:16.344188 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
6377:7825, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 1448
12:10:16.344196 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
7825:8542, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 717
12:10:16.344217 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
8542:8581, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346969], length 39
12:10:16.344248 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
585, win 123, options [nop,nop,TS val 1399346970 ecr 105105687],  
length 0
12:10:16.344288 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
8581:10029, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1448
12:10:16.344293 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
10029:11477, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1448
12:10:16.344299 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
11477:12676, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1199
12:10:16.344382 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
4929, win 191, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.344410 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [.], seq  
12676:14124, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1448
12:10:16.344420 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
14124:14512, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 388
12:10:16.35 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
8542, win 247, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.344469 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
8581, win 247, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.344485 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
12676, win 266, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.344588 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
14512, win 285, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.344993 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
14512:14551, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 39
12:10:16.345032 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
14551:15960, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1409
12:10:16.345105 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
15960:15999, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 39
12:10:16.345113 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
14551, win 285, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.345129 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
15999:17408, ack 1, win 229, options [nop,nop,TS val 105105688 ecr  
1399346970], length 1409
12:10:16.345225 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
15960, win 274, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.345242 IP 10.1.1.19.81 > 10.1.1.210.49388: Flags [.], ack  
15999, win 274, options [nop,nop,TS val 1399346970 ecr 105105688],  
length 0
12:10:16.345287 IP 10.1.1.210.49388 > 10.1.1.19.81: Flags [P.], seq  
17408:17447, ack 1, 

[squid-users] POST upload splits tcp stream in many small 39byte sized pakets

2015-10-20 Thread Squid admin

Dear squid team,

first of all thanks for developing such a great product!

Unfortunately, on uploading a big test file (unencrypted POST) to an
Apache web server through a squid proxy (v3.5.10 or 4.0.1), the upstream
packets get sliced into thousands of small 39-byte packets.


Excerpt from cache.log:

2015/10/20 13:51:08.201 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 583:  
asynCall 0x244b670*1
2015/10/20 13:51:08.201 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 583.
2015/10/20 13:51:08.203 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 16422:  
asynCall 0x2447d40*1
2015/10/20 13:51:08.203 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz  
16422.
2015/10/20 13:51:08.204 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2448ec0*1
2015/10/20 13:51:08.205 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.
2015/10/20 13:51:08.206 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2464bb0*1
2015/10/20 13:51:08.207 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.
2015/10/20 13:51:08.208 kid1| 5,5| Write.cc(35) Write:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: sz 39:  
asynCall 0x2448ec0*1
2015/10/20 13:51:08.209 kid1| 5,5| Write.cc(66) HandleWrite:  
local=10.1.1.210:46731 remote=10.1.1.19:81 FD 17 flags=1: off 0, sz 39.

...



Attached you can find a tar file containing the squid configuration,
the test network topology, a network trace of traffic from client to
squid, a network trace from squid to the webserver, and a full debug
log from squid.

One incoming packet of ~1500 bytes gets sliced into more than 40 packets.
On the target web server the squid upstream traffic therefore looks
like a DoS attack.


The problem can be reproduced using squid 3.5.x and squid 4.0.x (32-bit
and 64-bit variants).

There were no such problems using squid 3.2.x.

Hopefully you can help me fix this problem, as it is a showstopper for
my upgrade to squid 3.5.x and higher.


Best regards,

Toni



squid_upload_splits_tcp_traffic_into_39byte_packets.tar.gz
Description: application/compressed-tar


Re: [squid-users] Reg - Squid can cache the chrome OS updates.

2015-06-26 Thread ViSolve Squid

Thanks for your valuable information Amos.

Regards,
Nithi

On Friday 26 June 2015 10:48 AM, Amos Jeffries wrote:

On 26/06/2015 4:36 p.m., Squid List wrote:

Hi,

Can Squid cache Microsoft Updates and iOS updates?

If it can, please help me with caching Chrome OS updates on the latest
squid version, installed on CentOS 6.6.

The short answer (FWIW):

Squid can (and does) cache any HTTP content which is cacheable, with
the exception of 206 responses and PUT request payloads.


The long answer:

Whether the cached content is used depends entirely on what the client
requests. It has the power to request that cached content be ignored.

Whether content is cacheable depends entirely on what the server
delivers. It has the power to place limits on cache times up to and
including stating an object is already stale (ie not usefully cached).

There are also some mechanisms which, when used, MAY make content
completely untrustworthy or uncacheable:
* connection-based authentication (NTLM, Negotiate)
* traffic interception (NAT, TPROXY, SSL-Bump)
* broken Vary headers (though this causes caching when it shouldn't)


I hope that explains why you won't get a clear, simple answer to your
question.

To help any further we will need information about:
- what Squid version you are using (if it's not the latest 3.5, please try
an upgrade),
- how it's configured (squid.conf without the comment lines, please),
- how it's being used (explicit forward, reverse, or interception proxy),
- what exactly the request messages you are trying to make into HITs are
(debug_options 11,2 produces a trace of those),
- what response messages the server is delivering on the MISS (the same
11,2 trace),
- what Squid is logging for them (access.log entries).

Amos



[squid-users] Reg - Squid can cache the chrome OS updates.

2015-06-25 Thread Squid List

Hi,

Can Squid cache Microsoft Updates and iOS updates?

If it can, please help me with caching Chrome OS updates on the latest
squid version, installed on CentOS 6.6.



Thanks & Regards,
Nithi



Re: [squid-users] squid authentication , ACL select from databse SQL , is that possible ?

2015-02-11 Thread squid-list

Hi,

You can authenticate a user and password against an SQL database using
the squid_db_auth helper.


But allowing websites per user by storing them in the DB is not
possible out of the box. You can use various ACLs to control site
access for individual users.


Instead of storing the websites in a column in the DB, you can store
them in a separate text file and control the users' site access that way.


Squid supports user-defined helpers. If it is necessary to verify sites
against the DB, you can create your own helper per your requirements and
use it. If you need any customization assistance, you can contact us
(sq...@visolve.com).
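The squid.conf side of such a custom helper is small; a sketch, where
site_check_helper is a hypothetical program you would write to look up the
username/domain pair in your database and print OK or ERR:

# %LOGIN passes the authenticated user, %DST the requested domain
external_acl_type db_sites ttl=300 %LOGIN %DST /usr/local/bin/site_check_helper
acl allowed_sites external db_sites
http_access allow allowed_sites
http_access deny all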


Regards,
Siva Prakash
ViSolve Squid Team

On 02/12/2015 06:25 AM, snakeeyes wrote:


Hi

I need to do many operations:

I need squid with SQL, with the following needs:

1. Squid authenticates user/pwd from an SQL database.

2. Then, if authentication was okay, I need to see that the username
logged in, go to the SQL database, and select from a column there the
websites corresponding to that user.


3. Then I will create an access list that permits only those website
domain names for that user, based on the info from SQL.


Is that possible with squid?

I'm using the latest stable squid version, 3.5.1, and hope it will be
okay.

cheers





Re: [squid-users] google always requesting captach through transparent proxy

2015-02-01 Thread squid

On 19/01/2015 3:39 a.m., sq...@proxyplayer.co.uk wrote:

Google is requesting a captcha every time I request a page, as it is
saying that my computer is doing something weird (via a proxy).


What *exactly* is it saying?




Your systems have detected unusual traffic from your computer network.
On some computers it presents a captcha; on others it just plain
refuses to show search results.


How can I get rid of this message from Google? I tried going direct
instead, but it makes no difference. It seems like Google is picking up
a lack of headers as an issue.


acl google dstdom_regex -i google
http_access deny google

but I suspect maybe you might not actually like the results of what
you are asking for.



What's the best directive to use to make sure that google doesn't go
through the proxy at all?
acl google dstdom_regex -i google
?




Re: [squid-users] google always requesting captach through transparent proxy

2015-01-19 Thread squid

Quoting Jason Haar jason_h...@trimble.com:


On 19/01/15 16:30, sq...@proxyplayer.co.uk wrote:

Your systems have detected unusual traffic from your computer network.
On some computers it presents a captcha; on others it just plain
refuses to show search results.



Umm, your email address is proxyplayer.co.uk. That isn't some kind of
anonymizing service is it? Google will force Captcha onto anyone using
their services that come from a known proxy network (such as Tor), so
could that be the reason?



I don't think that's the reason, as another server running 2.6 works
fine with Google.

It's only happened since the squid upgrade, so it must be something in the config.



Re: [squid-users] google always requesting captach through transparent proxy

2015-01-18 Thread squid

Quoting Amos Jeffries squ...@treenet.co.nz:



On 19/01/2015 3:39 a.m., sq...@proxyplayer.co.uk wrote:

Google is requesting a captcha every time I request a page, as it is
saying that my computer is doing something weird (via a proxy).


What *exactly* is it saying?



Your systems have detected unusual traffic from your computer network.
On some computers it presents a captcha; on others it just plain
refuses to show search results.


How can I get rid of this message from Google? I tried going direct
instead, but it makes no difference. It seems like Google is picking up
a lack of headers as an issue.


 acl google dstdom_regex -i google
 http_access deny google

but I suspect maybe you might not actually like the results of what
you are asking for.


What's the best directive to use to make sure that google doesn't go  
through the proxy at all?

acl google dstdom_regex -i google
?




[squid-users] google always requesting captach through transparent proxy

2015-01-18 Thread squid
Google is requesting a captcha every time I request a page, as it is
saying that my computer is doing something weird (via a proxy).


How can I get rid of this message from Google? I tried going direct
instead, but it makes no difference. It seems like Google is picking up
a lack of headers as an issue.


auth_param basic realm AAA proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl CONNECT method CONNECT
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access allow ncsa_users
http_access deny all
icp_access allow all
http_port 8080
http_port 80
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
half_closed_clients off
visible_hostname AAAProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 40/40
acl google1 dstdomain .google.com
acl google2 dstdomain .google.co.uk
always_direct allow google1 google2
via off
forwarded_for off
follow_x_forwarded_for deny all
cache_mem 512 MB



Re: [squid-users] Squid ACL, SSL-BUMP and authentication questions

2014-11-07 Thread squid
Hi Amos,

The configuration I posted last time still cannot accomplish the tasks.
So you mean the CONNECT ACL must be paired with a normal GET-request ACL
to be evaluated by squid?

Best,
Kelvin Yip

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Friday, November 07, 2014 4:29 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid ACL, SSL-BUMP and authentication questions


On 7/11/2014 8:35 p.m., squid-list wrote:
 Hi, *Access to google maps (https://www.google.com/maps) should not
 require any authentication.*
 
 I understand that all users should be able to access the google maps
 link without any authentication. For this you could add the site acl
 before the authentication part in squid.conf, so that users will not
 be prompted for authentication when they try to access the google
 maps site. But when they try to access any other site, authentication
 will be prompted.

This cannot be done.

You can authenticate the user setting up a CONNECT tunnel, OR you can
bypass authentication for them.

That authentication choice applies equally to all requests sent over the
tunnel, whether they are for maps or for any other Google service. And it
must be made *before* the tunnel is set up, thus *before* the URL inside
the tunnel becomes known.
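In config terms the choice boils down to one of these two lines (a sketch
reusing the ACL names from this thread; they cannot be mixed per-URL):

# either require auth for the tunnel to Google...
http_access allow CONNECT google my_auth
# ...or bypass auth for that tunnel entirely
# http_access allow CONNECT google

Every request inside the tunnel then inherits that single decision.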


Amos


[squid-users] Squid ACL, SSL-BUMP and authentication questions

2014-11-06 Thread squid
Hello all,

 

Our company policy only allows some machines to access certain SSL website
URLs (e.g. https://www.google.com/maps); they do not have access to
https://www.google.com/. Before we tried to implement authentication,
everything worked fine: we allow https access to
https://www.google.com/maps and CONNECT requests to www.google.com using
SSL bump. Now I want to preserve this config and let users authenticate to
access any website. Access to google maps (https://www.google.com/maps)
should not require any authentication. However, I have not succeeded in
figuring this out. I have tried different kinds of configuration; some
prompt for authentication, some do not allow the authenticated users to
access https://www.google.com. In the access log, after I authenticate and
try to access https://www.google.com, the authentication information is
not displayed. It seems squid does not use the authentication information
when matching this rule: http_access allow CONNECT google.

The CONNECT method succeeds. Then squid continues with no authentication
information to process the GET request, causing the authenticated user to
be denied access to https://www.google.com.

Can I make squid always use the authentication information once the user
has authenticated? Or do you have any suggestion for implementing this
policy?

Thanks.

 

Here is an extracted version of config which should state the related
configuration:

 

auth_param basic children 5

auth_param basic realm Welcome to Our Website!

auth_param basic program /usr/lib64/squid/basic_ncsa_auth
/etc/squid/squid_user

auth_param basic credentialsttl 2 hours

auth_param basic casesensitive off

 

acl my_auth proxy_auth REQUIRED

 

acl SSL_ports port 443

acl Safe_ports port 443 # https

acl CONNECT method CONNECT

 

acl GoogleMaps url_regex -i ^https://www.google.com/maps.*

acl test_net src 192.168.1.253/32

acl google dstdomain www.google.com

http_access deny CONNECT !SSL_ports

 

http_access allow   GoogleMaps

 

http_access allow CONNECT google

http_access deny CONNECT google my_auth

#http_access allow CONNECT test_net google

 

http_access allow my_auth all

 

http_access deny all

 



Re: [squid-users] https issues for google

2014-10-09 Thread Visolve Squid

Hi,

Check the acl rules below for your squid configuration file to block
particular domain URLs and also keywords.


# ACL block sites
acl blocksites dstdomain  .youtube.com

# ACL block keywords
acl blockkeywords url_regex -i .youtube.com

# Deny access via the blocked-keywords and blocked-sites ACLs
http_access deny blockkeywords
http_access deny blocksites

And check the access.log file in the squid.

Regards,
ViSolve Squid
On 10/10/2014 4:32 AM, glenn.gro...@bradnams.com.au wrote:

I was able to capture the log at the time this happened to me, I got the 
following in the access.log:

1412895309.389 84 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895311.770  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895311.852 77 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895311.855  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895311.937 77 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895311.941  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895312.053107 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895312.056  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895312.124 65 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895312.680  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895312.765 79 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895312.768  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895312.846 74 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895312.851  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html
1412895312.927 73 10.10.10.69 TCP_MISS/200 0 CONNECT www.youtube.com:443 
MYADUSER DIRECT/74.125.237.160 -
1412895312.931  0 10.10.10.69 TCP_DENIED/407 3983 CONNECT 
www.youtube.com:443 - NONE/- text/html

Not sure why it would be saying TCP_MISS; I assume the TCP_DENIED is
expected, as it happens after the TCP_MISS and has no authentication
information.


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of glenn.gro...@bradnams.com.au
Sent: Thursday, 9 October 2014 9:04 AM
To: elie...@ngtech.co.il; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] https issues for google

Hi Eliezer,

The DNS we are using is the ISP default for external, our internal domain DNS 
for internal. Nslookup works for all tests.

I would like to update to the latest stable, but I am concerned about
breaking the current setup. It took a little work to get it working
correctly, particularly the multiple authentication methods working with
our domain and trust.

I support what has been said - to check the logs. This will likely take
time, as I cannot reproduce this issue on demand - and I think users are
starting not to report the issue and are just living with it (or it is
not getting all the way to me, at least). I will have to get lucky at
some point on my computer and look into it then.

Could squid be getting mixed up when multiple https requests are to the
same address (e.g. https://google.com.au)?

Thanks,

Glenn

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Eliezer Croitoru
Sent: Wednesday, 8 October 2014 7:39 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] https issues for google


Hey Glenn,

Since you are not using intercept or tproxy, the basic place to look at is
the access.log.
You can see there if the proxy is trying, for example, to reach an IPv6
address (by mistake).

Also, to make sure there is an issue, you can use a specific exception,
like the cacheadmin acl you are using, to allow the cacheadmin access
without authentication for the basic test.

Also, you are indeed using the latest CentOS 6.5 squid, but since the
current stable version is 3.4.8 you should try to upgrade (to something
newer than 3.1) due to other issues.

The issue can be a network- or DNS-related issue which was not detected
until now.

Please first make sure that the access.log and cache.log files are clean
of errors or issues.

What dns servers are you using?

Eliezer

On 10/07/2014 06:51 AM, glenn.gro...@bradnams.com.au wrote:

Hi All,

We have a weird issue where https sites apparently don't respond (we get
the message "this page can't be displayed"). This mainly affects google
websites and, to a lesser extent, youtube. It has been reported it may
have affected some banking sites

Re: [squid-users] redirect all ports to squid

2014-10-04 Thread Visolve Squid

Hi,

Yes, we can redirect the ports to squid through our firewall rules.

Check the lines below to redirect the ports.
There are a couple of different methods.
1. First method:
First, we need the machine that squid will be running on. You do not
need iptables or any special kernel options on this machine, just squid.
You *will*, however, need the 'http_accel' options as described above.


You'll want to use the following set of commands on iptables-box:

 * iptables -t nat -A PREROUTING -i eth0 -s ! *squid-box* -p tcp
   --dport 80 -j DNAT --to *squid-box*:3128
 * iptables -t nat -A POSTROUTING -o eth0 -s *local-network* -d
   *squid-box* -j SNAT --to *iptables-box*
 * iptables -A FORWARD -s *local-network* -d *squid-box* -i eth0 -o
   eth0 -p tcp --dport 3128 -j ACCEPT

2. Second method:

 * iptables -t mangle -A PREROUTING -j ACCEPT -p tcp --dport 80 -s
   *squid-box*
 * iptables -t mangle -A PREROUTING -j MARK --set-mark 3 -p tcp --dport 80
 * ip rule add fwmark 3 table 2
 * ip route add default via *squid-box* dev eth1 table 2

(OR)

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT 
--to-port 3128


Regards,
Visolve Squid

On 9/30/2014 10:11 PM, hadi wrote:

Is it possible to redirect all ports to squid through iptables?
For example port 25 (smtp), 143 (imap), etc...
Can squid handle that in transparent mode?




Re: [squid-users] Squid-cache.org won't redirect to www.squid-cache.org?

2014-09-30 Thread Visolve Squid

Hi,

The http://www.squid-cache.org/ web site is working fine.

We accessed the site a minute ago.

Regards,
ViSolve Squid

On 9/30/2014 1:47 PM, Neddy, NH. Nam wrote:

Hi,

I accidentally accessed squid-cache.org and got a 403 Forbidden error,
and am wondering why it does NOT redirect to www.squid-cache.org
automatically?

I'm sorry if it's intentional.
~Ned


Re: [squid-users] Nudity Images Filter for Squid

2014-08-23 Thread Squid

Hi Fred,

Sounds good. We already have some proxy-server tools (like squid with
DansGuardian) to block nudity sites (including the images, contents,
videos, etc.).


Is there any specific reason for choosing this API
(nudityimagesfilterforsquid)?



Thanks,
Visolve Squid


On 8/23/2014 12:38 AM, Stakres wrote:

Hi Guys,

We just released a new free tool for Squid:  Nudity Images Filter for Squid
https://sourceforge.net/projects/nudityimagesfilterforsquid/   


You can specify the MaxResol and the MaxScore for the block.
All details are in the  readme.txt
http://sourceforge.net/projects/nudityimagesfilterforsquid/files/readme.txt/download   


Important:
- We provide the API for free, we can not warranty it'll work with your
Squid installation, that's why you must test on a separated Squid before
going to production.
- We do not compile statistics based on your requests and we do not share
data with Marketing teams or external companies, we also do not use your
data for our internal needs.
- If you are interested for a local implementation of our API in your
network, just drop us an email atsupp...@unveiltech.com

Your feadbacks are welcome...

Bye Fred








Re: [squid-users] Nudity Images Filter for Squid

2014-08-23 Thread Squid

Hi Fred,

Sure, we may need a real-time image filter for advanced image filtering.

It can also be done by configuring a banned regular-expression list in
DansGuardian.
It will count words in a site, and if a word exceeds its limit (3-4
occurrences of the same word, e.g. porn) then DansGuardian will
automatically block the site.


Also, we are not sure about very newly released domains.

Thanks & Regards,
Visolve Squid


On 8/23/2014 2:47 PM, Vdoctor wrote:

Hello Visolve,

Is your DansGuardian able to block all porn/sexy websites/images, including
the very new domains just released ?
How do you block those images from google/yahoo search in https ?

Here, WebFilter is not enough... you need a real-time images filter :o)

Bye Fred

-----Original Message-----
From: Squid [mailto:sq...@visolve.com]
Sent: Saturday, 23 August 2014 11:08
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Nudity Images Filter for Squid

Hi Fred,

Sounds good, Already we have some proxy servers (like squid with
dansguardian ) tools to block the Nudity sites(including the images,
contents and videos etc..).

Is their any specific reason for going this API ( nudityimagesfilterforsquid
)?


Thanks,
Visolve Squid


On 8/23/2014 12:38 AM, Stakres wrote:

Hi Guys,

We just released a new free tool for Squid:  Nudity Images Filter for

Squid
https://sourceforge.net/projects/nudityimagesfilterforsquid/


You can specify the MaxResol and the MaxScore for the block.
All details are in the  readme.txt


http://sourceforge.net/projects/nudityimagesfilterforsquid/files/readme.txt
/download

Important:
- We provide the API for free, we can not warranty it'll work with your
Squid installation, that's why you must test on a separated Squid before
going to production.
- We do not compile statistics based on your requests and we do not share
data with Marketing teams or external companies, we also do not use your
data for our internal needs.
- If you are interested for a local implementation of our API in your
network, just drop us an email at supp...@unveiltech.com

Your feadbacks are welcome...

Bye Fred








Re: [squid-users] Nudity Images Filter for Squid

2014-08-23 Thread Squid

Hello Fred,

Thanks for your suggestion. We will surely look into your API.

Regards,
Visolve Squid

On 8/23/2014 5:57 PM, Vdoctor wrote:

Hi Visolve,

Sure, you could do it with DansGuardian; personally I prefer and advise
ufdbGuard, which is, from my point of view, much more powerful in terms
of possibilities than DansGuardian. That is my opinion only; people are
free to use what they need...

Did you try our API? Maybe you could find new opportunities with it :o)

Bye Fred


-----Original Message-----
From: Squid [mailto:sq...@visolve.com]
Sent: Saturday, 23 August 2014 13:54
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Nudity Images Filter for Squid

Hi Fred,

Sure we may need a real time image filter for advanced image filtering.

It can also be possible if we configured bannedregular expression list in
dansguardian.
It will count for words in a site and if the word exceeds it's limit (3
- 4 words as same eg:porn) then dansguardian will automatically block the
sites.

And also we are not sure with very newly released domains.

Thanks  Regards,
Visolve Squid


On 8/23/2014 2:47 PM, Vdoctor wrote:

Hello Visolve,

Is your DansGuardian able to block all porn/sexy websites/images,
including the very new domains just released ?
How do you block those images from google/yahoo search in https ?

Here, WebFilter is not enough... you need a real-time images filter
:o)

Bye Fred

-----Original Message-----
From: Squid [mailto:sq...@visolve.com]
Sent: Saturday, 23 August 2014 11:08
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Nudity Images Filter for Squid

Hi Fred,

Sounds good, Already we have some proxy servers (like squid with
dansguardian ) tools to block the Nudity sites(including the images,
contents and videos etc..).

Is their any specific reason for going this API (
nudityimagesfilterforsquid )?


Thanks,
Visolve Squid


On 8/23/2014 12:38 AM, Stakres wrote:

Hi Guys,

We just released a new free tool for Squid:  Nudity Images Filter for

Squid

https://sourceforge.net/projects/nudityimagesfilterforsquid/

You can specify the MaxResol and the MaxScore for the block.
All details are in the  readme.txt


http://sourceforge.net/projects/nudityimagesfilterforsquid/files/readme.txt

/download

Important:
- We provide the API for free; we cannot guarantee it will work with
your Squid installation, which is why you must test on a separate
Squid before going to production.
- We do not compile statistics based on your requests and we do not
share data with marketing teams or external companies; we also do not
use your data for our internal needs.
- If you are interested in a local implementation of our API in your
network, just drop us an email at supp...@unveiltech.com

Your feedback is welcome...

Bye Fred








Re: [squid-users] what AV products have ICAP support?

2014-08-22 Thread Visolve Squid

Hi Jason Haar,

Trend Micro's InterScan Web Security Virtual Appliance (stop inbound
threats & secure outbound data) is one of the best.


We have also listed other AV vendors:
Samba-vscan (ICAP), isilonicap AV scan (EC2), etc.
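For reference, hooking any such ICAP service into Squid 3.x looks roughly
like this (a sketch based on squidclamav's documented defaults; the port and
service path vary by product):

icap_enable on
icap_service clamav respmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
adaptation_access clamav allow all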

Regards,
Visolve Squid

On 8/18/2014 3:00 PM, Jason Haar wrote:

Hi there

I've been testing out squidclamav as an ICAP service and it works well.
I was wondering what other AV vendors have (linux) ICAP-capable
offerings that could similarly be hooked into Squid?

Thanks





Re: [squid-users] store_id and key in store.log

2014-08-20 Thread Squid

Hello Stepanenko,

The store.log is a record of Squid's decisions to store and remove 
objects from the cache. Squid creates an entry for each object it stores 
in the cache, each uncacheable object, and each object that is removed 
by the replacement policy.

The log file covers both in-memory and on-disk caches.

The store.log provides some values that cannot be obtained from access.log,
mainly the response's cache key (i.e., the MD5 hash value is present).

refresh_pattern ^http://(youtube|ytimg|vimeo|[a-zA-Z0-9\-]+)\.squid\.internal/.* 10080 80% 79900 override-lastmod override-expire ignore-reload ignore-must-revalidate ignore-private


Simple example for StoreID refresh pattern:

acl rewritedoms dstdomain .dailymotion.com .video-http.media-imdb.com  
av.vimeo.com .dl.sourceforge.net .vid.ec.dmcdn.net .videoslasher.com


store_id_program /usr/local/squid/bin/new_format.rb
store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms !banned_methods
store_id_access deny all

root# /usr/local/squid/bin/new_format.rb

ERR
http://i2.ytimg.com/vi/95b1zk3qhSM/hqdefault.jpg
OK store-id=http://ytimg.squid.internal/vi/95b1zk3qhSM/hqdefault.jpg
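For reference, a minimal sketch of what such a helper could look like (a
hypothetical shell stand-in for the Ruby script, assuming concurrency=0 so
each input line is just the URL plus optional extras):

#!/bin/sh
# Hypothetical StoreID helper sketch: normalize ytimg image URLs onto one
# internal cache key; answer ERR for anything it does not rewrite.
while read url extras; do
  case "$url" in
    http://*.ytimg.com/*)
      path="/${url#http://*.ytimg.com/}"   # keep only the URL path
      echo "OK store-id=http://ytimg.squid.internal$path"
      ;;
    *)
      echo "ERR"
      ;;
  esac
done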

Thanks,
ViSolve Squid

On 8/14/2014 1:07 PM, Степаненко Сергей wrote:

Hi All!

I'm trying to use a store_id helper, and I'm trying to debug the regexp for
the URLs processed by the helper. I turned on store.log and expected to see
the changed key value there, but the key in store.log is the original URL
of the object.
Maybe I'm wrong and this is normal behavior?
My squid version is 3.4.5


Stepanenko Sergey








Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread squid

why are you using unbound for this at all?

Well, we use a geo location service much like a VPN or a proxy.
For transparent proxies, it works fine, squid passes through the SSL  
request and back to the client.

For VPN, everything is passed through.
But with unbound, we only want to pass through certain requests and  
some of them have SSL sites.
Surely, there's a way to pass a request from unbound, and redirect it  
through the transparent proxy, returning it straight to the client?







Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-20 Thread squid



which one?
It's: client -> unbound -> (if IP listed in unbound.conf) forwarded
to proxy -> page or stream returned to client


For others it's: client -> unbound -> direct to internet with normal DNS



Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-19 Thread squid



Take a look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP

Your squid.conf seems to be too incomplete to allow SSL-Bump to work.

Eliezer


I recompiled to 3.4.6 and ran everything in your page there.
squid started correctly.
However, it is the same problem. Any https page that I had configured  
does not resolve. It is being redirected by unbound but as soon as it  
hits the proxy, it just gets dropped somehow:


# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5454:2633080]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -s 213.171.217.173/32 -p udp -m udp --dport 161 -m state  
--state NEW -j ACCEPT

-A INPUT -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 110 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 143 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state NEW -j ACCEPT
COMMIT
# Completed on Tue Aug 19 03:14:13 2014
# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*nat
:PREROUTING ACCEPT [23834173:1866373947]
:POSTROUTING ACCEPT [22194:1519446]
:OUTPUT ACCEPT [22194:1519446]
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
-A POSTROUTING -s 0.0.0.0/32 -o eth0 -j MASQUERADE
COMMIT
# Completed on Tue Aug 19 03:14:13 2014

#acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access deny to_localhost
external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
https_port 3130 intercept ssl-bump generate-host-certificates=on  
dynamic_cert_mem_cache_size=16MB   
cert=/usr/local/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s  
/usr/local/squid/var/lib/ssl_db -M 16MB

sslcrtd_children 10
ssl_bump server-first all
#sslproxy_cert_error allow all
#sslproxy_flags DONT_VERIFY_PEER
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320



Re: [squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-18 Thread squid




What are the iptables rules for that?
Also look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP


I recompiled to 3.4.6
and ran everything in your page there.
squid started correctly.
However, it is the same problem. Any https page that I had configured  
does not resolve. It is being redirected by unbound but as soon as it  
hits the proxy, it just gets dropped somehow:


# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5454:2633080]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -s 213.171.217.173/32 -p udp -m udp --dport 161 -m state  
--state NEW -j ACCEPT

-A INPUT -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 110 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 143 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state NEW -j ACCEPT
COMMIT
# Completed on Tue Aug 19 03:14:13 2014
# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*nat
:PREROUTING ACCEPT [23834173:1866373947]
:POSTROUTING ACCEPT [22194:1519446]
:OUTPUT ACCEPT [22194:1519446]
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
-A POSTROUTING -s 0.0.0.0/32 -o eth0 -j MASQUERADE
COMMIT
# Completed on Tue Aug 19 03:14:13 2014

#acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access deny to_localhost
external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
https_port 3130 intercept ssl-bump generate-host-certificates=on  
dynamic_cert_mem_cache_size=16MB   
cert=/usr/local/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s  
/usr/local/squid/var/lib/ssl_db -M 16MB

sslcrtd_children 10
ssl_bump server-first all
#sslproxy_cert_error allow all
#sslproxy_flags DONT_VERIFY_PEER
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320





Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-17 Thread squid

You are at least missing https_port and all the sslproxy_* directives
for outgoing HTTPS. Then also you are probably missing the TLS/SSL
certificate security keys, including any DNS entries for IPSEC, DNSSEC,
DANE, HSTS etc.
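For example, the interception side usually needs something along these lines
(a minimal sketch only, assuming a Squid built with ssl-crtd support and a
locally generated CA certificate; all paths are illustrative):

https_port 3130 intercept ssl-bump cert=/etc/squid/myCA.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
ssl_bump server-first all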



Ok, so I generated some keys and added the directives.
On restarting, squid asks for the certificate password and starts
OK, but it still won't resolve the SSL websites.

I also added an iptables forward directive:
iptables  -t nat -A PREROUTING  -i eth0 -p tcp --dport  443 -j  
REDIRECT --to-port 3130


CONF:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
https_port 3130 transparent cert=/etc/squid/server.crt  
key=/etc/squid/server.key

hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320






Re: [squid-users] unbound and squid not resolving SSL sites

2014-08-07 Thread squid

Current config below:


In my network I have unbound redirecting some sites through the proxy
server and checking authentication, If I redirect www.thisite.com it
works corectly. However, as soon as SSL is used https://www.thissite.com
it doesn't resolve at all. Any ideas what I have to do to enable ssl
redirects in unbound or squid?


Handle port 443 traffic and the encrypted traffic there.
You are only receiving port 80 traffic in this config file.


I am already redirecting 443 traffic but the proxy won't pick it up.
There is an SSL ports directive in squid.conf, so it should accept them?
For example, this line redirects all HTTP traffic, but as soon as the
browser wants an SSL connection, it is dropped:

local-data: anywhere.mysite.com. 600 IN A 109.xxx.xx.xxx
local-zone: identity.mysite.com. redirect




external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth


What does this helper do exactly to earn the term authentication?
A TCP/IP address alone is insufficient to verify the end-user's identity.

This helper checks that an IP address is contained within a database table.
If the IP address exists, then that client is allowed to use the proxy server.
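For illustration, a minimal sketch of such a helper (hypothetical: a flat
file stands in for the database table, and the real squidauth script is not
shown in this thread):

#!/bin/sh
# Hypothetical external_acl_type helper: read one %SRC value per line and
# answer OK if that client IP appears in the allowed list (a flat-file
# stand-in for the database table).
ALLOWED=/etc/squid/allowed_ips.txt
while read src; do
  if grep -qxF "$src" "$ALLOWED"; then
    echo OK
  else
    echo ERR
  fi
done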

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access deny to_localhost
external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
#http_access allow all
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 80 accel vhost allow-direct
hierarchy_stoplist cgi-bin ?
#cache_dir ufs /var/spool/squid 100 16 256
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320





Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-06 Thread Squid user

Hi Amos.

Understood... thanks.

Then I think the names of the flags are a bit misleading:
they all end with _hash, even if mask assignment is used.

Also, with respect to that fixed mask, 0x1741: I know that is the
default value, but it then means that there is no way to use a different
mask.


If the number of cache engines is low, one could think of having a mask
of just 1 or 2 bits, for instance, so that the processing time at the
router is minimized.


What do you think?

Thanks.




On 08/06/2014 11:16 AM, Amos Jeffries wrote:

On 5/08/2014 12:27 a.m., Squid user wrote:

Hi Amos.

Could you please be more specific?

I cannot find any wccp-related directive in Squid named IIRC or similar.


IIRC = If I Recall Correctly.
I am basing my answer on code knowledge I gained a year or two back.

Just re-checked the code and confirmed. The flag names on
wccp2_service_info are the same for both hash and mask methods. What
they do is different and hard-coded into Squid.

For mask assignment the static mask of 0x1741 is sent from Squid for
each of the fields you configure a flag for.

http://www.squid-cache.org/Doc/config/wccp2_service_info/


Examples of what you need for your earlier requested config:

   wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
src-IP when protocol is TCP and dst-port 80.


   wccp2_service_info 90 protocol=tcp flags=dst_ip_hash priority=240 ports=80

with mask assignment method sets the mask to be 0x1741 on the packet
dst-IP when protocol is TCP and dst-port 80.


Amos



[squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-04 Thread Squid user

Hi.

Could you provide any help on the below?

Basically, what I need to know is whether Squid has a directive, used when
Mask assignment is in effect, that allows it to send the WCCP client the
mask that should be used.

I have seen none, so far.
It is possible to set the assignment to Mask, but if Squid cannot tell 
the WCCP client which mask should be used, then mask assignment will not 
work.


Thanks a lot.


On 07/31/2014 11:45 AM, Squid user wrote:

Hi.

I'm trying to configure my squid as a WCCPv2 cache engine, according 
to the following requirements:

- Assignment method: Mask assignment
- Mask based on source ip (for one service group)
- Mask based on destination ip (for another service group)

The problem is I do not know how to specify those mask elements with 
the current squid conf directives.


The assignment method is easy to handle with wccp2_assignment_method.

But how can I set the Mask Elements according to source ip address and 
destination ip address?


If the assignment method is Hash, then I can use the 
wccp2_service_info flags: src_ip_hash and dst_ip_hash.


But with Mask assignment, I do not find any directive allowing me to
tell the router that I want to perform masking based on src IP and dst
IP.


Do you have any idea?

My system details are:

Squid version: 3.2.6
O.S: Ubuntu server 14.04


Thanks a lot.




Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-04 Thread Squid user

Hi Amos.

Could you please be more specific?

I cannot find any wccp-related directive in Squid named IIRC or similar.

Yes, it can be set in the router, but, according to the WCCP Internet Draft:

It is the responsibility of the Service Group’s designated web-cache to 
assign each router’s mask/value sets.


This means that the router could be forced to require from Squid a mask 
to be sent.


Thanks a lot.


-

IIRC it is the same flags, or set in the router.

Amos


Re: [squid-users] Re: Configuring WCCPv2, Mask Assignment

2014-08-04 Thread Squid user

Hi Amos.

When you say "it is the same flags"...

Do you mean the same flags hash assignment uses?

Thanks.


On 08/04/2014 02:42 PM, Squid user wrote:

Hi Amos.

Could you please be more specific?

I cannot find any wccp-related directive in Squid named IIRC or similar.

Yes, it can be set in the router, but, according to the WCCP Internet 
Draft:


It is the responsibility of the Service Group’s designated web-cache 
to assign each router’s mask/value sets.


This means that the router could be forced to require from Squid a 
mask to be sent.


Thanks a lot.


-

IIRC it is the same flags, or set in the router.

Amos




[squid-users] unbound and squid not resolving SSL sites

2014-08-04 Thread squid
In my network I have unbound redirecting some sites through the proxy
server and checking authentication. If I redirect www.thisite.com it
works correctly. However, as soon as SSL is used
(https://www.thissite.com) it doesn't resolve at all. Any ideas what I
have to do to enable SSL redirects in unbound or squid?


squid.conf
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320



[squid-users] Configuring WCCPv2, Mask Assignment

2014-07-31 Thread Squid user

Hi.

I'm trying to configure my squid as a WCCPv2 cache engine, according to 
the following requirements:

- Assignment method: Mask assignment
- Mask based on source ip (for one service group)
- Mask based on destination ip (for another service group)

The problem is I do not know how to specify those mask elements with the 
current squid conf directives.


The assignment method is easy to handle with wccp2_assignment_method.
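For example (a minimal sketch; "mask" selects the mask assignment method):

wccp2_assignment_method mask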

But how can I set the Mask Elements according to source ip address and 
destination ip address?


If the assignment method is Hash, then I can use the wccp2_service_info 
flags: src_ip_hash and dst_ip_hash.


But with Mask assignment, I do not find any directive allowing me to
tell the router that I want to perform masking based on src IP and dst IP.


Do you have any idea?

My system details are:

Squid version: 3.2.6
O.S: Ubuntu server 14.04


Thanks a lot.


[squid-users] unbound and squid not resolving SSL sites

2014-07-29 Thread squid
In my network I have unbound redirecting some sites through the proxy
server and checking authentication. If I redirect www.thisite.com it
works correctly. However, as soon as SSL is used
(https://www.thissite.com) it doesn't resolve at all. Any ideas what I
have to do to enable SSL redirects in unbound or squid?


squid.conf
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320




RE: Fwd: Re: [squid-users] google picking up squid as

2014-07-09 Thread squid
That's very odd. I'd try calling them... There are quite a few folks
blocking proxies these days. What I do is remove the Via and
X-Forwarded-For headers with the following directives:

check_hostnames off
forwarded_for delete
via off

The same configuration in an earlier version of squid doesn't get
rejected by Google, but in the new version of squid it is rejected by
Google, so is it possible squid is doing something differently?




Re: [squid-users] Why squid doesn't log anything when applying transparent proxy?

2014-07-05 Thread ViSolve Squid

Check whether your browser goes through squid or not.

You can find this by using the URL: http://cbe.visolve.com/

If your browser goes through squid then the above URL shows the "proxy
detected" column. If your access log still shows nothing, send us your
squid.conf file so that we can check the issue and help you out.

If it is not going through squid then let us know your iptables rules.

Thanks
Visolve Squid Support Team

On 7/5/2014 2:59 PM, Mark jensen wrote:

I have deployed a transparent proxy using these tutorials:

on L3 switch:

http://wiki.squid-cache.org/ConfigExamples/Intercept/Cisco2501PolicyRoute

on centos 6.5 box ( squid ):

http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect


When I request the web page from one client, it is returned to me, so I thought
that the transparent proxy works fine.

But the problem is that I can't find any records in the access.log file, so
it seems that the client gets the page from the server directly.

1- Is the problem that squid doesn't log when it is in transparent mode?

2- Or does the client get the page directly from the server? (If so, how can I
add a rule to iptables or an access list to forbid the client from getting the
page directly from the server?)

Mark





Re: Fwd: Re: [squid-users] google picking up squid as

2014-06-27 Thread squid

How about contacting google for advice?
They are the ones forcing the issue on you.
They don't like it that you have 1k clients behind your IP address.
They should tell you what to do.
You can tell them that you are using squid as a forward proxy to
enforce usage ACLs on users inside the network.

It's no shame to use squid...
It's a shame that you cannot get a reasonable explanation of the
reason you are blocked...


There is only 1 client behind the IP address as it is a test server so  
something is going wrong with either routing or requests to google.

Google will not answer any emails.
I suppose one alternative is to use unbound in conjunction with squid  
and not redirect any requests to google?




Re: [squid-users] google picking up squid as

2014-06-26 Thread squid
So, I added those and restarted... I still get the "your computer may be
sending automated queries" error from google.

I then set x forwarded for to off, no change.
Then commented out via, no change.

Current conf:

auth_param basic realm AAA proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl CONNECT method CONNECT
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access allow ncsa_users
http_access deny all
icp_access allow all
http_port 8080
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
half_closed_clients off
visible_hostname AAAProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 40/40
forwarded_for on
via on
cache_mem 256 MB


Quoting Amos Jeffries squ...@treenet.co.nz:


On 8/06/2014 5:06 a.m., Lawrence Pingree wrote:
I use the following but you need to make sure you have no looping  
occurring in your nat rules if you are using Transparent mode.


forwarded_for delete
via off


Given that the notice is about traffic volume arriving at Google (not
looping) you probably actually need via on to both protect against
looping and tell google there is a proxy so they should use different
metrics.

You could also cache to reduce the upstream connection load. Squid does
in-memory caching well enough for up to MB sized objects if you give it
some cache_mem and remove that cache deny all (cache_dir is optional
and disabled by default in squid-3).
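In squid.conf terms that advice amounts to something like this sketch (the
sizes are illustrative):

# enable a modest memory cache and remove the "cache deny all" line
cache_mem 256 MB
maximum_object_size_in_memory 512 KB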

Amos











[squid-users] google picking up squid as

2014-06-07 Thread squid

I get the following notice from google's site when connected to the proxy:
Our systems have detected unusual traffic from your computer network.

Any ideas how I can prevent this? I presume it might be the
forwarded_for setting?

The following is my conf:

auth_param basic realm proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl CONNECT method CONNECT
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access allow ncsa_users
http_access deny all
icp_access allow all
http_port 8080
http_port 123.123.123.123:80
cache deny all
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
half_closed_clients off
visible_hostname ProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 40/40
forwarded_for off
via off





[squid-users] problem migrating from 2 to v3 and to new server: video streaming

2014-06-06 Thread squid

I have migrated to a new server and upgraded the version.
I can connect to the proxy and all webpages seem to work except when I  
access a video site.

I'm just getting lots of TCP_MISS in the logs.
Is there anything in the conf that might cause this?
The video sites can be accessed, but when I press play they just hang
continuously on downloading.


auth_param basic realm proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access deny manager
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny maxuser
http_access allow localhost
http_access deny all
icp_access allow all
http_port 8080
http_port aa.aaa.aaa.aa:80
cache deny all
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
half_closed_clients off
visible_hostname ProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
coredump_dir /var/spool/squid
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 40/40
forwarded_for off
via off
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidguard.conf
url_rewrite_children 8 startup=0 idle=1 concurrency=0







[squid-users] Squid is not caching large objects!

2014-01-05 Thread Aris Squid Team

I configured squid to cache large files i.e. 100MB
but it does not cache these files.
Any idea?
--
Aris System Squid Development


Re: [squid-users] Squid is not caching large objects!

2014-01-05 Thread Aris Squid Team

On 1/5/2014 4:45 PM, Kinkie wrote:

On Sun, Jan 5, 2014 at 1:06 PM, Aris Squid Team
squid@arissystem.com wrote:

I configured squid to cache large files i.e. 100MB
but it does not cache these files.
any idea?


Have you checked whether these files are cacheable, e.g. with redbot ?
(http://redbot.org/).




I've tested same file with different sizes:
1MB: cached
2MB: cached
4MB: cached
6MB: no
8MB: no
.
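That cut-off matches Squid's default maximum_object_size of 4 MB. A sketch
of raising it (sizes and cache_dir path are illustrative; in many Squid
versions the directive must appear before cache_dir to apply to that
directory):

# allow individual objects of up to 200 MB into the cache
maximum_object_size 200 MB
cache_dir ufs /var/spool/squid 10000 16 256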


--
Aris System Squid Development


[squid-users] Refresh pattern - object size

2014-01-03 Thread Aris Squid Team

Hi,
Is there any way to apply refresh patterns to objects in a specific size
range? I want to apply refresh patterns to objects which are bigger than
a specific size.


thanks
--
Aris System Squid Development


Re: [squid-users] Tracing squid 3.1 functions

2013-12-25 Thread Aris Squid Team

On 12/26/2013 9:31 AM, m.shahve...@ece.ut.ac.ir wrote:



Not possible, because there is nothing that recognizes the request protocol.

What happens is the admin configures squid.conf ports manually, one per
protocol type to be received. Squid only supports HTTP, HTTPS, ICP,
HTCP, and SNMP incoming traffic.

The non-HTTP traffic support in Squid is for gatewaying traffic, where
Squid makes the outbound connection in FTP/Gopher/HTTP/HTTPS/Wais/ etc
so there is no detection or recognizing going on.


Sorry, I don't understand. Could you please explain the squid scenario for
an FTP request, for example?




I think it's possible to use debug mode and compile squid with extra log
messages to find the protocol detection point.

I've done this before to find an internal buffering algorithm.
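For example, verbosity can be raised for a single debug section in squid.conf
(a sketch; section 33 covers client-side request handling), while extra
messages in the source are emitted with Squid's debugs(section, level, ...)
macro:

# keep everything at level 1, but log client-side (section 33) at level 5
debug_options ALL,1 33,5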


--
Aris System Squid Development


Re: [squid-users] ip hiding in squid-3.3.8

2013-12-13 Thread Squid

On 12/12/2013 1:50 PM, 0bj3ct wrote:

Hi, Am using Squid 3.3.8. Want to prevent the Squid server to change ip
addresses of clients. How can I do it? How to disable ip replacing in Squid?






Hi,

To do this you have to consider multiple things.

First: Squid placement in the traffic flow.
As far as I know you have to choose one of these methods:
1. Installing squid in the middle of your traffic
2. Using Cisco WCCP[2]
3. Using Policy Based Routing; this scenario is possible using Vyatta or
Linux as the router

Second: the users' IP addresses.
They should use public IP addresses.

Third: TProxy on Squid.
TProxy configuration is needed on Squid itself and on its OS.
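A minimal sketch of that combination (assuming Linux with a TPROXY-capable
kernel and iptables; the port and mark values are illustrative):

# squid.conf
http_port 3129 tproxy

# Linux side: mark already-established flows, then divert new port-80
# traffic to the local stack on Squid's tproxy port
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
  --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100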

I suggest using the latest stable release, not the head branch. I think 3.2 or
even 3.1 are better to use, especially in production environments.



--
Aris System Squid Development


[squid-users] cache clearing without rebuildung the directoty structure

2013-08-16 Thread Squid Mailinglist
Hello community,

I know that the question of emptying the Squid cache has already been
asked very often.
Is there now a function to empty the cache without deleting the directory
structure? A customer asks for it, because he wants to use Squid as a reverse
proxy in a high-availability environment. Deleting and rebuilding the cache
would take him too long.
I do not really know why the deletion is necessary for him. But I want to ask
anyway whether there is in the meantime (squid V3.x) a new squid function to
empty the cache.

Helmut


[squid-users] Re: Meaning of negative sizes in store.log

2013-07-31 Thread squid

 
 Good question. And one which will require digging through the code to
 answer I'm afraid.

Thanks - I've been digging into the code, but I have not quite figured
it out yet.

This is part of my effort to get caching of windows updates working.
I've followed the rules on the wiki in this regard, but it seems like
these unmatched lengths are preventing the caching.

 FWIW I suspect that is a bug in the header vs object calculations. 

A tcpdump of the corresponding connections shows that squid is closing
the connection to the server before the download is finished.  I'll
continue looking.

 The wiki has this to say about the value:

Thanks, yes, I saw that.

Mark





[squid-users] Meaning of negative sizes in store.log

2013-07-30 Thread squid
In the sizes fields of store.log, what do negative sizes mean?  For
instance, I'm getting this, and I'm interested in knowing the meaning of
the -312:

...  -1 application/octet-stream 96508744/-312 GET
http://au.v4.download.windowsupdate.com/msdownload/update/software

Thanks
Mark



[squid-users] Connection reset when accessing java servlet report page via squid

2013-07-02 Thread Visolve Squid Support

Hello,

We have a problem with squid when accessing a servlet page through
the squid proxy.

It is a report page where the inputs are taken from the user and the
servlet builds the report and presents it in the page.

Normally it takes around 45-60 seconds to generate the report. So we are
getting the 'Connection reset' message in Firefox and 'Error 324
(net::ERR_EMPTY_RESPONSE): The server closed the connection without
sending any data' in Chrome.

But it works normally without a proxy.

Please suggest a solution for this issue if there is any config change
that needs to be done.

Regards,
Manoj






RE: [squid-users] Unit of measure for st

2012-10-12 Thread squid squid

Hi,
Any advice on the unit of measure for "st"? Thank you.
 From: squid...@hotmail.com
 To: squid-users@squid-cache.org
 Date: Thu, 11 Oct 2012 23:26:56 +0800
 Subject: [squid-users] Unit of measure for st
 
 
 Hi,
 
 There is a parameter in the logformat whose format code is "st", which is
 "Sent reply size including HTTP headers".
 
 I would like to know whether the unit of measure for this parameter is
 bits or bytes.
 
 Thank you.
 
 


[squid-users] Unit of measure for st

2012-10-11 Thread squid squid

Hi,

There is a parameter in the logformat whose format code is "st", which is
"Sent reply size including HTTP headers".

I would like to know whether the unit of measure for this parameter is
bits or bytes.

Thank you.
 
  

Re: [squid-users] http to squid to https

2012-04-30 Thread Squid Tiz

On Apr 29, 2012, at 10:36 PM, Amos Jeffries wrote:

 On 28/04/2012 10:37 a.m., Squid Tiz wrote:
 I am kinda new to squid.  Been looking over the documentation and I just 
 wanted a sanity check on what I am trying to do.
 
 I have a web client that hits my squid server.  The squid connects to an 
 apache server via ssl.
 
 Here are the lines of interest from my squid.conf for version 3.1.8
 
 http_port 80 accel defaultsite=123.123.123.123
 cache_peer 123.123.123.123 parent 443 0 no-query originserver ssl 
 sslflags=DONT_VERIFY_PEER name=apache1
 
 The good news is, that works just as I hoped.  I get a connection.
 
 But I am questioning the DONT_VERIFY_PEER. Don't I want to verify the peer?
 
 Ideally yes. It is better security. But up to you whether you need it or not.
 It means having available to OpenSSL on the squid box (possibly via 
 squid.conf settings) the CA certificate which signed the peers certificate, 
 so that verification will not fail.
 
 
 I simply hacked up a self signed cert on the apache server.  Installed 
 mod_ssl and restarted apache and everything started to work on 443.
 
 On the command line for the squid server I can curl the apache box with:
 
 curl --cacert  _the_signed_cert_from_the_apache_node_ https://apache.server
 
 Is there a way with sslcert and sslkey to setup a keypair that will verify?
 
 They are for configuring the *client* certificate and key sent by Squid to 
 Apache. For when Apache is doing the verification of its clients.
 
 Squid has a sslcacert= option which does the same as curl --cacert option. 
 For validating the Apache certificate(s).
 
   Do I need a signed cert?
 
 Yes, TLS requires signing. Your self-signing CA will do however, so long as 
 both ends of the connection are in agreement on the CA trust.
 
 
 I tried to add the cert and key to the cache_peer line in the config. Squid
 did restart. But no connection. Why would curl work but not squid?
 
 see above.
 
 Amos

Amos,

Thanks for the reply.  

I was just curious to see if I could get this to fly. The goal is to attach to
the squid server via http and have squid verify and attach to the SSL server
using a self-signed cert. This seems to work. Squid starts OK and my logs are
clean. No validation errors.

Comments appreciated.


Create the CA stuff on the apache server:

Key
openssl genrsa -des3 -out ca.key 4096
CRT
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

Create a server cert:

Key
openssl genrsa -des3 -out server.key 4096
CSR
openssl req -new -key server.key -out server.csr
CRT
openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key 
-set_serial 01 -out server.crt

Then go a head and install these certs on the server.  Test the server on port 
443/SSL etc.

Then create a client cert:

Key
openssl genrsa -des3 -out client.key 2048
CSR
openssl req -new -key client.key -out client.csr
CRT
openssl ca -in client.csr -cert ca.crt -keyfile ca.key -out client.crt

Touch up the key - we don't want to enter the password on start-up.

openssl rsa -in client.key -out client.key.insecure
mv client.key client.key.secure
mv client.key.insecure client.key

Then take the ca.crt, the client.key and the client.crt and deploy them on the 
squid server.

Update the /etc/hosts file:

ip-address cn-name-of-apache-server

Then the squid.conf:

http_port 8080 accel defaultsite=cn-name-of-apache-server
cache_peer cn-name-of-apache-server parent 443 0 no-query originserver ssl \
sslcafile=/path/ca.crt sslcert=/path/client.crt sslkey=/path/client.key 
name=yum1


-- 
Regs
-Dean



[squid-users] Prevent client spamming

2012-04-29 Thread squid squid

Hi,

I have a server running Squid 2.7 STABLE15 and am facing client spamming. The
problem happens when a client presses and holds the F5 button on the PC, and
this generates a few hundred requests to my squid proxy.

Please advise how I can prevent or drop the client traffic when the above
happens.

Thank you.
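One common starting point (a sketch only, not a complete answer: the maxconn
ACL caps concurrent connections per client IP, so it helps against floods of
parallel requests but not against rapid requests reusing one connection):

acl overload maxconn 20
http_access deny overload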

  


[squid-users] http to squid to https

2012-04-27 Thread Squid Tiz
I am kinda new to squid.  Been looking over the documentation and I just wanted 
a sanity check on what I am trying to do.

I have a web client that hits my squid server.  The squid connects to an apache 
server via ssl. 

Here are the lines of interest from my squid.conf for version 3.1.8

http_port 80 accel defaultsite=123.123.123.123
cache_peer 123.123.123.123 parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=apache1

The good news is, that works just as I hoped.  I get a connection.

But I am questioning the DONT_VERIFY_PEER.  Don't I want to verify the peer?

I simply hacked up a self signed cert on the apache server.  Installed mod_ssl 
and restarted apache and everything started to work on 443. 

On the command line for the squid server I can curl the apache box with:

curl --cacert  _the_signed_cert_from_the_apache_node_ https://apache.server

Is there a way with sslcert and sslkey to set up a keypair that will verify?  Do 
I need a signed cert?

I tried to add the cert and key to the cache_peer line in the config.  Squid did 
restart.  But no connection.  Why would curl work but not squid?

-- 
-Dean

[squid-users] Need 413 status code when reply_body_max_size is hit

2012-03-13 Thread squid-list
I limit the maximum file size an employee can download into our network using 
reply_body_max_size 100 MB proxy_user1. If this limit is hit, Squid returns a 
403.

My problem is that I would like to differentiate between a 403 that comes from 
a target website that does not allow access to a specific file at all, and the 
403 that Squid intentionally generates because the file size exceeds the limit 
we set.

The reason is that I want to display to the user a message like "The target 
website does not allow access" or "You have requested a file to download that 
is too large. Please contact the IT department." I was thinking that the 413 
status code would be useful for that.

Is there a way to change the status code/message when a user hits the 
reply_body_max_size and differentiate that case? Is there any other 
workaround?
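One workaround I was considering (assuming a stock install with the error 
templates under /usr/share/squid/errors/; the directory layout varies by 
version): since Squid serves the ERR_TOO_BIG template when 
reply_body_max_size is exceeded, editing a copy of that template would at 
least let me change the message text, even if the status code stays the same:

cp -r /usr/share/squid/errors/English /etc/squid/errors-custom
vi /etc/squid/errors-custom/ERR_TOO_BIG
# then in squid.conf:
error_directory /etc/squid/errors-custom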

Thanks a lot for your help.


[squid-users] https bypass squid cache in reverse proxy mode

2011-04-30 Thread Support Squid
Dear all,

I'm using accel (reverse proxy) with vhost in squid, but it does not work
when it receives an https request. I know I can set the https_port and add
the cert to my squid. But I just want such requests to bypass my squid cache
server and be redirected straight to the web server. How can I do this in
the configuration?
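Would a NAT rule in front of squid be the way to do this? E.g., on a Linux
gateway, with 10.0.0.10 standing in for the web server's address:

iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.10:443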

Regards,

Gary Kei


[squid-users] ask n need youtube cache

2011-04-27 Thread rioda78.squid
(Help)
I'm new to squid.
I will build a squid server with a YouTube cache.
Can any squid master here help me enable YouTube caching with squid or with
third-party software (like youtube_cache)?
Thanks in advance

-- 
Best regards,
 rioda78.squid  mailto:rioda7878.sq...@gmail.com



Re: [squid-users] TCP Flooding attack and DNS Poisoning attack

2011-04-14 Thread squid
Good day,
Thanks all for your concern. The network topology is as follows:
The workstations run Windows 7 Pro with Spyware Terminator (with integrated
ClamAV) and all link to a Cisco 2950 switch. A multihomed server runs
Windows 7 Ultimate with ESET AV and Squid; it has one NIC connected to the
Cisco switch for the LAN and the other connected to the internet through a
broadband device. Windows 7 on the server is used to share the internet
connection, and the workstation browsers are configured to use the server IP
and port 3128.
Thanks for your assistance,
regards,
Yomi

 On 12/04/2011 08:37, Amos Jeffries wrote:

 On 12/04/11 15:51, Eliezer Croitoru wrote:
 On 12/04/2011 06:15, Amos Jeffries wrote:

 On Mon, 11 Apr 2011 22:34:02 +0300, Eliezer Croitoru wrote:
 On 11/04/2011 20:53, sq...@sourcesystemsonline.com wrote:

 Good day,
 Sometimes when I check my ESET Antivirus log file, it shows that some
 activities of clients in my network are attacking my network, especially
 the squid port (3128), with TCP flooding or DNS poisoning. I checked the
 internet for their meaning and found out that they are not good activities
 on any network.
 What?
 it's nice to know that you do have TCP flooding.. or whatever..
 but the problem is that the AV is not providing any details on how it
 is reaching this conclusion.
 i would start with a simple wireshark capture on the specific machine
 where you are getting the warnings,
 in case you do have some problems in your network setup.
 by the way, proxy traffic can indeed be misunderstood as a TCP
 flood or DNS spoofer.

 NOTE: Usually TCP flooding is a warning thrown up by the kernel when
 TCP has a lot of new connections made. A busy proxy will easily hit
 the default thresholds for this.

 TCP offers a feature called SYN cookies which can help with this
 problem.

 see
 http://squid-web-proxy-cache.1019090.n4.nabble.com/possible-SYN-flooding-on-port-3128-Sending-cookies-td2242687.html
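 On a Linux box SYN cookies are a sysctl toggle, e.g. (illustrative values,
 not tuned for any particular load):

 sysctl -w net.ipv4.tcp_syncookies=1
 sysctl -w net.ipv4.tcp_max_syn_backlog=4096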


 so it's almost certain that the same mechanism that works in the linux
 kernel..
 is being used in the eset..
 the thing is that we are talking about an AV that sits on another
 machine..
 so, it seems kind of odd for the AV/FW on another machine to actually
 be
 100% reliable in its analysis in this case?


 Yes. Is it getting a copy of all the packets? Either by port mirroring
 or by being a bridge?
  It could be checking the same things, but without the benefit of the
 tuning the Squid box has.

 How it's getting the poisoning-attack conclusion baffles me a bit.
 Though, working blind as to how the AV integrates with the network, that
 is not hard.

 Amos
 I work with eset AV and FW systems and as far as i know they don't have
 IDS systems, so it seems to me like a malfunctioning or flooded switch,
 because most IDS systems know how to understand network
 streams. (or at least are supposed to)
 i really would like to know the network topology in this place :)

 Eliezer





[squid-users] TCP Flooding attack and DNS Poisoning attack

2011-04-11 Thread squid
Good day,
Sometimes when I check my ESET Antivirus log file, it shows that some
activities of clients in my network are attacking my network, especially
the squid port (3128), with TCP flooding or DNS poisoning. I checked the
internet for their meaning and found out that they are not good activities
on any network.
Is there any configuration option(s) in squid that I can use to drop/block
such TCP flooding and DNS poisoning traffic?
Any suggestions?
Regards,
Yomi.


[squid-users] Strange messages in cache Log

2011-04-11 Thread squid
Good day,
what is the meaning of:
httpReadReply: Excess data from "GET http://webcs.msg.yahoo.com/crossdomain.xml"?
squidaio_queue_request: WARNING - Queue congestion?
httpReadReply: Excess data from "GET http://ocsp.entrust.net/MEUwQzBBMD8wPTAJBgUrDgMCGgUABBSgLXLbL4La7i%2B3dMpUpZCcZtKubgQU6r8QpQEelY%2FJVbRnYKSP%2FYsPErQCBEKHQKU%3D"?
storeAufsOpenDone: (13) Permission denied?
See excerpt from Cache log below
Regards,
Yomi


2011/04/11 09:12:54| Beginning Validation Procedure
2011/04/11 09:12:55|   Completed Validation Procedure
2011/04/11 09:12:55|   Validated 6840 Entries
2011/04/11 09:12:55|   store_swap_size = 73204k
2011/04/11 09:12:56| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 09:12:56| storeLateRelease: released 162 objects
2011/04/11 09:53:21| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 10:09:49| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 10:22:19| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 10:39:47| httpReadReply: Excess data from "GET
http://ocsp.entrust.net/MEUwQzBBMD8wPTAJBgUrDgMCGgUABBSgLXLbL4La7i%2B3dMpUpZCcZtKubgQU6r8QpQEelY%2FJVbRnYKSP%2FYsPErQCBEKHQKU%3D"
2011/04/11 10:43:14| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 10:57:25| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 11:02:23| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 11:41:05| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 11:49:50| httpReadReply: Excess data from "GET
http://ocsp.entrust.net/MEUwQzBBMD8wPTAJBgUrDgMCGgUABBSgLXLbL4La7i%2B3dMpUpZCcZtKubgQU6r8QpQEelY%2FJVbRnYKSP%2FYsPErQCBEKHQKU%3D"
2011/04/11 12:15:14| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 12:35:50| storeAufsOpenDone: (13) Permission denied
2011/04/11 12:35:50|    d:/squid/var/cache/00/0D/0DE0
2011/04/11 12:35:50| storeSwapOutFileClosed: dirno 1, swapfile 0DE0, errflag=-1
    (13) Permission denied
2011/04/11 12:35:50| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 13:08:15| squidaio_queue_request: WARNING - Queue congestion
2011/04/11 13:11:53| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 13:26:26| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 13:26:27| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
2011/04/11 13:37:09| httpReadReply: Excess data from "GET
http://webcs.msg.yahoo.com/crossdomain.xml"
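Could the Permission denied entries mean that the account Squid runs as
cannot write to the cache directory? Since this is a Windows box, I was going
to inspect and fix the ACLs with something like the following (SquidService
standing in for whatever account the service actually runs under):

icacls d:\squid\var\cache
icacls d:\squid\var\cache /grant SquidService:(OI)(CI)F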

