Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Eliezer Croitoru

Hey Amos,

I haven't had the chance to follow the PROXY protocol advancements.
Was there any fix for the PROXY protocol issue that I can test?

Thanks,
Eliezer

On 09/06/2015 02:06, Amos Jeffries wrote:

We have somewhat recently added basic support for the PROXY protocol to
Squid. So HAProxy can relay port 80 connections to Squid-3.5+ without
processing them fully. However Squid does not yet support that on
https_port, which means the TLS connections still won't have client IP
details passed through.
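On the Squid side, that 3.5 support is enabled per listening port; a minimal sketch (the address and port numbers are illustrative):

```
# squid.conf sketch: accept PROXY protocol headers on a dedicated port
http_port 3129 require-proxy-header

# only trust PROXY headers arriving from the HAProxy box
acl frontends src 192.0.2.10
proxy_protocol_access allow frontends
proxy_protocol_access deny all
```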

Amos



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl_crtd breaks after short time

2015-06-09 Thread Klavs Klavsen

Hi,

James Lay just replied to me with his current config.. (pretty much like 
what he posted), and it seems he does not even try to use http_access 
rules to filter on urls from https requests..


@Amos: are you certain that there's not an error in how http_access 
rules are applied to bumped connections?


What I noted was:

Instead of having:
http_access allow CONNECT bumpedPorts

he has:
http_access allow SSL_ports

which somehow seems to work instead.

He only uses http_access allow rules for http sites.. he filters https 
on domain only - using:

acl allowed_https_sites ssl::server_name_regex /opt/etc/squid/http_url.txt
ssl_bump bump allowed_https_sites
ssl_bump terminate !allowed_https_sites

in my access log - using James Lay's format - squid only logs CONNECT.. 
so it seems it's not registering the step AFTER CONNECT as something 
separate - which would explain why it's not applying http_access 
filtering to it ?


10.xx.131.244 - - [09/Jun/2015:08:40:15 +0200] CONNECT 
64.233.184.94:443 HTTP/1.1 www.google.dk - 200 20042 
TCP_TUNNEL:ORIGINAL_DST peek
10.xx.131.244 - - [09/Jun/2015:08:40:19 +0200] CONNECT 72.51.34.34:443 
HTTP/1.1 lwn.net - 200 28295 TCP_TUNNEL:ORIGINAL_DST peek
10.xx.131.244 - - [09/Jun/2015:08:42:30 +0200] CONNECT 72.51.34.34:443 
HTTP/1.1 lwn.net - 200 28258 TCP_TUNNEL:ORIGINAL_DST peek



Amos Jeffries wrote on 06/05/2015 12:18 AM:

On 5/06/2015 3:34 a.m., Klavs Klavsen wrote:

I would be perfectly fine with allowing the SSL bumping to finish for
ALL https sites - and then only block when the http request comes..

I'm hoping someone can tell me what I've done wrong in my config.. I'm
obviously not understanding how it works when https is involved.. it
works as intended with http..


It should be working. I'm a bit confused myself now why that CONNECT
line would be matching the decrypted requests; they definitely should
not be having the CONNECT request method as they are destined to an
origin server.

We've missed something basic, and will probably kick ourselves at how
simple it is when it's revealed. :-(
All I can think of now is that James' log format should be indicating
more clearly what's going on than the default Squid one will.

Amos





--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer



Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Rafael Akchurin
Hi Amos,

snip

 There seems to be a bit of a myth going around about how HAProxy does
 load balancing. HAProxy is an HTTP layer proxy. Just like Squid.
 
 They both do the same things to received TCP connections. But HAProxy
 supports fewer HTTP features, so its somewhat simpler processing is also
 a bit faster when you want it to be a semi-dumb load balancer.

 We have somewhat recently added basic support for the PROXY protocol to Squid.
 So HAProxy can relay port 80 connections to Squid-3.5+ without
 processing them fully. However Squid does not yet support that on
 https_port, which means the TLS connections still won't have client IP
 details passed through.

So what would be your proposition for the case of SSL Bump? 
How do we get the connecting client IP and authenticated user name passed to the 
ICAP server when a cluster of squids is somehow getting the CONNECT tunnel 
established? 

Assume we leave out HAProxy and rely solely on squid - how would you 
approach this, and how many instances of squid would you deploy?

From my limited knowledge, the simplest approach is the FQDN proxy name being 
resolved to a number of IP addresses, with one squid running per IP address. 
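The round-robin idea in the last paragraph can be sketched as a zone fragment (names and addresses are illustrative):

```
; DNS zone sketch: one A record per squid instance behind the shared FQDN
proxy.example.com.  300  IN  A  192.0.2.11
proxy.example.com.  300  IN  A  192.0.2.12
proxy.example.com.  300  IN  A  192.0.2.13
```

Clients resolving proxy.example.com then spread across the instances as their resolvers rotate the record set.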


Best regards,
Rafael


[squid-users] Upload issue with squid 3.5.5

2015-06-09 Thread TarotApprentice
I have a number of machines running BOINC which are having issues uploading 
with one particular project (climateprediction.net); however, if I redirect the 
client to a Squid 2.7 server they work fine. It doesn't happen every time; some 
files upload just fine. They are usually 15 MB or 47 MB uploads.

Below are the HTTP debug messages from a BOINC client indicating it can't 
understand the encoding after it has made initial contact. This was taken when 
talking to Squid 3.5.5 (Win Server 2008).

Cheers,
MarkJ


09-06-2015 11:45 PM Started upload of 
hadam3p_anz_n9oe_2007_1_009869130_0_12.zip 
09-06-2015 11:45 PM [http] [ID#82] Info:  timeout on name lookup is not 
supported 
09-06-2015 11:45 PM [http] [ID#82] Info:  Hostname was found in DNS cache 
09-06-2015 11:45 PM [http] [ID#82] Info:Trying 192.168.0.*... 
09-06-2015 11:45 PM [http] [ID#82] Info:  Connected to abc123 (192.168.0.*) 
port 3128 (#132) 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: POST 
http://cpdn-upload4.oerc.ox.ac.uk/cgi-bin/file_upload_handler HTTP/1.0
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: User-Agent: BOINC 
client (windows_x86_64 7.6.2)
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Host: 
cpdn-upload4.oerc.ox.ac.uk
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Accept: */*
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Accept-Encoding: 
deflate, gzip
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Proxy-Connection: 
Keep-Alive
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Content-Type: 
application/x-www-form-urlencoded
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Accept-Language: en_AU
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: Content-Length: 294
 
09-06-2015 11:45 PM [http] [ID#82] Sent header to server: 
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: HTTP/1.1 200 OK
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Date: Tue, 09 
Jun 2015 13:45:10 GMT
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Server: 
Apache/2.4.7 (Ubuntu)
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Vary: 
Accept-Encoding
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: 
Content-Encoding: gzip
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Content-Length: 
75
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Content-Type: 
text/plain
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: X-Cache: MISS 
from abc123
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Via: 1.1 abc123 
(squid/3.5.5)
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: Connection: 
keep-alive
 
09-06-2015 11:45 PM [http] [ID#82] Received header from server: 
 
09-06-2015 11:45 PM [http] [ID#82] Info:  Error while processing content 
unencoding: invalid block type 
09-06-2015 11:45 PM [http] [ID#82] Info:  Closing connection 132 
09-06-2015 11:45 PM [http] HTTP error: Unrecognized or bad HTTP Content or 
Transfer-Encoding *


[squid-users] howto disable tls compression when using sslbump in squid-3.5.5 between squid and https webserver ?

2015-06-09 Thread Dieter Bloms
Hello,

I use squid 3.5.5 with the sslbump feature.
When I activate sslbump, the browser test on www.ssllabs.com
( https://www.ssllabs.com/ssltest/viewMyClient.html )
says TLS compression is activated and insecure.
I use OpenSSL 1.0.1m on my proxy server.

I tried some settings like:

sslproxy_flags No_Compression

but squid claims FATAL: Unknown ssl flag 'No_Compression'.

Is it possible to disable TLS compression for the connection from squid
to the webserver when sslbump is used?
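One thing worth trying: in Squid 3.5 the OpenSSL option flags are set with sslproxy_options rather than sslproxy_flags. Assuming your OpenSSL build exposes SSL_OP_NO_COMPRESSION, a sketch would be:

```
# squid.conf sketch: request the OpenSSL no-compression option for
# outgoing TLS connections (spelling follows Squid's option table)
sslproxy_options No_Compression
```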

Thank you very much.


-- 
Regards

  Dieter Bloms

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.


Re: [squid-users] ssl_crtd breaks after short time

2015-06-09 Thread Amos Jeffries
On 10/06/2015 2:51 a.m., Klavs Klavsen wrote:
 Amos Jeffries wrote on 06/09/2015 03:06 PM:

 The HTTP message log (access.log) only logs the HTTP(S) messages.
 Non-HTTP protocols are not logged.


 10.xx.131.244 - - [09/Jun/2015:08:40:15 +0200] CONNECT
 64.233.184.94:443 HTTP/1.1 www.google.dk - 200 20042
 TCP_TUNNEL:ORIGINAL_DST peek

 This got peeked then spliced (not decrypted). There are no decrypted
 messages to be logged or even to pass through http_access.

 I'm obviously not understanding something.. I would like squid to fake
 the certificate - and then when the clients sends an actual request -
 run that through http_access.. so I can match on urls..
 
 I'd rather not filter on only domain if possible..
 
 Is that not possible currently with squid?

That is the bump action and depends on what TLS details are presented
by client and server vs what you have configured to be done.

You have to first configure ssl_bump in a way that lets Squid receive
the clientHello message (step1 - peek) AND the serverHello message
(step2 - peek). Then you can use those cert details to bump (step3 -
bump).
The config is quite simple:
  ssl_bump peek all
  ssl_bump bump all
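The same flow can be written with explicit step ACLs (at_step is a standard ACL type in Squid 3.5; the acl names here are illustrative):

```
# peek at clientHello (step1) and serverHello (step2), then bump
acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1
ssl_bump peek step2
ssl_bump bump all
```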


But there are cases, like the client resuming a previous TLS session,
where there are no certificates involved. Squid cannot do anything, so it
automatically splices (3.5.4+ at least does). Or you may have configured
your Squid in a way that leaves no mutually supported ciphers.


It may just be your ssl_bump rules. But given that this is a Google
domain there is a strong chance that you are encountering one of those
special cases.

Amos



Re: [squid-users] Lag Time Displaying SVG files

2015-06-09 Thread JR Swartz
I traced the problem to the persistent_request_timeout directive.  Once I 
changed it from 2 minutes to 10 seconds, the issue was resolved.
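The change described corresponds to a one-line squid.conf edit (the directive's default is 2 minutes):

```
# squid.conf: wait at most 10 seconds for the next request on an
# idle persistent connection (default: 2 minutes)
persistent_request_timeout 10 seconds
```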



==
J.R. Swartz
Northern Computer Service, LLC
Owner

8821 Hwy 47 East
Woodruff, WI 54568
715.358.9806
Email:  jrswa...@ncswi.com
Web Site:  www.ncswi.com




-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Monday, June 08, 2015 6:45 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Lag Time Displaying SVG files

On 9/06/2015 9:21 a.m., JR Swartz wrote:
 We have a customer that uses Squid (3.x).  When viewing the 
 www.fmcdealer.com web site for their business, standard web pages load 
 as expected.  However, when drilling down into the wiring diagrams 
 (which are in SVG format), there is exactly a 2 minute (120 sec) delay 
 before the diagrams are displayed.
 
  
 
 We've tested this on several svg diagrams and, regardless of the size 
 of the diagram, the delay is exactly 2 minutes.
 
  
 
 Additionally, we've run tcpdump -vv and it appears there is no traffic 
 during this 2 minute period.
 
  
 
 We're hoping someone has seen this, or that the 120-second delay may 
 jog someone's memory about this issue.

If the end of the server reply message has not yet been received then this is 
not a Squid problem.


It may be server processing delays, but given that exact timing it is more 
likely to be a TCP timeout on the connection.


If the server emits a message without Content-Length or Transfer-Encoding 
headers, then end-of-message is the TCP connection close signal.
 If the server fails to close, Squid is left waiting for bytes that will never 
arrive, until the TCP networking stack times out and closes the connection. Then 
Squid relays the end-of-message signal to the client and everything works again.

Amos



Re: [squid-users] ssl_crtd breaks after short time

2015-06-09 Thread Klavs Klavsen
Amos Jeffries wrote on 2015-06-09 17:10:
[CUT]
 You have to first configure ssl_bump in a way that lets Squid receive
 the clientHello message (step1 - peek) AND the serverHello message
 (step2 - peek). Then you can use those cert details to bump (step3 -
 bump).
 The config is quite simple:
   ssl_bump peek all
   ssl_bump bump all
 
I have this:
ssl_bump peek step1 broken
ssl_bump peek step2 broken
ssl_bump splice broken
ssl_bump peek step1 all
ssl_bump peek step2 all
ssl_bump bump all

 
 But there are cases like the client is resuming a previous TLS session
 where there is no certificates involved. Squid cannot do anything, so it
 automatically splices (3.5.4+ at least do). Or if you have configured
 your Squid in a way that there are no mutually supported ciphers.
 

My client is curl.. I don't think that it's caching any TLS sessions.

 
 It may just be your ssl_bump rules. But given that this is a google
 domain there is a strong chance that you are encountering one of those
 special case.

I'd like squid to disallow queries where it cannot see what domain name
/ url is going to be accessed.

I'd like all GET/POST etc. requests to go through squid - so they are
controlled by the normal http_access rules as http (intercepted) is
currently.

This worked with 3.4.12 :( (but only for 30 minutes or less)

You saw my full config.. how is it supposed to look with 3.5.5, for this
to work as it did with 3.4.12 ?

sorry I'm a bit frustrated.. I can't seem to grasp what changed from
3.4.12 to 3.5.5, which means I suddenly can't filter https traffic
anymore :(

-- 
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer



Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Amos Jeffries
On 9/06/2015 7:15 p.m., Rafael Akchurin wrote:
 Hi Amos,
 
 snip
 
 There seems to be a bit of a myth going around about how HAProxy does
 load balancing. HAProxy is an HTTP layer proxy. Just like Squid.

 They both do the same things to received TCP connections. But HAProxy
 supports fewer HTTP features, so its somewhat simpler processing is also
 a bit faster when you want it to be a semi-dumb load balancer.
 
 We have somewhat recently added basic support for the PROXY protocol to 
 Squid. 
 So HAProxy can relay port 80 connections to Squid-3.5+ without
 processing them fully. However Squid does not yet support that on
 https_port, which means the TLS connections still won't have client IP
 details passed through.
 
 So what would be your proposition for the case of SSL Bump? 
 How do we get the connecting client IP and authenticated user name passed to the 
 ICAP server when a cluster of squids is somehow getting the CONNECT tunnel 
 established? 
 
 Assume we leave out HAProxy and rely solely on squid - how would you 
 approach this, and how many instances of squid would you deploy?
 
 From my limited knowledge, the simplest approach is the FQDN proxy name being 
 resolved to a number of IP addresses, with one squid running per IP address. 
 

Yes, it would seem to be the only form which meets all your criteria
too. Everything else runs up against the HTTPS brick wall.

Amos


Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Amos Jeffries
On 9/06/2015 9:36 p.m., Eliezer Croitoru wrote:
 Hey Amos,
 
 I haven't had the chance to follow the PROXY protocol advancements.
 Was there any fix for the PROXY protocol issue that I can test?

IIRC the issues we found are all resolved, though I've had no confirmation.

Amos


Re: [squid-users] ssl_crtd breaks after short time

2015-06-09 Thread Amos Jeffries
On 9/06/2015 6:44 p.m., Klavs Klavsen wrote:
 Hi,
 
 James Lay just replied to me with his current config.. (pretty much like
 what he posted), and it seems he does not even try to use http_access
 rules to filter on urls from https requests..
 
 @Amos: are you certain that there's not an error in how http_access
 rules are applied to bumped connections?

As far as I know it's working as designed.

You can enable debug_options 28,5 to see what access controls are
being run.
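That tracing option is a one-line squid.conf addition (debug section 28 covers access control processing):

```
# squid.conf: default verbosity everywhere, level 5 for ACL checks;
# output goes to cache.log
debug_options ALL,1 28,5
```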


 
 What I noted was:
 
 Instead of having:
 http_access allow CONNECT bumpedPorts

... which matches only the pre-bumping CONNECT requests.

 
 he has:
 http_access allow SSL_ports

... which matches anything going to port 443 etc. *bumped or not.*

 
 which somehow seems to work instead.

The working config when applied to HTTPS requests is equivalent to:

  http_access deny CONNECT !SSL_Bump
  http_access allow all


 
 He only uses http_access allow rules for http sites..

Yes, read that back to yourself.


 he filters https
 on domain only - using:
 acl allowed_https_sites ssl::server_name_regex
 /opt/etc/squid/http_url.txt
 ssl_bump bump allowed_https_sites
 ssl_bump terminate !allowed_https_sites
 
 in my access log - using James Lay's format - squid only logs CONNECT..
 so it seems it's not registering the step AFTER CONNECT as something
 separate - which would explain why it's not applying http_access
 filtering to it ?

The HTTP message log (access.log) only logs the HTTP(S) messages.
Non-HTTP protocols are not logged.

 
 10.xx.131.244 - - [09/Jun/2015:08:40:15 +0200] CONNECT
 64.233.184.94:443 HTTP/1.1 www.google.dk - 200 20042
 TCP_TUNNEL:ORIGINAL_DST peek

This got peeked then spliced (not decrypted). There are no decrypted
messages to be logged or even to pass through http_access.

Amos


[squid-users] Recommended multi-worker setup?

2015-06-09 Thread TarotApprentice
In the examples on the squid site it gives a multi-worker example using carp 
(http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster). Now that rock 
storage has been updated with 3.5.5 is that still the best approach?

I was thinking of a single rock cache so the workers could share it rather than 
the example which has a shared rock cache plus each worker having its own aufs 
cache. I need to cache windows updates which can get fairly big and would 
rather not duplicate them between workers.
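The single shared rock cache being considered might be sketched like this (workers share a rock cache_dir in SMP mode; the path and sizes are illustrative):

```
# squid.conf sketch: two workers sharing one rock cache
workers 2
cache_dir rock /var/cache/squid/rock 10240 max-size=32768
```

Note that max-size caps the per-object size; whether rock suits very large objects like Windows updates is exactly the open question here.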

Cheers,
MarkJ


Re: [squid-users] Recommended multi-worker setup?

2015-06-09 Thread Amos Jeffries
On 10/06/2015 12:35 p.m., TarotApprentice wrote:
 In the examples on the squid site it gives a multi-worker example using carp 
 (http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster). Now that rock 
 storage has been updated with 3.5.5 is that still the best approach?
 
 I was thinking of a single rock cache so the workers could share it rather 
 than the example which has a shared rock cache plus each worker having its 
 own aufs cache. I need to cache windows updates which can get fairly big and 
 would rather not duplicate them between workers.
 

It is still far better to use UFS caches for large objects.

Amos



Re: [squid-users] Upload issue with squid 3.5.5

2015-06-09 Thread Amos Jeffries
On 10/06/2015 1:11 p.m., TarotApprentice wrote:
 Yes I noticed that and assumed that was because 2.7 wasn't able to handle 
 HTTP 1.1 fully.
 
 I think I better keep the squid 2.7 machine around for a bit. It was due to 
 be retired as it's an old WinXP machine.
 

Maybe not.

I took a look through the BOINC code and found that your version will
only output HTTP/1.0, as I see in your logs, if it is forced to.
From the BOINC client release notes:

David  19 Dec 2006
 - core client: add http_1_0/ config flag for
 people with proxies that require HTTP 1.0.
 Curl's default is 1.1


Please try removing that config option from the boinc-client config file
for the Squid-3 traffic.
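If present, that flag lives in the BOINC client's cc_config.xml; a hedged sketch of disabling it (structure per BOINC's client configuration conventions; verify against your own file):

```
<!-- cc_config.xml sketch: 0 (or removing the tag) lets curl use HTTP/1.1 -->
<cc_config>
  <options>
    <http_1_0>0</http_1_0>
  </options>
</cc_config>
```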

Amos



Re: [squid-users] High-availability and load-balancing between N squid servers

2015-06-09 Thread Alex Samad
Hi

I run 2 squid boxes, and I use Pacemaker to float 2 VIPs between the 2 boxes.

Basically I just run squid on both, and I create a VIP resource that
tests whether squid is running before allocating the VIP.

This doesn't really give you load balancing, but it gives very good resilience.


Pacemaker and Linux also have the ability to do load balancing, using a
shared IP and a hashing algorithm, though I haven't tested that.


On 9 June 2015 at 22:51, Amos Jeffries squ...@treenet.co.nz wrote:
 On 9/06/2015 7:15 p.m., Rafael Akchurin wrote:
 Hi Amos,

 snip

 There seems to be a bit of a myth going around about how HAProxy does
 load balancing. HAProxy is an HTTP layer proxy. Just like Squid.

 They both do the same things to received TCP connections. But HAProxy
 supports fewer HTTP features, so its somewhat simpler processing is also
 a bit faster when you want it to be a semi-dumb load balancer.

 We have somewhat recently added basic support for the PROXY protocol to 
 Squid.
 So HAProxy can relay port 80 connections to Squid-3.5+ without
 processing them fully. However Squid does not yet support that on
 https_port, which means the TLS connections still won't have client IP
 details passed through.

 So what would be your proposition for the case of SSL Bump?
 How do we get the connecting client IP and authenticated user name passed to 
 the ICAP server when a cluster of squids is somehow getting the CONNECT tunnel 
 established?

 Assume we leave out HAProxy and rely solely on squid - how would you 
 approach this, and how many instances of squid would you deploy?

 From my limited knowledge, the simplest approach is the FQDN proxy name being 
 resolved to a number of IP addresses, with one squid running per IP address.


 Yes, it would seem to be the only form which meets all your criteria
 too. Everything else runs up against the HTTPS brick wall.

 Amos


[squid-users] Installing certificate on Android to use with SSL-bump

2015-06-09 Thread dkandle
I would like to be able to inspect traffic from my android device. I have a
transparent squid proxy working with SSL bump (using WiFi to get traffic
through my proxy server). Everything works fine as long as I go through a
browser. But I would like to see the other traffic which the OS and other
apps are sending. Squid uses a certificate I generated for the web sites and
I create an exception for those without issue.
If I install my certificate on the phone, will it then accept the certificate
when squid returns it during the TLS setup? To be clear, I see the phone use
port 443 to set up a secure session. However it rejects the certificate (as
it should) and terminates the session with no data being passed. I can
install my certificate on the phone, but will the Android OS use that
certificate for all services or only for browser sessions? If not, is there
some other way I can get my fake certificate accepted for all sessions in
which it is used?
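On the certificate-file side, Android's installer generally accepts DER-encoded certificates; a sketch of the conversion (file names are illustrative, and the first command merely fabricates a throwaway CA for demonstration — use your real squid CA instead):

```shell
# fabricate a throwaway CA cert for illustration (substitute your real
# squid CA), then convert PEM -> DER for the Android installer
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Example Bump CA" \
    -keyout myCA.key -out myCA.pem -days 365
openssl x509 -in myCA.pem -outform DER -out myCA.cer
```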


