Re: [squid-users] c-icap documentation getting stuck

2019-12-21 Thread Alex Crow




I don't get any errors, but when I run the command below I get warnings:

 /usr/local/bin/c-icap
WARNING Bad configuration keyword: enable_libarchive 0
WARNING Bad configuration keyword: banmaxsize 2M

thanks,
rob

You should be asking these questions on whatever resources c-icap 
provides for that purpose, e.g. their GitHub issues page. c-icap is not 
related in any way to the Squid project.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] c-icap documentation getting stuck

2019-12-21 Thread Alex Crow

robert,

I'd go the eCAP way if I were you - no daemons to set up, just a library. 
c-icap has always been an issue, as distro packages have never really 
acknowledged its existence in terms of permissions.


The eCAP way avoids all of that mess entirely.

http://www.e-cap.org/docs/

http://www.e-cap.org/downloads/

https://wiki.squid-cache.org/Features/eCAP




Re: [squid-users] cant download microsoft cert file

2019-12-16 Thread Alex Crow


On 16/12/2019 09:10, robert k Wild wrote:

Would this work as well?

refresh_pattern -i /etc/squid/wu.txt/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims


And in wu.txt

.microsoft.com 
.windows.com 
.windowsupdate.com 

Exactly like my dstdomain




No, because /etc/squid/wu.txt would be taken literally as part of the 
URL. And I don't think filenames are supported by that directive anyway.


Alex



Re: [squid-users] cant download microsoft cert file

2019-12-16 Thread Alex Crow

On 16/12/2019 08:06, robert k Wild wrote:

How can I make a pattern that matches multiple domains please Amos?


> refresh_pattern -i .microsoft.com .windows.com .windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims


That's not really a subject for this list - search online for "regex" 
and you will see multiple tutorials about it.


You use a syntax like 
"(.microsoft.com|.windows.com|.windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|[ap]sf|wm[v|a]|dat|zip)|foo.com)"


e.g. (x|y|z(a|b)) would match x, y, za and zb.
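A quick way to sanity-check that alternation rule outside Squid (grep -E understands the same extended-regex alternation; the test strings are made up):

```shell
# (x|y|z(a|b)) matches x, y, za and zb, but not q:
printf '%s\n' x y za zb q | grep -E '^(x|y|z(a|b))$'
```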

Cheers

Alex



Re: [squid-users] Digicert replacing couple root CA, why it wasn't mentioned here?

2019-01-17 Thread Alex Crow
It was all over the IT news sites I read (Register, Slashdot, etc). 
Changed all our Thawte certs from Symantec to Digicert a few months ago. 
Pretty painless actually.


Alex

On 17/01/2019 17:03, Eliezer Croitoru wrote:


I noticed that there was a change in the RootCA world:

https://www.digicert.com/replace-your-symantec-ssl-tls-certificates/

Did anyone else know about it?

Thanks,

Eliezer




--
Insert pointless drivel here.


Re: [squid-users] Is this the next step of SSL encryption? Fwd: Encrypted SNI

2018-10-19 Thread Alex Crow



... until the browser starts using DNS over HTTPS (with a pinned
certificate of the "resolving" HTTPS server)?
  Alex.


It is relatively easy to block DNS over HTTPS and I think there will 
be demand for that.
And I predict that Squid will have a feature to selectively block 
connections with ESNI to force clients to use the plain text SNI.


Marcus



I can still see endpoint security companies raking it in; any of those 
fallbacks could be disabled by the browsers.


We're going to have to make sure that the endpoint solution is able to 
see all content before it is rendered or interpreted in the browser too.


The problem is that the whole SSL/TLS trust management system is 
fundamentally broken and I can't see that changing soon. PGP's model was 
great in theory (web of trust) but most people simply don't care who 
sends them what and can't be bothered to complicate their lives any 
more. And why should they? If their bank site works, Farcebook works and 
Hotmail works, why worry? We've built an entire social structure on two 
basic principles - "if I've done nothing wrong..." and "who'd be 
interested in my data?".




--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
This email is not intended to, nor should it be taken to, constitute advice.
The information provided is correct to our knowledge & belief and must not
be used as a substitute for obtaining tax, regulatory, investment, legal or
any other appropriate advice.

"Transact" is operated by Integrated Financial Arrangements Ltd.
29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300.
(Registered office: as above; Registered in England and Wales under
number: 3727592). Authorised and regulated by the Financial Conduct
Authority (entered on the Financial Services Register; no. 190856).


Re: [squid-users] want to change squid name

2018-10-03 Thread Alex Crow

Hi Ahmad,

I still don't understand properly. Do you want to run Squid as your own 
nonprivileged user, "ahmad" or "stinger", instead of the "squid" or 
"webproxy" user that is usual in distros? That is easy, but trying 
to sed "squid" to another name throughout the codebase is likely to fail - 
imagine trying to do that with the Linux kernel!


If that is the case, just set the user and group in the squid.conf and 
make sure that said user/group has the right privileges to access the 
various directories and files it needs. No problem at all, I've done it 
myself.
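For the record, that is a two-line change in squid.conf (the user/group names here are just examples):

```
cache_effective_user stinger
cache_effective_group stinger
```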


There should be no need to edit all the source code and recompile for 
this. As for "hiding" your proxy use, there are other parameters in 
squid.conf that remove HTTP headers (but by doing so, be aware it's an 
RFC violation, and hiding that your clients are behind a proxy can cause 
lots of issues, no. 1 being that Google will keep telling you that 
you're launching a DoS attack). It may also have legal implications in 
some countries, especially if you're forwarding for devices/clients not 
owned by you or your organisation.


I'm just curious as to why you have so little detail of what you need in 
this request when other posts you have made have supplied much more 
detail, logs, etc.


I'm sorry if this is all down to a language barrier and English is not 
your first language. I've posted to German mailing lists when I'd only 
done two years of it at school, and it's really hard!


I hope this helps you,

Best regards

Alex




Re: [squid-users] want to change squid name

2018-10-02 Thread Alex Crow

What about this?

http://www.squid-cache.org/Doc/config/via/
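A hypothetical squid.conf fragment using that directive (plus its companion for hiding the client address; see the linked docs for defaults and caveats):

```
via off
forwarded_for delete
```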



we just don't understand the reason you are asking for this.

As was already mentioned (IIRC), technically you can change the name
"squid" to something else, but it is not supported (which means there's
no standard way to do that) and you may expect problems (and we don't
even know what kind of problems).




Re: [squid-users] Using CA signed certificate for SSL bump

2018-09-05 Thread Alex Crow
You can set up your own internal CA. You then have the CA key (so you can 
generate certificates for any domain) and install the CA public 
certificate on all client machines.



That CA can be anything from a local CA on the squid box, using a 
central VM with something like XCA installed, all the way to an 
enterprise HSM.



But you must have the CA key. There is no way a commercial CA would give 
you a universal signing key.
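To illustrate the simplest variant (a local CA on the Squid box): a sketch using OpenSSL, where the file names, CN and lifetime are examples only, and the squid.conf line in the comment follows the cert=/key= style used elsewhere in this thread:

```shell
# Generate a private CA key and a self-signed CA certificate for bumping:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/CN=My Internal Proxy CA" \
  -keyout myCA.key -out myCA.pem

# squid.conf would then reference the pair on the bumping port, e.g.:
#   https_port 3129 ssl-bump cert=/etc/squid/myCA.pem key=/etc/squid/myCA.key
# myCA.pem is what you distribute to the clients' trust stores.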



Alex


On 05/09/18 08:02, Arshad Ansari wrote:


Hi All,

I have set up Squid 4.2 for forward proxy and caching. It is working 
fine when I am using a self-signed certificate for SSL bump.


However, our security requirement is to use only CA signed certificate 
and not self-signed certificate.


I have tried various options like using Https and intercept but 
nothing seems to be working.


My question is: does SSL bump work with a CA-signed certificate?

Regards,
Arshad






Re: [squid-users] simple question Installed squid right now all internet access is blocked

2018-08-16 Thread Alex Crow
If it's an internal/RFC1918 IP, then telling the list makes no difference 
to your security. If it's a public IP address, then I hope you have your 
Squid firewalled off from the internet.


If you at least paste your access.log and cache.log it will help.

Alex


On 16/08/18 12:29, Oldman wrote:

You wanted to know my server IP - did you expect me to publish this
online?

I chose to believe you are wasting my time :)

I am sorry, I do not want to be rude, but you are wasting my time.







Re: [squid-users] NgTech repo new service: fastest.ngtech.co.il/repo/

2018-07-17 Thread Alex Crow

On 16/07/18 00:17, Eliezer Croitoru wrote:


Hey Squid-Users,

I am running a trial period to see how it works for those who need it.

The RPM repository is sitting at:

http://fastest.ngtech.co.il/repo/

and will give faster speeds, i.e. 10Mbps++, compared to the local server, 
which has only 1Mbps upload with QoS on it.


Please use it with care, since the service is there for you and those 
who need it.


If the service bandwidth is abused, I will take it down.

Thanks,

Eliezer



Thanks Eliezer - *much* faster - I was having problems just getting the 
metadata from the old repo.


Alex





Re: [squid-users] Question about traffic calculate

2018-06-08 Thread Alex Crow



On 08/06/18 17:29, Amos Jeffries wrote:

On 09/06/18 02:56, Tiraen wrote:

Small clarification

If the normal behavior of the proxy server described above is correct,
then maybe there are other methods of gathering information on traffic
in online mode?

What is "online mode" ?


SNMP is built into Squid. You can use it in conjunction with net-snmp 
proxy mode to gather far more granular performance/caching/response-time/
per-IP stats than squidclient or the logs, if that's what you're after.
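For anyone following along, a hypothetical squid.conf fragment enabling the built-in SNMP agent (the community string and port are examples):

```
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic localhost
snmp_access deny all
```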





[squid-users] Sibling cache with ssl peek/splice/bump?

2018-05-15 Thread Alex Crow

Hi list,

Is it currently possible in v4 with bumping to have a cache_peer setup 
so that https:// resources can be fetched from a peer if they are 
available there?


Many thanks

Alex



Re: [squid-users] Certificate transparency: problem for ssl-bumping, no effect, or?

2018-04-13 Thread Alex Crow



Unless the protocol design changes to expose full URLs and/or MIME types,
nothing will replace Squid Bumping.

That being said, we are headed to the vortex by 2018.05.01. Let's drown
together, while we yell and curse at Google!

MK





Erm, can someone elucidate the issue here? I can't see anything about this 
in the last year of mails from this list ;-)


Alex




Re: [squid-users] Assertion failed on Squid 4 when peer restarted.

2018-03-28 Thread Alex Crow

On 28/03/18 02:22, Amos Jeffries wrote:

On 28/03/18 03:24, Alex Crow wrote:

I have a squid 4.0.22 running peered with a 3.5.24 proxy. The latter
machine stopped responding and I had to reboot it, and then the 4.0.22
one crashed. Here's a log snippet:

2018/03/27 15:01:48 kid1| WARNING: failed to unpack metadata because
store entry metadata is too big
2018/03/27 15:04:09 kid1| Detected DEAD Sibling: webproxy.ifa.net
2018/03/27 15:04:09 kid1| Detected REVIVED Sibling: webproxy.ifa.net
2018/03/27 15:06:01 kid1| Detected DEAD Sibling: webproxy.ifa.net
2018/03/27 15:06:01 kid1| Detected REVIVED Sibling: webproxy.ifa.net
2018/03/27 15:06:44 kid1| Error negotiating SSL connection on FD 216:
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 199:
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 169:
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 29:
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 188:
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 190:
(104) Connection reset by peer
2018/03/27 15:07:12 kid1| Error negotiating SSL connection on FD 912:
(104) Connection reset by peer
2018/03/27 15:07:13 kid1| Error negotiating SSL connection on FD 514:
(104) Connection reset by peer
2018/03/27 15:07:26 kid1| ERROR: negotiating TLS on FD 236:
error::lib(0):func(0):reason(0) (5/-1/104)

2018/03/27 15:07:41 kid1| Error negotiating SSL connection on FD 129:
(104) Connection reset by peer
2018/03/27 15:08:17 kid1| assertion failed: store.cc:1690: "!mem_obj"

Any ideas?


First idea is to check bugzilla. I see nothing there.

Second is to upgrade to the latest v4 beta release (4.0.24 right now).

Third idea is to report to bugzilla or ask on squid-dev.

Amos


I'll probably upgrade and, if we still see it, raise a BZ.

Cheers

Alex




[squid-users] Assertion failed on Squid 4 when peer restarted.

2018-03-27 Thread Alex Crow
I have a squid 4.0.22 running peered with a 3.5.24 proxy. The latter 
machine stopped responding and I had to reboot it, and then the 4.0.22 
one crashed. Here's a log snippet:


2018/03/27 15:01:48 kid1| WARNING: failed to unpack metadata because 
store entry metadata is too big

2018/03/27 15:04:09 kid1| Detected DEAD Sibling: webproxy.ifa.net
2018/03/27 15:04:09 kid1| Detected REVIVED Sibling: webproxy.ifa.net
2018/03/27 15:06:01 kid1| Detected DEAD Sibling: webproxy.ifa.net
2018/03/27 15:06:01 kid1| Detected REVIVED Sibling: webproxy.ifa.net
2018/03/27 15:06:44 kid1| Error negotiating SSL connection on FD 216: 
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 199: 
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 169: 
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 29: 
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 188: 
(104) Connection reset by peer
2018/03/27 15:06:57 kid1| Error negotiating SSL connection on FD 190: 
(104) Connection reset by peer
2018/03/27 15:07:12 kid1| Error negotiating SSL connection on FD 912: 
(104) Connection reset by peer
2018/03/27 15:07:13 kid1| Error negotiating SSL connection on FD 514: 
(104) Connection reset by peer
2018/03/27 15:07:26 kid1| ERROR: negotiating TLS on FD 236: 
error::lib(0):func(0):reason(0) (5/-1/104)


2018/03/27 15:07:41 kid1| Error negotiating SSL connection on FD 129: 
(104) Connection reset by peer

2018/03/27 15:08:17 kid1| assertion failed: store.cc:1690: "!mem_obj"

Any ideas?

Thanks,

Alex



Re: [squid-users] Allow some domains to bypass Squid

2018-03-11 Thread Alex Crow




The alternative for ssl-bump is the splice action. For that you only
need to know the server names each company uses.




OP,

It would be a lot easier to just create exceptions on the Squid device 
for sites where bumping doesn't work, which causes them to be tunnelled 
or spliced rather than bumped. You can then at least use dstdomain or 
ssl::server_name rules. dstdomain will let you tunnel or splice, whereas 
with ssl::server_name you will only be able to splice, as an SSL 
connection must already have been started AFAIK. Your firewall will 
probably need restarting every time one of the IP addresses behind those 
hostnames changes. Squid will at least do a lookup on every request for 
dstdomain (you need a good DNS server nearby or on the Squid box).
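A hypothetical squid.conf sketch of that exception approach (the domain names are placeholders and the ACL names are my own):

```
acl no_bump dstdomain .pinned-app.example .bank.example
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice no_bump
ssl_bump bump all
```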


BTW, peek/splice/bump is not just install and forget. It needs 
maintenance and care in deployment.


Adding transparent into the mix makes it more difficult, as I can see 
you have found.


Try to keep the architecture as simple as you can and use each part to 
its best ability. Simple firewalls using hostnames for rules are a path 
to severe pain where round-robin DNS is in place. It might be OK with a 
big, expensive FW appliance that has the ability to do a DNS lookup for 
every connection.


Cheers

Alex




Re: [squid-users] I can't understand the SSL connectios interception concept in explicit mode

2018-02-02 Thread Alex Crow

On 02/02/18 15:12, Roberto Carna wrote:

OK Matus, now I understandbut let me ask one more question:

In explicit mode, is it possible that a given person with Squid
advanced knowledge can see the plain text of the traffic? Because if
this person is the admin of the proxy server, I think it may be a way
to read the plain content of the connection user-remote server.

Thanks a lot again !!!


Unless you are using ssl-bump/peek-and-splice (which will show a warning 
in the browser if Squid's CA is not installed in its list of 
authorities), the traffic is tunnelled through Squid still encrypted. You 
can't see anything but the domain part of the URL.


If you are bumping, and have installed the CA into browsers, then of 
course it's possible for a proxy admin to see the plaintext.


Cheers

Alex


Re: [squid-users] Squid 4 and missing intermediate certs

2018-01-29 Thread Alex Crow

On 26/01/18 17:50, Alex Rousskov wrote:

On 01/26/2018 02:30 AM, Alex Crow wrote:


I've just set up a new SSL interception proxy using peek/splice/bump
using squid 4.0.22 and I'm getting SSL errors on some site indicating
missing intermediate certs as described here:

https://blog.diladele.com/2015/04/21/fixing-x509_v_err_unable_to_get_issuer_cert_locally-on-ssl-bumping-squid/

I have read the wiki and I see this on the SslBumpExplicit page:

"Squid-4 <https://wiki.squid-cache.org/Squid-4> is capable of
downloading missing intermediate CA certificates, like popular browsers do."

However I'm finding that I have to follow the procedure in the diladele
article and manually install the intermediate certs into the PKI trust
to work around this.


Several cases are possible here:

1. Squid is missing the root certificate used by the origin server.
Neither Squid nor browsers can fetch root certificates automatically
(for hopefully obvious reasons).

2. Squid is missing an intermediate certificate used by the origin
server, and the origin server provided no instructions on how to fetch
that missing certificate automatically. Neither Squid (for sure) nor
browsers (AFAIK) can fetch missing intermediate certificates
automatically if they are not given origin server instructions of where
to get them. Those instructions are usually given as various extension
fields in signed certificates.

3. Squid is missing an intermediate certificate used by the origin
server, the origin server provided instructions on how to fetch that
missing certificate automatically, but Squid does not understand/support
those instructions. There are several instruction formats/variants, and
Squid does not support some of them. Please consider adding that support
to Squid (requires writing code or sponsoring development).

4. Squid is missing an intermediate certificate used by the origin
server, the origin server provided instructions on how to fetch that
missing certificate automatically, Squid followed those instructions,
but something went wrong. Study detailed Squid debugging logs or post
them for analysis by others.

You need to study each error to understand which case applies to it.

To make matters worse, a combination of #1 and other cases is possible:
Sometimes, automatically fetching a missing certificate leads to
certificate validation problems that could have been avoided if Squid
had the right (and different) trusted certificate in the first place:
https://github.com/squid-cache/squid/commit/9ef7d9d5ddef54283cea4f1fdb7b3bbc1715755c


I doubt Squid logs enough information (by default) to quickly and easily
distinguish the four cases for a given error -- you may need to study
the origin server certificates and Squid logs. For example, #4 should
manifest itself as access.log errors associated with failed certificate
fetching requests.


As the solution for #1-2 or workaround for #3-4, if you trust the
missing certificate, manually add it to your trust store (which is what
you were doing).


HTH,

Alex.


Thanks very much Alex. I thought it might be something like that. I'm 
guessing it's most likely #3 or #4, as the site works fine direct from 
the browser.


Cheers

Alex


Re: [squid-users] squid asking for authentication repeatedly

2017-12-11 Thread Alex Crow
Firefox is not great at auth; Chrome works better IMHO. FF seems OK with 
Digest, i.e. AD.


On 11 Dec 2017, 22:05, at 22:05, Paul Hackmann  wrote:
>Has anyone had the instance where the proxy will ask the user to
>authenticate several times as they are browsing the web?  I have been
>seeing this as a random occurrence for some of the users on the server.
> It
>will pop up a login prompt in the browser repeatedly for a minute or
>two.
>Then it will settle down and be fine for hours.  I'm trying to track it
>down, but I can't find anything amiss.  The access logs haven't shown
>anything unusual.  I am using basic authentication with the proxy
>settings
>set in firefox.  Is this something that a spike in traffic on the
>server
>could cause?  Anybody have any suggestions?  The server is linux based.
>
>PH
>
>
>
>


Re: [squid-users] https://wiki.squid-cache.org provides invalid certificate chain ...

2017-11-18 Thread Alex Crow



On 18/11/17 12:56, Walter H. wrote:

On 18.11.2017 13:51, Walter H. wrote:

Hello,

still certificate issues: missing intermediate certificate

Greetings,
Walter

@Amos:


 There is
 *no* chain. Our cert is directly signed by the LetsEncrypt CA.
 Amos


that's wrong; LetsEncrypt is only an intermediate, and it MUST be served 
by the server,

as it isn't in any trust store by default.




Yep, I use LE and it has a root CA and an intermediate - mine has:

DST Root CA X3 -> Let's Encrypt Authority X3 -> .

Cheers

Alex




Re: [squid-users] Website pointed to 127.0.0.1

2017-09-15 Thread Alex Crow


On 15/09/17 13:58, Matheus Fernandes wrote:

Hello!
I have an FQDN that points to 127.0.0.1; when I try to access it 
through Squid, I get an error. I need it to be processed on the same 
machine that made the request, and not on the Squid server. I tried using 
the always_direct directive, but Squid always tries to process it on the 
server side.


This issue is the same presented at 
http://lists.squid-cache.org/pipermail/squid-users/2015-May/003477.html
except that in my case I have hundreds of computers using Squid, 
making it a lot more difficult to put an exception in every single browser.


Is there any way around this?

Thanks


WPAD/PAC auto-config for your browser?

http://findproxyforurl.com/wpad-introduction/ 
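A PAC file is just a JavaScript function the browser calls for every request. A minimal sketch (the proxy hostname below is a placeholder; real deployments usually add more rules):

```javascript
// Minimal PAC file sketch. The browser calls FindProxyForURL() for
// every request; the proxy hostname here is purely an example.
function FindProxyForURL(url, host) {
  // Dotless hostnames are almost certainly internal: go direct
  if (host.indexOf(".") === -1)
    return "DIRECT";
  // Everything else via the proxy, falling back to direct if it is down
  return "PROXY squidproxy.example.com:3128; DIRECT";
}
```

Serve it as wpad.dat from a host named wpad in your search domain (for WPAD auto-discovery), or point browsers at its URL directly.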





Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-17 Thread Alex Crow


On 17/11/16 18:11, Patrick Chemla wrote:
>
> Hi Alex, sorry for disturbing, but it works with
>
> https_port 5.39.105.241:443 accel defaultsite=www.sempli.com
> cert=/etc/squid/ssl/sempli.com.crt
> key=/etc/squid/ssl/sempli.com.key
>
> Many, many, many Thanks for valuable help.
>
> Patrick

No problem.

I think we all tend to overthink things until we've got used to them.
Glad you got it sorted.

Alex





Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-16 Thread Alex Crow


On 16/11/16 17:33, Patrick Chemla wrote:
> Thanks for your answers, I am not doing anything illegal, I am trying to
> build a performant platform.
> 
> I have a big server running about 10 different websites.
> 
> I have on this server virtual machines, each specialized for one-some
> websites, and squid help me to send the traffic to the destination
> website on the internal VM according to the URL.
> 
> Some VMs are paired, so squid will loadbalance the traffic on group of
> VMs according to the URL/acls.
> 
> All this works in HTTP, thanks to Amos advices few weeks ago.
> 
> Now, I need to set SSL traffic, and because the domains are different I
> need to use different IPs:443 to be able to use different certificates.
> 
> I tried many times in the past to make squid work with SSL and never
> succeeded because of the many options, and this question: should the
> traffic between squid and the backend be SSL? If yes, it's OK for me.
> nothing illegal.
> 
> The second question: How to set up the SSL link on squid getting the SSL
> request and sending to the backend. Actually the backend can handle SSL
> traffic, it's OK for me if I find the way to make squid handle the
> traffic, according to the acls. squid must decrypt the request, compute
> the acls, then re-crypt to send to the backend.
> 
> The reason I asked not to re-encrypt is performance. All this
> is on the same server, from the host to the VMs, and decrypt,
> re-encrypt, then decrypt again will be resource-consuming. But I can do
> it like that.
> 
> Now, do you have any clear howto that will help? I found many on
> Google and none gave me a working solution.
> 
> The other question is about Trusted Certificates. We have on the
> websites trusted certificates. Should we use the same on the squid?
> 
> Thanks for appeciate help
> 
> Patrick
> 
> 

You are using a reverse proxy/web accelerator setup. Nothing you do
there will be illegal if you're using it for your own servers! You
should be able to use HTTP to the backend and just offer HTTPS from
squid. This will avoid loading the backend with encryption cycles. You
don't need any certificate generation as AFAIK you already have all the
certs you need.

See:

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

for starters. You can adapt the wildcard example; if you have specific
certs for each domain, just listen on a different IP for each domain and
set up multiple https_port with a different listening IP for each site.
If you have a wildcard cert, ie *.mydomain.com, follow it directly.
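For reference, the multi-site layout described above might look something like this in squid.conf (all IPs, domain names, and file paths here are placeholders, not your real values):

```
# One https_port per public IP, each with that site's own certificate
https_port 192.0.2.10:443 accel defaultsite=www.site-a.example \
    cert=/etc/squid/ssl/site-a.crt key=/etc/squid/ssl/site-a.key
https_port 192.0.2.11:443 accel defaultsite=www.site-b.example \
    cert=/etc/squid/ssl/site-b.crt key=/etc/squid/ssl/site-b.key

# Plain-HTTP origin servers on the internal VMs; Squid terminates TLS
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=site_a
cache_peer 10.0.0.11 parent 80 0 no-query originserver name=site_b

# Route each domain to its backend
acl to_site_a dstdomain www.site-a.example
acl to_site_b dstdomain www.site-b.example
cache_peer_access site_a allow to_site_a
cache_peer_access site_b allow to_site_b
http_access allow to_site_a
http_access allow to_site_b
```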

Here's a couple more:

http://wiki.univention.com/index.php?title=Cool_Solution_-_Squid_as_Reverse_SSL_Proxy

(I found the above with a simple google for "squid reverse ssl proxy".
Google is your friend here... )

http://www.squid-cache.org/Doc/config/https_port/

That's as far as my knowledge goes on reverse in Squid, at my site we
use nginx. But AFAIK if you're doing what I think you're doing that
should be enough. Squid does have a lot of config parameters, but then
so does any other fully capable proxy server. Just focus on the parts
you need for your role and it will be much easier. Specifically ignore
bump/peek+splice, it's just for forward proxy.

Alex


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-15 Thread Alex Crow

On 15/11/16 16:22, Yuri Voinov wrote:



You can if you have control over the clients, ie install your CA into
the browser/OS.

... and this can be illegal ;)



YMMV (depending on where you live/work)!


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-15 Thread Alex Crow



On 15/11/16 14:28, Yuri Voinov wrote:



So, you can't do SSL bump without users notification.


You can if you have control over the clients, ie install your CA into 
the browser/OS.


Alex


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-15 Thread Alex Crow

On 15/11/16 14:22, Sergio Belkin wrote:

Hi,

When using something like that:

http_port 8080 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB 
cert=/home/proxy/ssl_cert/example.com.cert 
key=/home/proxy/ssl_cert/example.com.private



Is possible to use a certificate generated by a trusted CA?


Thanks in advance!
--
--
Sergio Belkin
LPIC-2 Certified - http://www.lpi.org


If you mean a normal commercial CA, then no, because you would need the 
CA's signing key, which I very much doubt they would give you, and your 
cert would need to have signing capability, which it won't.
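What you can do instead is generate your own signing CA and install it on every client. A sketch with example filenames (standard openssl commands; nothing here involves a commercial CA):

```shell
# Create a private CA key and self-signed CA certificate (example names)
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 \
  -subj "/O=Example Org/CN=Squid Bump CA" \
  -keyout squid-ca.key -out squid-ca.crt

# Combined PEM for squid's https_port cert= option
cat squid-ca.key squid-ca.crt > squid-ca.pem

# DER copy of the public cert, for importing into client trust stores
openssl x509 -in squid-ca.crt -outform DER -out squid-ca.der
```

The DER copy is what most browser/OS trust-store import dialogs expect.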


Cheers

Alex




Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Alex Crow

On 24/10/16 11:26, Yuri wrote:


No, Amos, I'm not trolling you or the other developers.

I just really do not understand why there is a caching proxy which can 
cache almost nothing in the modern world, and which in the vanilla 
version gives a maximum of 10-30% byte hit. For me personally, this 
needs no further justification or explanation - the results speak for 
themselves.

I cannot explain to management why there is no result by referring to 
your explanations or descriptions of standards. I think that's 
understandable.

At present, obtaining any acceptable result takes a hell of a lot of 
effort. Maintaining such an installation is not easy.

And since the caching level falls with every new version - which is 
very easy to check - it is very difficult to explain to management, is 
it not?

It's not my imagination - this is confirmed by dozens of Squid 
administrators, some of them personally known to me. Therefore, I would 
not accept any claim that I am lying or deliberately leading anyone 
astray.




I'd rather have to explain to management about a low hitrate than have 
to explain why they weren't seeing the content they expected to see, or 
that some vital transaction did not go through, but, hey look here, 
we're saving 80% of web traffic bill!





Re: [squid-users] Force DNS queries over TCP?

2016-06-30 Thread Alex Crow
Packt Publishing has a book about FreeSWAN (don't use that) which is
almost all applicable to LibreSWAN (do use this, it's a newer fork).

Easiest is to set up a tunnel with PSKs, more secure is with RSA keys or
X509 certs.
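A minimal LibreSWAN PSK tunnel sketch (hostnames, IDs, and the secret are placeholders):

```
# /etc/ipsec.d/dns-vps.conf -- host-to-host tunnel, PSK auth
conn dns-vps
    left=%defaultroute
    leftid=@home
    right=vps.example.net
    rightid=@vps
    authby=secret
    auto=start

# /etc/ipsec.secrets
# @home @vps : PSK "use-a-long-random-secret"
```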

Alex

On 30/06/16 19:20, Chris Horry wrote:
>
> On 06/30/2016 13:34, Alex Crow wrote:
>> I'd suggest changing ISP, as this practice is
>>
>> a) a violation of trust, forcing you to use a potentially compromised
>> resource you have no control over
>> b) a clear violation of net-neutrality
>> c) a violation of standards (as it's probably one of those that instead
>> of returning NXDOMAIN as required sends you to an advertising page.
>> )
> Tell me about it.  My ISP and I are having a pitched battle about it
> now.  Unfortunately my options are limited in my current area but at
> least it's not Comcast!
>
>> I'm pretty sure you /can/ configure BIND to work like that. I should
>> imagine you could set up forwarders to TCP-based DNS servers.
>>
>> The other option is to get a DNS server set up on a VPS and tunnel your
>> requests to it via IPSEC.
> Sounds like a good idea, time to learn IPSEC!
>
> Thanks,
>
> Chris
>
>
>


Re: [squid-users] Force DNS queries over TCP?

2016-06-30 Thread Alex Crow
I'd suggest changing ISP, as this practice is

a) a violation of trust, forcing you to use a potentially compromised
resource you have no control over
b) a clear violation of net-neutrality
c) a violation of standards (as it's probably one of those that, instead
of returning NXDOMAIN as required, sends you to an advertising page).
I'm pretty sure you /can/ configure BIND to work like that. I should
imagine you could set up forwarders to TCP-based DNS servers.

The other option is to get a DNS server set up on a VPS and tunnel your
requests to it via IPSEC.
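If BIND won't do it, Unbound can: it has a server option that forces all upstream queries over TCP. A minimal sketch (the forwarder addresses are just examples):

```
# /etc/unbound/unbound.conf
server:
    tcp-upstream: yes          # send all upstream queries over TCP
forward-zone:
    name: "."
    forward-addr: 8.8.8.8
    forward-addr: 8.8.4.4
```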

Alex

On 30/06/16 18:21, Chris Horry wrote:
> Hello,
>
> My ISP have started forcing DNS queries to pass through their own DNS
> server, which appears to have many issues (can't resolve twitter.com for
> one).  I won't bore the list with my conversations with them over that part.
>
> They are not actively blocking TCP DNS queries so I have a workaround.
>
> Recognising that DNS over TCP is not an ideal solution
>
> 1. Can Squid be configured to use TCP by default for DNS inquiries?  If
> not consider this a feature request :)
> 2. Is there a DNS caching server that can do this instead (BIND9 doesn't
> seem to have it as an option)
>
> Any help appreciated.
>
> Thanks,
>
> Chris
>
>
>


Re: [squid-users] SSL Bump with valid CA

2016-06-16 Thread Alex Crow



> 
> Now i need to try to configurate squid with a non self-signed certificate
> 

This is impossible, as you don't have access to the CA's signing key,
for a very good reason (you could create certs for any site in the world
and they would be trusted by any browser that trusts StartSSL's CA).

You can ask them for it and see what they say, but be prepared for a
rude response!

Cheers

Alex




Re: [squid-users] SSL certifcate on android device not working

2016-05-06 Thread Alex Crow

On 06/05/16 14:09, Reet Vyas wrote:

Hi

I have squid ssl bump working, but when I added squid.crt to my 
Android device it did not work, though it does work with an iPhone 
because iOS has a certificate installer app. I don't know the exact 
issue. With squid.crt installed, the internet works in mobile browsers 
but not in apps like YouTube, Instagram, etc.


Please let me know what the issue is with certificate installation on 
Android devices




I think the problem is that CA cert management on Android simply 
sucks. That is my experience, YMMV.


:-)

Alex



Re: [squid-users] Squid 3.5.5 CentOS RPMs release

2015-06-30 Thread Alex Crow

Thanks for this Eliezer - however I can't rebuild the SRPM on latest CentOS:

configure: Authentication support enabled: yes
checking for ldap.h... (cached) no
checking winldap.h usability... no
checking winldap.h presence... no
checking for winldap.h... no
configure: error: Basic auth helper LDAP ... found but cannot be built
error: Bad exit status from /var/tmp/rpm-tmp.EEzUBx (%build)

And I definitely have openldap-devel installed. I'm not sure where the 
(cached) comes from, but it's the same for both 3.4.4 and 3.4.5 SRPMs.


Best regards

Alex

On 29/06/15 00:22, Alex Samad wrote:

Thanks

On 29 June 2015 at 00:59, Eliezer Croitoru elie...@ngtech.co.il wrote:

Hey list,

I have created the new RPMs for CentOS 6 and 7, and I also created the
package for OracleLinux (for which it was very annoying to discover that
the download file from Oracle was not an ISO but something else)

The 3.5.5 and 3.5.4 was published here:
http://www1.ngtech.co.il/wpe/?p=90

Eliezer



Re: [squid-users] Squid 3.5.5 CentOS RPMs release

2015-06-30 Thread Alex Crow
Thanks for the quick reply. I managed to fix it by removing my old 
rpmbuild directory and starting again, and of course making sure that 
gcc-c++ was installed (which it wasn't!)


Cheers

Alex

On 30/06/15 20:34, Eliezer Croitoru wrote:
If you look at the configure options I used in the RPMs, you will see 
that I changed/removed a helper or two from the build.

I didn't have time to inspect the issue yet.
How do you rebuild from the SRPM? (important)

Eliezer

On 30/06/2015 21:48, Alex Crow wrote:

Thanks for this Eliezer - however I can't rebuild the SRPM on latest
CentOS:

configure: Authentication support enabled: yes
checking for ldap.h... (cached) no
checking winldap.h usability... no
checking winldap.h presence... no
checking for winldap.h... no
configure: error: Basic auth helper LDAP ... found but cannot be built
error: Bad exit status from /var/tmp/rpm-tmp.EEzUBx (%build)

And I definitely have openldap-devel installed. I'm not sure where the
(cached) comes from, but it's the same for both 3.4.4 and 3.4.5 SRPMs.

Best regards

Alex

On 29/06/15 00:22, Alex Samad wrote:

Thanks

On 29 June 2015 at 00:59, Eliezer Croitoru elie...@ngtech.co.il 
wrote:

Hey list,

I have created the new RPMs for CentOS 6 and 7, and I also
created the package for OracleLinux (for which it was very annoying
to discover that the download file from Oracle was not an ISO but
something else)

The 3.5.5 and 3.5.4 was published here:
http://www1.ngtech.co.il/wpe/?p=90

Eliezer



[squid-users] Centos7 rpms?

2015-06-11 Thread Alex Crow

On 11/06/15 20:25, Eliezer Croitoru wrote:

What is the issue??
Did you tried the latest RPM's ??
http://wiki.squid-cache.org/KnowledgeBase/CentOS

Eliezer 


Hi,

Are there any plans to build CentOS/RHEL 7 packages? Native LVM caching 
on SSD is something that may well benefit Squid performance.


Cheers

Alex


Re: [squid-users] Tracking user connection times

2015-04-20 Thread Alex Crow

On 20/04/15 15:34, Dan Berry wrote:


I have set up a squid proxy as a POC for user tracking. I am looking 
for a way to track close events; most of the customer sites that 
are accessed are HTTPS so I can't track activity. I might be able to 
get by with tracking total connect time, so I know the windows of time 
users were connected to a specific site. Is this possible?


Dan Berry

Data Network Engineer




I doubt it unless you are in control of the sites the users are 
visiting. When a page is loaded, the browser instructs the OS to open a 
TCP connection to download the page. When all the data has been 
downloaded, the TCP connection becomes idle and after a short time the 
OS will close it.


If the sites are yours I suppose you could add some JS that would get 
the browser to repeatedly make a request to the site with a 
page-specific ID so you could track how long they were on that page.
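A rough sketch of that idea (the /track endpoint and helper name are hypothetical, purely to illustrate):

```javascript
// Hypothetical tracking-beacon helper: builds the URL a page would
// POST to periodically so the server can log "still on page X" events.
function buildTrackingUrl(pageId) {
  return "/track?page=" + encodeURIComponent(pageId);
}
// e.g. in the page: setInterval(function () {
//   fetch(buildTrackingUrl("home"), { method: "POST" });
// }, 30000);
```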


Cheers

Alex



Re: [squid-users] 100Mbps Connection Issues

2015-01-09 Thread Alex Crow
Speed tests will always enforce no-cache, so you will always see 
overhead from a speed test site.


That's just the way proxies work. You can't make a single, new 
download any quicker than it would otherwise be, and since it has a flag 
telling Squid not to cache it, Squid has to go to the trouble of both 
retrieving the content and then passing it on to the client.


Those figures are very good so I'd not actually worry about them. Using 
delay pools will show bouncy performance as it's based on buckets - 
when the bucket of data is empty the server has to start refilling it 
before anything comes back to your client.
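For reference, a minimal class-1 delay pool capping aggregate client bandwidth looks like this (the value is illustrative: 12500000 bytes/s is roughly 100Mbit/s, and the bucket starts full at the same size):

```
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 12500000/12500000
```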


Cheers

Alex


On 09/01/15 18:22, bradley.le...@zenergyok.com wrote:

I have a VM that is running Ubuntu 14.04 LTS 2.5G RAM 32-bit, that is serving 
up our Squid proxy 3.3. We recently upgraded our internet connection to 
100Mbps/100Mbps+. When running a speed test from the server, the results are 
100Mbps/100Mbps+. But when running the speed test from a proxy client, the 
results are only 85Mbps/50Mbps. I have tried “Delay Pools”, and also created a 
fresh install physical server with two NIC’s (one public facing, one private) 
getting the same results. Speeds hit 85Mbps/50Mbps, then abruptly stop and 
bounce of those numbers.

Please advise, Thank you
Bradley Lemon




This e-mail transmission (and/or the documents accompanying it) is intended 
only for the use of the individual(s) or entity to which it is addressed, and 
may contain information that is PRIVILEGED, CONFIDENTIAL and exempt from 
disclosure under applicable law. If you are not the intended recipient, you are 
hereby notified that any disclosure, distribution, copying or the taking of any 
action in reliance on the contents of this information is strictly prohibited. 
If you have received this e-mail in error, please notify us immediately, and 
delete the e-mail and the accompanying documents, if any, without saving. Thank 
you.


Re: [squid-users] You MUST specify at least one Domain Controller.You can use either \ or / as separator between the domain name

2014-12-19 Thread Alex Crow

Hi,

That is how NTLM works. It doesn't (normally) indicate anything is 
wrong. You do seem to have a /lot/ of DENIED though.


NTLM auth will slow down browsing somewhat because authentication is 
performed for every object retrieved. Google Maps can be really nasty 
because it loads lots of small images for the map tiles. However I don't 
know /how/ slow your access is so I can't really say if it's likely to 
be a problem.
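For reference, a typical Samba-based NTLM setup in squid.conf looks something like this (paths and the child count are examples, and it assumes winbind is already joined to the domain):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```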


Cheers

Alex

On 19/12/14 23:50, Ahmed Allzaeem wrote:

Thank you Amos, I don't know what to say - you helped me a lot!

Now it gets the user/password.

But still, a new issue appeared!

Now browsing is very slow!

I checked the squid logs and found a lot of TCP_DENIED and some TCP_MISS.

The question is: why are so many requests being denied while some are 
accepted?

Here is a sample :
1418996889.904  1 192.168.1.5 TCP_DENIED/407 3972 GET http://google.com/ - 
NONE/- text/html
1418996889.925  1 192.168.1.5 TCP_DENIED/407 4189 GET http://google.com/ - 
NONE/- text/html
1418996889.936  2 192.168.1.5 TCP_DENIED/407 4506 GET http://google.com/ - 
NONE/- text/html
1418996889.943  2 192.168.1.5 TCP_DENIED/407 4189 GET http://google.com/ - 
NONE/- text/html
1418996897.774   7830 192.168.1.5 TCP_MISS/302 1258 GET http://google.com/ b 
DIRECT/74.125.232.228 text/html
1418996905.927   8142 192.168.1.5 TCP_MISS/302 1266 GET http://www.google.ps/? 
b DIRECT/74.125.232.247 text/html
1418996905.943  3 192.168.1.5 TCP_DENIED/407 4128 CONNECT 
dtex4kvbppovt.cloudfront.net:443 - NONE/- text/html
1418996905.946  2 192.168.1.5 TCP_DENIED/407 4128 CONNECT 
dtex4kvbppovt.cloudfront.net:443 - NONE/- text/html
1418996905.949  4 192.168.1.5 TCP_DENIED/407 4128 CONNECT 
dtex4kvbppovt.cloudfront.net:443 - NONE/- text/html
1418996905.949  4 192.168.1.5 TCP_DENIED/407 4128 CONNECT 
dtex4kvbppovt.cloudfront.net:443 - NONE/- text/html
1418996905.953  2 192.168.1.5 TCP_DENIED/407 3851 CONNECT www.google.ps:443 
- NONE/- text/html
1418996905.955  4 192.168.1.5 TCP_DENIED/407 4128 CONNECT dtex4kvbppovt.cloudfront.net:443 - NONE/- text/html
1418996905.969  2 192.168.1.5 TCP_DENIED/407 4068 CONNECT www.google.ps:443 - NONE/- text/html
1418996905.973  1 192.168.1.5 TCP_DENIED/407 4393 CONNECT www.google.ps:443 - NONE/- text/html
1418996905.980  1 192.168.1.5 TCP_DENIED/407 4068 CONNECT www.google.ps:443 - NONE/- text/html
1418996908.011  1 192.168.1.5 TCP_DENIED/407 4103 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996908.015  1 192.168.1.5 TCP_DENIED/407 4320 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996908.019  2 192.168.1.5 TCP_DENIED/407 4661 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996909.041  1 192.168.1.5 TCP_DENIED/407 3859 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1418996909.089  2 192.168.1.5 TCP_DENIED/407 4076 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1418996909.097  2 192.168.1.5 TCP_DENIED/407 4405 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1418996909.104  2 192.168.1.5 TCP_DENIED/407 4076 CONNECT ssl.gstatic.com:443 - NONE/- text/html
1418996910.755  1 192.168.1.5 TCP_DENIED/407 3859 CONNECT www.gstatic.com:443 - NONE/- text/html
1418996910.784  1 192.168.1.5 TCP_DENIED/407 4076 CONNECT www.gstatic.com:443 - NONE/- text/html
1418996910.791  2 192.168.1.5 TCP_DENIED/407 4405 CONNECT www.gstatic.com:443 - NONE/- text/html
1418996910.796  1 192.168.1.5 TCP_DENIED/407 4076 CONNECT www.gstatic.com:443 - NONE/- text/html
1418996917.152  2 192.168.1.5 TCP_DENIED/407 4103 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996917.156  2 192.168.1.5 TCP_DENIED/407 4320 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996917.161  2 192.168.1.5 TCP_DENIED/407 4663 POST http://clients1.google.com/ocsp - NONE/- text/html
1418996920.312  1 192.168.1.5 TCP_DENIED/407 3903 CONNECT tiles.services.mozilla.com:443 - NONE/- text/html
1418996920.334  4 192.168.1.5 TCP_DENIED/407 4120 CONNECT tiles.services.mozilla.com:443 - NONE/- text/html
1418996920.471  2 192.168.1.5 TCP_DENIED/407 4483 CONNECT tiles.services.mozilla.com:443 - NONE/- text/html
1418996926.896  1 192.168.1.5 TCP_DENIED/407 4120 CONNECT tiles.services.mozilla.com:443 - NONE/- text/html
1418996935.623  1 192.168.1.5 TCP_DENIED/407 4079 POST http://ocsp.digicert.com/ - NONE/- text/html
1418996935.630  3 192.168.1.5 TCP_DENIED/407 4296 POST http://ocsp.digicert.com/ - NONE/- text/html
1418996935.633  2 192.168.1.5 TCP_DENIED/407 4635 POST http://ocsp.digicert.com/ - NONE/- text/html
1418996935.640  2 192.168.1.5 TCP_DENIED/407 4296 POST http://ocsp.digicert.com/ - NONE/- text/html
1418996935.810   7242 192.168.1.5 TCP_MISS/200 6448 GET http://whatismyipaddress.com/ b DIRECT/66.171.248.172 text/html
1418996935.852  1 192.168.1.5 TCP_DENIED/407 4349 GET http://maps.google.com/maps/api/js? -

Re: [squid-users] Unhandled exception: c

2014-08-18 Thread Alex Crow

Hi,

Anyone have any ideas on this?

Thanks

Alex


Hi Amos,

I spoke too soon. I have this (maybe more informative than the original 
error though).


2014/07/31 11:57:45 kid1| assertion failed: String.cc:201: len_ + len < 65536
2014/07/31 11:58:07 kid1| Starting Squid Cache version 
3.3.12-20140309-r12678 for x86_64-pc-linux-gnu...

2014/07/31 11:58:07 kid1| Process ID 14375
2014/07/31 11:58:07 kid1| Process Roles: worker
2014/07/31 11:58:07 kid1| With 65535 file descriptors available

This is on 3.3.12 again. I have set up 3.4.x to remove NTLM auth (in 
fact all auth) but we are going to try to give our users a break for a 
couple of months until we throw this at them in an attempt to get to 
the bottom of the high CPU usage on 3.4.


Cheers

Alex






Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-18 Thread Alex Crow


http://www.squid-cache.org/Doc/config/cache/

On 03/08/14 10:25, nuhll wrote:

Seems like acl all src all fixed it. Thanks!

One problem is left. Is it possible to only cache certain websites, the rest
should just redirectet?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websites-tp4667121p4667127.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Unhandled exception: c

2014-07-31 Thread Alex Crow



Hi Amos,

That patch seems to have worked. No crashes so far since it went into 
production.


Thanks very much!

Alex


Hi Amos,

I spoke too soon. I have this (maybe more informative than the original 
error though).


2014/07/31 11:57:45 kid1| assertion failed: String.cc:201: len_ + len < 65536
2014/07/31 11:58:07 kid1| Starting Squid Cache version 
3.3.12-20140309-r12678 for x86_64-pc-linux-gnu...

2014/07/31 11:58:07 kid1| Process ID 14375
2014/07/31 11:58:07 kid1| Process Roles: worker
2014/07/31 11:58:07 kid1| With 65535 file descriptors available

This is on 3.3.12 again. I have set up 3.4.x to remove NTLM auth (in 
fact all auth) but we are going to try to give our users a break for a 
couple of months until we throw this at them in an attempt to get to the 
bottom of the high CPU usage on 3.4.


Cheers

Alex




Re: [squid-users] why squid can block https when i point my browser to port , and cant when its transparent ?

2014-07-27 Thread Alex Crow


On 27/07/14 16:00, Dr.x wrote:

hi all ,

i have 2 questions.


1- why when i make a normal squid with normal http port , and i direct my
browser to ip/port it can block https facebook


Because the browser is aware of the cache and issues CONNECT requests 
for SSL sites. Squid can see these and block them.
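As a rough sketch of how a forward proxy can block such tunnels (the ACL name and blocked domain below are illustrative, not taken from this thread), the squid.conf fragment might look like:

```
# Hypothetical ACL name and domain - adapt to your own policy.
acl blocked_ssl_sites dstdomain .facebook.com

# Deny the CONNECT tunnel before it is established; every HTTPS fetch
# made via a configured proxy starts as a CONNECT request.
http_access deny CONNECT blocked_ssl_sites
```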





while
if it was transparent proxy it cant block https facebook ??


You can't use CONNECT with a transparent proxy as it implies the client 
has been configured with a proxy (which would not be transparent).




im talking about im configuraing normal http proxy not https !

wish a clarification.


2-now if i use ssl pump and used transparent tproxy with https ... can i buy
a trusted certificate and install it on squid and the users will not face
certificate not trusted message ?


NO! This is about the 3rd or 4th time this question has appeared on this 
list. You can't use a cert from a commercial provider because you need 
the cert's private key to produce new certs signed by it (which the cert 
provider will not give you in a million years). If this worked it would 
make SSL useless.





i mean , in production network with much users , i need to block https
youtube/facebook while keep using  transparent tproxy.



You need to create your own CA, import the CA cert into your client 
browsers (which will get rid of the warning) and use the key to do 
dynamic cert generation in squid. Then it is possible to do either WPAD 
based browser config, or, I think (harder) do TPROXY with bumping.


NB unless you can import your own CA cert into all client browsers you 
*WILL* get certificate validation failures in the browser.


Cheers

Alex



with to help

regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/why-squid-can-block-https-when-i-point-my-browser-to-port-and-cant-when-its-transparent-tp4667069.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-06-21 Thread Alex Crow



Another experiment is to try purging and rebuilding the ssl_crtd helper
cache.


Hi Amos,

We do the above on every squid restart anyway (via a wrapper script).





Your config file has some nits (may not be relevant to the problem though):

  * Try switching the order of manager localhost so localhost is tested
first. Manager has become a regex ACL.

  * hierarchy_stoplist can be removed completely. It is serving no
purpose in your config.


Yeah, I know! This config has pretty much just been tweaked from an 
original one that's about 11 years old. I'm still really keen to figure 
out why we can't really proceed to 3.4 and then hopefully get it fixed, 
Management have already asked to get in a reseller to look at 
Bluecoat/Barracuda/Websense etc so I'll try my best to get a good number 
of users on each config change I do to diagnose the problem.


Thanks for your time.

Cheers,

Alex



Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-06-20 Thread Alex Crow


On 20/06/14 14:28, Eliezer Croitoru wrote:





OK after reading the config file it seems like there are couple things 
that we\you should be aware of when looking at the issue:

1. External helpers code was changed from 3.3 to 3.4 (one way)
2. you are using delay_pools.
3. you are using ntlm authentication.

In the past there was a suspicion that the new helpers-related code 
might cause an issue like this, but it is yet to be verified.
(This needs testing, and an idea of how to show whether this is 
a real suspect or a bogus one.)


About NTLM auth... There is certainly some overhead related to using NTLM, 
with CPU usage due to several layers stacked one on top of the other, and 
it has been shown that there is a difference between using NTLM and not 
using NTLM at all.
It doesn't prove what in NTLM is causing the issue, and I am not sure it 
will be fixed, given that NTLM maintenance stopped around 2003 or 2006 
(I am not sure about the accurate date).


The only options I see is doing two things:
Remove the NTLM and external group-helper ACLs for a testing 
period, to verify that the high CPU usage is only there when these run, 
and that the system runs fine while the delay_pools are still intact.

This will narrow down the issues from 3 to 2 ideal suspects.

There is also another suspect, which is over-use of squid ACLs to 
block or allow domains\regex\etc, but it can be verified that these are 
not an issue by removing the external_acl and ntlm helpers and testing 
how squid behaves.


** Another tiny detail would be: what bandwidth is this server 
pushing? How many MBps or Mbps (MBps = Mbps/8)?


I know that it can be painful to run these tests but if you have the 
option to verify the issue it will narrow the issue down pretty fast.


Also I am almost sure that this thread should be summarized into 
either a bug report or first a thread in squid-dev list so you would 
get better help and directions from the developers.


Thanks,
Eliezer


Hi,

The first thing I'm going to try is disabling delay pools for CONNECT, 
then after that for all requests.


As disabling NTLM will leave us more open than I'd like that would be 
the following step.


Cheers

Alex


Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-06-19 Thread Alex Crow


On 21/05/14 08:30, Amos Jeffries wrote:

On 21/05/2014 8:11 a.m., Alex Crow wrote:

Wrong on my part again.

Changing the memory_replacement_policy still got to 100% cpu after
Shift-reload in Thunderbird a few times - even disabling cache_mem
entirely did not eliminate it. 3.3 never gets above about 67% load no
matter how many times the page is reloaded.

Thunderbird, are these troubles all coming from  HTML emails?

Does using AUFS instead of diskd cache types help? there are a lot of
calls in that trace polling the diskd helpers.

Amos



Hi Amos,

aufs is no better - in fact it seems to build up CPU much faster than 
diskd on just a couple of page reloads.


Alex


Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-06-02 Thread Alex Crow


On 02/06/14 15:12, Antoine Klein wrote:

Ok I'm understanding !

Finally I'm going to change strategy, if it isn't possible to decrypt
HTTPS without warning for client, I shall make differently.
You will have to, as it's impossible to do so without interfering with 
the user's client devices.




So there is two solutions, the first one is to use Squid without
deciphering SSL request. So Amos you explained that but I don't
understand what bugs is encountered. So in this case, how can I
configure Squid ? I didn't find example and I have already asked for
that but i was told it would be impossible, but they were not sure.


Just use delay pools as described in the docs. The bugs will not be 
showstoppers, they might just bias the pools unexpectedly but given 
you'll have lots of random clients it will probably even out.




The second solution consists in not using Squid, but to apply a QoS
differently, but I need a QoS like the Squid delay pool, do you know
if it is possible ? Alex you already spoken to me about LARTC, but I
need to find a solution quickly, so I fear that it was too long to
understand the Linux QoS possibilities.


How about Shorewall, pfSense, etc? No-one here probably has the time to 
give you an out-of-the-box setup that will suit you. I know for sure I 
don't. You also have a pre-existing firewall, and given it looks fairly 
magical it should be able to do per-IP QoS (at least if you just drop 
the Squid in before it hits the FW).


I can't understand how you've been persuaded to accept a project that 
you should have been doing months of research on and then agree to 
deliver in days (not knowing what was actually possible). Did you 
over-promise to your boss? If so, don't!


I never promise to deliver anything. I give an estimate that is based on 
(((Time I expect this to take given I know everything *3) + (Time I 
think I'll need to find something out when I find I don't know 
everything *3)) * (Time it will take me to reconcile what people said 
they want vs what they actually need *3) * 3). If an external supplier 
is involved, multiply the whole lot by *at least* 10.


That works out to about 2 months for what your average 
client/boss/marketing person says will take a week...


Cheers

Alex








Regards.

2014-06-02 10:06 GMT-04:00 Antoine Klein klein.a...@gmail.com:

Ok I'm understanding !

Finally I'm going to change strategy, if it isn't possible to decrypt HTTPS
without warning for client, I shall make differently.

So there is two solutions, the first one is to use Squid without deciphering
SSL request. So Amos you explained that but I don't understand what bugs is
encountered. So in this case, how can I configure Squid ? I didn't find
example and I have already asked for that but i was told it would be
impossible, but they were not sure.

The second solution consists in not using Squid, but to apply a QoS
differently, but I need a QoS like the Squid delay pool, do you know if it
is possible ? Alex you already spoken to me about LARTC, but I need to find
a solution quickly, so I fear that it was too long to understand the Linux
QoS possibilities.

Regards.


2014-05-31 12:54 GMT-04:00 Amos Jeffries squ...@treenet.co.nz:


On 1/06/2014 3:49 a.m., Alex Crow wrote:
snip

But given all you really need is QoS, why don't you either (a) dispense
with Squid and just do QoS on the firewall for your Wifi subnet or (b)
put a transparent firewall between your clients and the Squid server
that does QoS? Or just see if Squid delay pools work for SSL (I think
they *do*, the traffic still passes via Squid as a CONNECT request -
it's just that Squid can't see or proxy the plaintext content.)


I second all of the above. In particular that the built-in QoS features
of the firewall or router device neworking config is far better place to
be doing the delay actions than Squid.

In regards to delay pools and HTTPS. As far as I know the pools work
without decrypting, although you may encounter one of a handful of bugs
which trigger over or under counting of bytes (depending on the bug
hit). So you may need a special delay pool configured with a hack on the
speed value of port 443 traffic to make the user-visible speed what they
expect.
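A minimal sketch of such a configuration (the pool numbers, rates and the port ACL are illustrative assumptions, not taken from this thread) might be:

```
# Sketch only: one class-1 (aggregate) pool for plain HTTP and a second,
# faster pool for CONNECT/443 traffic to offset any byte-counting bias.
acl tunnel_traffic port 443

delay_pools 2

delay_class 1 1
delay_access 1 allow !tunnel_traffic
delay_parameters 1 524288/524288     # ~512 KB/s for plain HTTP

delay_class 2 1
delay_access 2 allow tunnel_traffic
delay_parameters 2 1048576/1048576   # ~1 MB/s for HTTPS tunnels
```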

Amos




--
Antoine KLEIN







Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-31 Thread Alex Crow


On 30/05/14 21:12, Antoine Klein wrote:

Ok I agree with you, i wasn't clear to describe my issue :/ I'll try
to be more understandable.

My company is a bus company, the Clients aren't specific, they are
like lambda users. In fact, the WIFI is deployed in bus station so
everybody can use this WIFI, and there is not authentification, just a
page to accept terms of use.


OK so at least you have something for them to agree to. You have to 
state there that the usage including content may be intercepted and 
logged - certainly if the laws in your country require you to.




I don't need to decrypt SSL, I just need to use Delay Pool, so I
believed it wasn't possible to apply a Delay Pool without decrypt SSL
on HTTPS request, anyway i didn't find how to do that. The cache is an
option but really not necessary.


I see nothing on the delay pools page to suggest that you need to 
decrypt https to make it work.




No it's not a firewall of the NSA :) , anyway i don't believe, it's my
boss who explained me that, the firewall inspect the packets, and he
confirms that it's not illegal else they wouldn't do that.


You can't intercept the /content/ of https packets without an MITM attack.



In my mine, i think when a WIFI user wants to connect on HTTPS page,
the request detect a MITM attack but the certificate assure that it's
normal and secure because godaddy know that we are a trusted company.
After that, the request on the proxy is redirected on specific squid
port, squid decipher the SSL request and it create a new https request
on the web with its certificates from user request.


SSL does not work like this! If a user requests site 
https://mylittlepony.com, they expect the SSL certificate's subject name 
to be mylittlepony.com, not whatever domain you got for your 
GoDaddy certificate. If the subject name of the cert does not match the 
visited site, there will always be a warning in the browser. You also 
cannot use the GoDaddy cert as a CA cert, as the basic 
constraints on a commercially issued cert prevent it from being used as 
a CA.


You need to create your own CA with a private key, then, and only then, 
can you use those two to issue certs, signed by your private key, with 
the subject name of each site the clients visit. The clients will still 
get a warning as your CA cert is not in their built-in list of trusted CAs.


That is all there is to it. You will in no way be able to get rid of 
warnings in the browser without both bumping and dynamic cert 
generation, plus your CA (*NOT* GoDaddy's) installed on the clients.


The only way you could do this (and no even marginally savvy user would 
ever trust it) would be to use a browser-in-browser frame 
portal/web-services proxy. This is way out of scope for this list.


But given all you really need is QoS, why don't you either (a) dispense 
with Squid and just do QoS on the firewall for your Wifi subnet or (b) 
put a transparent firewall between your clients and the Squid server 
that does QoS? Or just see if Squid delay pools work for SSL (I think 
they *do*, the traffic still passes via Squid as a CONNECT request - 
it's just that Squid can't see or proxy the plaintext content.)


Cheers

Alex





2014-05-30 11:44 GMT-04:00 Alex Crow a...@nanogherkin.com:

Hi Antoine,

Replies below:


On 30/05/14 15:44, Antoine Klein wrote:

Ok i'm really sorry, i don't understand the english very well...
I read again the discussion but i am confused :/

Before this project i had not any knowledge about certificates and SSL
connexions but i did several research on the subject, especially on
squid wiki.
I also read again the documentation here :
http://wiki.squid-cache.org/Features/SslBump
http://wiki.squid-cache.org/Features/DynamicSslCert
http://wiki.squid-cache.org/Features/HTTPS
But nothing concern trusted signed certificate :/

My company wishes to offer to its clients a public WIFI, i need to use
squid for the delay pool, and possibly the cache. There is already a
warning given on the connexion where we have to accept terms of use
which warns the user.


Who are your clients - by which I mean not only what devices/browsers but
also what relationship do they have to your company?

I think (anyone correct me if I'm wrong) that delay pools do not require you
to decrypt *anything*. To cache SSL replies, or inspect for
viruses/malware/bad URL paths, you do need to do so, hence SSLBump.



So, according to you, isn't it possible ?
I think it's strange, because the WIFI is deployed, and the connexion
of clients passes by the firewall which already decipher packets.


I have no idea what you are talking about here. How can your firewall
possibly decipher SSL communications between some random Wifi-connected
device and some web server out on the internet? Again, this would mean
that SSL would be utterly worthless (which despite recent developments, it
is not). Unless you got your firewall from the NSA in which case I'd not
recommend advertising

Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-30 Thread Alex Crow

Hi Antoine,

Replies below:

On 30/05/14 15:44, Antoine Klein wrote:

Ok i'm really sorry, i don't understand the english very well...
I read again the discussion but i am confused :/

Before this project i had not any knowledge about certificates and SSL
connexions but i did several research on the subject, especially on
squid wiki.
I also read again the documentation here :
http://wiki.squid-cache.org/Features/SslBump
http://wiki.squid-cache.org/Features/DynamicSslCert
http://wiki.squid-cache.org/Features/HTTPS
But nothing concern trusted signed certificate :/

My company wishes to offer to its clients a public WIFI, i need to use
squid for the delay pool, and possibly the cache. There is already a
warning given on the connexion where we have to accept terms of use
which warns the user.


Who are your clients - by which I mean not only what devices/browsers 
but also what relationship do they have to your company?


I think (anyone correct me if I'm wrong) that delay pools do not require 
you to decrypt *anything*. To cache SSL replies, or inspect for 
viruses/malware/bad URL paths, you do need to do so, hence SSLBump.



So, according to you, isn't it possible ?
I think it's strange, because the WIFI is deployed, and the connexion
of clients passes by the firewall which already decipher packets.


I have no idea what you are talking about here. How can your firewall 
possibly decipher SSL communications between some random Wifi-connected 
device and some web server out on the internet? Again, this would 
mean that SSL would be utterly worthless (which despite recent 
developments, it is not). Unless you got your firewall from the NSA in 
which case I'd not recommend advertising that fact on here!






I don't understand why do you speak about dynamic certificate
generation, does it concern my problem ? Because finally i have the
certificate signed by godaddy and the private key of this certificate.


I feel like you might be wasting your time (and money) if you paid for 
this. You presumably have submitted a CSR for foo.whatever.domain to 
be signed by GoDaddy and received a certificate (.pem/.p12/.crt or 
whatever) back. How do you propose to use that certificate (which only 
certifies that domain) to somehow provide client browsers with a valid 
certificate for whatever https:// site they choose to visit? How would a 
cert for foo.whatever.domain be of any use to someone visiting 
https://mylittlepony.com (example!)? Or have we just completely missed 
the point and this SSL stuff is just for your own web server behind 
squid - in which case you have gone completely in the wrong direction 
and need to be looking at setting up a reverse proxy, which does not 
require SSLBump at all and would indeed work with what you've just done.




Anyway, thanks for your patience. :)


I fear that even if mine does not run out then that of others may do so 
first. You really need to state exactly what it is you are trying to 
achieve, and this has so far IMHO not happened - and your English is 
perfectly good enough to do so.


Thanks

Alex



Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-29 Thread Alex Crow

Antoine,

I really think you are completely missing the point of what everyone has 
said to you on this list.


1. SSL bumping is effectively an MITM attack against users/clients and 
they must be aware that it is happening and it must be legal in your 
country and also comply with company policy (if this is for corporate use).
2. You *CAN NOT* use a certificate issued by a commercial CA to do SSL 
bumping with dynamic certificate generation, full stop. It *CANNOT* work 
- if it did, SSL would be utterly useless. For everyone on the internet, 
not just your clients.
3. You *CAN NOT* prevent an SSL warning appearing for bumped connections 
unless you are able to install on the clients *your own CA cert*, ie 
*the very same CA* you use in Squid. Squid will need that CA's private 
key to be able to generate certs for every https site your clients visit.


Please read all the Squid docs about SSL and a lot of general info about 
how SSL works (ie the trust model) as I feel we are all now at a loss in 
helping you further!


Alex


On 29/05/14 20:02, Antoine Klein wrote:

Thanks for your answers !

Alex your last answer is for me ? What is illegal ?

Finally, i managed to install the certificate, in fact my boss had the
private key...

So i have another problem, squid start correctly with the certificate
but on the client with firefox i have this error
ssl_error_bad_cert_domain when i make an HTTPS connexion.
Furthermore, Squid displays an error 2014/05/29 14:15:53 kid1|
clientNegotiateSSL: Error negotiating SSL connection on FD 11:
error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)

Do you know these errors ?

2014-05-28 11:39 GMT-04:00 Alex Crow a...@nanogherkin.com:

You cannot generate on the fly new certs that are signed by a commercial CA.
You need a generated cert for every site your clients visit.

And if you are not in control of your clients this would be not only
unethical but also most likely illegal - and you won't get any further help
from this list with either of those.

On 28 May 2014 15:55:04 BST, Antoine Klein klein.a...@gmail.com wrote:

I send back my post because i'm not sur it is sent...

Ok thanks all !

I haven't in control of clients so it's the real problem, i can't
install certificate on their smartphone ^^.

So according to you, if i create a CA with openssl, and create a
certification signing request (.csr) with a private key, and if i send
my csr to a trusted authority to sign it, i could use it in squid
without problem, then clients wouldn't have any warning ?
I would like to be sure to avoid every problem.

2014-05-28 2:47 GMT-04:00 Alex Crow a...@nanogherkin.com:


  On 28/05/14 03:43, Amos Jeffries wrote:


  On 28/05/2014 8:19 a.m., Antoine Klein wrote:


  I want to bump ssl connections, but without produce a warning of
course.

  I read it is possible to generate a request of certification with a
  key and send this file to an authority to sign it, do you know that ?


  Having your cert signed by a widely trusted certificate authority is
one
  thing, and the basis of how TLS/SSL works.

  SSL-bump cannot be used with that type of key for the reasons Alex
  already mentioned. He also mentioned the steps you have to take instead
  to get it going.

  Amos



  Hi Antoine,

  You need to be a CA, ie have the CA private key, to be able to do this.
If
  you are in control of the clients and know how to use OpenSsl to create
a CA
  you can do this without paying any money to anyone. You simply create
the CA and use it and its private key in your ssl-bump configuration.


  http_port 3128 sslBump generate-host-certificates=on
  dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/proxy.pem

  proxy.pem is your private key and CA certificate concatenated.

  sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/ssl_db -M 4MB

  The above line configures the crtd helpers that actually generate the
certs
  for the requests, see
http://wiki.squid-cache.org/Features/DynamicSslCert

  Cheers

  Alex




--
Sent from my Android device with K-9 Mail. Please excuse my brevity.







Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-28 Thread Alex Crow


On 28/05/14 03:43, Amos Jeffries wrote:

On 28/05/2014 8:19 a.m., Antoine Klein wrote:

I want to bump ssl connections, but without produce a warning of course.

I read it is possible to generate a request of certification with a
key and send this file to an authority to sign it, do you know that ?

Having your cert signed by a widely trusted certificate authority is one
thing, and the basis of how TLS/SSL works.

SSL-bump cannot be used with that type of key for the reasons Alex
already mentioned. He also mentioned the steps you have to take instead
to get it going.

Amos



Hi Antoine,

You need to be a CA, ie have the CA private key, to be able to do this. 
If you are in control of the clients and know how to use OpenSsl to 
create a CA you can do this without paying any money to anyone. You 
simply create the CA and use it and its private key in your ssl-bump 
configuration.


http_port 3128 sslBump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/proxy.pem


proxy.pem is your private key and CA certificate concatenated.

sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/ssl_db -M 4MB

The above line configures the crtd helpers that actually generate the 
certs for the requests, see 
http://wiki.squid-cache.org/Features/DynamicSslCert
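For reference, a sketch of creating such a private CA with OpenSSL and bundling it into the proxy.pem file mentioned above. The /tmp path and the subject name are illustrative assumptions, not values from this thread:

```shell
#!/bin/sh
# Sketch: create a private CA and bundle key + cert for squid's ssl-bump.
# Paths and the subject name are illustrative only.
set -e
mkdir -p /tmp/squid-ca
cd /tmp/squid-ca

# Generate a 2048-bit RSA key and a self-signed CA certificate (10 years).
openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
    -keyout myCA.key -out myCA.crt \
    -subj "/O=Example Corp/CN=Example Proxy CA"

# squid expects the private key and CA certificate in one PEM file.
cat myCA.key myCA.crt > proxy.pem

# Sanity checks: the bundle must contain a valid key and certificate.
openssl rsa  -in proxy.pem -noout -check
openssl x509 -in proxy.pem -noout -subject
```

The resulting proxy.pem would then be referenced from the cert= option of http_port, and the CA certificate (myCA.crt alone, never the key) is what you distribute to client trust stores.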


Cheers

Alex


Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-27 Thread Alex Crow

Hi,

You can't possibly do this. To ssl-bump you need access to a private key 
to sign the certs you offer to clients. Not in a million years is a 
Commercial CA going to give you their private key. Such a key can sign 
any certificate which would then be trusted by any software that 
includes GoDaddy's CA (ie IE, Firefox, Chrome etc).


You need to use OpenSSL to set up your own CA and use its private key in 
Squid as the key to generate new certificates. And preferably install 
your new CA cert into your users' certificate stores as a Trusted CA.


The private key is basically the thing that any CA has to keep the most 
private for SSL to work. Providers like GoDaddy would probably have the 
machine that holds the private keys for at least their Root CA on a 
private network (if even it's networked at all) and use subordinate CAs 
to issue certificates to their clients (ie you). Unless you are a very 
large trusted organisation and jump through many hoops you will get a 
subordinate signing key from a reputable commercial CA.


Otherwise, the internet and SSL would already be more borken than it is 
right now ;-)


Alex


On 27/05/14 19:13, Antoine Klein wrote:

Hi there,

My boss give me a certificate purchased from Godaddy to intercept HTTPS request.

squid.conf :
http_port 3127 transparent
http_port 3128
https_port 3129 transparent ssl-bump cert=/etc/ssl/myGodaddyCertif.crt
sslproxy_capath /etc/ssl/certs

When i restart squid i have an error :
ERROR: Failed to acquire SSL private key
'/etc/ssl/myGodaddyCertif.crt': error:0906D06C:PEM
routines:PEM_read_bio:no start line

I haven't a private key, so is this normal ?

Thanks !





Re: [squid-users] Install Godaddy certificate on squid to use ssl-bumping functionnality

2014-05-27 Thread Alex Crow

Hi,

Mistake in my post: should be:

 and jump through many hoops you will *NOT* get a subordinate signing 
key from a reputable commercial CA.


Otherwise, the internet and SSL would already be more borken than it 
is right now ;-)


Alex


On 27/05/14 19:13, Antoine Klein wrote:

Hi there,

My boss give me a certificate purchased from Godaddy to intercept 
HTTPS request.


squid.conf :
http_port 3127 transparent
http_port 3128
https_port 3129 transparent ssl-bump cert=/etc/ssl/myGodaddyCertif.crt
sslproxy_capath /etc/ssl/certs

When i restart squid i have an error :
ERROR: Failed to acquire SSL private key
'/etc/ssl/myGodaddyCertif.crt': error:0906D06C:PEM
routines:PEM_read_bio:no start line

I haven't a private key, so is this normal ?

Thanks !







Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-05-21 Thread Alex Crow



Thunderbird, are these troubles all coming from  HTML emails?



I meant Firefox, sorry - I was writing the email in Thunderbird so typed 
that in instead. Not quite 40 yet but already losing it!



Does using AUFS instead of diskd cache types help? there are a lot of
calls in that trace polling the diskd helpers.


I've not tried it but I'll have a go.

Cheers

Alex



Re: [squid-users] Unhandled exception: c

2014-05-20 Thread Alex Crow




I will apply this over the weekend and we'll keep our fingers crossed
for Monday.

Would a similar patch be required for 3.4 assuming this fixes the 
problem?


Cheers

Alex


Hi Amos,

That patch seems to have worked. No crashes so far since it went into 
production.


Thanks very much!

Alex


[squid-users] Squid 3.4 very high cpu - strace.

2014-05-20 Thread Alex Crow

Hi Amos, all,

I have set up a test box with latest 3.4.5 nightly. I get 95-100% cpu 
even with one client accessing the cache. I've attached a compressed 
strace of the child process in case anything is evident from that. 
Please tell me what else I might need to do to help resolve this issue.


I'm hoping this will help get to the bottom of why a number of people 
are having this issue on 3.4.x.


Any help much appreciated as always.

Alex


strace.txt.bz2
Description: application/bzip


Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-05-20 Thread Alex Crow

I think I've just found something. I had this set:

memory_replacement_policy heap GDSF

replacing this with:

memory_replacement_policy lru

got rid of the high CPU in 3.4 (works ok in 3.3).

I will try heap LRU.

Cheers

Alex

On 20/05/14 19:54, Alex Crow wrote:

Hi Amos, all,

I have set up a test box with latest 3.4.5 nightly. I get 95-100% cpu 
even with one client accessing the cache. I've attached a compressed 
strace of the child process in case anything is evident from that. 
Please tell me what else I might need to do to help resolve this issue.


I'm hoping this will help get to the bottom of why a number of people 
are having this issue on 3.4.x.


Any help much appreciated as always.

Alex




Re: [squid-users] Squid 3.4 very high cpu - strace.

2014-05-20 Thread Alex Crow

Wrong on my part again.

Changing the memory_replacement_policy still got to 100% CPU after 
Shift-reload in Thunderbird a few times - even disabling cache_mem 
entirely did not eliminate it. 3.3 never gets above about 67% load no 
matter how many times the page is reloaded.


So again hope the the trace shows something up.

Cheers

Alex



On 20/05/14 20:04, Alex Crow wrote:

I think I've just found something. I had this set:

memory_replacement_policy heap GDSF

replacing this with:

memory_replacement_policy lru

got rid of the high CPU in 3.4 (works OK in 3.3).

I will try heap LRU.

Cheers

Alex

On 20/05/14 19:54, Alex Crow wrote:

Hi Amos, all,

I have set up a test box with latest 3.4.5 nightly. I get 95-100% cpu 
even with one client accessing the cache. I've attached a compressed 
strace of the child process in case anything is evident from that. 
Please tell me what else I might need to do to help resolve this issue.


I'm hoping this will help get to the bottom of why a number of people 
are having this issue on 3.4.x.


Any help much appreciated as always.

Alex






Re: [squid-users] Unhandled exception: c

2014-05-19 Thread Alex Crow

On 2014-05-16 07:01, Amos Jeffries wrote:

On 16/05/2014 7:42 a.m., Alex Crow wrote:

Grr, I apologise profusely. The server does run 3.3.11, *not* 3.2.11.
I've had a couple of nights being woken up by our devs asking about DNS...


Right lot of fun we are. I too seem to have been working on a bit
outdated version of the 3.3 branch when I checked. :-( sorry.

I think I have found the point of crash. Does this patch fix it in that
latest 3.3 code?
 http://master.squid-cache.org/~amosjeffries/patches/AlexCrow_s33.patch

Amos


Hi Amos,

I will apply this over the weekend and we'll keep our fingers crossed 
for Monday.


Would a similar patch be required for 3.4 assuming this fixes the 
problem?


Cheers

Alex


Re: [squid-users] Unhandled exception: c

2014-05-15 Thread Alex Crow

Hi

Thanks for that. This is odd because I compiled .debs myself from the 
source using the debian folder from an older version of squid as a 
template. I'm pretty sure I cleaned out the debian/patches folder and 
removed the lines in the rules file before building, but I will check this.


Alex

On 15/05/14 08:06, Amos Jeffries wrote:

On 15/05/2014 7:37 a.m., Alex Crow wrote:

Hi,

Is this any good at all or do I need to provide more? It seems a trivial
issue to restart a browser, but the bigwigs are climbing all over me now!

Cheers

Alex


On 12/05/14 16:22, Alex Crow wrote:

Hi Amos,

New backtrace - I hope this helps!




#3  0x005279d1 in CbcPointer<ConnStateData>::operator->
(this=<value optimized out>) at base/CbcPointer.h:147
 c = <value optimized out>
#4  0x0057238e in FwdState::initiateSSL (this=0x80f14ba8) at
forward.cc:827
 hostname = 0x80e6d7e8 "secure.flashtalking.com"
 isConnectRequest = <value optimized out>
 peer = <value optimized out>
 fd = 812
 __FUNCTION__ = "initiateSSL"
 peeked_cert = <value optimized out>
 ssl = 0x940e87e0
 sslContext = <value optimized out>
#5  0x005725e3 in FwdState::connectDone (this=0x80f14ba8,
conn=..., status=<value optimized out>, xerrno=0) at forward.cc:895
 __FUNCTION__ = "connectDone"

Bit of a strange trace there.

It is in forward.cc, which does not exist in any 3.3 or later release of
Squid. That correlates with your info about it being 3.2.11.

But it is using the variables isConnectRequest and peeked_cert, which only
exist in the 3.HEAD releases of Squid. So your Squid is patched in the area
of the code that is crashing. Time to direct this bug at the vendor who
backported that patch for you.

Amos





Re: [squid-users] Intercept HTTPS without using certificates - Just apply a QoS on the connexion

2014-05-15 Thread Alex Crow

Hi,

Welcome to the practically incomprehensible world of QoS on Linux - look 
up LARTC and then feel the fear!


It's really powerful, but even after 14 years of managing Linux gateways 
I'd still suggest you just use shorewall to take away the complexity - and 
you are welcome to call me lazy ;-)


Alex

On 15/05/14 20:04, Antoine Klein wrote:

Ok thanks, it could be a good idea !

Do you know if we can apply QoS with the bucket concept of delay
pools using the Linux QoS tools?

2014-05-15 14:41 GMT-04:00 Leonardo Rodrigues leolis...@solutti.com.br:

Em 15/05/14 14:59, Antoine Klein escreveu:


Hi there,

I need to install squid to apply QoS in a private network with the delay
pool.
In fact, this network offers public WIFI, so it's not possible to
configure a proxy on the clients.

Is it possible to intercept an HTTPS connection, apply a Delay Pool and
forward the request without deciphering the SSL packets?


 I really don't think that's possible. Anyway, you can always use your
Linux (or whatever OS you're using) QoS tools to achieve something similar
to delay pools but on NATted connections. You can have squid intercepting
TCP/80 connections and apply delay pools; the TCP/443 (and indeed all other)
connections can be throttled by the OS QoS tools.



--


 Atenciosamente / Sincerely,
 Leonardo Rodrigues
 Solutti Tecnologia
 http://www.solutti.com.br

 Minha armadilha de SPAM, NÃO mandem email
 gertru...@solutti.com.br
 My SPAMTRAP, do not email it










Re: [squid-users] Unhandled exception: c

2014-05-15 Thread Alex Crow
Grr, I apologise profusely. The server does run 3.3.11, *not* 3.2.11.
I've had a couple of nights being woken up by our devs asking about DNS...


However - I just downloaded squid-3.3.12-20140309-r12678, unpacked it, 
and see this:


root@user-ThinkPad-T61p:/home/user/Downloads/squid-3.3.12-20140309-r12678# 
find -name forward.cc

./src/forward.cc

So the source file still exists in 3.3.11, and is referenced in 
Makefile.in/.am:


root@user-ThinkPad-T61p:/home/user/Downloads/squid-3.3.12-20140309-r12678# 
grep -r forward.cc *

ChangeLog:  - Bug 3111: Mid-term fix for the forward.cc err assertion
src/Makefile.in:forward.cc forward.h fqdncache.h fqdncache.cc 
ftp.h ftp.cc \
src/Makefile.in:forward.cc fqdncache.h fqdncache.cc ftp.h ftp.cc 
gopher.h \
src/Makefile.in:FileMap.h filemap.cc forward.cc fqdncache.h 
fqdncache.cc ftp.h \
src/Makefile.in:FileMap.h filemap.cc forward.cc fqdncache.h 
fqdncache.cc ftp.h \
src/Makefile.in:tests/stub_fatal.cc fd.h fd.cc fde.cc forward.cc 
fqdncache.h \
src/Makefile.in:forward.cc fqdncache.h fqdncache.cc ftp.h ftp.cc 
gopher.h \
src/Makefile.in:forward.cc fqdncache.h fqdncache.cc ftp.h ftp.cc 
gopher.h \
src/Makefile.in:FileMap.h filemap.cc forward.cc forward.h 
fqdncache.h \

src/Makefile.in:forward.cc \
src/Makefile.in:forward.cc \
src/Makefile.in:forward.cc \
src/Makefile.in:forward.cc \
src/Makefile.in:forward.cc \
src/Makefile.in:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \
src/Makefile.am:forward.cc \

Again I'm sorry about giving you the wrong version, but I'm really 
scratching my head now, as you did say that forward.cc should not be used 
in 3.3. However, I've also done this:


root@user-ThinkPad-T61p:/home/user/Downloads/squid-3.3.12-20140309-r12678# 
grep -ri isconnectreq *
src/forward.cc:const bool isConnectRequest = 
!request->clientConnectionManager->port->spoof_client_ip &&
src/forward.cc:if (request->flags.sslPeek && 
!isConnectRequest) {
src/forward.cc:const bool isConnectRequest = 
!request->clientConnectionManager->port->spoof_client_ip &&

src/forward.cc:if (!request->flags.sslPeek || isConnectRequest)
src/client_side.cc:const bool isConnectRequest = 
!port->spoof_client_ip && !port->intercepted;

src/client_side.cc:if (intendedDest.IsAnyAddr() || isConnectRequest)

and the isConnectRequest is still there!

Am I really missing something here? Do I need to adjust my debian rules 
file or similar?


Cheers

Alex

On 15/05/14 17:51, Alex Crow wrote:

Hi

Thanks for that. This is odd because I compiled .debs myself from the 
source using the debian folder from an older version of squid as a 
template. I'm pretty sure I cleaned out the debian/patches folder and 
removed the lines in the rules file before building, but I will check 
this.


Alex

On 15/05/14 08:06, Amos Jeffries wrote:

On 15/05/2014 7:37 a.m., Alex Crow wrote:

Hi,

Is this any good at all or do I need to provide more? It seems a 
trivial issue to restart a browser, but the bigwigs are climbing all 
over me now!


Cheers

Alex


On 12/05/14 16:22, Alex Crow wrote:

Hi Amos,

New backtrace - I hope this helps!




#3  0x005279d1 in CbcPointer<ConnStateData>::operator->
(this=<value optimized out>) at base/CbcPointer.h:147
 c = <value optimized out>
#4  0x0057238e in FwdState::initiateSSL (this=0x80f14ba8) at
forward.cc:827
 hostname = 0x80e6d7e8 "secure.flashtalking.com"
 isConnectRequest = <value optimized out>
 peer = <value optimized out>
 fd = 812
 __FUNCTION__ = "initiateSSL"
 peeked_cert = <value optimized out>
 ssl = 0x940e87e0
 sslContext = <value optimized out>
#5  0x005725e3 in FwdState::connectDone (this=0x80f14ba8,
conn=..., status=<value optimized out>, xerrno=0) at forward.cc:895
 __FUNCTION__ = "connectDone"

Bit of a strange trace there.

It is in forward.cc, which does not exist in any 3.3 or later release of
Squid. That correlates with your info about it being 3.2.11.

But it is using the variables isConnectRequest and peeked_cert, which only
exist in the 3.HEAD releases of Squid. So your Squid is patched in the area
of the code that is crashing. Time to direct this bug at the vendor who
backported that patch for you.

Amos







Re: [squid-users] Unhandled exception: c

2014-05-14 Thread Alex Crow

Hi,

Is this any good at all or do I need to provide more? It seems a trivial 
issue to restart a browser, but the bigwigs are climbing all over me now!


Cheers

Alex


On 12/05/14 16:22, Alex Crow wrote:

Hi Amos,

New backtrace - I hope this helps!

Core was generated by `(squid-1) -YC -f /etc/squid3/squid.conf'.
Program terminated with signal 6, Aborted.
#0  0x7f2f758a81b5 in raise () from /lib/libc.so.6
(gdb) bt full
#0  0x7f2f758a81b5 in raise () from /lib/libc.so.6
No symbol table info available.
#1  0x7f2f758aafc0 in abort () from /lib/libc.so.6
No symbol table info available.
#2  0x0054670f in xassert (msg=0x7bb62c "c", file=0x7ea5f8 
"base/CbcPointer.h", line=147) at debug.cc:565

__FUNCTION__ = "xassert"
#3  0x005279d1 in CbcPointer<ConnStateData>::operator-> 
(this=<value optimized out>) at base/CbcPointer.h:147

c = <value optimized out>
#4  0x0057238e in FwdState::initiateSSL (this=0x80f14ba8) at 
forward.cc:827

hostname = 0x80e6d7e8 "secure.flashtalking.com"
isConnectRequest = <value optimized out>
peer = <value optimized out>
fd = 812
__FUNCTION__ = "initiateSSL"
peeked_cert = <value optimized out>
ssl = 0x940e87e0
sslContext = <value optimized out>
#5  0x005725e3 in FwdState::connectDone (this=0x80f14ba8, 
conn=..., status=<value optimized out>, xerrno=0) at forward.cc:895

__FUNCTION__ = "connectDone"
#6  0x006a6f69 in AsyncCall::make (this=0x950cf990) at 
AsyncCall.cc:32

__FUNCTION__ = "make"
#7  0x006aa215 in AsyncCallQueue::fireNext (this=<value 
optimized out>) at AsyncCallQueue.cc:52

call = {p_ = 0x950cf990}
__FUNCTION__ = "fireNext"
#8  0x006aa3c0 in AsyncCallQueue::fire (this=0xfb53f0) at 
AsyncCallQueue.cc:38

made = true
#9  0x005633dc in EventLoop::runOnce (this=0x7fffd3a62b20) at 
EventLoop.cc:132

sawActivity = false
waitingEngine = 0x7fffd3a62ba0
__FUNCTION__ = "runOnce"
#10 0x00563518 in EventLoop::run (this=0x7fffd3a62b20) at 
EventLoop.cc:96

No locals.
#11 0x005d3a25 in SquidMain (argc=<value optimized out>, 
argv=<value optimized out>) at main.cc:1520

WIN32_init_err = <value optimized out>
__FUNCTION__ = "SquidMain"
signalEngine = {<AsyncEngine> = {_vptr.AsyncEngine = 
0x7cc770}, loop = @0x7fffd3a62b20}
store_engine = {<AsyncEngine> = {_vptr.AsyncEngine = 
0x7cc7d0}, <No data fields>}
comm_engine = {<AsyncEngine> = {_vptr.AsyncEngine = 0xa78f30}, 
<No data fields>}
mainLoop = {errcount = 0, last_loop = false, engines = 
{capacity = 16, count = 4, items = 0x1426140}, timeService = 
0x7fffd3a62b90, primaryEngine = 0x7fffd3a62ba0, loop_delay = 0, error 
= false, runOnceResult = false}

time_engine = {_vptr.TimeEngine = 0x7dbe90}
#12 0x005d4213 in SquidMainSafe (argc=3051, argv=0xbeb) at 
main.cc:1242

No locals.
#13 main (argc=3051, argv=0xbeb) at main.cc:1234
No locals.

We are also getting a lot of this sort of thing in the logs since I've 
patched that Assert. Not sure if it's related.


2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 13:22:57 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 13:22:57 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 13:22:57 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 13:22:57 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 13:22:57 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 13:22:57 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 13:22:57 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 13:22:57 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' 
process.

2014/05/09 14:21:49 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 14:21:49 kid1| helperOpenServers: Starting 1/75 'ntlm_auth

Re: [squid-users] Unhandled exception: c

2014-05-12 Thread Alex Crow
 helpers...
2014/05/09 14:21:50 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 14:21:50 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:50 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2014/05/09 14:21:50 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 14:21:50 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 14:21:50 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:50 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2014/05/09 14:21:50 kid1| Starting new ntlmauthenticator helpers...
2014/05/09 14:21:50 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/05/09 14:21:50 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:50 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
2014/05/09 14:21:51 kid1| WARNING: Error Pages Missing Language: en-gb
2014/05/09 14:21:51 kid1| WARNING: Error Pages Missing Language: en
2014/05/09 14:21:51 kid1| Starting new ssl_crtd helpers...
2014/05/09 14:21:51 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

2014/05/09 14:21:51 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:51 kid1| WARNING: Cannot run '/usr/lib/squid3/ssl_crtd' 
process.

2014/05/09 14:21:51 kid1| Starting new ssl_crtd helpers...
2014/05/09 14:21:51 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

2014/05/09 14:21:51 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:51 kid1| WARNING: Cannot run '/usr/lib/squid3/ssl_crtd' 
process.

2014/05/09 14:21:51 kid1| Starting new ssl_crtd helpers...
2014/05/09 14:21:51 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

2014/05/09 14:21:51 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:21:51 kid1| WARNING: Cannot run '/usr/lib/squid3/ssl_crtd' 
process.
2014/05/09 14:39:16 kid1| helperOpenServers: Starting 10/70 
'ext_wbinfo_group_acl' processes

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.

2014/05/09 14:39:16 kid1| ipcCreate: fork: (12) Cannot allocate memory
2014/05/09 14:39:16 kid1| WARNING: Cannot run 
'/usr/lib/squid3/ext_wbinfo_group_acl' process.


Thanks very much for looking at this!

Cheers

Alex

On 30/04/14 20:15, Amos Jeffries wrote:

On 1/05/2014 6:19 a.m., Alex Crow wrote:

Brilliant! I will try to apply this and see if we get more detail. Will
it apply to 3.2.x? I can't run 3.4.x in prod due to the CPU load issue -
and I only see the crash in prod, never managed to trigger it in a test.


Yes it should apply on 3.2, though I have not tested that.

Amos



Cheers

Alex


On 29/04/14 20:45, Amos Jeffries wrote:

On 30/04/2014 7:30 a.m., Alex Crow wrote:

dying from an unhandled exception: c

I just realised what is generating this is a Must(c). There are only
two of them in Squid, but unfortunately they are in the generic and
widely used CbcPointer template.

Can you apply this patch please and see if we get a useful backtrace
next time:

http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-13386.patch


(this will also be in 3.4.5 to help with future issues hidden by the
same message).

Amos





Re: [squid-users] Unhandled exception: c

2014-04-30 Thread Alex Crow
Brilliant! I will try to apply this and see if we get more detail. Will 
it apply to 3.2.x? I can't run 3.4.x in prod due to the CPU load issue - 
and I only see the crash in prod, never managed to trigger it in a test.


Cheers

Alex


On 29/04/14 20:45, Amos Jeffries wrote:

On 30/04/2014 7:30 a.m., Alex Crow wrote:

dying from an unhandled exception: c

I just realised what is generating this is a Must(c). There are only
two of them in Squid, but unfortunately they are in the generic and
widely used CbcPointer template.

Can you apply this patch please and see if we get a useful backtrace
next time:

http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-13386.patch

(this will also be in 3.4.5 to help with future issues hidden by the
same message).

Amos





Re: [squid-users] Unhandled exception: c

2014-04-29 Thread Alex Crow
 /var/spool/squid3 swaplog 
(2073803 entries)

2014/04/29 13:07:51 kid1| Finished rebuilding storage from disk.
2014/04/29 13:07:51 kid1|   1870450 Entries scanned
2014/04/29 13:07:51 kid1|77 Invalid entries.
2014/04/29 13:07:51 kid1| 0 With invalid flags.
2014/04/29 13:07:51 kid1|   1667162 Objects loaded.
2014/04/29 13:07:51 kid1| 0 Objects expired.
2014/04/29 13:07:51 kid1|203203 Objects cancelled.
2014/04/29 13:07:51 kid1|76 Duplicate URLs purged.
2014/04/29 13:07:51 kid1| 9 Swapfile clashes avoided.
2014/04/29 13:07:51 kid1|   Took 8.56 seconds (194868.89 objects/sec).
2014/04/29 13:07:51 kid1| Beginning Validation Procedure
2014/04/29 13:07:52 kid1|   262144 Entries Validated so far.
2014/04/29 13:07:52 kid1|   524288 Entries Validated so far.
2014/04/29 13:07:52 kid1|   786432 Entries Validated so far.
2014/04/29 13:07:52 kid1|   1048576 Entries Validated so far.
2014/04/29 13:07:52 kid1|   1310720 Entries Validated so far.
2014/04/29 13:07:52 kid1|   1572864 Entries Validated so far.
2014/04/29 13:07:52 kid1|   Completed Validation Procedure
2014/04/29 13:07:52 kid1|   Validated 1667161 Entries
2014/04/29 13:07:52 kid1|   store_swap_size = 44324352.00 KB
2014/04/29 13:07:53 kid1| storeLateRelease: released 3 objects
2014/04/29 13:07:56 kid1| Starting new ntlmauthenticator helpers...
2014/04/29 13:07:56 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/04/29 13:07:56 kid1| Starting new ntlmauthenticator helpers...
2014/04/29 13:07:56 kid1| helperOpenServers: Starting 1/75 'ntlm_auth' 
processes

2014/04/29 13:08:08 kid1| Starting new ssl_crtd helpers...
2014/04/29 13:08:08 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

2014/04/29 13:08:08 kid1| Starting new ssl_crtd helpers...
2014/04/29 13:08:08 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

2014/04/29 13:08:09 kid1| Starting new ssl_crtd helpers...
2014/04/29 13:08:09 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes


On 26/04/14 10:19, Amos Jeffries wrote:

On 26/04/2014 5:38 a.m., Alex Crow wrote:

HI all,

I forgot I still have the issue in the subject bugging me too. Is the
below backtrace of any use or do I need to provide more?

Unfortunately, yes, these unhandled exception ones do not show where the
exception came from. cache.log should contain the error/fatal message,
which is a better clue.

Amos




Re: AW: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth

2014-04-25 Thread Alex Crow

Hi,

I use NTLM with Squid and also the wbinfo_group helper. In the 3.2 series 
everything is fine, but in 3.4, after a few hours, everything slows down 
and CPU usage is over 90%. In 3.2 it's in the teens.


I also use ssl_bump if that helps - does anyone else with this problem 
also use it?


Cheers

Alex

On 24/04/14 22:57, Carlos Defoe wrote:

Just updating... I tried one more time, with squid 3.4.4.

Same thing, 100% CPU after some minutes and with a few hundred
users. It doesn't stay at 100% all the time like an infinite loop...
it ranges over 90, 95, 99, 100%.

Almost sure it is something with auth helper handling. My line in
squid.conf is the following:

auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth --ntlm
/usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp
--domain=EXAMPLE --kerberos
/usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME

Majority of authentications are handled by negotiate_kerberos_auth.

I also tried to attach strace (strace -p pid) to squid process for a
minute, when it reached 100%. The result is on the following link. I
don't know if it is useful. The only thing that I found weird is the
Broken Pipe lines.

http://goo.gl/WhssSh

Squid 3.3 series is ok, I'm running 3.3.12.

bye guys

On Wed, Feb 5, 2014 at 7:33 AM, Rietzler, Markus (RZF, SG 324 /
RIETZLER_SOFTWARE) markus.rietz...@fv.nrw.de wrote:

That's not bad to hear. I have seen the new version. At the moment the only way to 
test is to use this new version in production and see what happens. Very 
annoying.

With a switch back to 3.3.x or 3.2.x everything works perfectly!

No, it is not NTLM/Kerberos.

See my posting from January:


-Original Message-
From: Rietzler, Markus (RZF, SG 324 / RIETZLER_SOFTWARE)
Sent: Wednesday, 8 January 2014 10:48
To: 'Eliezer Croitoru'
Subject: RE: RE: RE: [squid-users] Squid 3.4 sends Windows username
without backslash to external wbinfo_group helper

Just a quick answer:

Yesterday we switched to NTLM fakeauth to eliminate any problems with the
samba/winbind protocol talking to the DC (every now and then winbind loses its
trust with the DC, so with fakeauth we can be sure there is no influence from
TCP connections and talking to the DC).

But even with fakeauth we can see CPU usage rising. We then enabled
2 workers and this seems to reduce the problem somewhat. The rise is not
that fast...

... but it still happens in the end!



-Original Message-
From: Carlos Defoe [mailto:carlosde...@gmail.com]
Sent: Tuesday, 4 February 2014 20:38
To: squid-users
Subject: Re: RE: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth

For me, the version 3.4.3 have the same behavior. It uses 100% CPU (in
one core, the others are normal). For the users, it's just a slowed
down navigation. As soon as I change back to the 3.3.8, everything
works fine.

Actually I'm not sure the problem is caused by ntlm or kerberos or
external_acl_type or anything related to authentication. But I can't
disable it to be sure.

This time I will leave one server running with 3.4.3 and try to debug.
I have already tried to increase the debug level on every auth helper,
but I couldn't see anything wrong. I'll try debug_options ALL,9
tomorrow.

With strace, should I look for something? System calls squid does all
the time...


On Sun, Jan 26, 2014 at 11:47 PM, Alan lameventa...@gmail.com wrote:

On Wed, Jan 8, 2014 at 1:05 PM, Amos Jeffries squ...@treenet.co.nz

wrote:

On 7/01/2014 10:21 p.m., Rietzler, Markus (RZF, SG 324 /
RIETZLER_SOFTWARE) wrote:

thanxs,

our assumption is, that it is related to helper management. with 3.4.

there is a new helper protocol, right?

Right. That is the big user-visible bit in 3.4.

But there are other background changes involving TCP connection
management, authentication management, ACL behaviours and some things

in

3.3 series also potentially affecting NTLM.

The feature changes just give us a direction to look in. We still have
to diagnose each new bug in detail to be sure. There are others already
using NTLM in older 3.3/3.4 versions without seeing this problem, for
example.

Our environment worked with 3.2 without problems. Now with the jump to
3.4 it will not work anymore. So the number of requests is somehow important,
but it worked in the past...

If we go without ntlm_auth we can't see any high CPU load, so the
first thought - ACL and e.g. regex problems - can be
discarded. Maybe there are some cross influences, but we think it lies
somewhere in helpers/auth.

Did you get any better cache.log trace with the debug_options 29,9

84,9?

Amos


I have the same problem here, I noticed it when I went from 3.3.8 to

3.4.2.

I assumed the problem was introduced with 3.4.x, so I went back to
3.3.11 and it is working fine.
I'm using aufs, negotiate_kerberos_auth and a custom external acl

helper.

Unfortunately these are production servers, so I can't strace or
increase logging as suggested.




[squid-users] Unhandled exception: c

2014-04-25 Thread Alex Crow

HI all,

I forgot I still have the issue in the subject bugging me too. Is the 
below backtrace of any use or do I need to provide more?


Thanks

Alex
On 07/02/14 10:41, Alex Crow wrote:



Hi

Thanks for that - I did get a backtrace today...

Program terminated with signal 6, Aborted.
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
(gdb) backtrace
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
#1  0x7fa89b3fdfc0 in abort () from /lib/libc.so.6
#2  0x7fa89bc8fdc5 in __gnu_cxx::__verbose_terminate_handler() () 
from /usr/lib/libstdc++.so.6

#3  0x7fa89bc8e166 in ?? () from /usr/lib/libstdc++.so.6
#4  0x7fa89bc8e193 in std::terminate() () from 
/usr/lib/libstdc++.so.6

#5  0x7fa89bc8e216 in __cxa_rethrow () from /usr/lib/libstdc++.so.6
#6  0x005d457c in SquidMainSafe (argc=<value optimized out>, 
argv=<value optimized out>) at main.cc:1246
#7  main (argc=<value optimized out>, argv=<value optimized out>) at 
main.cc:1234


Thanks

Alex



Hi,

Is this of any help?

Also I've had to go back to 3.2.11 as 3.4.x is still using way too 
much CPU, I get users (about 350) complaining about extreme slowness 
by lunchtime, and squid is using 90% CPU. In 3.2.11 it's always 
around 15%.


Requests are fairly low:

Average HTTP requests per minute since start: 2647.4

Cheers

Alex








Re: [squid-users] squid advice needed.

2014-04-18 Thread Alex Crow

LDAP with Samba with Squid using NTLM auth would work for Windows machines.

For non-Windows you would have to enter credentials and/or store them in 
the client device for BASIC auth.


Printing has nothing to do with Squid, Samba/CUPS will deal with that bit.

Cheers

Alex
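A sketch of the two auth_param blocks such a mixed setup typically pairs in squid.conf (helper paths, the LDAP base DN and server are assumptions that vary by distro; the Basic LDAP helper is named basic_ldap_auth in Squid 3.2+ and squid_ldap_auth in older releases such as the poster's 3.1):

```
# NTLM single sign-on for domain-joined Windows clients, via Samba's helper:
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

# Basic fallback for devices that cannot do NTLM; clients prompt once and
# may remember the credentials (base DN and host below are placeholders):
auth_param basic program /usr/lib/squid3/basic_ldap_auth -b "dc=example,dc=lan" -h ldap.example.lan
auth_param basic realm home proxy
auth_param basic credentialsttl 2 hours

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```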

On 18/04/14 07:01, rontopia wrote:

First time poster/emailer.

Here is what I want to do. I understand there are about a dozen ways to do
this; that is why I need a little guidance.

This is for my home network. I have many family members from age 6 to 46
who all use the Internet and printer. There are several operating systems:
Windows 7 and 8, Linux, Android tablets, Android phones, iPads, iPods, and
I'm sure I'm forgetting something.

I have Squid 3.1.19 running on a small system running a headless Ubuntu
install.

I want to implement SSO, but after days and maybe weeks of reading I am a
little more confused than when I started. So I am going to ask or list the
things I want to do and maybe some of you can guide me in the right
direction?

I want SSO (I was planning to use LDAP with Samba), plus:
squid
dansguardian
lighttpd

The goals for this config:

1. single sign-on
2. multiple filter groups based on username
3. everyone should have access to printers

I have a 6 year old who showed me on his mom's iPad: "look dad, I can do a
web search for batman", which is great! But it could have been a whole
different story if he had misspelled batman as something like buttman.

After doing all this reading, I understand that Squid might be able to
handle the SSO duties? But I am having some doubts because of all the iOS
stuff that's on my network (I personally don't like Apple products but many
family members do), and that is leading me to want to use Samba. Is this
wrong thinking?

Here is an illustration of what I thought I was going to do, before I
started reading to try to educate myself:

http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4665626/samba_ldap_squid_dansgardian.png
I am looking for advice.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-advice-needed-tp4665626.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Re: squid3 block all 443 ports request

2014-02-12 Thread Alex Crow

Hi Khalil,

You've supplied a logically invalid access rule, i.e. an impossible match.
You're trying to block everything that is on port 443 and also, at the
same time, everything that is *not* on 443 - a rule that can never match
anything.

I'd be surprised if you can get any access with that!

What you need is something like (if you want to block ssl)

http_access allow !SSL_ports
http_access deny  SSL_ports

Swap it around if you want to allow SSL only.

Read the docs, the way acls and access rules is clearly explained there.

ACL elements on a single http_access line are logically AND'ed; separate 
http_access lines are OR'ed, checked in order until one matches.


Values within a single acl definition are OR'ed, both on one line and 
across multiple lines that define the same acl name.
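A minimal squid.conf sketch of those semantics (acl names and addresses are illustrative):

```
# Both elements on one http_access line must match (AND):
acl lan src 192.168.0.0/24
acl worktime time MTWHF 08:00-18:00
http_access allow lan worktime

# Separate http_access lines are alternatives (OR), checked top-down
# until one matches:
acl vpn src 10.8.0.0/24
http_access allow vpn
http_access deny all
```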

Cheers

Alex


On 12/02/14 15:27, khadmin wrote:

Hi,
here is my squid.conf file.
here is my configuration concerning ssl ports:
acl SSL_ports port 443
http_access deny SSL_ports !SSL_ports

Regards,
Khalil squid.conf
http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4664752/squid.conf



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid3-block-all-443-ports-request-tp4664735p4664752.html




Re: AW: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2014-02-07 Thread Alex Crow



Hi

Thanks for that - I did get a backtrace today...

Program terminated with signal 6, Aborted.
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
(gdb) backtrace
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
#1  0x7fa89b3fdfc0 in abort () from /lib/libc.so.6
#2  0x7fa89bc8fdc5 in __gnu_cxx::__verbose_terminate_handler() () 
from /usr/lib/libstdc++.so.6

#3  0x7fa89bc8e166 in ?? () from /usr/lib/libstdc++.so.6
#4  0x7fa89bc8e193 in std::terminate() () from 
/usr/lib/libstdc++.so.6

#5  0x7fa89bc8e216 in __cxa_rethrow () from /usr/lib/libstdc++.so.6
#6  0x005d457c in SquidMainSafe (argc=<value optimized out>, 
argv=<value optimized out>) at main.cc:1246
#7  main (argc=<value optimized out>, argv=<value optimized out>) at 
main.cc:1234


Thanks

Alex



Hi,

Is this of any help?

Also I've had to go back to 3.2.11 as 3.4.x is still using way too much 
CPU, I get users (about 350) complaining about extreme slowness by 
lunchtime, and squid is using 90% CPU. In 3.2.11 it's always around 15%.


Request are fairly low:

Average HTTP requests per minute since start:2647.4

Cheers

Alex






Re: AW: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2014-02-06 Thread Alex Crow


On 06/02/14 07:56, Amos Jeffries wrote:

On 2014-02-06 11:09, Alex Crow wrote:

Amos,

Yes, I compiled a Debian package and installed the squid3*dbg*.deb file.

This is a bit tricky as this is a production server; just from testing with a
few clients the problem does not appear.

I can definitely say that

/usr/lib/debug/usr/sbin/squid3

is there and is fairly large, so I don't know why there are missing 
symbols.




For production servers I use this minimal-downtime debugging script. 
It or small variations have worked well for a few clients on 
production server passing upwards of 10K req/sec for several hundred 
users.
http://wiki.squid-cache.org/SquidFaq/BugReporting#Using_gdb_debugger_on_a_live_proxy_.28with_minimal_downtime.29 



Amos




Hi

Thanks for that - I did get a backtrace today...

Program terminated with signal 6, Aborted.
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
(gdb) backtrace
#0  0x7fa89b3fb1b5 in raise () from /lib/libc.so.6
#1  0x7fa89b3fdfc0 in abort () from /lib/libc.so.6
#2  0x7fa89bc8fdc5 in __gnu_cxx::__verbose_terminate_handler() () 
from /usr/lib/libstdc++.so.6

#3  0x7fa89bc8e166 in ?? () from /usr/lib/libstdc++.so.6
#4  0x7fa89bc8e193 in std::terminate() () from /usr/lib/libstdc++.so.6
#5  0x7fa89bc8e216 in __cxa_rethrow () from /usr/lib/libstdc++.so.6
#6  0x005d457c in SquidMainSafe (argc=<value optimized out>, 
argv=<value optimized out>) at main.cc:1246
#7  main (argc=<value optimized out>, argv=<value optimized out>) at 
main.cc:1234


Thanks

Alex



Re: AW: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2014-02-05 Thread Alex Crow

Hi Amos,

I get the following:

# gdb squid3 core
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/sbin/squid3...Reading symbols from 
/usr/lib/debug/usr/sbin/squid3...done.

(no debugging symbols found)...done.

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/libpthread.so.0...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libcrypt.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /usr/lib/libxml2.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libxml2.so.2
Reading symbols from /usr/lib/libexpat.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libexpat.so.1
Reading symbols from /usr/lib/libssl.so.0.9.8...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libssl.so.0.9.8
Reading symbols from /usr/lib/libcrypto.so.0.9.8...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libcrypto.so.0.9.8
Reading symbols from /usr/lib/libgssapi_krb5.so.2...(no debugging 
symbols found)...done.

Loaded symbols for /usr/lib/libgssapi_krb5.so.2
Reading symbols from /usr/lib/libkrb5.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libkrb5.so.3
Reading symbols from /usr/lib/libk5crypto.so.3...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libk5crypto.so.3
Reading symbols from /lib/libcom_err.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libcom_err.so.2
Reading symbols from /lib/libnsl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libresolv.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/libcap.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libcap.so.2
Reading symbols from /lib/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /usr/lib/libltdl.so.7...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libltdl.so.7
Reading symbols from /lib/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /usr/lib/libstdc++.so.6...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libstdc++.so.6
Reading symbols from /lib/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libgcc_s.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libgcc_s.so.1
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /usr/lib/libz.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /usr/lib/libkrb5support.so.0...(no debugging 
symbols found)...done.

Loaded symbols for /usr/lib/libkrb5support.so.0
Reading symbols from /lib/libkeyutils.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libkeyutils.so.1
Reading symbols from /lib/libattr.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libattr.so.1
Reading symbols from /lib/libnss_files.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libnss_files.so.2
Reading symbols from /lib/libnss_compat.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libnss_compat.so.2
Reading symbols from /lib/libnss_nis.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libnss_nis.so.2
Core was generated by `(squid-1) -YC -f /etc/squid3/squid.conf'.
Program terminated with signal 6, Aborted.
#0  0x7f0a2911a1b5 in raise () from /lib/libc.so.6


Not sure if that helps, it doesn't look too helpful.

Any ideas what else I can do?

Cheers

Alex

On 23/01/14 08:31, Amos Jeffries wrote:

On 23/01/2014 4:12 a.m., Alex Crow wrote:

Hi,

Just noticed something in the changelogs for the nightly build that
might mean this is fixed - I'm optimistic anyway:

Tue 2014-01-21 20:29:15 -0700
http://www.squid-cache.org/Versions/v3/3.4/changesets/squid-3.4-13079.patch
 Amos Jeffries +10 -2
 Fix external_acl_type async loop failures


If it does then we can peg the problem down to being the well-known
systemic issues in NTLM handshake.

Amos



Now I just need to figure out the Unhandled exception: c

Re: AW: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2014-02-05 Thread Alex Crow

Amos,

Yes, I compiled a Debian package and installed the squid3*dbg*.deb file.

This is a bit tricky as this is a production server; just from testing with a 
few clients the problem does not appear.


I can definitely say that

/usr/lib/debug/usr/sbin/squid3

is there and is fairly large, so I don't know why there are missing symbols.

Cheers

Alex

On 05/02/14 15:10, Amos Jeffries wrote:

On 6/02/2014 2:17 a.m., Alex Crow wrote:

Hi Amos,

I get the following:

# gdb squid3 core
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/sbin/squid3...Reading symbols from
/usr/lib/debug/usr/sbin/squid3...done.
(no debugging symbols found)...done.



Not sure if that helps, it doesn't look too helpful.

Any ideas what else I can do?


Is the squid3-dbg package available? It has the debug symbols which are
needed to make these traces meaningful.

Amos





Re: AW: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2014-01-22 Thread Alex Crow

Hi,

Just noticed something in the changelogs for the nightly build that 
might mean this is fixed - I'm optimistic anyway:


Tue 2014-01-21 20:29:15 -0700 
http://www.squid-cache.org/Versions/v3/3.4/changesets/squid-3.4-13079.patch 
	Amos Jeffries 	+10 -2 	

Fix external_acl_type async loop failures



Now I just need to figure out the Unhandled exception: c errors that 
kill my squid every so often. It seems to be a rare issue as from 
googling only myself and two other people seem to have faced it.


Cheers

Alex

On 06/01/14 20:14, Eliezer Croitoru wrote:

Hey,

There was someone in the past that asked about this ntlm helper issue.
I would try using only squid for a while, to make sure what is going on.
Knowing your requests-per-second rate would help in understanding the 
basic issue.


There is a complexity issue when, for example, a proxy is hit by 400 
requests in one second and authentication is being used.
There is also the basic issue that when authentication is done over a 
network and the network is not fast enough or has too much 
latency, the backlog will grow and grow over time.


100% cpu usage can sometimes be observed, but in case the cache-mgr 
is not responsive the only tools available are:

top
netstat
ss
iptables
iptraf

And there are a couple of other nice tools which can verify the basic 
assumption that this network might need more capacity than it has.


Eliezer

On 06/01/14 11:53, Rietzler, Markus (RZF, SG 324 / 
RIETZLER_SOFTWARE) wrote:

i want to join this discussion.
we are using squid 3.4.2, also with ntlm_auth and winbind. the only 
difference is that we don't use wbinfo_group; we just use 
the username. we also have the problem that after a few minutes 
squid uses 100% cpu and is getting very slow. in the cache.log I can 
see "increase ntlm-helper" messages as the max number is used. I also can see 
in the cache manager menu output (ntlmauthenticator) that all the 
configured helpers are busy.


any idea about the 100% cpu usage?






Re: [squid-users] Squid 3.4.1 Workers Option

2013-12-30 Thread Alex Crow

On 30/12/13 15:25, Will Roberts wrote:

Hi,

I'm trying to use the SMP Scale feature added in 3.2 and I'm having a 
little trouble activating it. If I add workers = 2 to my squid.conf 
I get the following error during startup:


FATAL: Bungled /etc/squid3/squid.conf line 3: workers = 1

I built my own instead of using a pre-built binary. Are there any 
specific configure options that control whether this feature is enabled?


Thanks,
--Will


Hi,

Are you sure you don't have it in twice once you add your line? Check 
line 3 of the conf to make sure it's not there already.


Cheers

Alex


Re: [squid-users] Squid 3.4.1 Workers Option

2013-12-30 Thread Alex Crow

On 30/12/13 16:21, Will Roberts wrote:

On 12/30/2013 11:16 AM, Alex Crow wrote:

Hi,

Are you sure you don't have it in twice once you add your line? Check 
line 3 of the conf to make sure it's not there already.


Cheers

Alex


Alex,

Yes I'm sure it's only in the file once, it's pretty small:

# CUSTOM OPTIONS
# 
-

workers = 2

if ${process_number} = 1
  http_port 80
  http_port 8080
  http_port 10080
endif

if ${process_number} = 2
  http_port 180
  http_port 18080
  http_port 11080
endif


I realize there's not a lot of content in my config. I'm just trying 
to get an understanding of how the option works before I integrate it 
with my more complicated set up.


Thanks,
--Will


Should it not be:

workers 2

rather than

workers = 2

?

Alex
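In other words, the directive takes a bare value; Will's snippet would parse along these lines (a sketch adapted from his config):

```
workers 2

if ${process_number} = 1
  http_port 80
endif
if ${process_number} = 2
  http_port 180
endif
```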


Re: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2013-12-29 Thread Alex Crow

Hi Eliezer,

I can confirm it is the subprocess, can't get a snapshot now as it's in 
prod, but I did the same myself and it was definitely the kid (only 1 
kid is configured).


Cheers

Alex

On 27/12/13 19:21, Eliezer Croitoru wrote:

Hey Alex,

Can you by any chance get a top snapshot output to verify if this 
issue is related to the subprocess or the parent process.


Thanks,
Eliezer

On 27/12/13 19:58, Alex Crow wrote:


Hi Amos,

Yes, this works re: the helper, but unfortunately we get very high CPU
usage in 3.4.1 as opposed to 3.3.11. I was getting 80-100% after a few
minutes whereas when I reverted back to 3.3.11, I only saw the odd peak
at about 27%, and most of the time it was 10%.

No other change other than the version, config was identical.

Cheers

Alex






Re: [squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2013-12-27 Thread Alex Crow

On 24/12/13 02:39, Amos Jeffries wrote:

On 24/12/2013 2:28 a.m., Alex Crow wrote:

Hi,

I use the below:

external_acl_type nt_group ttl=20 children-startup=10 children-max=70
children-idle=10 %LOGIN /usr/lib/squid3/ext_wbinfo_group_acl

to be able to use NT groups in my squid config. This works fine in 3.2
and 3.3, but I recently tried to upgrade to 3.4 and this stopped
working. In the cache.log there are hundreds of entries like:

Could not get groups for user DOMAINuser

Whereas the correct user name should be DOMAIN\user. If I pass the
correct username to the wbinfo_group helper it works, so it seems squid
is dropping the backslash in my 3.4 install (squid-3.4.1-20131216-r13058).

Going back to 3.3.11 makes everything work as expected.


Can you test 3.4 latest snapshot with this patch on top please?

Amos



Hi Amos,

Yes, this works re: the helper, but unfortunately we get very high CPU 
usage in 3.4.1 as opposed to 3.3.11. I was getting 80-100% after a few 
minutes whereas when I reverted back to 3.3.11, I only saw the odd peak 
at about 27%, and most of the time it was 10%.


No other change other than the version, config was identical.

Cheers

Alex


[squid-users] Squid 3.4 sends Windows username without backslash to external wbinfo_group helper

2013-12-23 Thread Alex Crow

Hi,

I use the below:

external_acl_type nt_group ttl=20 children-startup=10 children-max=70 
children-idle=10 %LOGIN /usr/lib/squid3/ext_wbinfo_group_acl


to be able to use NT groups in my squid config. This works fine in 3.2 
and 3.3, but I recently tried to upgrade to 3.4 and this stopped 
working. In the cache.log there are hundreds of entries like:


Could not get groups for user DOMAINuser

Whereas the correct user name should be DOMAIN\user. If I pass the 
correct username to the wbinfo_group helper it works, so it seems squid 
is dropping the backslash in my 3.4 install (squid-3.4.1-20131216-r13058).
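For context, the helper's verdict is normally consumed through an external acl along these lines (the group name here is illustrative, not from my config):

```
external_acl_type nt_group ttl=20 children-startup=10 children-max=70 children-idle=10 %LOGIN /usr/lib/squid3/ext_wbinfo_group_acl
acl internet_users external nt_group "Internet Users"
http_access allow internet_users
```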


Going back to 3.3.11 makes everything work as expected.

Thanks

Alex


Re: [squid-users] Does Squid 3.3 AD authentication

2013-12-23 Thread Alex Crow

On 23/12/13 18:57, javed_samtiah wrote:

Hi,

Does SQUID 3.3 supports active directory authentication in Transparent Poxy
mode ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Does-Squid-3-3-AD-authentication-tp4664003.html


Hi,

As far as I know, there is no such thing as transparent proxying with 
authentication, full stop. It's just fundamentally impossible.


Cheers

Alex


Re: [squid-users] SSL bump interception and certificates warnign

2013-09-12 Thread Alex Crow

On 11/09/13 20:56, Loïc BLOT wrote:

Then, if i add my own CA to firefox warning will disappear ?


Yes, that is the way SSL works. Just make sure you install the proxy's 
CA cert in trusted root CAs in Windows cert store and/or other browsers' 
stores and you are good to go.
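If you still need to create such a CA, one common approach looks like this (a sketch; the file names and subject are placeholders):

```shell
# Generate a private CA key and a self-signed certificate (placeholder names)
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -keyout myCA.key -out myCA.crt -subj "/CN=Proxy CA"
# Combined PEM (key + cert) for squid's http(s)_port cert= option
cat myCA.key myCA.crt > myCA.pem
# DER copy of the certificate only, for importing into Windows/browser trust stores
openssl x509 -in myCA.crt -outform DER -out myCA.der
```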


NB this may not be legal in your jurisdiction if you are doing this for 
others than yourself. Especially if it's for a company and you don't 
have it mentioned in your employee contract as it could well be used 
against you. SSL provides an expectation (albeit rather optimistic at 
the moment given the NSA debacle) of end-to-end privacy and you cannot 
with good conscience violate that trust even if it is technically 
possible. Remember that you are effectively doing an MITM attack.


Cheers

Alex


Re: [squid-users] Updating Squid

2013-07-13 Thread Alex Crow

Hi Gustavo,

Eliezer has RPMs for 6.4 (x64 only) here:

http://repo.ngtech.co.il/rpm/centos/6/x86_64/

Cheers

Alex


On 13/07/13 20:06, Gustavo Esquivel wrote:

HI Antony,
my Linux distribution version is CentOS release 6.4 (Final)
about the package manager, i'm not sure how it works in console mode...

i install the Squid version on linux using yum install squid

hope this info can you let you help!
thanks a lot!


On Sat, Jul 13, 2013 at 11:41 AM, Amos Jeffries squ...@treenet.co.nz wrote:

On 14/07/2013 3:30 a.m., Gustavo Esquivel wrote:

Hi Everybody!
i'm new in linux and i don't have much idea how to update my old
squid (it is already working).
i have version 3.1.10.
i made these steps:

wget http://www.squid-cache.org/Versions/v3/3.3/squid-3.3.8.tar.gz
tar xzvf squid-3.3.8.tar.gz
cd squid-3.3.8
./configure

but in this step i get this error:
[root@ProxyServer squid-3.3.8]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... configure: error: newly
created file is older than distributed files!

any idea?


The clock on your system is not correctly set?

Amos




[squid-users] Memory leaks in squid 3.3.5?

2013-07-11 Thread Alex Crow

Hi all,

I've been running 3.3.5 with NTLM auth an icap service (c-icap with 
clamav) and SSL Bump/Dynamic cert, and I've noticed that the squid3 
process rapidly consumes almost all of my RAM (12G) within just a few hours:


16143 proxy 20   0 8554m 8.2g 5788 S0 69.6  35:09.43 squid3

My cache_mem is 4GB, and my disk cache is 48GB, which should, according 
to estimates, use between 4.5 and 5.5G. (We only have about 350 users).
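For reference, that estimate follows from the usual rule of thumb of roughly 14 MB of in-memory index per GB of disk cache on 64-bit builds (the 14 MB figure is an assumption, not measured on this box):

```shell
cache_mem_gb=4
disk_cache_gb=48
# cache_mem plus ~14 MB of index per GB of on-disk cache
total_mb=$((cache_mem_gb * 1024 + disk_cache_gb * 14))
echo "${total_mb} MB"   # prints "4768 MB", i.e. about 4.7 GB
```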


We were quite happily using 3.2.11 with the same parameters. Has anyone 
else noticed very high memory usage with Squid 3.3.x in a similar setup?


Thanks

Alex


Re: [squid-users] Memory leaks in squid 3.3.5?

2013-07-11 Thread Alex Crow

Hi Eliezer,

I build .debs for squeeze, basically copying the debian subdir from the 
source packages into the extracted archives and adjusting accordingly 
(ie modifying Changelog and deleting old patches). I tried wheezy but 
the OpenSSL 1.0.1 horribly breaks *loads* of sites when using SSLBump.


Cheers

Alex



On 11/07/13 20:30, Eliezer Croitoru wrote:

Squid 3.3.7 is out and there was a new leak that was fixed and might
have caused the problem you are referring to.

If you have used my RPM there is an update to 3.3.6 which does not include
the latest patches, and a 3.3.7 with all the patches will probably be out
next week since it builds fine.
What version of linux are you using?

Eliezer

On 07/11/2013 08:32 PM, Alex Crow wrote:

Hi all,

I've been running 3.3.5 with NTLM auth an icap service (c-icap with
clamav) and SSL Bump/Dynamic cert, and I've noticed that the squid3
process rapidly consumes almost all of my RAM (12G) within just a few
hours:

16143 proxy 20   0 8554m 8.2g 5788 S0 69.6  35:09.43 squid3

My cache_mem is 4GB, and my disk cache is 48GB, which should, according
to estimates, use between 4.5 and 5.5G. (We only have about 350 users).

We were quite happily using 3.2.11 with the same parameters. Has anyone
else noticed very high memory usage with Squid 3.3.x in a similar setup?

Thanks

Alex




Re: [squid-users] Memory leaks in squid 3.3.5?

2013-07-11 Thread Alex Crow

Hi Eliezer,

I can tell you that we have come across specific sites that were OK 
being bumped in squeeze (which comes with OpenSSL 0.9.8) but did not work in 
wheezy, which uses 1.0.1.


Here are the example sites we found so far (in the form of acls):

acl nobump dstdomain .cardsonline-commercial.com
acl nobump dstdomain .nwolb.com
acl nobump dstdomain .studentloanrepayment.co.uk
acl nobump dstdomain .shareview.co.uk
acl nobump dstdomain .cahoot.com
acl nobump dstdomain .firstdirect.com
acl nobump dstdomain .nab.com.au
acl nobump dstdomain .rbs.co.uk

I think it's something to do with TLS 1.2 vs SSL3 negotiation. And from 
testing with sslclient it seems it decides to ignore quite a lot of 
installed CA certs, and sslclient will fail unless I specifically point 
it to the CA cert the relevant site uses.
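Until that is understood, one workaround is to exempt such sites from bumping entirely, using the 3.3-era ssl_bump actions (a sketch reusing the acl above; later releases replaced these actions with peek/splice):

```
acl nobump dstdomain .nwolb.com .rbs.co.uk
ssl_bump none nobump
ssl_bump server-first all
```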


Thanks

Alex


On 11/07/13 21:03, Eliezer Croitoru wrote:

Hey Alex,

I am unsure about the reason these sites break, since I have
never used squid SSL-BUMP beyond compiling it.
Claiming it's a specific version of OpenSSL is quite a claim.
If you had tried with another version, I would say you could claim it.

I would say that breaking any full-duplex protocol always seems like
a bad idea to me.
I have seen other systems that *break* and bump ssl connections to sites
like gmail.
And since I have seen other software's *results*, I would say the reason is
probably not OpenSSL directly, but I cannot prove it yet.

I do hope that you can give examples of sites that do not play well with
SSLBump so I and others can test them.
If we test, we can try to fix and debug it.
Please take your time and give a list of sites that can be tested which
are not banks or financial organisations, to make sure that the root
cause of the problem with SSL-BUMP is one way or another solvable.

If you can take a sec to file at http://bugs.squid-cache.org/ it will
help the project a lot.

Thanks,
Eliezer

On 07/11/2013 10:39 PM, Alex Crow wrote:

Hi Eliezer,

I build .debs for squeeze, basically copying the debian subdir from the
source packages into the extracted archives and adjusting accordingly
(ie modifying Changelog and deleting old patches). I tried wheezy but
the OpenSSL 1.0.1 horribly breaks *loads* of sites when using SSLBump.

Cheers

Alex





On 11/07/13 20:30, Eliezer Croitoru wrote:

Squid 3.3.7 is out and there was a new leak that was fixed and might
have caused the problem you are referring to.

If you have used my RPM there is an update to 3.3.6 which does not include
the latest patches, and a 3.3.7 with all the patches will probably be out
next week since it builds fine.
What version of linux are you using?

Eliezer

On 07/11/2013 08:32 PM, Alex Crow wrote:

Hi all,

I've been running 3.3.5 with NTLM auth an icap service (c-icap with
clamav) and SSL Bump/Dynamic cert, and I've noticed that the squid3
process rapidly consumes almost all of my RAM (12G) within just a few
hours:

16143 proxy 20   0 8554m 8.2g 5788 S0 69.6  35:09.43 squid3

My cache_mem is 4GB, and my disk cache is 48GB, which should, according
to estimates, use between 4.5 and 5.5G. (We only have about 350 users).

We were quite happily using 3.2.11 with the same parameters. Has anyone
else noticed very high memory usage with Squid 3.3.x in a similar setup?

Thanks

Alex




Re: [squid-users] Re: https traffic using squid and icap

2013-06-21 Thread Alex Crow

Hi,

If you go here:

http://www.eicar.org/85-0-Download.html

and try one of the https links and c-icap gives you a virus warning, 
then the content is being passed to c-icap.
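You can also generate the standard EICAR test file locally to exercise the scanner directly; the 68-byte string below is the one published by eicar.org and is harmless by design:

```shell
# Write the canonical 68-byte EICAR test string (no trailing newline)
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.com
wc -c eicar.com   # expect 68 bytes
```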


Cheers

Alex

On 21/06/13 02:49, sjaipuri wrote:

Now it makes more sense to me.

Yes, right now I am only seeing plain text ICAP headers for all https
traffic, but I see the whole payload for http traffic on the ICAP port; you
already mentioned that squid sends the http message if it is able to parse it.

As you say, ssl-bump will convert the CONNECT to a series of http requests. I
tried tcpdump on ports 3128 (squid), 80, 443 and 1344 (ICAP), but in all these
cases I only see unencrypted HTTP requests for https traffic; I am not able to
see the payload.
Does ssl-bump decrypt the payload as well and make it available as plain
text?






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/https-traffic-using-squid-and-icap-tp4660720p4660733.html




Re: [squid-users] https traffic using squid and icap

2013-06-20 Thread Alex Crow
Where are you doing the packet capture? I.e., are you doing it on the 
host+interface with address 172.30.30.212?


I'm also not sure if the always_direct bypasses bumping, I'm sure Amos 
or others would tell you.


Alex

On 20/06/13 19:49, sjaipuri wrote:

Hi,

I am working on one of my project in which I have to capture https traffic
in plain text format. I am using squid with sslbump along with c-icap, both
running on Fedora.

Below is the part of squid.conf I am using.

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 10240
icap_service service_req reqmod_precache bypass=0
icap://172.30.30.212:1344/virus_scan
icap_service service_resp respmod_precache bypass=0
icap://172.30.30.212:1344/virus_scan
adaptation_access service_req allow all
adaptation_access service_resp allow all

http_access allow all

http_port 3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/etc/ssl/certs/perCA.pem

always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER


Even with the above settings, when I capture https traffic using tcpdump, it is
still encrypted.
Can anyone help me or guide me to right direction?

Thanks in advance.

Sagar




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/https-traffic-using-squid-and-icap-tp4660720.html




Re: [squid-users] running squid by only one eth

2013-04-25 Thread Alex Crow

On 25/04/13 09:42, John Doe wrote:

From: ma~sha sspard...@gmail.com


Is it possible to run squid by only one eth, for example eth0 only? If
this is possible how do I do it?

What about the following?
  http_port eth0_IP:PORT

JD


Exactly, there is no requirement for Squid to be dual-homed.

(apols John Doe for sending to you instead of list. Oops!).


Re: [squid-users] YAALQ

2013-03-31 Thread Alex Crow
You have allowed the http request to the site, but you have denied the 
reply. http_access and http_reply access are different rule types.


If you add an http_reply_access allow no_filter_dst above the last 
rule I think it will work.
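That is, something along these lines, using the acl names from the quoted config (a sketch, not tested against the full config):

```
http_access allow no_filter_dst
http_reply_access allow no_filter_dst
http_reply_access deny !allow_mime_types richard2_src
```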


Thanks

Alex

On 31/03/13 12:21, richard lucassen wrote:

Hello list, Yet Another Access List Question.

As the doc says:

Access list rules are checked in the order they are written. List
searching terminates as soon as one of the rules is a match.

Well, that's quite clear I'd say. But why isn't this working properly:


acl richard2_src 92.68.12.178

[..]
acl no_filter_dst dstdomain /etc/squid/nofilter.domains.txt

acl allow_mime_types rep_mime_type -i ^text/.* ^image/.*
^text/plain ^text/html ^application/.*ms.*word.*
^application/.*ms.*excel.* ^application/.*pdf.* ^application/.*xml.*
^application/.*java.*

[..]

http_access allow no_filter_dst
http_reply_access deny !allow_mime_types richard2_src
[..]


$ cat /etc/squid/nofilter.domains.txt
.xaq.nl

The MIME type filter is working properly. But if I visit
http://www.xaq.nl/ there is an swf file which should be blocked by the
allow_mime_types. But as the domain is allowed in the rule above
allow_mime_types, the no_filter_dst, I'd expect that squid accepts
the swf on that particular page. But it is denied:

1364728671.633  7 92.68.12.178 TCP_DENIED/403 1532 GET
http://www.xaq.nl/clock.swf - DIRECT/192.87.112.211 text/html

Why is that?

R.





Re: [squid-users] Bypass bumping all websites in SSL transparent mode

2013-03-12 Thread Alex Crow
I thought ssl_bump should be defined on the http port, not the https 
one. However I've not done transparent for ages so I could be wrong.


If you don't want it, why put it in the *_port directives at all?

Alex

On 12/03/13 19:00, David Touzeau wrote:

Dear

I would like to use Squid 3.3.x in transparent SSL mode (in order to 
build a kind of HotSpot system).

My issue is :

squid forces bumping of all websites and changes the certificate, even though an 
ACL is created to deny bumping websites.


I would like to know if it is possible to do that ?

I have set this in the squid.conf

# - SSL Listen Port
https_port 192.168.1.204:3130 intercept ssl-bump 
cert=/etc/squid3/ssl/cacert.pem key= /etc/squid3/ssl/privkey.pem

# - SSL Rules
ssl_bump deny all
always_direct allow all

-A PREROUTING -p tcp -m tcp --dport 3128  -j DROP
-A PREROUTING -p tcp -m tcp --dport 3130  -j DROP
-A PREROUTING -s 192.168.1.204/32 -p tcp -m tcp --dport 80 -j ACCEPT
-A PREROUTING -s 192.168.1.204/32 -p tcp -m tcp --dport 443 -j ACCEPT
-A PREROUTING -s 192.168.0.4/32 -p tcp -m tcp --dport 80  -j ACCEPT
-A PREROUTING -s 192.168.0.4/32 -p tcp -m tcp --dport 443 -j ACCEPT
-A PREROUTING -p tcp -m tcp --dport 80 -m comment -j REDIRECT --to-ports 3128
-A PREROUTING -p tcp -m tcp --dport 443 -m comment -j REDIRECT 
--to-ports 3130

-A POSTROUTING -m comment  -j MASQUERADE






Re: [squid-users] Re: ipv6 support for 3.1.16

2013-02-21 Thread Alex Crow
Kaspersky do an icap server as well, and they are one of the best 
(obviously not gratis or libre but as it's ICAP it will work with Squid).


Alex

On 21/02/13 10:39, anita wrote:

Hi Amos,

Thanks for a very quick reply.
I have a couple of more questions.

1. What is a WCCP setting?
2. How can I check if the ipv4-mapping feature is disabled or not available
in my kernel? I am using Red Hat Linux 6.2 flavour with a GNU/Linux OS.

Thanks in advance.

Regards,
Anita



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ipv6-support-for-3-1-16-tp4658490p4658609.html




Re: [squid-users] Can squid be a fully transparent proxy ?

2013-01-17 Thread Alex Crow

On 17/01/13 20:00, Holmes, Michael A (Mike) wrote:

Basically, can squid be the endpoint for TCP connections, and establish a new 
outgoing TCP connection to the destination server?

Mike



That's not really transparent if the client knows that Squid is the 
endpoint. Transparent means that the client just does business as usual, so:


Do you mean that the client is unaware of (and does not have to be 
configured for) the Squid server? If so, yes, of course it can be fully 
transparent, ie the client connects to an external IP but it gets passed 
through Squid. See intercept and tproxy in the docs. Both of course 
require that you are in control of the network from which the clients 
connect to the internet!


Alex
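
As a sketch of the intercept setup mentioned above, the gateway diverts 
client HTTP traffic to Squid with a NAT rule; the interface name and Squid 
port here are assumptions:

```
# squid.conf
http_port 3128 intercept

# On the gateway: divert HTTP arriving on the LAN interface to Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128
```

The client keeps connecting to the external IP as usual; only the gateway 
knows the connection is being proxied.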


Re: [squid-users] FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2012-09-13 Thread Alex Crow



I have occasionally seen a couple of different problems with the SSL 
certificate database. One is where invalid certificates are generated somehow, 
such as when the signing certificate is no longer valid, and another is where 
the size file is empty. I think the problem with the size file has been fixed 
in 3.3-head, but I'm not sure about 3.2.1.

This is an old patch that I used to help diagnose problems in the SSL 
certificate database. I have no idea if it will still apply to the sources, but 
maybe you can manually apply it and see if it helps track down the problem…





Hi,

It seems to me that the size file is empty, even when Squid appears to 
be running fine.


Cheers

Alex


Re: [squid-users] FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2012-09-13 Thread Alex Crow

On 13/09/12 14:33, Alex Crow wrote:


I have occasionally seen a couple of different problems with the SSL 
certificate database. One is where invalid certificates are generated 
somehow, such as when the signing certificate is no longer valid, and 
another is where the size file is empty. I think the problem with the 
size file has been fixed in 3.3-head, but I'm not sure about 3.2.1.


This is an old patch that I used to help diagnose problems in the SSL 
certificate database. I have no idea if it will still apply to the 
sources, but maybe you can manually apply it and see if it helps 
track down the problem…






Hi,

It seems to me that the size file is empty, even when Squid appears 
to be running fine.


Cheers

Alex


Oops,

Didn't see it because it has no LF char, but the file does contain a 
value. Will check when it breaks.


Cheers

Alex
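
When the certificate database does get corrupted, a common recovery is to 
inspect the size file and, if necessary, rebuild the database from scratch. 
The paths below are assumptions; the helper may live in /usr/lib/squid, 
/usr/lib64/squid or /usr/local/squid/libexec depending on the build:

```
# The size file has no trailing newline, so view it with od rather than cat
od -c /var/lib/ssl_db/size

# Rebuild the database while Squid is stopped
squid -k shutdown
rm -rf /var/lib/ssl_db
/usr/local/squid/libexec/ssl_crtd -c -s /var/lib/ssl_db
chown -R squid: /var/lib/ssl_db
squid
```

Rebuilding discards all previously generated certificates, so clients will 
see freshly minted ones on their next visit.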


[squid-users] FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2012-09-11 Thread Alex Crow

Hi all, Amos.

I've been running 3.2.1 for 2-3 weeks in production. All was well for 
a couple of weeks, but over the last few days, roughly every 2 days, 
people report that they have lost web access. This coincided with 
the above error message repeating and squid workers constantly restarting.


This morning I had a look back in the logs and it seemingly started just 
after this point:


2012/09/11 10:13:32 kid1| WARNING: ssl_crtd #1 exited
2012/09/11 10:13:32 kid1| Too few ssl_crtd processes are running (need 1/32)
2012/09/11 10:13:32 kid1| Starting new helpers
2012/09/11 10:13:32 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes
2012/09/11 10:13:32 kid1| client_side.cc(3477) sslCrtdHandleReply: 
ssl_crtd helper return NULL reply
2012/09/11 10:13:33 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 315: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 
alert bad certificate (1/0)

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/11 10:19:08 kid1| WARNING: ssl_crtd #1 exited
2012/09/11 10:19:08 kid1| Too few ssl_crtd processes are running (need 1/32)
2012/09/11 10:19:08 kid1| Starting new helpers
2012/09/11 10:19:08 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/11 10:19:08 kid1| client_side.cc(3477) sslCrtdHandleReply: 
ssl_crtd helper return NULL reply

2012/09/11 10:19:08 kid1| WARNING: ssl_crtd #1 exited
2012/09/11 10:19:08 kid1| Too few ssl_crtd processes are running (need 1/32)
2012/09/11 10:19:08 kid1| Starting new helpers
2012/09/11 10:19:08 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes
2012/09/11 10:19:09 kid1| client_side.cc(3477) sslCrtdHandleReply: 
ssl_crtd helper return NULL reply
2012/09/11 10:19:09 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 692: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 
alert bad certificate (1/0)
2012/09/11 10:19:09 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 694: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 
alert bad certificate (1/0)
2012/09/11 10:19:21 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 936: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/11 10:19:38 kid1| WARNING: ssl_crtd #1 exited
2012/09/11 10:19:38 kid1| Too few ssl_crtd processes are running (need 1/32)
2012/09/11 10:19:38 kid1| Starting new helpers
2012/09/11 10:19:38 kid1| helperOpenServers: Starting 1/32 'ssl_crtd' 
processes
2012/09/11 10:19:38 kid1| client_side.cc(3477) sslCrtdHandleReply: 
ssl_crtd helper return NULL reply
2012/09/11 10:19:38 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 192: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 
alert bad certificate (1/0)
2012/09/11 10:20:03 kid1| UserRequest.cc(200) authenticate: need to ask 
helper
2012/09/11 10:28:32 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 198: error:140760FC:SSL 
routines:SSL23_GET_CLIENT_HELLO:unknown protocol (1/-1)
2012/09/11 10:28:32 kid1| clientNegotiateSSL: Error negotiating SSL 
connection on FD 200: error:140760FC:SSL 
routines:SSL23_GET_CLIENT_HELLO:unknown protocol (1/-1)
2012/09/11 10:38:27 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 714: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:38:28 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 537: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:38:28 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 767: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:40:40 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 551: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:40:52 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 198: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:40:53 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 296: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
2012/09/11 10:40:59 kid1| fwdNegotiateSSL: Error negotiating SSL 
connection on FD 682: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol (1/-1/0)

2012/09/11 10:41:11 kid1| assertion failed: forward.cc:199: err
2012/09/11 10:41:14 kid1| Starting Squid Cache version 3.2.1 for 
x86_64-pc-linux-gnu...


You can see that I'm constantly getting "2012/09/11 10:13:32 kid1| Too 
few ssl_crtd processes are running (need 1/32)". From then on I get 
something like this:


2012/09/11 10:43:01 kid1|   4128768 entries written so far.
2012/09/11 10:43:01 kid1|   4194304 entries written so far.
2012/09/11 

[squid-users] Re: Delay_pools

2012-09-09 Thread Alex Crow

On 07/09/12 15:49, Landucci L. wrote:

Hi,

I read this discussion :

http://www.squid-cache.org/mail-archive/squid-users/201006/0501.html

talking about the configuration of your squid.conf.

It was, I think, on Squid 2. I am interested in the 
delay_client_reply_access directive that you use in your conf, especially 
to slow down connections by MIME type.


I think that you have since upgraded to Squid 3, and I want to know how 
you still slow down by MIME type.


I hope that you will be able to help me.

Thanks a lot for your answer.

Regards,

Ludo.


Hi,

Unfortunately this is no longer possible with Squid 3. The code from 
Squid2-HEAD was not incorporated into Squid 3.


Please keep this on the list so others can comment and benefit.

Cheers

Alex
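
For reference, the standard delay_pools feature (without the reply-MIME 
selector the Squid2-HEAD patch provided) does exist in Squid 3. A minimal 
class-1 pool limiting aggregate bandwidth looks roughly like the sketch 
below; the numbers are illustrative, not a recommendation:

```
# One class-1 pool: a single aggregate bucket for all matching traffic
delay_pools 1
delay_class 1 1
# refill at 64 KB/s, allow bursts up to 256 KB
delay_parameters 1 64000/256000
delay_access 1 allow all
```

Selecting traffic by reply MIME type for delay purposes is what was lost in 
the Squid 2 to 3 transition; delay_access only evaluates request-side ACLs.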


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-11 Thread Alex Crow

On 11/08/12 08:20, J Webster wrote:
Is there a way to push all openvpn connections using http ports 
through a transparent squid and how?
Also, can I log which openvpn certificate/client is accessing which 
pages in this way?
I assume I would have to use an alternative port or use firewall rules 
to only allow squid connections from the network 10.8.x.x

Squid is an HTTP proxy, so no.

You can't really proxy OpenVPN as it's end-to-end encrypted with SSL. If 
you issued the certs from your CA it might be possible to MITM it but 
that may be illegal in many jurisdictions.


Alex


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-11 Thread Alex Crow

On 11/08/12 14:27, Eliezer Croitoru wrote:

On 8/11/2012 2:57 PM, J Webster wrote:

But once the tunnel reaches the OpenVPN server, you can direct port 80
and 443 traffic from it via the proxy server can't you?
Once it gets to the OpenVPN server (where you would also have the proxy
server), isn't it decrypted?
Lots of companies have VPN tunnels and then route web traffic through a
proxy so it must be possible somehow.

On 11/08/12 13:54, Alex Crow wrote:

On 11/08/12 08:20, J Webster wrote:

Is there a way to push all openvpn connections using http ports
through a transparent squid and how?
Also, can I log which openvpn certificate/client is accessing which
pages in this way?
I assume I would have to use an alternative port or use firewall
rules to only allow squid connections from the network 10.8.x.x

Squid is an HTTP proxy, so no.

You can't really proxy OpenVPN as it's end-to-end encrypted with SSL.
If you issued the certs from your CA it might be possible to MITM it
but that may be illegal in many jurisdictions.

Alex




Of course you can.
It's a basic iptables rule: since OpenVPN uses a tunX interface, 
you can intercept all traffic from the tunX interface to the proxy.
But you can't force the clients to use the VPN as gateway to the whole 
world, only to the VPN connection.


Regards,
Eliezer



I thought the OP was referring to proxying the SSL connection through 
Squid. That of course won't work, but indeed you can redirect or forward 
the packets at the gateway with iptables depending on which interface or 
address range they arrive on.


Apologies to J Webster!

Alex
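
The iptables approach described above would look roughly like this on the 
OpenVPN server itself, once the tunnel traffic has been decrypted to plain 
packets on the tun interface. The interface name and Squid port are 
assumptions:

```
# Divert plain HTTP from VPN clients (10.8.x.x on tun0) to a local Squid
iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128
```

Port 443 traffic inside the tunnel is still TLS end-to-end, so it can only 
be tunneled or logged at the connection level, not inspected, without an 
ssl-bump setup.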


Re: [squid-users] External IP in access.log

2012-08-02 Thread Alex Crow

On 02/08/12 15:25, Usuário do Sistema wrote:

Hi, today one more doubt occurred to me.

795035 112.215.36.175 TCP_MISS/200 96944
GET http://ads.xlxtra.com%2Ferrors%2F%3Ftype=4...@efreephoto.com/pictures/9612330624e58d492b8555.jpg
- DIRECT/74.204.173.205 image/jpeg


The question is, why are you even allowing external IPs access to your 
Squid server? If this is for internal use you should firewall it 
appropriately.



Alex
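
A minimal host-firewall sketch for that, assuming the default Squid port and 
an internal subnet of 192.168.0.0/16 (adjust both to your network):

```
# Accept proxy connections only from the internal subnet, drop the rest
iptables -A INPUT -p tcp --dport 3128 -s 192.168.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 3128 -j DROP
```

Pairing this with matching http_access rules in squid.conf gives two 
independent layers of protection against open-proxy abuse.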


Re: [squid-users] External IP in access.log

2012-08-02 Thread Alex Crow

On 02/08/12 16:29, Usuário do Sistema wrote:

Hello,

The question is, why are you even allowing external IPs access to your Squid
server? If this is for internal use you should firewall it appropriately

Sorry, I have added the deny. Now nobody is able to connect from the 
Internet any more; I solved the problem. But I wonder why only IP 
addresses appeared in the logs, as if they had been authenticated? It's very 
strange.


thanks




Perhaps your config only forces auth for your own subnets, but not for 
others. Quite easy to do, even by accident.


Alex
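
A sketch of an http_access ordering that avoids that accident, requiring 
authentication and restricting access to the local subnets at the same 
time. The ACL names and subnet list are assumptions:

```
acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
acl authed proxy_auth REQUIRED

# Both conditions on one line: only authenticated local clients get through
http_access allow localnet authed
http_access deny all
```

With an `allow localnet` rule placed before the auth rule instead, local 
clients would never be challenged while everyone else falls through to the 
auth check, which is one way unauthenticated IPs end up in the log.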

