[squid-users] Rock store limit

2024-04-16 Thread FredB

Hello,

I'm trying to use the rock store with 6.9. Is there a limitation on the 
cache size? I tried 15000, but no rock db is created with squid -z, 
whereas it works with 1000.

My goal is to use a 200 GB SSD disk.

cache_dir rock /cache 1000 max-swap-rate=250 swap-timeout=350
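For reference, a cache_dir line sized for the 200 GB disk might look like this (a sketch only: the size argument is in megabytes, and the slot-size value here is an assumption, not a tested setting):

```
# Sketch: cache_dir size is in MB, so ~190000 for a 200 GB disk with
# some headroom; slot-size is an assumed value
cache_dir rock /cache 190000 slot-size=16384 max-swap-rate=250 swap-timeout=350
```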


Thanks

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 5.4 is available

2022-02-11 Thread FredB

Hi,


What is this image's general purpose?

To have a containerized Squid, easy to install and upgrade, and in my 
case to run multiple proxies on the same machine


The enabled options are here: 
https://gitlab.com/fredbcode-images/squid/-/blob/master/Dockerfile#L8


Squid is automatically compiled, tested (I will add more tests soon) and 
finally released as an image every week


When a test fails, there is no new release.

I'm already using this process for e2guardian; a pipeline runs every 
time a commit is merged:


You can click on each state to see the process:
https://gitlab.com/fredbcode/e2guardian/-/pipelines/463682244
Example Debian compilation: 
https://gitlab.com/fredbcode/e2guardian/-/jobs/2055075483


Packages and docker images are generated only when nothing is wrong. In 
this setup I'm testing the web filtering with e2guardian, with SSL MITM 
mode enabled



In what environment can it be used?

Any 64-bit OS with docker (I think it could also work on Windows, not 
sure), but only for x86 and ARM v8 architectures



I have seen that the docker-compose contains three containers:

  * Squid
  * e2guardian
  * other

It's just a basic example of a simple web filtering machine in ICAP 
mode, work in progress ...
When I have more time I will add a load balancer (traefik, haproxy, ?) 
for an out-of-the-box little platform with multiple squid instances
I also added some options to my image:

  * supgethosts: squid stops when it can't reach the Internet, useful 
    for multiple machines behind a load balancer (or proxy PAC)
  * autoreload: if a file is changed/deleted/created, squid reloads 
    automatically

Personally I'm running several squid instances on each machine for 
better performance, especially with ssl bump


But of course scalability and process liveness/failure handling require 
a more complex mechanism than my simple example


In _my case_, with the same hardware, performance has increased 
significantly; I used a single squid per machine before.

It's also better than some proprietary products I had tried.

Fred


Re: [squid-users] [squid-announce] Squid 5.4 is available

2022-02-09 Thread FredB
Hello All

Here are the docker image builds, automatic at each official release

Amd64 and Arm (64-bit OS only, tested on Raspberry Pi 3 and 4)

https://hub.docker.com/r/fredbcode/squid

Fred
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [squid-users] Squid 4.16, docker many CLOSE_WAIT

2021-12-07 Thread FredB
s, but it may be the same.
> 
> Needless to say, bugs notwithstanding, too small client_lifetime
> values
> will kill too many innocent transactions. Please see my first
> response

Yes, it's just for testing purposes; I'm seeing no impact, but only for my usage 
case ...
For now I will try client_lifetime 4 hours in production, to reduce the 
complaints and try to understand the CLOSE_WAIT issue

 



Re: [squid-users] Squid 4.16, docker many CLOSE_WAIT

2021-12-07 Thread FredB
Do you think client_lifetime 1 minute works? (There is no minimal value 
in the documentation.)


For testing purposes I'm trying it on a test platform and I'm seeing no 
impact; for example, downloading a large file is not interrupted


There is no error when squid parses the config, but I found nothing in 
the debug output about lifetime




Re: [squid-users] Squid 4.16, docker many CLOSE_WAIT

2021-12-07 Thread FredB


On 07/12/2021 at 08:11, FredB wrote:

Thanks, I will try with one proxy

FYI: the CLOSE_WAIT sockets are indeed removed, but I don't know 
whether or not there is an important impact on my users


My browser was still connected to a secure website, but I did nothing




Re: [squid-users] Squid 4.16, docker many CLOSE_WAIT

2021-12-06 Thread FredB

Thanks, I will try with one proxy

For now I'm trying with the latest version of docker, without more success

Could a wrong configuration parameter in squid be related to the 
CLOSE_WAIT issue?


At the end of the day I have more than 35,000 CLOSE_WAIT sockets for 
each squid ...
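A quick way to quantify this from inside the container (a sketch: it reads /proc/net/tcp directly, where state code 08 is CLOSE_WAIT, so it needs no extra tools; `ss -tan state close-wait` gives the same figure when iproute2 is installed):

```shell
# Count sockets in CLOSE_WAIT: field 4 of /proc/net/tcp holds the TCP
# state, and "08" is the code for CLOSE_WAIT
awk 'NR > 1 && $4 == "08" { n++ } END { print n + 0 }' /proc/net/tcp
```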





[squid-users] Squid 4.16, docker many CLOSE_WAIT

2021-12-06 Thread FredB

Hello,

I'm struggling with CLOSE_WAIT and squid in docker: after some hours I 
have thousands of CLOSE_WAIT sockets, a lot more than any other state


I tried some sysctl options, but without success; I guess the 
CLOSE_WAIT sockets can be related to my clients (many simultaneous)


Maybe this is related to docker, for now I don't know, but after a 
while some "random" users can't reach some https websites; it's as if 
the session is just dropped, without any information in cache.log


As a quick fix I'm thinking about client_lifetime, reducing the value to 4 
hours -> just to calm my users and try to understand without stress
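The change itself is one line in squid.conf (the default for client_lifetime is 1 day):

```
# Quick-fix sketch: drop idle client connections after 4 hours
# instead of the 1-day default
client_lifetime 4 hours
```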


But I found nothing about the impact: what happens exactly to a user 
when the time runs out?



Regards
Fred



[squid-users] unsubscribe

2020-01-20 Thread FredB



Re: [squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-02-26 Thread FredB

Yes, here is my usage case

1 - Squid as an explicit proxy connected to e2guardian via ICAP

2 - E2guardian blocks an SSL website (no bump); a 403 header is returned -> 
I tried 302, 307, 200, without more success


3 - With IE or Chrome the connection is properly dropped, but with FF (61 -> 
next 67) the connection seems dropped yet stays active; you can see this 
issue with a simple refresh: Firefox waits for the website until there 
is a timeout


Fred




Re: [squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-02-24 Thread FredB
Thanks. There are a lot of impacts here: response time, load average, etc. 
Unfortunately we have to wait until FF 66 (and later) is installed everywhere to 
fix that ...  

I'm really surprised that there are no more messages about this

Fred



Re: [squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-02-19 Thread FredB

Amos, Alex

I thought you might be interested: there was a bug in Firefox with huge 
impact for some configurations


https://bugzilla.mozilla.org/show_bug.cgi?id=1522093


Regards

Fredb




Re: [squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-01-23 Thread FredB




As a workaround, you can try disabling client-to-Squid persistent
connections (client_persistent_connections off) or changing your ICAP
service to produce a response with a non-empty 403 body.



You are right, this is a browser bug (Firefox, at least in recent versions) 
and the issue can be resolved with client_persistent_connections off; 
unfortunately a non-empty body is not enough
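For reference, the workaround is a single squid.conf directive:

```
# Workaround discussed here: do not keep client-to-Squid
# connections alive
client_persistent_connections off
```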


I will post a bug report to Firefox

I found nothing in the documentation about the impact of 
client_persistent_connections off; do you think it can be problematic 
under high load?


Fred


Re: [squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-01-22 Thread FredB

Hello Alex



But unfortunately Squid adds a "Connection: keep-alive" header

It is not clear _why_ you consider that header "unfortunate" and the
connection "wasted". That header may or may not be wrong, and the
connection may or may not be reusable, depending on many factors (that
you have not shared).

You are right, it's not clear to me either. The only thing I'm seeing 
is that keep-alive is not present in the answer from ICAP but is added 
to the header sent to the client; after that, if there is a refresh, 
the browser waits for the page for a long time


But perhaps this is not related to my issue




work. Otherwise, a packet capture (in pcap format) is probably the
easiest sharing method.



Here is a short tcpdump trace: 
https://nas.traceroot.fr:8081/owncloud/index.php/s/egrcXnU3lxiU0mi


  1 - I browse to the website https://www.toto.fr

  2 - I receive a 403 (blank page)

  3 - I refresh the page and wait a long time before the timeout

A real issue is ad filtering: browse to www.aaa.com while www.bbb.com 
(ads) is blocked. There are multiple links to bbb in aaa; in this case 
www.aaa.com never appears completely (or only after a long time), the 
browser freezes and keeps waiting for bbb (the name appears at the 
bottom: waiting for bbb)





Yes, by ICAP design, an ICAP service does not have direct control over
HTTP connections maintained by the host application (e.g., Squid).


Yes, that's what I saw and read in the RFC

Thank you

Fred



[squid-users] ICAP and 403 Encapsulated answers (SSL denied domains)

2019-01-21 Thread FredB

Hello all,

I'm playing with Squid4 and e2guardian as ICAP server.

I'm seeing something I misunderstand: when an SSL website is blocked, 
e2guardian returns an encapsulated "HTTP/1.1 403 Forbidden" header. 
This part seems right to me; with an encrypted website, a denial or 
redirection page can't be added


But unfortunately Squid adds a "Connection: keep-alive" header, and if I 
just reload the page I wait a long time for a timeout (and there is 
no ICAP request between squid and e2); it's as if the previous connection 
were still open.


So the first request is properly denied, but the second gets no answer

I tried to add "Connection: close" to the encapsulated header from 
e2guardian, without more success; in any case the "Connection: close" value 
is removed by squid


Am I doing something wrong? This wastes connections, and from the user's 
point of view the proxy is (very) slow; for example, with ad filtering 
some websites freeze


FYI, the request is properly denied in the squid and E2 logs

Maybe this is a bug, but I don't know whether the issue is in Squid or E2. 
What is the correct response from an ICAP server for a denied SSL 
website request?


Thank you

Fred





Re: [squid-users] Squid 4.5 and intermediate CA

2019-01-17 Thread FredB

Hi,

I'm speaking about intermediate CAs (not root) with squid as a client: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-4-and-missing-intermediate-certs-td4684653.html


Not directly related: how do you usually update your root CAs for squid? 
I'm just using the ca-certificates directory from my system and it seems 
pretty outdated (Debian 9). Is there a link somewhere for using, for 
example, the latest Mozilla CAs in Squid?


FredB




Re: [squid-users] Squid 4.5 and intermediate CA

2019-01-16 Thread FredB

Hi Amos,

Yes it works, and I guess I found where the problem is: the file has 
the pkix-cert MIME type, and I suspect, though maybe I'm wrong, that 
Squid can't use it


openssl x509 -inform DER -in myfile shows the CA as a text (PEM) file; 
after that I can use the CA file that the browser was unable to 
download directly (with wget, for example)
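The conversion is a single openssl call; the sketch below first generates a throwaway DER certificate so it is self-contained (all file names here are hypothetical, standing in for a CA file served with the pkix-cert MIME type):

```shell
# Create a throwaway self-signed certificate in DER form, standing in
# for a downloaded pkix-cert file (all names are hypothetical)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -subj "/CN=demo-intermediate" -days 1 -outform DER -out /tmp/demo.der
# Convert DER -> PEM, the text form usable as a CA file
openssl x509 -inform DER -in /tmp/demo.der -out /tmp/demo.pem
grep -c 'BEGIN CERTIFICATE' /tmp/demo.pem
```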


Perhaps this is a "bug", because pkix-cert is used by browsers (or 
client software) to automatically add CAs


https://www.iana.org/assignments/media-types/application/pkix-cert

FredB




Re: [squid-users] Squid 4.5 and intermediate CA

2019-01-16 Thread FredB

Yes it works, my first issue is now resolved

There is a 200 when the automatic download occurs, so this part is good

Unfortunately there is still a 503 on the third request; is a specific 
bump configuration needed?


- - - [15/Jan/2019:16:33:43 +0100] "GET 
http://cert.int-x3.letsencrypt.org/ HTTP/1.1" 200 9737 0 NONE:HIER_NONE 
"-" -
172.23.0.9 - - [15/Jan/2019:16:33:43 +0100] "CONNECT 
bugs.squid-cache.org:443 HTTP/1.1" 200 0 447 NONE:HIER_DIRECT 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 
Firefox/64.0" bump
172.23.0.9 - - [15/Jan/2019:16:33:43 +0100] "GET 
https://bugs.squid-cache.org/ HTTP/1.1" 503 353 349 NONE:HIER_NONE 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" -







Re: [squid-users] Squid 4.5 and intermediate CA

2019-01-15 Thread FredB
Now squid can fetch the intermediate CA directly, as a browser does; it's a 
very interesting feature to me


Maybe I'm missing something, but I can see the request from squid now 
(with squid 4), which is a good point. My sslbump config is very basic, 
perhaps too basic:

acl step1 at_step SslBump1

ssl_bump peek step1 all

ssl_bump splice nobump -> nobump is just a simple dstdomain acl




Re: [squid-users] ssl bump, CA certificate renewal, how to?

2019-01-15 Thread FredB

Sorry wrong topic

On 15/01/2019 at 18:08, FredB wrote:
Now squid can fetch the intermediate CA directly, as a browser does; it's 
a very interesting feature to me


Maybe I'm missing something, but I can see the request from squid now 
(with squid 4), which is a good point. My sslbump config is very basic, 
perhaps too basic:

acl step1 at_step SslBump1

ssl_bump peek step1 all

ssl_bump splice nobump -> nobump is just a simple dstdomain acl





Re: [squid-users] ssl bump, CA certificate renewal, how to?

2019-01-15 Thread FredB
Now squid can fetch the intermediate CA directly, as a browser does; it's a 
very interesting feature to me


Maybe I'm missing something, but I can see the request from squid now 
(with squid 4), which is a good point. My sslbump config is very basic, 
perhaps too basic:

acl step1 at_step SslBump1

ssl_bump peek step1 all

ssl_bump splice nobump -> nobump is just a simple dstdomain acl





Re: [squid-users] Squid 4.5 and intermediate CA

2019-01-15 Thread FredB

Hi Eliezer

It's just what I'm seeing and it works well; with the fetched_certificate 
rule the first point is now fixed






[squid-users] Squid 4.5 and intermediate CA

2019-01-15 Thread FredB

Hi all,

I'm testing squid 4.5 and facing two issues with the intermediate CA download

First, there is no source IP, and I don't know how to allow this kind 
of request with an authentication acl:


172.23.0.9 - user2 [15/Jan/2019:16:34:51 +0100] "CONNECT 
bugs.squid-cache.org:443 HTTP/1.1" 407 4442 447 TCP_DENIED:HIER_NONE 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" -
- - - [15/Jan/2019:16:34:51 +0100] "GET 
http://cert.int-x3.letsencrypt.org/ HTTP/1.1" 407 3536 0 
TCP_DENIED:HIER_NONE "-" -
172.23.0.9 - user2 [15/Jan/2019:16:34:51 +0100] "CONNECT 
bugs.squid-cache.org:443 HTTP/1.1" 200 0 447 NONE:HIER_DIRECT 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 
Firefox/64.0" bump


As you can see, the request to letsencrypt is denied because basic 
authentication is required. How can I add a global ACL that allows requests 
coming from squid itself? I tested 127.0.0.1 and local addresses, but 
without any success
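Squid 4 tags its own certificate-fetching transactions, which should let them bypass the authentication rules; a sketch (to be placed before the auth-related http_access lines):

```
# Exempt Squid's own intermediate-certificate downloads from proxy auth
acl fetched_certificate transaction_initiator certificate-fetching
http_access allow fetched_certificate
```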


So for testing purposes I removed my authentication rules

Now Squid can get the certificate:

- - - [15/Jan/2019:16:33:43 +0100] "GET 
http://cert.int-x3.letsencrypt.org/ HTTP/1.1" 200 9737 0 NONE:HIER_NONE 
"-" -
172.23.0.9 - - [15/Jan/2019:16:33:43 +0100] "CONNECT 
bugs.squid-cache.org:443 HTTP/1.1" 200 0 447 NONE:HIER_DIRECT 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 
Firefox/64.0" bump
172.23.0.9 - - [15/Jan/2019:16:33:43 +0100] "GET 
https://bugs.squid-cache.org/ HTTP/1.1" 503 353 349 NONE:HIER_NONE 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" -


Cache.log

ssl3_get_server_certificate:certificate verify failed (1/-1/0)

Am I missing something?

Thanks

FredB




Re: [squid-users] SSLBump, system requirements ?

2018-03-21 Thread FredB
I agree; to be honest I started with low values, increased again and again. I 
should have posted my previous tests rather than the latest :)
 



Re: [squid-users] SSLBump, system requirements ?

2018-03-21 Thread FredB
Sorry, it was just a wrong cut/paste; with cache_size=50MB the result is still 
the same
As for the children, I tried with 256; unfortunately squid is still stuck at 100%

Regards

Fred



Re: [squid-users] SSLBump, system requirements ?

2018-03-21 Thread FredB
/21 09:45:30| Error negotiating SSL on FD 4782: error:14090086:SSL 
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)

It can be very, very useful for analysis 

Thanks

FredB


Re: [squid-users] SSLBump, system requirements ?

2018-03-20 Thread FredB
Hi Yuri,

200 Mbit/s, more or less 1000-2000 simultaneous users 

I increased the children value because the limit is reached very quickly 

> and only 100 MB on disk?

100 MB per process, no? I think I should reduce this value and rather 
increase the max number of children

Maybe such a load is just impossible because I reached a limit with a single 
core. Perhaps I should retry SMP, but unfortunately in the past I had many 
issues with it, and some features I'm using are still SMP-unaware 
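For the record, enabling SMP is a one-line change (the worker count here is an arbitrary example, and shared storage then needs SMP-aware options such as a rock cache_dir):

```
# SMP sketch: spawn 4 worker processes (kids); features that are not
# SMP-aware may behave differently in this mode
workers 4
```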


[squid-users] SSLBump, system requirements ?

2018-03-20 Thread FredB
Hi all,

I'm testing SSLBump and Squid eats up all my CPU; maybe I did something wrong 
or maybe some tuning is required? Any advice would be greatly appreciated.

Debian 8.10 64-bit, Squid 3.5.27 + 64 GB RAM + SSD + 15 cores Xeon(R) CPU 
E5-2637 v2 @ 3.50GHz 
FYI, I don't see anything about limits being reached in kern.log (file 
descriptors or network)

acl nobump dstdomain "/home/squid/domains" -> some heavily used websites (google, 
fb, etc); otherwise the system dies after less than 1 minute 
http_port 3128 ssl-bump cert=/etc/squid/ca_orion/cert 
generate-host-certificates=on dynamic_cert_mem_cache_size=500MB
sslcrtd_program /usr/lib/squid/ssl_crtd -s /usr/lib/squid/ssl_db -M 100MB
sslcrtd_children 2000 startup=100 idle=20 
sslproxy_capath /etc/ssl/certs/
sslproxy_foreign_intermediate_certs /etc/squid/ssl_certs/imtermediate.ca.pem
acl step1 at_step SslBump1
ssl_bump peek step1 all
ssl_bump splice nobump
ssl_bump bump all

The number of sslcrtd_children processes increases quickly and continuously:

root@proxyorion5:/tmp# ps -edf | grep ssl | wc -l
1321
root@proxyorion5:/tmp# ps -edf | grep ssl | wc -l
1341
root@proxyorion5:/tmp# ps -edf | grep ssl | wc -l
1341
root@proxyorion5:/tmp# ps -edf | grep ssl_crt | wc -l
1380
root@proxyorion5:/tmp# ps -edf | grep ssl_crt | wc -l
1381
root@proxyorion5:/tmp# ps -edf | grep ssl_crt | wc -l
1382
root@proxyorion5:/tmp# ps -edf | grep ssl_crt | wc -l
1395

Of course after a while 2000 is reached and the system goes completely mad, 
but I already tried 200, 500, 1000, etc. 

Right after squid starts, the CPU and load average values are very, very high: 

top - 16:06:17 up 13 days,  2:46,  3 users,  load average: 102,02, 56,67, 30,75
Tasks: 1964 total,   3 running, 1961 sleeping,   0 stopped,   0 zombie
%Cpu(s): 15,3 us,  3,7 sy,  0,0 ni, 80,2 id,  0,4 wa,  0,0 hi,  0,4 si,  0,0 st
KiB Mem:  66086692 total, 52378248 used, 13708444 free,  2899764 buffers
KiB Swap:  1952764 total,0 used,  1952764 free. 32798948 cached Mem

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND 
 
23711 squid 20   0 3438832 2,976g  13784 R 100,0  4,7   6:01.02 squid   
 
23724 squid 20   0   24868   8552   4340 S   3,6  0,0   0:02.46 ssl_crtd
 
23712 squid 20   0   25132   8896   4428 R   3,0  0,0   0:02.62 ssl_crtd
 
23714 squid 20   0   24868   8556   4344 S   2,3  0,0   0:02.43 ssl_crtd
 
23716 squid 20   0   24868   8636   4428 S   2,3  0,0   0:02.26 ssl_crtd
 
23720 squid 20   0   24868   8612   4400 S   2,3  0,0   0:02.58 ssl_crtd
 
23771 squid 20   0   24868   8580   4368 S   2,0  0,0   0:01.86 ssl_crtd
 
23780 squid 20   0   24872   8484   4268 S   2,0  0,0   0:01.86 ssl_crtd
 
23787 squid 20   0   24868   8612   4404 S   2,0  0,0   0:01.92 ssl_crtd  

The same system without SSLBump but with e2guardian (web filtering) added 
uses more or less 10% CPU:

Tasks: 304 total,   2 running, 302 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2,0 us,  1,1 sy,  0,0 ni, 95,9 id,  0,1 wa,  0,0 hi,  0,9 si,  0,0 st
KiB Mem:  66086700 total, 65627952 used,   458748 free,  2652264 buffers
KiB Swap:  1952764 total,20884 used,  1931880 free. 32639208 cached Mem

  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND 

20389 e2guard+  20   0  0,122t 1,133g   6144 S  28,6  1,8 191:06.50 e2guardian  

20283 squid 20   0 21,761g 0,021t   8128 R  24,2 34,0 145:00.09 squid   

  101 root  20   0   0  0  0 S   1,3  0,0  19:05.09 kswapd1 

  100 root  20   0   0  0  0 S   1,0  0,0  22:41.82 kswapd0 

8 root  20   0   0  0  0 S   0,7  0,0  68:49.48 rcu_sched   

   24 root  20   0   0  0  0 S   0,3  0,0   8:37.14 ksoftirqd/3 

   65 root  20   0   0  0  0 S   0,3  0,0   8:05.02 
ksoftirqd/11
  929 root  20   0   71928   6984   4716 S   0,3  0,0  17:53.57 syslog-ng   

 8069 root  20   0   0  0  0 S   0,3  0,0   0:22.35 kworker/0:0 

16624 root  20   0   25868   3236   2592 R   0,3  0,0   0:00.19 top 

20291 squid 20   0   59504   5228   4568 S   0,3  0,0   0:03.41 digest_
  
FredB



Re: [squid-users] filtering HTTPS sites with transparent child Squid

2017-12-03 Thread FredB

> > 
> > I’ve set up a Squid as a transparent child-proxy. Every request is
> > redirected to another Squid with the content filtering add-on
> > e2guardian. I encounter the problem that the transparent child
> > Squid
> > only forwards IP-Addresses to the e2guardian when HTTPS is used and
> > so
> > e2guardian cant filter anything because it can only filter by URL.
> > 
> 


In your case, enable SSLMITM in e2guardian


> A good demonstration of why calling a URL-rewrite helper a "content
> filter" is completely wrong.


Actually E2guardian is also a proxy (proxy chaining mode)   


> 
> Real content filters receive the actual content and can filter it.
> ICAP
> and eCAP exist for that and get passed the decrypted HTTPS messages
> (if
> any).
> 

Next version, soon, very soon :)

Fred


[squid-users] Squid and SSLBump

2017-06-09 Thread FredB
Hi all,

Is there a way to approximately estimate the CPU/memory "cost" of 
SSLBump?
What do you see in practice? 
Some features are incompatible with SMP, so I'm using a single process; Squid 
is using more or less 30-40% of the CPU

I have approximately 1000 simultaneously connected users 
Squid 3.5.25

Fred


Re: [squid-users] retrieve amount of traffic by username

2017-06-06 Thread FredB

My answer was only about this point 

> Would be necessary for me to do so for including some traffic based 
> limitations for each user 

I don't know radius with Squid, but I guess you have an acl like this:
acl radius-auth proxy_auth REQUIRED (or something close)

In that case I guess you can easily mix this acl with delay_pools: 
http://wiki.squid-cache.org/Features/DelayPools

Here, I have a bandwidth limitation for each account, also based on time (no 
limitation at night)
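A minimal sketch of that combination (acl names, the schedule and the rate are assumptions; class 1 is a single aggregate bucket for all matching traffic):

```
# Sketch: throttle authenticated users during working hours only
acl radius-auth proxy_auth REQUIRED
acl daytime time MTWHF 08:00-20:00
delay_pools 1
delay_class 1 1
# restore/max in bytes: ~2 MB/s aggregate
delay_parameters 1 2000000/2000000
delay_access 1 allow radius-auth daytime
delay_access 1 deny all
```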


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] retrieve amount of traffic by username

2017-06-06 Thread FredB
delay_pools mixed with an acl like this: acl ldap_auth proxy_auth REQUIRED

delay_access 1 allow ldap_auth
delay_access 1 deny all

A delay_class 4 should be good 
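A class 4 pool adds a per-username bucket on top of the aggregate, per-network and per-host ones; a sketch with assumed rates (-1/-1 means unlimited):

```
# Sketch: only the per-user bucket (last pair) is limited, ~500 KB/s
delay_pools 1
delay_class 1 4
delay_parameters 1 -1/-1 -1/-1 -1/-1 500000/500000
delay_access 1 allow ldap_auth
delay_access 1 deny all
```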

Fred



[squid-users] Squid 3.5.23 X-Forwarded-For and log bug ?

2017-04-10 Thread FredB
Hello, 

I'm debugging e2guardian and I found something in the squid log: the 
X-Forwarded-For IP seems not to be always recorded? I saw nothing particular 
with tcpdump, so I made a change in the e2guardian code to show the headers 
passed 

--- With problem -
E2 Debug:
Apr 10 09:07:49 proxytest1 e2guardian[27726]: Client: 192.168.0.5 
START---
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: CONNECT avissec.centprod.com:443 HTTP/1.0
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101 
Firefox/45.0
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: Connection: close
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: Connection: keep-alive
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: Host: avissec.centprod.com
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: Proxy-Authorization: Digest username="test", realm="PROXY", 
nonce="RS/rWADwSIzYAQAAADkmpUg", uri="avissec.centprod.com:443", 
response="b02fa966d373a2aaf06c43bc24a180b2", qop=auth, nc=0001, 
cnonce="750b04766a809d18"
Apr 10 09:07:49 proxytest1 e2guardian[27726]: OUT: Client IP at 192.168.0.5 
header: X-Forwarded-For: 192.168.0.5
Apr 10 09:07:49 proxytest1 e2guardian[27726]: Client: 192.168.0.5 
END---
 
Squid log:
127.0.0.1 - test [10/Apr/2017:09:07:54 +0200] "CONNECT avissec.centprod.com:443 
HTTP/1.1" 200 33960 451 TCP_TUNNEL:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.3; 
WOW64; rv:45.0) Gecko/20100101 Firefox/45.0"
---

 Without problem --

E2:
Apr 10 09:07:45 proxytest1 e2guardian[27726]: Client: 192.16.0.2 
START---
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: CONNECT 0.client-channel.google.com:443 HTTP/1.0
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 
Firefox/45.0
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: Connection: close
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: Connection: keep-alive
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: Host: 0.client-channel.google.com
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: Proxy-Authorization: Digest username="test", realm="PROXY", 
nonce="NC/rWAAgRqZUAQAAAKXVAHc", uri="0.client-channel.google.com:443", 
response="ec5d46ce223d987f95393e2a35557bd0", qop=auth, nc=0036, 
cnonce="877bd5e852b857c5"
Apr 10 09:07:45 proxytest1 e2guardian[27726]: OUT: Client IP at 192.16.0.2 
header: X-Forwarded-For: 192.16.0.2
Apr 10 09:07:45 proxytest1 e2guardian[27726]: Client: 192.16.0.2 
END---

Squid log:
192.16.0.2 - test [10/Apr/2017:09:07:16 +0200] "CONNECT 
0.client-channel.google.com:443 HTTP/1.0" 200 62701 486 TCP_TUNNEL:HIER_DIRECT 
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0"

--

This is not related to the user: the same machine has no problem at all every 
day, but sometimes one request is logged as 127.0.0.1, and of course exactly 
the same request has no problem at another time.
Stranger still, there is no problem at all with HTTP requests, only HTTPS

I'm not using SSLBump, just basic proxy chaining 

strip_query_terms off
logformat mylog %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %st %Ss:%Sh 
"%{User-Agent}>h"
access_log stdio:/var/log/squid/access.log mylog
cache_log /var/log/squid/cache.log
logfile_daemon /var/log/squid/log_file_daemon
log_icp_queries off
shutdown_lifetime 1 second

coredump_dir /home/squid

pid_filename /var/run/squid.pid

follow_x_forwarded_for allow all
forwarded_for off
cache_store_log none
buffered_logs on

request_header_access X-Forwarded-For deny all
request_header_access Via deny all

acl_uses_indirect_client on
log_uses_indirect_client on
client_db off
half_closed_clients off
quick_abort_min 0 KB
quick_abort_max 0 KB

Perhaps 2-5% of the requests are wrong. 

What do you think, should I open a bug ticket?

Fred


Re: [squid-users] SSL_bump and source IP

2017-02-02 Thread FredB

> 
> acl tls_s1_connect  at_step SslBump1
> 
> acl tls_vip_usersfill-in-your-details
> 
> ssl_bump splicetls_vip_users  # do not peek/bump vip users
> ssl_bump peek  tls_s1_connect # peek at connections of other
> users
> ssl_bump stare all# peek/stare at the server side 
> of
> connections of other users
> ssl_bump bump  all# bump connections of other 
> users
> 


Great, I will take a look. Are there some words about this in the wiki? 


Re: [squid-users] SSL_bump and source IP

2017-02-02 Thread FredB
Thanks Eliezer

Unfortunately my "lan" is huge, many thousands of people, and the MAC addresses 
are not known
I'm very surprised; am I alone with this? Does nobody need to exclude some 
users from SSLBump?

Fredb 


Re: [squid-users] Buy Certificates for Squid 'man in the middle'

2017-02-02 Thread FredB

From: http://wiki.squid-cache.org/Features/DynamicSslCert

"In theory, you must either import your root certificate into browsers or 
instruct users on how to do that. Unfortunately, it is apparently a common 
practice among well-known Root CAs to issue subordinate root certificates. If 
you have obtained such a subordinate root certificate from a Root CA already 
trusted by your users, you do not need to import your certificate into 
browsers. However, going down this path may result in removal of the well-known 
Root CA certificate from browsers around the world. Such a removal will make 
your local SslBump-based infrastructure inoperable until you import your 
certificate, but that may only be the beginning of your troubles. Will the 
affected Root CA go after you to recoup their world-wide damages? What will 
your users do when they learn that you have been decrypting their traffic 
without their consent?" 

The last sentence is ambiguous: the users can know, since you can inform them that 
you have been decrypting their traffic. 
There is no difference (from the user's point of view, I mean) between a well-known 
Root CA and a self-signed certificate with a CA injected by a local GPO. 
 
But in practice I don't know how you can do that; do you just say "hello, I want a 
subordinate root certificate"?

FredB  


Re: [squid-users] SSL_bump and source IP

2017-02-02 Thread FredB
So how can I manage computers without my CA? (e.g. a laptop temporarily connected) 
In my situation I also have some smartphones connected to my squids in some cases; 
how can I exclude them from SSLBump?
I already have some ACLs based on authentication (user azerty = with/without 
some rules)  

FredB



[squid-users] Squid 3.5.23 little fixes

2017-01-24 Thread FredB
Hello,

FYI, I was reading some parts of the code and found two little spelling errors.

FredB

---

--- src/client_side.cc  2016-10-09 21:58:01.0 +0200
+++ src/client_side.cc  2016-12-14 10:57:12.915469723 +0100
@@ -2736,10 +2736,10 @@ clientProcessRequest(ConnStateData *conn
 
 request->flags.internal = http->flags.internal;
 setLogUri (http, urlCanonicalClean(request.getRaw()));
-request->client_addr = conn->clientConnection->remote; // XXX: remove 
reuest->client_addr member.
+request->client_addr = conn->clientConnection->remote; // XXX: remove 
request->client_addr member.
 #if FOLLOW_X_FORWARDED_FOR
 // indirect client gets stored here because it is an HTTP header result 
(from X-Forwarded-For:)
-// not a details about teh TCP connection itself
+// not a details about the TCP connection itself
 request->indirect_client_addr = conn->clientConnection->remote;
 #endif /* FOLLOW_X_FORWARDED_FOR */
 request->my_addr = conn->clientConnection->local;


Re: [squid-users] SSL_bump and source IP

2017-01-11 Thread FredB

> but not all requests from a specific source

> what do you mean here?

I mean no ssl-bump at all for a specific user, no matter the destination.
I tried some ACLs without success.

>>, maybe because I'm using x-forwarded ?

> x-forwarded-for has nothing to do with this

There is a known bug with sslbump and x-forwarded-for (a logging bug); maybe there 
is a relation, my "fake" address is not known or something like that.


[squid-users] SSL_bump and source IP

2017-01-11 Thread FredB
Hello,

I'm looking for a way to exclude a user (account) or an IP from my LAN. 
I can exclude a destination domain from decryption with SSL_bump, but not all 
requests from a specific source; maybe because I'm using x-forwarded-for?

Thanks

Fred  


Re: [squid-users] Squid freeze each hour.

2016-12-20 Thread FredB
I do not see this; do you have anything particular? SSLBump maybe? SMP?


Re: [squid-users] Squid 3.5.21 ssl bump and x-forward

2016-12-14 Thread FredB
If really needed, there is a patch here: 
http://bugs.squid-cache.org/show_bug.cgi?id=3792
But as Amos said, this patch is incomplete: the CONNECT XFF header contents 
should also be added to the bumped requests.

Fred


Re: [squid-users] FATAL: The userIp helpers are crashing too rapidly, need help!

2016-12-13 Thread FredB

Now you should use another directory, a less insecure one I mean;
/tmp is read/write for everyone ...


Re: [squid-users] FATAL: The userIp helpers are crashing too rapidly, need help!

2016-12-13 Thread FredB

/root/soso/userIP.conf

Try it with /tmp: 

/tmp/userIP.conf

Fred


Re: [squid-users] Squid 3.5.21 ssl bump and x-forward

2016-11-30 Thread FredB

> 
> I have the same issue and racked my brain trying to find a solution.
> Now, I
> see there is no solution for this yet.
> 
> I would appreciate so much if this feature were made available in the
> future.
> 
> Eduardo Carneiro
> 
> 

Yes http://bugs.squid-cache.org/show_bug.cgi?id=4607


Re: [squid-users] Squid 3.5.x and NTLM

2016-11-28 Thread FredB


> The SMB_LM helper performs a downgrade attack on the NTLM protocol
> and
> decrypts the resulting username and password. Then logs into AD using
> Basic auth.
>  This requires that the client supports the extremely insecure LM
>  auth.
> Any sane client will not.
> 
> Alternatively, the 'fake' helper accepts any credentials the client
> presents as long as they are correctly formatted in NTLM syntax.

Thanks. Is that what the old ntlm_smb_lm_auth helper does?


[squid-users] Squid 3.5.x and NTLM

2016-11-28 Thread FredB
Hello

I wonder if I can use NTLM auth without any integration in AD.
Just query the AD for user/password; can I do that?

Regards

Fred


Re: [squid-users] Login/Pass from squid to Squid

2016-11-08 Thread FredB

> 
> I have my ACLs based off what group an individual belongs to in a
> LDAP
> tree.
> 
> Perhaps something like that would be helpful in your setup.
> 
> -Dan
> ___

Thank you

If you have an example, I would be happy to look into it.

Fred


Re: [squid-users] Login/Pass from squid to Squid

2016-11-07 Thread FredB

> Use "login=PASS" (exact string) on the cache_peer.
> 
> Along with an http_access check that uses an external ACL helper
> which
> produces "OK user=X password=Y" for whatever credentials need to be
> sent.
> 
> NP: on older Squid that may be "pass=" instead of "password=".
> 
> Amos
> 


OK, thanks. And what do you think of using a helper (or something similar, I 
mean an external program)? 
Potentially I will have 2500 accounts ...

Fred
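
Putting Amos's two pointers together, a squid.conf sketch might look like the following; the helper path, peer name, and option values are hypothetical placeholders, not anything confirmed in the thread:

```
# External ACL helper that replies "OK user=X password=Y" per client (%SRC)
external_acl_type peer_creds ttl=300 children-max=10 %SRC /usr/local/bin/peer_creds.py
acl have_creds external peer_creds
http_access allow have_creds

# Relay the credentials produced by the helper to the parent proxy
cache_peer parent.example.net parent 8080 0 no-query login=PASS
never_direct allow all
```

With login=PASS, Squid forwards the user=/password= pair returned by the external ACL helper, so the helper is where the per-client mapping lives.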


Re: [squid-users] Login/Pass from squid to Squid

2016-11-03 Thread FredB

> Authentication credentials represent and verify the identity of your
> proxy. That is a fixed thing so why would the credentials used to
> verify
> that static identity need to change?


I'm only speaking about user identities, not something like cache_peer 
login=XXX. 
So each user must have his own ID. 


> 
> NP: Proxy-auth is not related to the message itelf, but to the
> transport
> mechanism. Do not confuse the identity of the proxy/sender with the
> traffic flowing through it from other sources.

Yes

> 
> That said, you can use request_header_add to add whatever headers you
> like to upstream requests. Even proxy-auth headers. You just cant
> easily
> handle any 407 which result from that when the credentials are not
> accepted. So the ACL you use better be 100% accurate when it matches.

Ah, OK, great, so maybe we can imagine something like this:

If an ACL matches a specific address (e.g. 10.1.1.1), I add "Authorization: Basic 
Z3Vlc3Q6Z3Vlc3QxMjM=" ?
That is what I was talking about with a helper; maybe a separate program would be 
better for matching IP=USERNAME. 

If there are many users, the ACL will be very long and complex ... 

Thanks for your help
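
A helper along those lines can be sketched as below. Everything here (the IP table, the accounts, the script name) is a hypothetical illustration; only the "OK user=X password=Y" reply format comes from Amos's earlier answer. As a side check, basic_header("guest", "guest123") produces exactly the "Z3Vlc3Q6Z3Vlc3QxMjM=" token quoted above.

```python
#!/usr/bin/env python3
# Hypothetical external ACL helper sketch: maps a client IP (passed by Squid
# as %SRC, one per line on stdin) to upstream credentials. The CREDS table
# and account names are made-up examples.
import base64
import sys

CREDS = {
    "10.1.1.1": ("guest", "guest123"),   # IP A = user1 style mapping
    "10.1.1.2": ("user2", "secret2"),
}

def lookup(ip):
    """Build the helper reply line for one %SRC value."""
    if ip in CREDS:
        user, pw = CREDS[ip]
        return "OK user=%s password=%s" % (user, pw)
    return "ERR"

def basic_header(user, pw):
    """The equivalent Authorization header value, for reference."""
    token = base64.b64encode(("%s:%s" % (user, pw)).encode()).decode()
    return "Basic " + token

if __name__ == "__main__":
    # A real helper loops over sys.stdin, one request per line, and must
    # flush stdout after each reply; shown here with a single demo value.
    print(lookup("10.1.1.1"))
```

The separate-program approach keeps the IP=USERNAME table out of squid.conf, so a long ACL list is not needed.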



[squid-users] Login/Pass from squid to Squid

2016-11-03 Thread FredB
Hello,

I wonder if Squid can pass a different login/password to another Squid, depending on 
an ACL.
I mean: 

1) a client connects to Squid without any identification helper like NTLM, 
Basic, etc ...
2) an ACL (IP src, browser, header, ...) forwards the request to another 
squid with a login/passwd, but the login is different for each match 
(IP A = user1, IP B = user2, etc)
3) the second squid matches the login and allows the request

Can I do something like that?

Regards 
Fred  





Re: [squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-19 Thread FredB
I have had this problem regularly with aufs (for a long time ...).
Sorry, I know of no solution except purging the cache. 

I'm using diskd to avoid it. 

Fred


Re: [squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-18 Thread FredB
Aufs ?

Fred


Re: [squid-users] ICAP and user ID

2016-10-10 Thread FredB

Thanks, great. If I understand correctly, there is no missing data: the complete 
request (headers + data) can be transmitted to an ICAP server?

Fred



Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-10-07 Thread FredB

> I am aware of folks successfully using certificate-based
> authentication
> in production today, but they are still running v3.3-based code (plus
> many patches). I am not aware of any regressions in that area, but
> since
> there is no adequate regression testing, Amos is right: YMMV.
> 
> Alex.
> 
> 

Ok thanks, I will investigate 

Fred


[squid-users] ICAP and user ID

2016-10-07 Thread FredB
Hello All,

When Squid is connected to an ICAP server, is there a known list of the information 
transmitted?
I'm thinking of the username with Kerberos, or some specific headers. 

Regards 

Fred
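
The 3.5-era directives involved can be sketched as follows; treat this as a hedged example (check squid.conf.documented for your build), and note that the service URI and the extra header name are placeholders:

```
icap_enable on
icap_service req_svc reqmod_precache icap://127.0.0.1:1344/reqmod
adaptation_access req_svc allow all

adaptation_send_client_ip on       # adds the client IP to ICAP requests
adaptation_send_username on        # adds the authenticated user name
icap_client_username_header X-Authenticated-User
adaptation_meta X-Extra-Info "some-value" all   # arbitrary extra metadata
```

With adaptation_send_username on, an authenticated name (e.g. from Kerberos or Digest) is passed to the ICAP service in the header named by icap_client_username_header.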


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-10-06 Thread FredB
Hello,

I found no way to do that, so I changed my mind.
Can I authenticate a user to Squid with a certificate? I'm thinking about a 
smart card. 

If yes, can the user name be saved in the Squid log file?

Thanks

Fred
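
A minimal sketch of what this could look like in squid.conf, assuming an https_port with client-certificate verification; the file paths and port are placeholders, and %ssl::>cert_subject is the 3.5 logformat code for the received client certificate's Subject:

```
https_port 3129 cert=/etc/squid/proxy.pem key=/etc/squid/proxy.key \
    clientca=/etc/squid/clientca.pem

# Record the certificate subject (which carries the user's name) per request
logformat withcert %ts.%03tu %>a %ssl::>cert_subject %rm %ru %>Hs
access_log /var/log/squid/access.log withcert
```

This logs the identity from the smart-card certificate rather than a proxy-auth username.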


 


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-23 Thread FredB

> 
> 
> Proxies only support "HTTP authentication" methods: Basic, Digest,
> NTLM ,etc. So you either have to use one of those, or perhaps "fake"
> the creation of one of those...?
> 
> 
> eg you mentioned SAML, but gave no context beyond saying you didn't
> want AD. So let's say SAML is a requirement. Well that's directly
> impossible as it isn't an "HTTP authentication" method, but you
> could hit it from the sides...
> 
> 
> How about putting a SAML SP on your squid server, and it generates
> fresh random Digest authentication creds for any authenticated user
> (ie same username, but 30char random password), and tells them to
> cut-n-paste them into their web browser proxy prompt and "save"
> them. That way the proxy is using Digest and it involved a one-off
> SAML interaction. I say Digest instead of Basic because Digest is
> more secure over cleartext - but it's also noticeably slower than
> Basic over latency links, so you can choose your poison there
> 
> 
> If you're really keen, you can actually do proxy-over-TLS via WPAD
> with Firefox/Chrome - at which point I'd definitely recommend Basic
> for the performance reasons ;-)
> 

Hi,

I'm using Digest now; with a large network it's fast enough for me (more than 
100 users). We removed Basic identification for security reasons, and the 
web browsers aren't all in AD.

The point about SSO is to remove the popup with a web portal (identification 
for all internal websites + Internet proxy). 

I mentioned SAML, and yes, there is no real context :) because I'm just 
gathering information; in my company a team is thinking about SAML for the portal 
(SSO intranet), so I thought why not?

I guess some companies are doing identification with a web portal? No?

Fred


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-21 Thread FredB

> Hi Fred,
>   I assume that by "implicit" you mean "transparent" or
> "interception". Short answer, not possible: there is nothing to
> anchor
> cookies to. It could be possible to fake it by having an auxiliary
> website doing standard SAML and feeding a database of associations
> userid-ip. It will fail to account for cases where multiple users
> share the same IP, but that doesn't stop many vendors from caliming
> they do "transparent authentication".
> 


Hi Kinkie,

No, sorry, I meant explicit (not transparent). 
And yes, I have multiple users with the same IP. 

Regards 

Fred


Re: [squid-users] SSO and Squid, SAML 2.0 ?

2016-09-20 Thread FredB
I forgot: if possible, a method without Active Directory. 


[squid-users] SSO and Squid, SAML 2.0 ?

2016-09-20 Thread FredB
Hello All,

I'm looking for a way to use a secure SSO with Squid; how did you implement the 
authentication method with an implicit proxy? 
I have read a lot of documentation about SAML, but I found nothing about Squid. 

I guess we can only do something with cookies? 

Does anyone know if it's possible?

Thanks

Regards 

Fred




Re: [squid-users] Squid 3.5.21 ssl bump and x-forward

2016-09-15 Thread FredB

> 
> Above are bumped requests sent inside the tunnel. Proxy #1 did not
> interact with them, so it has no way to add XFF headers.
> 
> The SSL-Bump logic does not yet store some things like indirect
> client
> IP and associate them with the bumped requests.
> 
> Amos
> 


OK, thank you. Is there a plan to add this? Without identification we are in 
the fog: all bumped requests are recorded only with 127.0.0.1.

Fred


[squid-users] Squid 3.5.21 ssl bump and x-forward

2016-09-15 Thread FredB
Hello,

I'm testing SSLBump and it works well; however, I'm seeing something strange 
with two proxies and x-forwarded-for enabled on the first: some requests are logged 
with the first proxy's address. 

user -> squid (forwarded_for on) -> squid (follow_x_forwarded_for allow all) -> 
Net 

Here are logs from the second squid, on the same server (same result when they are 
separate; 127.0.0.1 = IP of the first squid): 

10.x.x.x.x - myaccount [15/Sep/2016:09:40:07 +0200] "CONNECT www.google.fr:443 
HTTP/1.0" 200 0 440 TAG_NONE:HIER_NONE "Mozilla/5.0 (Windows NT 6.1; rv:48.0) 
Gecko/20100101 Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:07 +0200] "GET http://www.google.fr/ 
HTTP/1.0" 302 643 1575 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:07 +0200] "CONNECT www.google.fr:443 
HTTP/1.0" 200 0 440 TAG_NONE:HIER_NONE "Mozilla/5.0 (Windows NT 6.1; rv:48.0) 
Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:07 +0200] "POST 
https://www.google.fr/gen_204?atyp=i=slh==EVDaV-rAOcS7adLmucAF=3=2=0.19272099408438004=4:1473925301533,e,U=1473925301536
 HTTP/1.1" 204 401 1571 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:08 +0200] "GET 
https://www.google.fr/?gws_rd=ssl HTTP/1.1" 200 61953 1387 TCP_MISS:HIER_DIRECT 
"Mozilla/5.0 (Windows NT 6.1; rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:08 +0200] "POST 
https://www.google.fr/gen_204?atyp=i=slh==EVDaV-rAOcS7adLmucAF=4=2=0.19272099408438004=5:1473925302218,e,H=1473925302220
 HTTP/1.1" 204 401 1571 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:08 +0200] "GET 
https://www.google.fr/complete/search?sclient=psy-ab==hp==_l==1=on.2,or.r_cp.=1=995=554=1.25=p_rn=64_ri=psy-ab=yZHeL-_L5Be_JazeSm0Mtg=0_id=0=t=1=1=tVDaV7_DMsXqauCygeAF.1473925302436.1
 HTTP/1.1" 200 913 1618 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:08 +0200] "GET 
https://www.google.fr/gen_204?v=3=webhp=csi=tVDaV7_DMsXqauCygeAF=2=2=0==init.26.20.sb.18.p.3.jsa.1.abd.1.foot.1=0=xjsls.21,prt.41,iml.41,dcl.82,xjses.124,jraids.149,jraide.153,xjsee.185,xjs.185,ol.217,aft.41,wsrt.748,cst.1,dnst.0,rqst.522,rspt.533,rqstt.161,unt.143,cstt.144,dit.816
 HTTP/1.1" 204 401 1616 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:08 +0200] "CONNECT 
plus.google.com:443 HTTP/1.0" 200 0 446 TAG_NONE:HIER_NONE "Mozilla/5.0 
(Windows NT 6.1; rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:08 +0200] "POST 
https://plus.google.com/u/0/_/n/gcosuc HTTP/1.1" 200 862 1388 
TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; rv:48.0) Gecko/20100101 
Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:18 +0200] "CONNECT 
p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-i1-v6exp3-v4.metric.gstatic.com:443 
HTTP/1.0" 200 0 617 TAG_NONE:HIER_NONE "Mozilla/5.0 (Windows NT 6.1; rv:48.0) 
Gecko/20100101 Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:18 +0200] "CONNECT 
p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-i2-v6exp3-ds.metric.gstatic.com:443 
HTTP/1.0" 200 0 617 TAG_NONE:HIER_NONE "Mozilla/5.0 (Windows NT 6.1; rv:48.0) 
Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:18 +0200] "GET 
https://p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-i2-v6exp3-ds.metric.gstatic.com/v6exp3/6.gif
 HTTP/1.1" 200 1214 702 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:18 +0200] "GET 
https://p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-i1-v6exp3-v4.metric.gstatic.com/v6exp3/6.gif
 HTTP/1.1" 200 1214 702 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 
10.x.x.x.x - myaccount [15/Sep/2016:09:40:48 +0200] "CONNECT 
p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-s1-v6exp3-v4.metric.gstatic.com:443 
HTTP/1.0" 200 0 617 TAG_NONE:HIER_NONE "Mozilla/5.0 (Windows NT 6.1; rv:48.0) 
Gecko/20100101 Firefox/48.0" 
127.0.0.1 - myaccount [15/Sep/2016:09:40:48 +0200] "GET 
https://p5-d67enuz43bu7a-hck6hyjacaic2rnf-280807-s1-v6exp3-v4.metric.gstatic.com/gen_204?ipv6exp=3=1_img_dt=270_img_dt=253
 HTTP/1.1" 204 1393 601 TCP_MISS:HIER_DIRECT "Mozilla/5.0 (Windows NT 6.1; 
rv:48.0) Gecko/20100101 Firefox/48.0" 

Fred
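
For reference, the second proxy's relevant knobs for this setup might look like the following; whether the indirect address survives into bumped requests is exactly the limitation discussed above, so this only helps the non-bumped ones (127.0.0.1 assumes the same-host setup):

```
# On the second squid: trust XFF only from the first proxy, then use the
# indirect (original) client address in ACLs and in the access log.
follow_x_forwarded_for allow localhost
acl_uses_indirect_client on     # ACLs see the original client IP
log_uses_indirect_client on     # %>a in logs shows the original client IP
```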



Re: [squid-users] Rock store status

2016-09-14 Thread FredB
Hello Alex, and thank you for the explanations. I forgot to say, but of course the 
test is running on the same hardware and the same full caches (2 SATA drives, 
15k rpm, 123 GB of cache each).

I will return to diskd now, because point 2 is annoying for me, but rock 
seems very promising.


Re: [squid-users] Rock store status

2016-09-13 Thread FredB
One thing: squid restart is very slow because of the time required to rebuild the 
cache.

2016/09/13 00:25:34|   Took 1498.42 seconds (3972.24 objects/sec). -> Rock
2016/09/13 00:00:51|   Took 5.71 seconds (533481.90 objects/sec). -> Diskd



Re: [squid-users] Rock store status

2016-09-12 Thread FredB
Just for information: no problem after two weeks. 
Unfortunately I can't test with IpcIo now (a problem with systemd), but rock 
store is very stable. 


Re: [squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-09-02 Thread FredB


I will take a look, thanks.
But there is no SMP configuration, just rock and squid with two caches.


Re: [squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-09-01 Thread FredB

> 
> [Unit]
> Description=Squid Web Proxy Server
> After=network.target
> 
> [Service]
> Type=simple
> ExecStart=/usr/sbin/squid -sYC -N


Yes, this is the default value: 

http://bazaar.launchpad.net/~squid/squid/3.5/view/head:/tools/systemd/squid.service

I guess this is wrong, no?

Fred
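
A corrected unit could simply drop -N so the SMP disker kids can be spawned; this is a sketch only (Type=forking and the PIDFile path, matching the pid_filename directive used earlier, are my assumptions, not a confirmed fix):

```
[Unit]
Description=Squid Web Proxy Server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/squid.pid
ExecStart=/usr/sbin/squid -sYC
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process

[Install]
WantedBy=multi-user.target
```

Without -N, Squid daemonizes and its master process can create the per-cache_dir disker kids.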





Re: [squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-09-01 Thread FredB
I forgot


/cache1:
total 212380
drwxrwxrwx  3 squid root  4096 sept.  1 09:00 .
drwxr-xr-x 26 root  root  4096 nov.  17  2015 ..
drwxrwxrwx  2 squid root 16384 août  31 09:12 lost+found
-rwxrwxrwx  1 squid squid 13631488 sept.  1 09:14 rock

/cache2:
total 204584
drwxrwxrwx  3 squid root  4096 sept.  1 09:00 .
drwxr-xr-x 26 root  root  4096 nov.  17  2015 ..
drwxrwxrwx  2 squid root 16384 août  31 09:12 lost+found
-rwxrwxrwx  1 squid squid 13631488 sept.  1 09:14 rock


Re: [squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-09-01 Thread FredB
Hi Alex


> Normally, you do not need any ./configure options to enable Rock
> support, including support for a stand-alone disker process. If you
> want
> to enable IpcIo explicitly, you may, but I would first check whether
> it
> was enabled without any --enable-disk-io options:
> 
> > $ fgrep IpcIo config.log
> > configure:21195: Enabling IpcIo DiskIO module
> > configure:21227: IO Modules built:  AIO Blocking DiskDaemon
> > DiskThreads IpcIo Mmapped
> 
> IpcIo requires shared memory support bust most modern build
> environments
> provide that.

Ok 

configure:21150: result:  Blocking DiskThreads IpcIo Mmapped AIO DiskDaemon
configure:21665: Enabling IpcIo DiskIO module
configure:21695: IO Modules built:  Blocking DiskThreads IpcIo Mmapped AIO 
DiskDaemon


> 
> > Perhaps this process is only created in smp mode ?
> 
> As the documentation tries to imply, the disker process is used when
> all
> of the statements below are true:
> 
> * there are rock cache_dir(s) in squid.conf


cache_dir rock /cache1 13 
cache_dir rock /cache2 13 

> * IpcIo disk I/O module is enabled (it usually is by default)

Yes

configure:21150: result:  Blocking DiskThreads IpcIo Mmapped AIO DiskDaemon
configure:21665: Enabling IpcIo DiskIO module
configure:21695: IO Modules built:  Blocking DiskThreads IpcIo Mmapped AIO 
DiskDaemon

> * Squid was started without the -N command line option.

This is the point! 
By default, after compilation the systemd unit is generated like this: 
more ./tools/systemd/squid.service
## Copyright (C) 1996-2016 The Squid Software Foundation and contributors
##
## Squid software is distributed under GPLv2+ license and includes
## contributions from numerous individuals and organizations.
## Please see the COPYING and CONTRIBUTORS files for details.
##

[Unit]
Description=Squid Web Proxy Server
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/squid -sYC -N
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process

[Install]
WantedBy=multi-user.target

But I have a new problem, which is not present without IpcIo or with squid -N: 

FATAL: Rock cache_dir at /cache1/rock failed to open db file: (2) No such file 
or directory
Squid Cache (Version 3.5.20): Terminated abnormally.
CPU Usage: 0.056 seconds = 0.008 user + 0.048 sys
Maximum Resident Size: 104144 KB
Page faults with physical i/o: 0
2016/09/01 09:01:01 kid2| Store rebuilding is 61.69% complete
2016/09/01 09:01:01 kid3| Store rebuilding is 62.03% complete
2016/09/01 09:01:01 kid1| Set Current Directory to /home/squid
2016/09/01 09:01:01 kid1| Starting Squid Cache version 3.5.20 for 
x86_64-pc-linux-gnu...
2016/09/01 09:01:01 kid1| Service Name: squid
2016/09/01 09:01:01 kid1| Process ID 7454
2016/09/01 09:01:01 kid1| Process Roles: worker
2016/09/01 09:01:01 kid1| With 65535 file descriptors available
2016/09/01 09:01:01 kid1| Initializing IP Cache...
2016/09/01 09:01:01 kid1| DNS Socket created at 0.0.0.0, FD 12
2016/09/01 09:01:01 kid1| Adding nameserver 192.168.115.1 from /etc/resolv.conf
2016/09/01 09:01:01 kid1| Adding nameserver 192.168.115.2 from /etc/resolv.conf
2016/09/01 09:01:01 kid1| helperOpenServers: Starting 50/150 'digest_ldap_auth' 
processes
2016/09/01 09:01:01 kid1| helperOpenServers: Starting 40/150 'basic_ldap_auth' 
processes
2016/09/01 09:01:01 kid1| Logfile: opening log stdio:/var/log/squid/access.log
2016/09/01 09:01:01 kid1| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec
2016/09/01 09:01:01 kid1| Store logging disabled
2016/09/01 09:01:01 kid1| WARNING: disk-cache maximum object size is too large 
for mem-cache: 5242880.00 KB > 5120.00 KB
2016/09/01 09:01:01 kid1| Swap maxSize 0 + 16777216 KB, estimated 1290555 
objects
2016/09/01 09:01:01 kid1| Target number of buckets: 64527
2016/09/01 09:01:01 kid1| Using 65536 Store buckets
2016/09/01 09:01:01 kid1| Max Mem  size: 16777216 KB [shared]
2016/09/01 09:01:01 kid1| Max Swap size: 0 KB
2016/09/01 09:01:01 kid1| Using Least Load store dir selection
2016/09/01 09:01:01 kid1| Set Current Directory to /home/squid
2016/09/01 09:01:01 kid1| Finished loading MIME types and icons.
2016/09/01 09:01:01 kid1| HTCP Disabled.
2016/09/01 09:01:01 kid1| Squid plugin modules loaded: 0
2016/09/01 09:01:01 kid1| Adaptation support is on
2016/09/01 09:01:01 kid1| commBind: Cannot bind socket FD 20 to [::]: (2) No 
such file or directory
2016/09/01 09:01:08 kid1| ERROR: /cache1/rock communication channel 
establishment timeout
2016/09/01 09:01:08 kid1| Closing HTTP port 0.0.0.0:8080
FATAL: Rock cache_dir at /cache1/rock failed to open db file: (2) No such file 
or directory
Squid Cache (Version 3.5.20): Terminated abnormally.
CPU Usage: 0.056 seconds = 0.012 user + 0.044 sys
Maximum Resident Size: 103632 KB
Page faults with physical i/o: 0
2016/09/01 09:01:11 kid1| Set Current Directory to /home/squid
2016/09/01 09:01:11 kid1| Starting Squid Cache version 3.5.20 for 
x86_64-pc-linux-gnu...
2016/09/01 09:01:11 kid1| Service Name: squid
2016/09/01 09:01:11 kid1| Process 

Re: [squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-08-31 Thread FredB

> 
> --enable-disk-io=AIO,Blocking,DiskThreads,IpcIo,Mmapped

Sorry, that was wrong: it crashes with diskd only because DiskDaemon is missing. 

> 
> But there is a segfault at start, FI same result with diskd ...
> 
> OK so I'm trying now --enable-disk-io=yes and there no more disker
> process, I'm doing something wrong ?
> Perhaps this process is only created in smp mode ?

Still present 


> 
> Fred
> ___


[squid-users] Squid 3.5.20 rock store and enable-disk-io

2016-08-31 Thread FredB
Hello,

I saw this in rock store documentation

If possible, Squid using Rock Store creates a dedicated kid
process called "disker" to avoid blocking Squid worker(s) on disk
I/O. One disker kid is created for each rock cache_dir.  Diskers
are created only when Squid, running in daemon mode, has support
for the IpcIo disk I/O module.

So I tried

--enable-disk-io=AIO,Blocking,DiskThreads,IpcIo,Mmapped

But there is a segfault at start; FYI, same result with diskd ...

OK, so I'm now trying --enable-disk-io=yes and there is no more disker process. Am I 
doing something wrong?
Perhaps this process is only created in SMP mode?

Fred


Re: [squid-users] Rock store status

2016-08-19 Thread FredB

> 
> We use SMP and Rock under the 3.5 series without problems.  But I
> don't
> think any of our sites have as high req/sec load as you.

Thanks for your answer.

Can you please describe your load and configuration?
No crashes?

Fred


[squid-users] Rock store status

2016-08-17 Thread FredB
Hello All,

I tried rock store and SMP a long time ago (squid 3.2, I guess). Unfortunately I 
definitely dropped SMP because of some limitations (in my case), and I fell back 
to diskd because there were many bugs with rock store. FYI, I also 
switched to aufs without big differences.

But what about the latest 3.5.20? Sadly SMP is still not for me, but rock store?

Is anyone using rock store with a high load, more than 800 req/s, 
without any problem? Is there a real difference in this situation: CPU, speed, 
memory?

My configuration is 

Debian 8
Squid 3.5.20
16 cores Xeon E5-2637 3.50Ghz
64 GB RAM
2 SATA drives, 15k rpm, dedicated to caches (+ one for the OS), 150 GB each
Delay pools -> bandwidth limitation by LDAP account
Authentication: Digest + Basic 

Any advice welcome.

Fred
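
For comparison, the rock equivalent of that two-drive diskd layout can be sketched like this; the sizes (in MB) and worker count are placeholders, and max-swap-rate/swap-timeout are the usual rock overload-protection knobs rather than values from this thread:

```
workers 4                     # only if SMP is acceptable; one disker per rock dir
cache_dir rock /cache1 130000 max-swap-rate=250 swap-timeout=350
cache_dir rock /cache2 130000 max-swap-rate=250 swap-timeout=350
```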



Re: [squid-users] HTTPS and Headers

2016-07-22 Thread FredB
OK, thanks, so I will think about another way ...


Re: [squid-users] HTTPS and Headers

2016-07-21 Thread FredB
Thanks Amos for your answer.
Do you think I can use an alternative method to tag my users' requests? 
Modifying/adding a header seems a bad idea. 

Regards

Fred


[squid-users] HTTPS and Headers

2016-07-21 Thread FredB
Hello,

I wonder which headers Squid can see for an SSL website, without SSL-Bump of
course.
In my logs I see User-Agent, Proxy-Authorization and some others, but when I
try to add new headers it only works for HTTP websites.

Can't I do that? What are the limitations?

My goal is to record a specific piece of user information in the logs of all
the proxies (proxy chaining).

Regards

Fred


Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

2016-06-15 Thread FredB

> 
> You are mentioning ufdbGuard. Are its lists free for government use?
> If not, then I can not use it, since we have very strict purchasing
> requirements, even if it costs $1. And of course, I would have to go
> through evaluation, the usual learning curve etc.
> 
> Don't get me wrong here, I'm not saying no. I'm just saying that even
> though it seems to be easy to say "yes", reality is much different.
>

You can also use E2guardian, a free URL- and content-filtering web proxy.
There is a package for FreeBSD.

Fred


Re: [squid-users] Squid high memory usage

2016-06-15 Thread FredB

> 
> Yes, I guess this is a good lead for me (more or less 2 now...).
> Maybe half_closed_clients would help, but unfortunately it crashes
> Squid (bug 4156)
> 
> Fred

Maybe this is also related to the "Excessive TCP memory usage" post, because
I'm using ICAP too.

netstat -lataupen | wc -l 
33683
netstat -lataupen | grep WAIT | wc -l 
28407

But I don't think so; my case seems different.
I'm using E2guardian as a front proxy:

Lan -> E2guardian (3128) -> Squid (8080) -> Net
plus Squid to an ICAP load balancer (1025)

50% of TIME_WAIT - E2guardian to Squid
45% of TIME_WAIT - Squid to NET or Lan to E2guardian
5%  of TIME_WAIT - ICAP
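The per-hop breakdown above can be reproduced with a small helper. This is a
sketch; it assumes the usual `netstat -ant` column layout (protocol, queues,
local address, remote address, state):

```shell
#!/bin/sh
# Sketch: count TIME_WAIT sockets per local port, to see which hop
# (e.g. 3128 = E2guardian->Squid, 8080 = Squid, 1025 = ICAP) accumulates
# them. Reads netstat-style lines on stdin; state is expected in column 6.
count_time_wait() {
  awk '$6 == "TIME_WAIT" { split($4, a, ":"); n[a[length(a)]]++ }
       END { for (p in n) printf "%s %d\n", p, n[p] }'
}
# Usage on a live system:  netstat -ant | count_time_wait | sort -k2 -rn
```

Sorting the output then shows immediately which listener is collecting the
most half-dead connections.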

Fred




Re: [squid-users] Squid high memory usage

2016-06-15 Thread FredB
Yes, I guess this is a good lead for me (more or less 2 now...).
Maybe half_closed_clients would help, but unfortunately it crashes Squid (bug 4156)

Fred


Re: [squid-users] Squid high memory usage

2016-06-15 Thread FredB
Maybe I'm wrong, but the server is also using a lot of memory for TCP:

cat /proc/net/sockstat
sockets: used 13523
TCP: inuse 8612 orphan 49 tw 31196 alloc 8728 mem 18237
UDP: inuse 14 mem 6
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

netstat -lataupen | wc -l
38780
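For scale, the TCP "mem" figure in /proc/net/sockstat is counted in pages, not
bytes; with the usual 4 KiB page size, 18237 pages is roughly 71 MiB. A small
helper to do the conversion (a sketch, assuming the standard sockstat layout):

```shell
#!/bin/sh
# Sketch: convert the TCP "mem" field of /proc/net/sockstat (pages) to MiB.
# First argument: sockstat file (defaults to the live one).
# Second argument: page size in bytes (defaults to 4096).
tcp_mem_mib() {
  awk -v page="${2:-4096}" '
    /^TCP:/ { for (i = 1; i <= NF; i++)
                if ($i == "mem") printf "%d\n", $(i + 1) * page / 1048576 }
  ' "${1:-/proc/net/sockstat}"
}
# Usage: tcp_mem_mib    # prints the current TCP buffer memory in MiB
```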






Re: [squid-users] Squid high memory usage

2016-06-06 Thread FredB
Thanks for your answer.

> What is cache_mem ?
> See also http://wiki.squid-cache.org/SquidFaq/SquidMemory
> 

Currently 25 GB.
I tried different values, but I guess it doesn't matter; the problem is that
Squid is limited to only 50% of the RAM.

> > After that the swap is totally full and kswap process gone mad ...
> > I tried with vm.swappiness = 0 but same problem, perhaps a little
> > better, I also tried memory_pool off without any change.
> 
> I recommend vm.swappiness = 5 to have 5% of the memory be used for
> the file system cache and maintain good disk I/O.

The more I increase vm.swappiness, the more I swap and the more problems I
have, but I will try your value.

> 
> The values are too high (1024 times).  I think that you incorrectly
> set cache_mem.
> Start with setting  cache_mem to 16 GB
> 

Maybe I misunderstood your point, but when I reduce cache_mem there is indeed
no problem; Squid only uses 20-30 GB max.

With cache_mem at 15 GB, Squid eats 36% of the memory;
htop and top report 30 GB of free memory:

free -h
             total       used       free     shared    buffers     cached
Mem:           63G        62G       425M       122M       1,7G        27G
-/+ buffers/cache:         33G        30G
Swap:         1,9G       102M       1,8G

All my RAM is consumed by cache/buffers and does not seem to be freed when
Squid needs it.
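One way to check whether the kernel really has reclaimable memory left for
Squid is to add up free memory, buffers, and page cache. This is a sketch; it
assumes the standard /proc/meminfo layout:

```shell
#!/bin/sh
# Sketch: report memory the kernel could in principle free up for Squid,
# i.e. MemFree + Buffers + Cached, in MB. Takes an optional file argument
# so it can be run against a saved copy of /proc/meminfo.
reclaimable_mb() {
  awk '/^(MemFree|Buffers|Cached):/ { kb += $2 }
       END { printf "%d\n", kb / 1024 }' "${1:-/proc/meminfo}"
}
```

If this number is large while kswapd is thrashing, the problem is more likely
reclaim pressure (swappiness, NUMA, dirty-page limits) than a true shortage.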
 




[squid-users] Squid high memory usage

2016-06-06 Thread FredB
Hello all,
 
I'm trying to use a server with 64 GB of RAM, but I'm facing a problem: Squid
can't work with more than 50% of the memory.
Beyond that point, the swap fills up completely and the kswapd process goes
mad...
I tried vm.swappiness = 0, but the problem remains (perhaps slightly better);
I also tried memory_pools off, without any change.

As you can see in this picture, Linux is using 22 GB of cached memory:
http://image.noelshack.com/fichiers/2016/22/1464965449-capture-squid.png
  
I'm using two caches (133 GB each), with a dedicated SATA (15k RPM) disk for
each cache.
Any advice would be very much appreciated.
 
OS: Debian Jessie 64-bit, Squid 3.5.19

Fred


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-12 Thread FredB
Amos, I don't know whether this is related or not, but I see a lot of:

2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:50| Could not parse headers from on disk object
2016/04/12 13:00:51| Could not parse headers from on disk object
2016/04/12 13:00:51| Could not parse headers from on disk object
2016/04/12 13:00:56| Could not parse headers from on disk object
2016/04/12 13:00:56| Could not parse headers from on disk object
2016/04/12 13:00:56| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object
2016/04/12 13:00:57| Could not parse headers from on disk object

My cache was cleaned and Squid patched.

Fred


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-06 Thread FredB

> 
> Attached is a patch which I think will fix 3.5.16 (should apply fine
> on
> 4.0.8 too) without needing the cache reset. Anyone able to test it
> please?
> 

Resetting the cache is still needed, at least in my case.

Fred


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-06 Thread FredB
Oh, sorry.
OK, it seems to work for me.


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-06 Thread FredB

> 
> As I'm currently updating too: is this a bug, or do I just have to
> clear the old cache directories to prevent these error messages?
> 

As far as I know, clearing alone is not enough; I tried it.


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-05 Thread FredB
Hi Amos,

I confirm that cleaning the cache (mkfs in my case) does not fix the issue.

Fred 


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB

> 
> I can provide a testing patch, just for testing... not for production
> until they find the right cause.
> But make sure the headers are public for those links; your situation
> might be different...

I will, but later, on a test platform.
For now I will fall back to a previous release.




Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB

> 
> Thanks, I will test. I confirm the problem is still present after a
> while. E.g. this object never seems to be cleaned/fixed in the cache.
> 

No more success with a fresh cache; after 5 minutes the messages appear again
and again.
Joe is right, there is a bug somewhere.


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB

> 
> Hmm, the code is the same; something else must be corrupting the Vary
> data before varyEvaluateMatch()
> 

This ? 
http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-14016.patch


Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB

>  
> Version 4.0.8 has the same issue after upgrading without cache
> clean-up.
> 

Thanks, I will test. I confirm the problem is still present after a while.
E.g. this object never seems to be cleaned/fixed in the cache:

Snip, there are many requests before ...

2016/04/04 13:39:11 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:14 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:16 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:17 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:21 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:22 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:22 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:22 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:22 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:23 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:24 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:26 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:27 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:28 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:29 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:30 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:32 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:32 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 13:39:35 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://live.lemde.fr/mux.json' 
'accept-encoding="identity,gzip,deflate"'

Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB

> Objet: Re: [squid-users] Squid 3.5.16 and vary loop objects (bug ?)
> 
> intercept  ??

No, an explicit proxy

> I got excellent results, but not in the correct way; it's an old
> issue. Maybe I was not describing the issue in the right way for the
> devs to understand.

Very recent for me: no problem with 6 proxies running Squid 3.5.13, but
present on the 2 new 3.5.16 ones.



[squid-users] Squid 3.5.16 and vary loop objects (bug ?)

2016-04-04 Thread FredB
Hello

I migrated my Squid installations to the latest version, 3.5.16 (from 3.5.10),
and now I get many, many "Vary loop object" warnings.
What happened? I made no configuration changes.

After 1 hour:

Squid 3.5.16
grep "Vary" /var/log/squid/cache.log | wc -l
18176

Squid 3.5.10
grep "Vary" /var/log/squid/cache.log | wc -l
4
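To see which objects trigger the warnings most often, the cache.log can be
summarised with a small helper. This is a sketch; it assumes each warning sits
on one log line with the URL as the first single-quoted field, as in the
excerpts below:

```shell
#!/bin/sh
# Sketch: rank the URLs appearing in varyEvaluateMatch warnings.
# Splits each matching line on single quotes; the URL is then field 2.
top_vary_urls() {
  awk -F"'" '/varyEvaluateMatch/ { n[$2]++ }
             END { for (u in n) printf "%d %s\n", n[u], u }' \
      "${1:-/var/log/squid/cache.log}" | sort -rn | head
}
```

A short list of dominating URLs (as opposed to an even spread) would point at
a few poisoned cache entries rather than a general parsing problem.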

My cache hit ratio is also much lower, about -15%.

As you can see, there are many lines every second:

2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:07 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://abonnes.lemonde.fr/ajah/5m/lemonde/abonnes/Controller_Module_Pave_Edito_Chat/actionAfficherPave/WzM/yMT/Bd/EMPTY/?key=7d65cf7d4c3a74e05cb76a09e96f5afb430d22e3'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:07 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://abonnes.lemonde.fr/ajah/5m/lemonde/abonnes/Controller_Module_Abonnes_AppelJelec/actionAfficher/W3R/ydW/UsI/kJMT0NBQk9TRVFDT0xEUjE0IiwiIiwzMjEwXQ--/?key=3e9cf6640e7918a9414ffdf81f2d59ea943790df'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:07 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://abonnes.lemonde.fr/ajah/5m/lemonde/abonnes/Controller_Module_General_Colonne_Defaut/actionAfficher/W10/-/EMPTY/EMPTY/?key=dc4d5d30403d8d1a697e69255a95c47f05e387bd'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:07 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://abonnes.lemonde.fr/ws/1/jelec/kiosque/' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:07 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://abonnes.lemonde.fr/ajah/5m/lemonde/abonnes/Controller_Module_Pave_Edito_Item/actionAfficherPave/W25/1bG/wse/yJydWJyaXF1ZV9pZCI6MzIxMH0sNDg1NDMwNixudWxsXQ--/?key=2c4363f33e0fda86711e649d14ae9ec6f513ccbe'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:07 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:08 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://www.sudouest.fr/img/meteo/102.png' 'host="www.sudouest.fr", 
accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:08 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:08 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://www.sudouest.fr/img/meteo/10.png' 'host="www.sudouest.fr", 
accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:08 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://s2.cdscdn.com/cds/showCaseCss.css?LanguageCode=fr=100=89f22cc02227662988361ba3aed55805'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://s2.cdscdn.com/Css/cdsrwd/wl/rwd/master/fullrwd.css?LanguageCode=fr=100'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://s3.cdscdn.com/Js/cdsrwd/wl/rwd/block/recs.js' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://s3.cdscdn.com/cds/showCaseJs.js?md5=e2ef12f58f4161c79776f239ad0c34f0' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://s2.cdscdn.com/Css/cdsrwd/wl/rwd/block/button.css?LanguageCode=fr=100'
 'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://s3.cdscdn.com/Js/external/tagcommander/tc_nav.js' 
'accept-encoding="identity,gzip,deflate"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 'http://www.cdiscount.com/favicon.ico' 
'user-agent="Mozilla%2F5.0%20(Windows%20NT%206.1%3B%20rv%3A38.0)%20Gecko%2F20100101%20Firefox%2F38.0"'
2016/04/04 10:17:09 kid1| clientProcessHit: Vary object loop!
2016/04/04 10:17:09 kid1| varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'http://regie2.moto-net.com/adimage.php?filename=ban-starplaq-2014.gif=gif'
 'accept-encoding="identity,gzip,deflate", 
user-agent="Mozilla%2F5.0%20(compatible%3B%20MSIE%209.0%3B%20Windows%20NT%206.1%3B%20Trident%2F5.0)"'

Re: [squid-users] Squid with LDAP-authentication: bypass selected URLs

2016-03-29 Thread FredB

> 
> auth_param basic program /usr/sbin/squid_ldap_auth -b T=MYDOMAIN -f "uid=%s" -s sub -h 192.168.1.1
> auth_param basic children 10
> auth_param basic realm Internetzugang im VERWALTUNGSNETZ FAL-BK: Bitte mit den Daten aus diesem Netzwerk anmelden!
> acl password proxy_auth REQUIRED
> auth_param basic credentialsttl 2 hours
> auth_param basic casesensitive off

> http_access allow password  -->  should be: http_access allow password !myacl, with the matching acl defined just before this line



Re: [squid-users] Squid with LDAP-authentication: bypass selected URLs

2016-03-15 Thread FredB
I guess you have an ACL with proxy_auth? Something like
"acl ldapauth proxy_auth REQUIRED"?

So you can just add "http_access allow ldapauth !pdfdoc", and perhaps
"http_access allow pdfdoc" after it.
Fred



Re: [squid-users] Bandwidth control with delay pool

2016-03-15 Thread FredB
You can easily do this with an ACL; delay_pools is a very powerful tool.

E.g. 64 kB/s of bandwidth for each authenticated user, except for destinations
matching acl bp, and only during the time ranges of acl desk:

acl my_ldap_auth proxy_auth REQUIRED 
acl bp dstdom_regex "/etc/squid/limit"

acl desk time 09:00-12:00
acl desk time 13:30-16:00

delay_pools 1
delay_class 1 4
delay_access 1 allow my_ldap_auth desk !bp
delay_parameters 1 -1/-1 -1/-1 -1/-1 64000/64000

Be careful: a recent version (Squid 3.5) is needed to avoid some bugs with
HTTPS.
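For readers new to delay pools: in a class 4 pool, delay_parameters takes four
restore/max pairs (in bytes per second), left to right: aggregate, per-network,
per-host, and per-user; -1/-1 means unlimited. Annotated, the example above
reads:

```conf
delay_pools 1
delay_class 1 4            # class 4 = class 3 plus a per-user bucket
delay_access 1 allow my_ldap_auth desk !bp
#                  aggregate  network  host    per-user (bytes/sec)
delay_parameters 1 -1/-1      -1/-1    -1/-1   64000/64000
```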

Fred


Re: [squid-users] Delay Pools and HTTPS on Squid 3.x

2016-02-17 Thread FredB
There was a known bug with delay pools and HTTPS, but as far as I know it's
fixed now.
Did you test with 3.5.x?

Fred


[squid-users] Authentification, the login prompt appears twice

2016-02-15 Thread FredB
Hi All,

With Firefox and Squid 3.5.10, do you notice the login prompt appearing twice,
with only the second attempt working?
Digest or Basic auth, it doesn't matter; I tried with www.google.com as the
start page.

The only way to avoid this is to save the credentials in the browser.

To reproduce: remove the saved password, open the browser on a website, enter
your credentials, and voilà, the pop-up appears again.

It seems better with IE (11).

Regards

Fred


Re: [squid-users] Dansguardian Squid and HTTPS

2015-11-12 Thread FredB



This is not the right place to speak about DansGuardian

> OK, but in the Squid log I only see the listening IP of DansGuardian

Take a look at forwardedfor = on (DansGuardian) and forwarded_for on (Squid).

> First, is there a way for DansGuardian to pass the username to Squid?
> Second, on HTTPS sites

If I understand correctly, Squid does the authentication (with AD), so there
is something wrong with your log format.

> in the Squid log, I receive error 407 denied

So the request is rejected by Squid; you should post your squid.conf.

FYI: DansGuardian is really obsolete now.


