Re: [squid-users] 'Intercept' option on Windows

2017-05-07 Thread Yuri Voinov
He is talking about the NAT *.h (header) files on Windows. That is exactly what is required.


07.05.2017 22:05, Tobias Tromm wrote:
>
> I don't know what exactly you need to make it work.
>
>
> I found these APIs
> (https://msdn.microsoft.com/pt-br/library/windows/desktop/aa366278.aspx
> ,
> https://msdn.microsoft.com/pt-br/library/windows/desktop/aa366187(v=vs.85).aspx
> ),
> can they help?
>
>
> If not, can you explain to me what kind of access is necessary? Maybe
> I can find someone on the internet that can do something about, maybe
> create one, I don't know...
>
>
> Thanks.
>
> 
> *From:* squid-users  on behalf
> of Amos Jeffries 
> *Sent:* Sunday, May 7, 2017 08:35:02
> *To:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] 'Intercept' option on Windows
>  
> On 07/05/17 09:08, Tobias Tromm wrote:
>
> > Hi Guys,
> >
> >
> > I am using squid on Windows (http://squid.diladele.com/), but they
> > don't compile it with support for the 'intercept' option.
> >
> >
> > Is it possible to enable it for Windows in the current version?
> >
> >
> > I am using a very old version, 2.7.STABLE8, which supports the 'transparent'
> > option on Windows, available here:
> > http://squid.acmeconsulting.it/download/dl-squid.html
> > (maybe someone who understands how it works can port this option from the
> > old system to the new system, I don't know exactly...).
> >
>
> Windows does not provide any API to access its NAT system. Squid-2 only
> appeared to work because it had the CVE-2009-0801 problems.
>
> Amos
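For contrast with Windows, the NAT lookup discussed here is what Squid uses on Linux. A minimal interception sketch, in which the addresses, ports, and paths are illustrative assumptions rather than anything from this thread:

```
# squid.conf: a dedicated port flagged for intercepted traffic
http_port 3129 intercept

# On the same Linux box (shell, as root): redirect client port-80
# traffic to Squid's intercept port
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 \
  -j REDIRECT --to-ports 3129
```

With a build configured using --enable-linux-netfilter, Squid then asks netfilter for the original destination of each redirected connection; that lookup is the API Windows does not expose.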
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


0x613DEC46.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 'Intercept' option on Windows

2017-05-06 Thread Yuri Voinov
If it has not been done yet, it is either impossible or not necessary.

PS. You can always set up VirtualBox (www.virtualbox.org) on your Windows
box, install Linux/*BSD/Solaris inside, and do everything you want in the guest OS.

07.05.2017 3:08, Tobias Tromm wrote:
>
> Hi Guys,
>
>
> I am using squid on Windows (http://squid.diladele.com/), but they
> don't compile it with support for the 'intercept' option.
>
>
> Is it possible to enable it for Windows in the current version?
>
>
> I am using a very old version, 2.7.STABLE8, which supports the 'transparent'
> option on Windows, available here:
> http://squid.acmeconsulting.it/download/dl-squid.html
> (maybe someone who understands how it works can port this option from the
> old system to the new system, I don't know exactly...).
>
>
> Based on the Squid documentation there are these options, but they don't
> apply to Windows:
>
>
> -
> http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Concepts_of_Interception_Caching
> :
>
>  *
>
> For Linux configure Squid with the --enable-linux-netfilter option.
>
>  *
>
> For *BSD-based systems with IP filter configure Squid with the
> --enable-ipf-transparent option.
>
>  *
>
> If you're using OpenBSD's PF configure Squid with
> --enable-pf-transparent
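As an illustration of the quoted list, a Linux build would typically be configured along these lines (the install prefix is an arbitrary example, not from this thread):

```shell
./configure --prefix=/usr/local/squid --enable-linux-netfilter
make && make install
```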
>
> Here is how Diladele compiles it:
>
>
> https://docs.diladele.com/tutorials/build_squid_windows/index.html
>
>
> If someone can help, thanks!
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid Cache to Users at Full Bandwidth

2017-05-05 Thread Yuri Voinov
http://wiki.squid-cache.org/


05.05.2017 21:18, christian brendan wrote:
> Squid Version 3.5.20
> Cento 7
> Mikrotik RouterBoard v 6.39.1
> Users IP: 192.168.1.0/24 
> Squid ip: 192.168.2.1
>
> Traffic to squid is routed
>
> I would like users to have full LAN bandwidth access to the squid server;
> I have tried a simple queue on Mikrotik but it seems not to be working.
>
> Any guide will be appreciated.
>
> Best Regards
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Huge memory required for squid 3.5

2017-05-03 Thread Yuri Voinov
Are you sure?


http://wiki.squid-cache.org/SquidFaq/SquidMemory
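When chasing memory growth, the cache manager reports referenced in that FAQ can show which pools are actually growing. A sketch, assuming squidclient is installed and Squid listens on the default host/port:

```shell
# Overall process and memory summary
squidclient -h 127.0.0.1 -p 3128 mgr:info
# Per-pool memory accounting (look for SSL/TLS-related pools)
squidclient -h 127.0.0.1 -p 3128 mgr:mem
```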


03.05.2017 21:44, Nil Nik wrote:
>
> Hi,
>
>
> Its not disk cache, its due to in memory SSL context.
>
>
> Nil
>
>
> *From:* squid-users <squid-users-boun...@lists.squid-cache.org> on
> behalf of Yuri <yvoi...@gmail.com>
> *Sent:* Wednesday, May 3, 2017 11:55 AM
> *To:* squid-users@lists.squid-cache.org
> *Subject:* Re: [squid-users] Huge memory required for squid 3.5
>  
>
> How big are your disk cache(s), and how full are they?
>
>
> 03.05.2017 17:54, Nil Nik wrote:
>> Hi,
>>
>>
>> NO_DEFAULT_CA doesn't help. Still goes in GB. Can anyone tell me area
>> so that i can work on?
>>
>>
>> Regards,
>>
>> Nil
>>
>>
>> 
>> *From:* squid-users <squid-users-boun...@lists.squid-cache.org> on
>> behalf of Alex Rousskov <rouss...@measurement-factory.com>
>> *Sent:* Wednesday, April 26, 2017 7:37 PM
>> *To:* squid-users@lists.squid-cache.org
>> *Subject:* Re: [squid-users] Huge memory required for squid 3.5
>>  
>> On 04/26/2017 09:35 AM, Yuri Voinov wrote:
>>
>> > This is openssl issue or squid's?
>>
>> AFAIK, the underlying issue (i.e., bug #4005) is mostly a Squid problem:
>> Squid is caching SSL contexts (instead of certificates) and does a poor
>> job maintaining that cache.
>>
>> Earlier OpenSSL versions (that had to be used when the original code was
>> written) complicated solving this problem. OpenSSL v1.0.1+ added APIs
>> that simplify some aspects of the anticipated fix. Certain OpenSSL
>> aspects will continue to hurt Squid, even with OpenSSL v1.0.1, but if
>> you want to blame a single project (instead of both), blame Squid.
>>
>>
>> > Why sessions can't share CA's data cached in memory? shared_ptr
>> invented
>> > already.
>>
>> OpenSSL knew how to share things well before std::shared_ptr became
>> available. However, it is the responsibility of the application to tell
>> OpenSSL what to create from scratch and what to share. A part of the
>> problem is that Squid tells OpenSSL to create many large things from
>> scratch and then caches those large things while underestimating their
>> size by several(?) orders of magnitude (and probably also missing many
>> cache hits).
>>
>> More details, including the difference between problems associated with
>> from-client and to-server connections, are documented in the "Memory
>> Usage" section of http://wiki.squid-cache.org/Features/SslBump
>> <http://wiki.squid-cache.org/Features/SslBump>
>>
>>
>>
>> FWIW, we have spent a lot of resources on triaging this problem and
>> drafting possible solutions (in various overlapping areas), but there is
>> currently no sponsor to finalize and implement any of the fixes. AFAIK,
>> bug #4005 is stuck.
>>
>> I am glad that NO_DEFAULT_CA helps mitigate some of the problems in some
>> environments.
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>> > 26.04.2017 9:08, Amos Jeffries wrote:
>> >> On 26/04/17 10:53, Yuri Voinov wrote:
>> >>> Ok, but how NO_DEFAULT_CA should help with this?
>> >>
>> >> It prevents OpenSSL copying that 1MB into each incoming client
>> >> connections memory. The CAs are only useful there when you have some
>> >> of the global CAs as root for client certificates - in which case you
>> >> still only want to trust the roots you paid for service and not all of
>> >> them.
>> >>
>> >> Just something to try if there are huge memory issues with TLS/SSL
>> >> proxying. The default behaviour is fixed for Squid-4 with the config
>> >> options changes. But due to being a major surprise for anyone already
>> >> relying on global roots for client certs it remains a problem in 3.5.
>> >>
>> >> Amos
>> >>
>> >> ___
>> >> squid-users mailing list
>> >> squid-users@lists.squid-cache.org
>> >> http://lists.squid-cache.org/listinfo/squid-users
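A sketch of the mitigation Amos describes. The exact option names below are assumptions from my recollection and should be verified against squid.conf.documented for your version:

```
# Squid 3.5 (assumed syntax): keep the ~1MB of global root CAs
# out of each client-facing connection
https_port 3129 intercept ssl-bump sslflags=NO_DEFAULT_CA cert=/etc/squid/myCA.pem

# Squid 4 (assumed syntax): the renamed equivalent, off by default
https_port 3129 intercept ssl-bump tls-default-ca=off cert=/etc/squid/myCA.pem
```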

Re: [squid-users] URL sometimes reurns empty response

2017-05-02 Thread Yuri Voinov
Hm. I see no issue from my side:

root @ khorne /patch # wget -S http://www.msftconnecttest.com/ncsi.txt
--2017-05-02 19:16:11--  http://www.msftconnecttest.com/ncsi.txt
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Cache-Control: max-age=30,must-revalidate
  Content-Length: 14
  Content-Type: text/plain
  Last-Modified: Fri, 04 Mar 2016 06:55:23 GMT
  ETag: "0x8D343F9F578A7F9"
  Server: Microsoft-IIS/7.5
  x-ms-request-id: 45e06e80-0001-000b-6f15-be91d000
  x-ms-version: 2009-09-19
  x-ms-meta-CbModifiedTime: Tue, 01 Mar 2016 21:41:22 GMT
  x-ms-lease-status: unlocked
  x-ms-blob-type: BlockBlob
  X-ECN-P: RD0003FF838204
  Access-Control-Expose-Headers: X-MSEdge-Ref
  Access-Control-Allow-Origin: *
  Timing-Allow-Origin: *
  X-CID: 7
  X-CCC: SE
  X-MSEdge-Ref: Ref A: FFF0B969CF5F48DA856B48165D32542E Ref B:
STOEDGE0510 Ref C: Tue May  2 06:16:14 2017 PST
  X-MSEdge-Ref-OriginShield: Ref A: 9D033F6C12734D3A839C09F1BAFF798E Ref
B: AMS04EDGE0220 Ref C: Thu Apr 27 07:42:11 2017 PST
  Date: Tue, 02 May 2017 13:16:13 GMT
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Length: 14 [text/plain]
Saving to: 'ncsi.txt'

ncsi.txt100%[===>]  14  --.-KB/sin
0s 

2017-05-02 19:16:14 (2.07 MB/s) - 'ncsi.txt' saved [14/14]

root @ khorne /patch # wget -S http://www.msftconnecttest.com/ncsi.txt
--2017-05-02 19:16:32--  http://www.msftconnecttest.com/ncsi.txt
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Cache-Control: max-age=30,must-revalidate
  Content-Length: 14
  Content-Type: text/plain
  Last-Modified: Fri, 04 Mar 2016 06:55:23 GMT
  ETag: "0x8D343F9F578A7F9"
  Server: Microsoft-IIS/7.5
  x-ms-request-id: 45e06e80-0001-000b-6f15-be91d000
  x-ms-version: 2009-09-19
  x-ms-meta-CbModifiedTime: Tue, 01 Mar 2016 21:41:22 GMT
  x-ms-lease-status: unlocked
  x-ms-blob-type: BlockBlob
  X-ECN-P: RD0003FF838204
  Access-Control-Expose-Headers: X-MSEdge-Ref
  Access-Control-Allow-Origin: *
  Timing-Allow-Origin: *
  X-CID: 7
  X-CCC: SE
  X-MSEdge-Ref: Ref A: FFF0B969CF5F48DA856B48165D32542E Ref B:
STOEDGE0510 Ref C: Tue May  2 06:16:14 2017 PST
  X-MSEdge-Ref-OriginShield: Ref A: 9D033F6C12734D3A839C09F1BAFF798E Ref
B: AMS04EDGE0220 Ref C: Thu Apr 27 07:42:11 2017 PST
  X-Origin-Date: Tue, 02 May 2017 13:16:13 GMT
  Date: Tue, 02 May 2017 13:16:32 GMT
  X-Cache-Age: 19
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 14 [text/plain]
Saving to: 'ncsi.txt.1'

ncsi.txt.1  100%[===>]  14  --.-KB/sin
0s 

2017-05-02 19:16:32 (1.90 MB/s) - 'ncsi.txt.1' saved [14/14]

root @ khorne /patch # wget -S http://www.msftconnecttest.com/ncsi.txt
--2017-05-02 19:18:06--  http://www.msftconnecttest.com/ncsi.txt
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Content-Length: 14
  ETag: "0x8D343F9F578A7F9"
  Cache-Control: max-age=30,must-revalidate
  Content-Type: text/plain
  Last-Modified: Fri, 04 Mar 2016 06:55:23 GMT
  Server: Microsoft-IIS/7.5
  x-ms-request-id: 45e06e80-0001-000b-6f15-be91d000
  x-ms-version: 2009-09-19
  x-ms-meta-CbModifiedTime: Tue, 01 Mar 2016 21:41:22 GMT
  x-ms-lease-status: unlocked
  x-ms-blob-type: BlockBlob
  X-ECN-P: RD0003FF838204
  Access-Control-Expose-Headers: X-MSEdge-Ref
  Access-Control-Allow-Origin: *
  Timing-Allow-Origin: *
  X-CID: 7
  X-CCC: SE
  X-MSEdge-Ref: Ref A: 1F682D6F87124BF28456EB937B49B208 Ref B:
STOSCHEDGE0116 Ref C: Tue May  2 06:18:06 2017 PST
  X-MSEdge-Ref-OriginShield: Ref A: 9D033F6C12734D3A839C09F1BAFF798E Ref
B: AMS04EDGE0220 Ref C: Thu Apr 27 07:42:11 2017 PST
  Vary: Accept-Encoding
  X-Origin-Date: Tue, 02 May 2017 13:18:05 GMT
  Date: Tue, 02 May 2017 13:18:06 GMT
  X-Cache-Age: 1
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 14 [text/plain]
Saving to: 'ncsi.txt.2'

ncsi.txt.2  100%[===>]  14  --.-KB/sin
0s 

2017-05-02 19:18:06 (2.03 MB/s) - 'ncsi.txt.2' saved [14/14]

Seems correct. Will dig more.


02.05.2017 19:15, Ralf Hildebrandt wrote:
> * Ralf Hildebrandt :
>
>> It seems that squid is returning an incorrect Content-Length: header
>> while the revalidation is still fresh/ongoing.
>>
>> I haven't yet tried tcpdumping the response to check if the 14 bytes
>> do indeed contain the correct string.
> And voila - here we go (Content-Length: 0 but squid sends 14 bytes of excess 
> data: "Microsoft NCSI")
>
> 15:10:31.741436 IP proxy-cvk-1.charite.de.http-alt > vsw-it-nw-10.54228: 
> Flags [P.], seq 1:952, ack 156, win 235, options [nop,nop,TS val 126939588 
> ecr 1696144349], length 951: HTTP: HTTP/1.1 200 OK
> E...cR@.?*...*..V?.5U..
> e...HTTP/1.1 200 OK
> Cache-Control: 

Re: [squid-users] URL sometimes reurns empty response

2017-05-02 Thread Yuri Voinov
If you add this URL to a cache deny rule, does the problem still exist?
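That test can be expressed as a squid.conf fragment (the ACL name is a placeholder of mine):

```
acl ncsi_url url_regex ^http://www\.msftconnecttest\.com/ncsi\.txt$
cache deny ncsi_url
```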


02.05.2017 17:59, Ralf Hildebrandt wrote:
> In some cases, our proxies (we've got 4 of them) return an empty result when
> querying "http://www.msftconnecttest.com/ncsi.txt" (which is used by
> Microsoft browsers to check if they're online).
>
> I'm using this incantation to check the URL:
>
> watch -d curl --silent -v -x "http://proxy-cvk-1.charite.de:8080"
> http://www.msftconnecttest.com/ncsi.txt
>
> Usually, the URL should just return "Microsoft NCSI".
> In some cases I get an empty response, but curl reports:
>
> < Age: 5
> < X-Cache: HIT from proxy-cvk-1
> < Via: 1.1 proxy-cvk-1 (squid/5.0.0-20170421-r15126)
> < Connection: keep-alive
> <
> * Excess found in a non pipelined read: excess = 14 url = /ncsi.txt 
> (zero-length body)
> * Curl_http_done: called premature == 0
> * Connection #0 to host (nil) left intact
>
> As you can see, something is producing an excess of 14 Bytes (which
> coincides with the 14 bytes length of "Microsoft NCSI").
>
> < Cache-Control: max-age=30,must-revalidate
>
> Immediately after revalidating, the problem occurs.
>
> I tried this with 5.0.0-20170421-r15126 as well as 4.0.19 - same result.
>



Re: [squid-users] squid 4.0.19 error with certificates

2017-04-30 Thread Yuri Voinov
Check this. It seems this is the issue:

http://bugs.squid-cache.org/show_bug.cgi?id=4711
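If that bug is the culprit, a common stop-gap is to splice rather than bump the affected destinations. A hedged squid.conf sketch (the server names are examples, not from this thread):

```
acl step1 at_step SslBump1
acl nobump ssl::server_name .youtube.com .googlevideo.com
ssl_bump peek step1
ssl_bump splice nobump
ssl_bump bump all
```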


30.04.2017 12:02, snable snable wrote:
> hello
>
> I am using squid on an external box.
> I forward all traffic from my OpenWrt router to it.
> HTTP works fine.
> HTTPS with the YouTube app doesn't work.
> I get:
>
>  Error negotiating SSL connection on FD 73: error:14094416
> :SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown (1/0)
>
> errors
>
> other sites work well so far
>
> I heard that Squid 4 auto-downloads intermediate certificates... maybe
> that's the issue?
>
> I worked around this with a whitelist of sites that work, but I want to
> roll this out for all sites. (Also see my other question.)
>
> thanks!
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squid 4.0.19 bin question

2017-04-30 Thread Yuri Voinov


30.04.2017 11:59, snable snable wrote:
> hallo,
>
> I see the the following works for my site
>
> ssl_bump server-first mysite_acl
>
> whereas
> ssl_bump bump mysite_acl
> gives me a corrupted site.
Details, please. "Corrupted site" tells us nothing.
>
> any idea? I see that server-first shouldn't be used for Squid 4. Why?
An idea for what? If you read squid.conf.documented, you will know why. BTW,
"server-first" has been obsolete since 3.5.x and remains only for backward
compatibility.
>
> how can i fix it?
Fix what?
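For reference, the peek-and-splice equivalent of the deprecated `ssl_bump server-first mysite_acl` is roughly the following, reusing the poster's ACL name:

```
acl step1 at_step SslBump1
ssl_bump peek step1         # step 1: read the client SNI
ssl_bump bump mysite_acl    # step 2: bump only the selected sites
ssl_bump splice all         # everything else passes through untouched
```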
>
> ty
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2017-04-28 Thread Yuri Voinov
Raf,

Intermediate CAs are required anyway. Not all webmasters are good - just a
few, the focus of the world's Good, add intermediate certificates to the
chain. ;-)

Evil proxy administrators - the focus of the world's Evil - must do this
manually. Still. :-D

28.04.2017 22:00, Rafael Akchurin wrote:
> Hello David and all,
>
> According to 
> https://www.ssllabs.com/ssltest/analyze.html?d=www.boutique.afnor.org=on
>  you do not need to add any intermediate certificates  to system storage - 
> site seems to be sending the whole chain as it should...
>
> BUT the overall site SSL rating is so bad..
>
> Raf
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of David Touzeau
> Sent: Friday, April 28, 2017 10:14 AM
> To: 'Yuri Voinov'; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: 
> SQUID_ERR_SSL_HANDSHAKE)
>
> I'm fighting to find the correct certificate chain for this website:
> https://www.boutique.afnor.org
>
> I have also added all certificates included in this package:
> https://packages.debian.org/fr/sid/ca-certificates
>
>
> Do you have any tips to help ?
>
> Best regards
>
> -Original Message-
> From: Yuri Voinov [mailto:yvoi...@gmail.com] Sent: Thursday, April 27, 2017
> 23:26 To: David Touzeau <da...@articatech.com>;
> squid-users@lists.squid-cache.org Subject: Re: [squid-users] 3.5.25: (71)
> Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)
>
> Be careful with the intermediate CAs you grabbed. Check their validity,
> fingerprints and attributes.
>
> Proxying SSL requires much more work with Squid.
>
>
> 28.04.2017 3:12, David Touzeau wrote:
>> Thanks Yuri
>>
>>  ! but i have still have the error " Error negotiating SSL on FD 13: 
>> error::lib(0):func(0):reason(0) (5/0/0) " and cannot browse to 
>> site ( as i seen you can with your squid...??? )
> Yes. With two different versions.
>> Created a file /etc/squid3/cabundle.pem
>>
>> Added Symantec certificates available here:
>> https://knowledge.symantec.com/kb/index?page=content=CROSSLINK
>> =INFO2047
>>
>> add
>>
>> sslproxy_foreign_intermediate_certs  /etc/squid3/cabundle.pem
>>
>> and perform a squid -k reconfigure
>>
>> Missing something ???
> Maybe. I recommend re-initializing the mimic-certificates DB as well and
> restarting Squid, not just reconfiguring.
>
> Keep in mind that the ssl_bump configuration is critically important for
> success. For example, AFAIK stare is often the opposite of bump (in most
> cases). Read the wiki article, but also remember this functionality is still
> evolving and can change without notice. So, experiment.
>> Best regards
>>
>> -Original Message-
>> From: Yuri Voinov [mailto:yvoi...@gmail.com] Sent: Thursday, April 27,
>> 2017 22:52 To: David Touzeau <da...@articatech.com>;
>> squid-users@lists.squid-cache.org Subject: Re: [squid-users] 3.5.25:
>> (71) Protocol error (TLS code:
>> SQUID_ERR_SSL_HANDSHAKE)
>>
>> Squid can't have any intermediate certificates. As by as root CA's.
>>
>> You can use this:
>>
>> #  TAG: sslproxy_foreign_intermediate_certs
>> #Many origin servers fail to send their full server certificate
>> #chain for verification, assuming the client already has or can
>> #easily locate any missing intermediate certificates.
>> #
>> #Squid uses the certificates from the specified file to fill in
>> #these missing chains when trying to validate origin server
>> #certificate chains.
>> #
>> #The file is expected to contain zero or more PEM-encoded
>> #intermediate certificates. These certificates are not treated
>> #as trusted root certificates, and any self-signed certificate in
>> #this file will be ignored.
>> #Default:
>> # none
>>
>> However, you should identify and collect them yourself.
>>
>> The biggest problem:
>>
>> Unlike root CAs, which can be taken from Mozilla's bundle, intermediate
>> CAs are spread over CA providers, have much shorter validity periods (in
>> most cases up to 5-7 years) and, for this reason, should be continuously
>> maintained by the proxy admin.
>>
>> Also, remove this:
>>
>> sslproxy_flags DONT_VERIFY_PEER
>> sslproxy_cert_error allow all
>>
>> from your config. Don't. Never. They completely disable ANY security
>> checks for certificates, which leads to a giant vulnerability for your
>> users.
>> sslproxy_cert_error should be restricted by very specific ACL(s) in
>> your co

Re: [squid-users] concurrency with ecap

2017-04-27 Thread Yuri Voinov
Alex,

is it possible to get a comprehensive example?

The adapter sample is non-obvious, incomplete (it is not obvious where to put
the mutex locking), and contains C-style rudiments (external calls, pthread.h,
etc.).

I think this would still be relevant in 2017, with the CMT world all around us.

WBR, Yuri.


28.04.2017 3:29, Alex Rousskov пишет:
> On 04/27/2017 03:14 PM, joseph wrote:
>> is it possible in future  to change  the ecap to use concurrency like the
>> store-id so we can benefit from that 
> eCAP supports concurrent adaptations by design. No configuration is
> required to enable that support.
>
> Alex.
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2017-04-27 Thread Yuri Voinov
Be careful with the intermediate CAs you grabbed. Check their validity,
fingerprints and attributes.

Proxying SSL requires much more work with Squid.


28.04.2017 3:12, David Touzeau wrote:
> Thanks Yuri
>
>  ! but I still have the error " Error negotiating SSL on FD 13:
> error::lib(0):func(0):reason(0) (5/0/0) " and cannot browse to the site
> ( as I've seen you can with your squid...??? )
Yes. With two different versions.
>
> Created a file /etc/squid3/cabundle.pem
>
> Added Symantec certificates available here:
> https://knowledge.symantec.com/kb/index?page=content=CROSSLINK=INFO2047
>
> add
>
> sslproxy_foreign_intermediate_certs  /etc/squid3/cabundle.pem
>
> and perform a squid -k reconfigure
>
> Missing something ???
Maybe. I recommend re-initializing the mimic-certificates DB as well and
restarting Squid, not just reconfiguring.

Keep in mind that the ssl_bump configuration is critically important for
success. For example, AFAIK stare is often the opposite of bump (in most
cases). Read the wiki article, but also remember this functionality is
still evolving and can change without notice. So, experiment.
>
> Best regards
>
> -Original Message-
> From: Yuri Voinov [mailto:yvoi...@gmail.com]
> Sent: Thursday, April 27, 2017 22:52
> To: David Touzeau <da...@articatech.com>; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] 3.5.25: (71) Protocol error (TLS code:
> SQUID_ERR_SSL_HANDSHAKE)
>
> Squid does not ship with any intermediate certificates, nor with root CAs.
>
> You can use this:
>
> #  TAG: sslproxy_foreign_intermediate_certs
> #Many origin servers fail to send their full server certificate
> #chain for verification, assuming the client already has or can
> #easily locate any missing intermediate certificates.
> #
> #Squid uses the certificates from the specified file to fill in
> #these missing chains when trying to validate origin server
> #certificate chains.
> #
> #The file is expected to contain zero or more PEM-encoded
> #intermediate certificates. These certificates are not treated
> #as trusted root certificates, and any self-signed certificate in
> #this file will be ignored.
> #Default:
> # none
>
> However, you should identify and collect them yourself.
>
> The biggest problem:
>
> Unlike root CAs, which can be taken from Mozilla's bundle, intermediate CAs
> are spread over CA providers, have much shorter validity periods (in most
> cases up to 5-7 years) and, for this reason, should be continuously
> maintained by the proxy admin.
>
> Also, remove this:
>
> sslproxy_flags DONT_VERIFY_PEER
> sslproxy_cert_error allow all
>
> from your config. Don't. Never. They completely disable ANY security
> checks for certificates, which leads to a giant vulnerability for your users.
> sslproxy_cert_error should be restricted by very specific ACL(s) in your
> config, covering only the sites you trust.
>
> 28.04.2017 2:27, David Touzeau пишет:
>> Hi yuri
>>
>> I did not know if squid has the Symantec intermediate certificate. Squid
>> is installed as default...
>> Any howto ?
>>
>>
>> -Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>> behalf of Yuri Voinov Sent: Thursday, April 27, 2017 22:09 To:
>> squid-users@lists.squid-cache.org Subject: Re: [squid-users] 3.5.25:
>> (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)
>>
>> Look. It can be intermediate certificates issue.
>>
>> Does Squid have Symantec intermediate certificates?
>>
>>
>> 27.04.2017 22:47, David Touzeau wrote:
>>> Hi,
>>> I'm unable to access to https://www.boutique.afnor.org website.
>>> I would like to know if this issue cannot be fixed and must deny bump
>>> website to fix it.
>>> Without Squid the website is correctly displayed
>>>
>>> Squid claim an error page with "(71) Protocol error (TLS code:
>>> SQUID_ERR_SSL_HANDSHAKE)"
>>>
>>> In cache.log: "Error negotiating SSL on FD 17:
>>> error::lib(0):func(0):reason(0) (5/0/0)"
>>>
>>> Using the following configuration:
>>>
>>> http_port 0.0.0.0:3128  name=MyPortNameID20 ssl-bump
>>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>>> cert=/etc/squid3/ssl/0c451f46b4d05031560d8195f30165cb.dyn
>>> sslproxy_foreign_intermediate_certs /etc/squid3/intermediate_ca.pem
>>> sslcrtd_program /lib/squid3/ssl_crtd -s
>>> /var/lib/squid/session/ssl/ssl_db -M 8MB sslcrtd_children 16
>>> startup=5
>>> idle=1 acl FakeCert ssl::server_name .apple.com acl FakeCert
>>&

Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2017-04-27 Thread Yuri Voinov
Squid does not ship with any intermediate certificates, nor with root CAs.

You can use this:

#  TAG: sslproxy_foreign_intermediate_certs
#Many origin servers fail to send their full server certificate
#chain for verification, assuming the client already has or can
#easily locate any missing intermediate certificates.
#
#Squid uses the certificates from the specified file to fill in
#these missing chains when trying to validate origin server
#certificate chains.
#
#The file is expected to contain zero or more PEM-encoded
#intermediate certificates. These certificates are not treated
#as trusted root certificates, and any self-signed certificate in
#this file will be ignored.
#Default:
# none

However, you should identify and collect them yourself.

The biggest problem:

Unlike root CAs, which can be taken from Mozilla's bundle, intermediate
CAs are spread over CA providers, have much shorter validity periods (in
most cases up to 5-7 years) and, for this reason, should be continuously
maintained by the proxy admin.
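A hedged sketch of that maintenance chore, assuming a POSIX shell with openssl and awk available. The function names are mine, not from the thread: fetch whatever chain a server presents and filter it down to the PEM blocks, which can then be appended to the file named by sslproxy_foreign_intermediate_certs.

```shell
# extract_pems: keep only PEM certificate blocks from stdin.
extract_pems() {
  awk '/-----BEGIN CERTIFICATE-----/{p=1} p{print} /-----END CERTIFICATE-----/{p=0}'
}

# fetch_chain: print every certificate a TLS server presents (leaf first).
# Needs network access; the hostname argument is supplied by the caller.
fetch_chain() {
  openssl s_client -connect "$1:443" -servername "$1" -showcerts </dev/null 2>/dev/null \
    | extract_pems
}
```

Usage might look like `fetch_chain www.boutique.afnor.org >> /etc/squid3/cabundle.pem`, followed by a Squid restart. Remember to review what you appended before trusting it, per the advice above.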

Also, remove this:

sslproxy_flags DONT_VERIFY_PEER
sslproxy_cert_error allow all

from your config. Don't. Never. They completely disable ANY security checks
for certificates, which leads to a giant vulnerability for your users.
sslproxy_cert_error should be restricted by very specific ACL(s) in your
config, covering only the sites you trust.
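In concrete terms, the restricted form suggested here might look like the following; the ACL name and domain are placeholders of mine:

```
acl broken_but_trusted dstdomain .intranet.example.com
sslproxy_cert_error allow broken_but_trusted
sslproxy_cert_error deny all
```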

28.04.2017 2:27, David Touzeau wrote:
> Hi yuri
>
> I did not know if squid has the Symantec intermediate certificate.
> Squid is installed as default...
> Any howto ?
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> behalf of Yuri Voinov
> Sent: Thursday, April 27, 2017 22:09
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] 3.5.25: (71) Protocol error (TLS code:
> SQUID_ERR_SSL_HANDSHAKE)
>
> Look. It can be intermediate certificates issue.
>
> Does Squid have Symantec intermediate certificates?
>
>
> 27.04.2017 22:47, David Touzeau пишет:
>> Hi,
>> I'm unable to access to https://www.boutique.afnor.org website.
>> I would like to know if this issue cannot be fixed and must deny bump 
>> website to fix it.
>> Without Squid the website is correctly displayed
>>
>> Squid claim an error page with "(71) Protocol error (TLS code:
>> SQUID_ERR_SSL_HANDSHAKE)"
>>
>> In cache.log: "Error negotiating SSL on FD 17:
>> error::lib(0):func(0):reason(0) (5/0/0)"
>>
>> Using the following configuration:
>>
>> http_port 0.0.0.0:3128  name=MyPortNameID20 ssl-bump 
>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
>> cert=/etc/squid3/ssl/0c451f46b4d05031560d8195f30165cb.dyn
>> sslproxy_foreign_intermediate_certs /etc/squid3/intermediate_ca.pem 
>> sslcrtd_program /lib/squid3/ssl_crtd -s 
>> /var/lib/squid/session/ssl/ssl_db -M 8MB sslcrtd_children 16 startup=5 
>> idle=1 acl FakeCert ssl::server_name .apple.com acl FakeCert 
>> ssl::server_name .icloud.com acl FakeCert ssl::server_name 
>> .mzstatic.com acl FakeCert ssl::server_name .dropbox.com acl ssl_step1 
>> at_step SslBump1 acl ssl_step2 at_step SslBump2 acl ssl_step3 at_step 
>> SslBump3 ssl_bump peek ssl_step1 ssl_bump splice FakeCert ssl_bump 
>> bump ssl_step2 all ssl_bump splice all
>>
>> sslproxy_options NO_SSLv2,NO_SSLv3,No_Compression sslproxy_cipher 
>> ALL:!SSLv2:!SSLv3:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:
>> !aNULL
>> :!eNULL
>> sslproxy_flags DONT_VERIFY_PEER
>> sslproxy_cert_error allow all
>>
>>
>>
>> Openssl info
>> --
>> --
>> --
>> --
>> ---
>>
>> openssl s_client -connect 195.115.26.58:443 -showcerts
>>
>> CONNECTED(0003)
>> depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU 
>> = "(c)
>> 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 
>> Public Primary Certification Authority - G5 verify return:1
>> depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, 
>> CN = Symantec Class 3 Secure Server CA - G4 verify return:1
>> depth=0 C = FR, ST = Seine Saint Denis, L = ST DENIS, O = ASSOCIATION 
>> FRANCAISE DE NORMALISATION, OU = ASSOCIATION FRANCAISE DE 
>> NORMALISATION, CN = www.boutique.afnor.org verify return:1
>> ---
>> Certificate chain
>>  0 s:/C=FR/ST=Seine Saint Denis/L=ST DENIS/O=ASSOCIATION FRANCAISE DE 
>> NORMALISATION/OU=ASSOCIATION FRANCAISE DE 
>> NORMALISATION/CN=www.boutique.afnor.org
>

Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2017-04-27 Thread Yuri Voinov
Look, this may be an intermediate certificates issue.

Does Squid have Symantec intermediate certificates?
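One way to collect such missing intermediates is to take the chain a server actually sends and keep everything except the leaf. A minimal sketch (the sample PEM below is a stand-in for real `openssl s_client -showcerts` output):

```shell
# Build a sample chain dump standing in for `openssl s_client -showcerts` output.
cat > /tmp/chain.pem <<'EOF'
-----BEGIN CERTIFICATE-----
leaf-certificate-data
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-certificate-data
-----END CERTIFICATE-----
EOF

# Keep every certificate block except the first (the leaf); the result can be
# appended to the file named by sslproxy_foreign_intermediate_certs.
awk '/-----BEGIN CERTIFICATE-----/{n++} n>1' /tmp/chain.pem > /tmp/intermediates.pem
cat /tmp/intermediates.pem
```

Against a real server you would feed the same awk filter from `openssl s_client -connect host:443 -showcerts < /dev/null` instead of the sample file.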


27.04.2017 22:47, David Touzeau wrote:
> Hi,
> I'm unable to access the https://www.boutique.afnor.org website.
> I would like to know whether this issue can be fixed, or whether I must
> exclude the website from bumping to work around it.
> Without Squid the website is displayed correctly.
>
> Squid returns an error page with "(71) Protocol error (TLS code:
> SQUID_ERR_SSL_HANDSHAKE)"
>
> In cache.log: "Error negotiating SSL on FD 17:
> error::lib(0):func(0):reason(0) (5/0/0)"
>
> Using the following configuration:
>
> http_port 0.0.0.0:3128  name=MyPortNameID20 ssl-bump
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> cert=/etc/squid3/ssl/0c451f46b4d05031560d8195f30165cb.dyn
> sslproxy_foreign_intermediate_certs /etc/squid3/intermediate_ca.pem
> sslcrtd_program /lib/squid3/ssl_crtd -s /var/lib/squid/session/ssl/ssl_db -M
> 8MB
> sslcrtd_children 16 startup=5 idle=1
> acl FakeCert ssl::server_name .apple.com
> acl FakeCert ssl::server_name .icloud.com
> acl FakeCert ssl::server_name .mzstatic.com
> acl FakeCert ssl::server_name .dropbox.com
> acl ssl_step1 at_step SslBump1
> acl ssl_step2 at_step SslBump2
> acl ssl_step3 at_step SslBump3
> ssl_bump peek ssl_step1
> ssl_bump splice FakeCert
> ssl_bump bump ssl_step2 all
> ssl_bump splice all
>
> sslproxy_options NO_SSLv2,NO_SSLv3,No_Compression
> sslproxy_cipher
> ALL:!SSLv2:!SSLv3:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:!aNULL
> :!eNULL
> sslproxy_flags DONT_VERIFY_PEER
> sslproxy_cert_error allow all
>
>
>
> Openssl info
> ---
>
> openssl s_client -connect 195.115.26.58:443 -showcerts
>
> CONNECTED(0003)
> depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = "(c)
> 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 Public
> Primary Certification Authority - G5
> verify return:1
> depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN =
> Symantec Class 3 Secure Server CA - G4
> verify return:1
> depth=0 C = FR, ST = Seine Saint Denis, L = ST DENIS, O = ASSOCIATION
> FRANCAISE DE NORMALISATION, OU = ASSOCIATION FRANCAISE DE NORMALISATION, CN
> = www.boutique.afnor.org
> verify return:1
> ---
> Certificate chain
>  0 s:/C=FR/ST=Seine Saint Denis/L=ST DENIS/O=ASSOCIATION FRANCAISE DE
> NORMALISATION/OU=ASSOCIATION FRANCAISE DE
> NORMALISATION/CN=www.boutique.afnor.org
>i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
> -BEGIN CERTIFICATE-
> ../..
> -END CERTIFICATE-
>  1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
>i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign,
> Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary
> Certification Authority - G5
> -BEGIN CERTIFICATE-
> ../..
> -END CERTIFICATE-
> ---
> Server certificate
> subject=/C=FR/ST=Seine Saint Denis/L=ST DENIS/O=ASSOCIATION FRANCAISE DE
> NORMALISATION/OU=ASSOCIATION FRANCAISE DE
> NORMALISATION/CN=www.boutique.afnor.org
> issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
> ---
> No client certificate CA names sent
> ---
> SSL handshake has read 3105 bytes and written 616 bytes
> ---
> New, TLSv1/SSLv3, Cipher is AES128-SHA
> Server public key is 2048 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> SSL-Session:
> Protocol  : TLSv1
> Cipher: AES128-SHA
> Session-ID:
> 833BA2346F50C5AAFC6B5188B4EBD9304CD25411BECFF0713F8D76C65D9D
> Session-ID-ctx:
> Master-Key:
> D2DF6C62264D03D7D44AF44EB8C0B1B7AD0E650D34DF6EBEB1CBEBFE4F30CB9C6F5080AA94F5
> D6B5955DD8DF06608416
> Key-Arg   : None
> PSK identity: None
> PSK identity hint: None
> SRP username: None
> Start Time: 1493311275
> Timeout   : 300 (sec)
> Verify return code: 0 (ok)
> ---
> read:errno=0
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


0x613DEC46.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 3.5.25: (71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

2017-04-27 Thread Yuri Voinov
This one?

http://i.imgur.com/kI9SxiN.png

It works under bump.


27.04.2017 22:47, David Touzeau wrote:
> Hi,
> I'm unable to access the https://www.boutique.afnor.org website.
> I would like to know whether this issue can be fixed, or whether I must
> exclude the website from bumping to work around it.
> Without Squid the website is displayed correctly.
>
> Squid returns an error page with "(71) Protocol error (TLS code:
> SQUID_ERR_SSL_HANDSHAKE)"
>
> In cache.log: "Error negotiating SSL on FD 17:
> error::lib(0):func(0):reason(0) (5/0/0)"
>
> Using the following configuration:
>
> http_port 0.0.0.0:3128  name=MyPortNameID20 ssl-bump
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

> cert=/etc/squid3/ssl/0c451f46b4d05031560d8195f30165cb.dyn
What is this? Which certificate?
> sslproxy_foreign_intermediate_certs /etc/squid3/intermediate_ca.pem
> sslcrtd_program /lib/squid3/ssl_crtd -s /var/lib/squid/session/ssl/ssl_db -M
> 8MB
> sslcrtd_children 16 startup=5 idle=1
> acl FakeCert ssl::server_name .apple.com
> acl FakeCert ssl::server_name .icloud.com
> acl FakeCert ssl::server_name .mzstatic.com
> acl FakeCert ssl::server_name .dropbox.com
> acl ssl_step1 at_step SslBump1
> acl ssl_step2 at_step SslBump2
> acl ssl_step3 at_step SslBump3
> ssl_bump peek ssl_step1
> ssl_bump splice FakeCert
> ssl_bump bump ssl_step2 all
> ssl_bump splice all
>
> sslproxy_options NO_SSLv2,NO_SSLv3,No_Compression
> sslproxy_cipher
> ALL:!SSLv2:!SSLv3:!ADH:!DSS:!MD5:!EXP:!DES:!PSK:!SRP:!RC4:!IDEA:!SEED:!aNULL
> :!eNULL
> sslproxy_flags DONT_VERIFY_PEER
> sslproxy_cert_error allow all
>
>
>
> Openssl info
> ---
>
> openssl s_client -connect 195.115.26.58:443 -showcerts
>
> CONNECTED(0003)
> depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = "(c)
> 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 Public
> Primary Certification Authority - G5
> verify return:1
> depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN =
> Symantec Class 3 Secure Server CA - G4
> verify return:1
> depth=0 C = FR, ST = Seine Saint Denis, L = ST DENIS, O = ASSOCIATION
> FRANCAISE DE NORMALISATION, OU = ASSOCIATION FRANCAISE DE NORMALISATION, CN
> = www.boutique.afnor.org
> verify return:1
> ---
> Certificate chain
>  0 s:/C=FR/ST=Seine Saint Denis/L=ST DENIS/O=ASSOCIATION FRANCAISE DE
> NORMALISATION/OU=ASSOCIATION FRANCAISE DE
> NORMALISATION/CN=www.boutique.afnor.org
>i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
> -BEGIN CERTIFICATE-
> ../..
> -END CERTIFICATE-
>  1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
>i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign,
> Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary
> Certification Authority - G5
> -BEGIN CERTIFICATE-
> ../..
> -END CERTIFICATE-
> ---
> Server certificate
> subject=/C=FR/ST=Seine Saint Denis/L=ST DENIS/O=ASSOCIATION FRANCAISE DE
> NORMALISATION/OU=ASSOCIATION FRANCAISE DE
> NORMALISATION/CN=www.boutique.afnor.org
> issuer=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
> Class 3 Secure Server CA - G4
> ---
> No client certificate CA names sent
> ---
> SSL handshake has read 3105 bytes and written 616 bytes
> ---
> New, TLSv1/SSLv3, Cipher is AES128-SHA
> Server public key is 2048 bit
> Secure Renegotiation IS supported
> Compression: NONE
> Expansion: NONE
> SSL-Session:
> Protocol  : TLSv1
> Cipher: AES128-SHA
> Session-ID:
> 833BA2346F50C5AAFC6B5188B4EBD9304CD25411BECFF0713F8D76C65D9D
> Session-ID-ctx:
> Master-Key:
> D2DF6C62264D03D7D44AF44EB8C0B1B7AD0E650D34DF6EBEB1CBEBFE4F30CB9C6F5080AA94F5
> D6B5955DD8DF06608416
> Key-Arg   : None

> PSK identity: None
> PSK identity hint: None
> SRP username: None
> Start Time: 1493311275
> Timeout   : 300 (sec)
> Verify return code: 0 (ok)
> ---
> read:errno=0
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] ssl bump and chrome 58

2017-04-27 Thread Yuri Voinov
3.5 and above have "server-first" only for backward compatibility.


27.04.2017 22:50, William Lima wrote:
> Hi,
>
> The problem occurs due to some ssl_bump directive actions, so Squid cannot 
> get all information (X.509 v3 extensions) to mimic. "ssl_bump server-first 
> all" should work.
>
> William Lima
>
> - Original Message -
> From: "Flashdown" <flashd...@data-core.org>
> To: "Yuri Voinov" <yvoi...@gmail.com>
> Cc: squid-users@lists.squid-cache.org
> Sent: Thursday, April 27, 2017 1:41:48 PM
> Subject: Re: [squid-users] ssl bump and chrome 58
>
> I've tested the registry setting and it worked out. You can copy the 
> below lines in a .reg file and execute it.
>
> Windows Registry Editor Version 5.00
>
> [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
> "EnableCommonNameFallbackForLocalAnchors"=dword:0001
>
>
> Best regards,
> Flashdown
>
> On 2017-04-27 18:34, Flashdown wrote:
>> Hello together,
>>
>> here is a workaround that you could use in the meanwhile.
>>
>> https://www.chromium.org/administrators/policy-list-3#EnableCommonNameFallbackForLocalAnchors
>>
>> Source:
>> https://www.chromium.org/administrators/policy-list-3#EnableCommonNameFallbackForLocalAnchors
>>>>>>> BEGIN
>> EnableCommonNameFallbackForLocalAnchors
>> Whether to allow certificates issued by local trust anchors that are
>> missing the subjectAlternativeName extension
>>
>> Data type:
>> Boolean [Windows:REG_DWORD]
>> Windows registry location:
>> 
>> Software\Policies\Google\Chrome\EnableCommonNameFallbackForLocalAnchors
>> Mac/Linux preference name:
>> EnableCommonNameFallbackForLocalAnchors
>> Android restriction name:
>> EnableCommonNameFallbackForLocalAnchors
>> Supported on:
>>
>> Google Chrome (Linux, Mac, Windows) since version 58 until 
>> version 65
>> Google Chrome OS (Google Chrome OS) since version 58 until 
>> version 65
>> Google Chrome (Android) since version 58 until version 65
>>
>> Supported features:
>> Dynamic Policy Refresh: Yes, Per Profile: No
>> Description:
>>
>> When this setting is enabled, Google Chrome will use the
>> commonName of a server certificate to match a hostname if the
>> certificate is missing a subjectAlternativeName extension, as long as
>> it successfully validates and chains to a locally-installed CA
>> certificates.
>>
>> Note that this is not recommended, as this may allow bypassing the
>> nameConstraints extension that restricts the hostnames that a given
>> certificate can be authorized for.
>>
>> If this policy is not set, or is set to false, server certificates
>> that lack a subjectAlternativeName extension containing either a DNS
>> name or IP address will not be trusted.
>> Example value:
>> 0x (Windows), false (Linux), false (Android),  
>> (Mac)
>> <<<<<<<<<<<< END
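On Linux the same policy can be deployed without the registry, as a managed-policy JSON file, e.g. under /etc/opt/chrome/policies/managed/ (the exact path is an assumption based on common Chrome packaging):

```json
{
  "EnableCommonNameFallbackForLocalAnchors": true
}
```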
>>
>>
>>
>> On 2017-04-27 18:16, Flashdown wrote:
>>> Hello together,
>>>
>>> Suddenly I am facing the same issue when users Chrome has been updated
>>> to V58. I am running Squid 3.5.23.
>>>
>>> This is the reason:
>>> https://www.thesslstore.com/blog/security-changes-in-chrome-58/
>>> Short: Common Name Support Removed in Chrome 58 and Squid does not
>>> create certs with DNS-Alternatives names in it. Because of that it
>>> fails.
>>>
>>> Chrome says:
>>> 1. Subject Alternative Name Missing - The certificate for this site
>>> does not contain a Subject Alternative Name extension containing a
>>> domain name or IP address.
>>> 2. Certificate Error - There are issues with the site's certificate
>>> chain (net::ERR_CERT_COMMON_NAME_INVALID).
>>>
>>> Can we get Squid to add the DNS-Alternative Name to the generated
>>> certs? Since this is what I believe is now required in Chrome 58+
>>>
>>> Best regards,
>>> Enrico
>>>
>>>
>>>> On 2017-04-21 15:35, Yuri Voinov wrote:
>>>> I see no problem with it on all five SSL Bump-aware servers with new
>>>>> Chrome. So far so good.
>>>>
>>>>
>>>>> 21.04.2017 18:29, Marko Cupać wrote:
>>>>> Hi,
>>>>>
>>>>> I have squid setup with ssl bump which worked fine, but since I 
>>>>> updated
>>>>> chrome to 5

Re: [squid-users] Huge memory required for squid 3.5

2017-04-26 Thread Yuri Voinov


26.04.2017 21:47, Amos Jeffries wrote:
> On 27/04/17 03:35, Yuri Voinov wrote:
>> Amos, stupid question.
>>
>> Why sessions can't share CA's data cached in memory? shared_ptr invented
>> already.
>>
>> This is openssl issue or squid's?
>
> It is in OpenSSL. We use shared_ptr etc in Squid for the things we are
> responsible for.
Hmm. As I thought. So we have the same issue in other versions of
Squid too, right? Up to 5.x?
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Huge memory required for squid 3.5

2017-04-25 Thread Yuri Voinov
OK, but how would NO_DEFAULT_CA help with this?


26.04.2017 4:29, Amos Jeffries wrote:
> On 26/04/17 09:58, Yuri Voinov wrote:
>>
>> Seriously? 2 Gb RAM for default CA?!
>>
>>
>
> 600 (number of default CAs) x 2048 (minimum size of CA cert)  -> ~1 MB
>
> All it would take is ~2000 TLS sessions.
>
> Since the session remains cached in OpenSSL after the TCP connection
> is gone ... 2GB is not that much.
>
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
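A quick back-of-the-envelope check of that estimate, taking the figures above as bytes per certificate and as cached sessions (both are assumptions carried over from the message):

```shell
# ~600 default CA certs x ~2048 bytes each, duplicated per cached TLS session
per_session=$((600 * 2048))   # CA data potentially held by one session, in bytes
sessions=2000                 # cached TLS sessions
total=$((per_session * sessions))
echo "per session: $per_session bytes"
echo "total: $((total / 1024 / 1024)) MB"
```

This lands in the low gigabytes, which is consistent with the "2GB is not that much" remark.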



Re: [squid-users] Huge memory required for squid 3.5

2017-04-25 Thread Yuri Voinov
Ah, shi (goes to set flag)


26.04.2017 4:29, Amos Jeffries wrote:
> On 26/04/17 09:58, Yuri Voinov wrote:
>>
>> Seriously? 2 Gb RAM for default CA?!
>>
>>
>
> 600 (number of default CAs) x 2048 (minimum size of CA cert)  -> ~1 MB
>
> All it would take is ~2000 TLS sessions.
>
> Since the session remains cached in OpenSSL after the TCP connection
> is gone ... 2GB is not that much.
>
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] ssl bump and chrome 58

2017-04-21 Thread Yuri Voinov
I see no problem with it on all five SSL Bump-aware servers with new
Chrome. So far so good.


21.04.2017 18:29, Marko Cupać wrote:
> Hi,
>
> I have squid setup with ssl bump which worked fine, but since I updated
> chrome to 58 it won't display any https sites, throwing
> NET::ERR_CERT_COMMON_NAME_INVALID. https sites still work in previous
> chrome version, as well as in IE.
>
> Anything I can do in squid config to get ssl-bumped sites in chrome
> again?
>
> Thank you in advance,



Re: [squid-users] HTTPS woes

2017-04-18 Thread Yuri Voinov
I have an automated cron job that refreshes the Mozilla CA bundle on a
monthly basis.

Intermediate CAs, however, require unscheduled maintenance; I maintain
them on demand.
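Such a refresh job can be sketched as a crontab entry (the bundle URL points at curl's republished Mozilla root bundle; the destination path and the reload step are assumptions):

```shell
# m h dom mon dow  command: refresh the Mozilla root bundle on the 1st of each month
0 3 1 * *  curl -fsSL https://curl.se/ca/cacert.pem -o /etc/squid/mozilla-ca-bundle.pem && squid -k reconfigure
```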


18.04.2017 20:17, Olly Lennox wrote:
> Thanks Yuri! The Mozilla Bundle has worked!! Most of the major sites
> seem to be working which is all we need. How often do these
> certificates refresh? Would they need updating every month or so?
>  
> oli...@lennox-it.uk
> lennox-it.uk <http://lennox-it.uk/>
> tel: 07900 648 252
>
>
> --------
> *From:* Yuri Voinov <yvoi...@gmail.com>
> *To:* Olly Lennox <oli...@lennox-it.uk>;
> "squid-users@lists.squid-cache.org" <squid-users@lists.squid-cache.org>
> *Sent:* Tuesday, 18 April 2017, 14:43
> *Subject:* Re: [squid-users] HTTPS woes
>
> You are talking about two different things.
> 1. Root CAs are usually built into clients. For standalone use, the
> root CA bundle (from Mozilla) is usually distributed with OpenSSL
> distributions. If you need it (or your OpenSSL distribution does not
> contain the root CAs), you can find the separately distributed Mozilla
> CA bundle with a short Google search:
> https://www.google.com/search?q=Mozilla+CA+bundle
> 2. Intermediate CAs are subordinate to the root CAs. No governed
> repository of them exists (because maintaining one is work, manual
> work, and somebody has to do it); moreover, they are spread across the
> CA authorities. There is no automated tool to maintain such an
> intermediate list. A further problem: intermediate CAs usually have a
> much shorter validity period than roots, and must be kept up to date
> at all times.
> Finally: if you want to use Squid with SSL Bump, you should understand
> the PKI infrastructure and, yes, you must maintain the root CA and
> intermediate CAs on the proxy yourself, all the time. There is no
> service, free or paid, that will do it for you.
>
> 18.04.2017 19:35, Olly Lennox wrote:
>> So anyone who wants to use Squid over HTTPS in this way has to build
>> this repository themselves by manually downloading all the CA bundles?
>>  
>>
>>
>>
>> 
>> *From:* Yuri <yvoi...@gmail.com> <mailto:yvoi...@gmail.com>
>> *To:* Olly Lennox <oli...@lennox-it.uk> <mailto:oli...@lennox-it.uk>;
>> "squid-users@lists.squid-cache.org"
>> <mailto:squid-users@lists.squid-cache.org>
>> <squid-users@lists.squid-cache.org>
>> <mailto:squid-users@lists.squid-cache.org>
>> *Sent:* Tuesday, 18 April 2017, 14:03
>> *Subject:* Re: [squid-users] HTTPS woes
>>
>>
>>
>> 18.04.2017 18:56, Olly Lennox wrote:
>>> I'm using 
>>>
>>> sslproxy_foreign_intermediate_certs
>>>
>>> Is this the same thing?
>> No. First you need the root CAs available to Squid. Root CAs and
>> intermediates are different things.
>>>
>>> Also is there anywhere to get a bundle of all the major CA
>>> intermdiate certs or do you have to download them all manually?
>> No. You have to build it yourself.
>>
>>>
>>> Cheers,
>>>  
>>> oli...@lennox-it.uk <mailto:oli...@lennox-it.uk>
>>> lennox-it.uk <http://lennox-it.uk/>
>>> tel: 07900 648 252
>>>
>>>
>>> 
>>> *From:* Yuri <yvoi...@gmail.com> <mailto:yvoi...@gmail.com>
>>> *To:* squid-users@lists.squid-cache.org
>>> <mailto:squid-users@lists.squid-cache.org>
>>> *Sent:* Tuesday, 18 April 2017, 13:51
>>> *Subject:* Re: [squid-users] HTTPS woes
>>>
>>> Try specifying the root CA bundle/directory explicitly with one of
>>> these parameters:
>>>
>>>
>>> #  TAG: sslproxy_cafile
>>> #file containing CA certificates to use when verifying server
>>> #certificates while proxying https:// URLs
>>> #Default:
>>> # none
>>>
>>> #  TAG: sslproxy_capath
>>> #directory containing CA certificates to use when verifying
>>> #server certificates while proxying https:// URLs
>>> #Default:
>>> # none
>>>
>>>
>>>
>>> 18.04.2017 18:46, Olly Lennox wrote:
>>> > Hi All,
>>> >
>>> > Still having problems here. This is my https config now:
>>> >
>>> >
>>> > -https_port 3129 intercept
>>> ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_siz

Re: [squid-users] HTTPS woes

2017-04-18 Thread Yuri Voinov
You are talking about two different things.

1. Root CAs are usually built into clients. For standalone use, the root
CA bundle (from Mozilla) is usually distributed with OpenSSL
distributions. If you need it (or your OpenSSL distribution does not
contain the root CAs), you can find the separately distributed Mozilla
CA bundle with a short Google search:

https://www.google.com/search?q=Mozilla+CA+bundle

2. Intermediate CAs are subordinate to the root CAs. No governed
repository of them exists (because maintaining one is work, manual work,
and somebody has to do it); moreover, they are spread across the CA
authorities. There is no automated tool to maintain such an intermediate
list. A further problem: intermediate CAs usually have a much shorter
validity period than roots, and must be kept up to date at all times.

Finally: if you want to use Squid with SSL Bump, you should understand
the PKI infrastructure and, yes, you must maintain the root CA and
intermediate CAs on the proxy yourself, all the time. There is no
service, free or paid, that will do it for you.
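In squid.conf terms, the two halves map onto separate directives; a sketch (the paths are examples only, not canonical locations):

```
# 1. Trusted root CAs used when verifying origin servers
sslproxy_cafile /etc/ssl/certs/ca-certificates.crt

# 2. Manually maintained intermediates that origin servers fail to send
sslproxy_foreign_intermediate_certs /etc/squid3/intermediate_ca.pem
```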


18.04.2017 19:35, Olly Lennox wrote:
> So anyone who wants to use Squid over HTTPS in this way has to build
> this repository themselves by manually downloading all the CA bundles?
>  
>
>
>
> 
> *From:* Yuri 
> *To:* Olly Lennox ;
> "squid-users@lists.squid-cache.org" 
> *Sent:* Tuesday, 18 April 2017, 14:03
> *Subject:* Re: [squid-users] HTTPS woes
>
>
>
> 18.04.2017 18:56, Olly Lennox wrote:
>> I'm using 
>>
>> sslproxy_foreign_intermediate_certs
>>
>> Is this the same thing?
> No. First you need the root CAs available to Squid. Root CAs and
> intermediates are different things.
>>
>> Also is there anywhere to get a bundle of all the major CA
>> intermdiate certs or do you have to download them all manually?
> No. You have to build it yourself.
>
>>
>> Cheers,
>>  
>> oli...@lennox-it.uk 
>> lennox-it.uk 
>> tel: 07900 648 252
>>
>>
>> 
>> *From:* Yuri  
>> *To:* squid-users@lists.squid-cache.org
>> 
>> *Sent:* Tuesday, 18 April 2017, 13:51
>> *Subject:* Re: [squid-users] HTTPS woes
>>
>> Try specifying the root CA bundle/directory explicitly with one of
>> these parameters:
>>
>>
>> #  TAG: sslproxy_cafile
>> #file containing CA certificates to use when verifying server
>> #certificates while proxying https:// URLs
>> #Default:
>> # none
>>
>> #  TAG: sslproxy_capath
>> #directory containing CA certificates to use when verifying
>> #server certificates while proxying https:// URLs
>> #Default:
>> # none
>>
>>
>>
>> 18.04.2017 18:46, Olly Lennox wrote:
>> > Hi All,
>> >
>> > Still having problems here. This is my https config now:
>> >
>> >
>> > -https_port 3129 intercept ssl-bump
>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
>> cert=/etc/squid3/ssl_cert/squid.crt
>> key=/etc/squid3/ssl_cert/squid.key options=NO_SSLv3
>> dhparams=/etc/squid3/ssl_cert/dhparam.pem
>> >
>> > acl step1 at_step SslBump1
>> > ssl_bump peek step1
>> > ssl_bump bump all
>> > sslproxy_options NO_SSLv2,NO_SSLv3,SINGLE_DH_USE
>> > sslproxy_cipher
>> EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>> >
>> > sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/ssl_db -M 4MB
>> > sslcrtd_children 8 startup=1 idle=1
>> >
>> > -
>> >
>> >
>> > I'm running version 3.5.23 with openssl 1.0. I've had to disable
>> libecap because I couldn't build 3.5 with ecap enabled. I'm getting
>> the following error when trying to connect with SSL:
>> >
>> > -
>> >
>> > The following error was encountered while trying to retrieve the
>> URL: https://www.google.co.uk/*
>> >
>> > Failed to establish a secure connection to 216.58.198.67
>> >
>> > The system returned:
>> >
>> > (71) Protocol error (TLS code:
>> X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY)
>> > SSL Certficate error: certificate issuer (CA) not known:
>> /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
>> >
>> > This proxy and the remote host failed to negotiate a mutually
>> acceptable security settings for handling your request. It is
>> possible that the remote host does not support secure connections, or
>> the proxy is not satisfied with the host security credentials.
>> >
>> > Your cache administrator is webmaster.
>> >
>> > Generated Tue, 18 Apr 2017 12:23:40 GMT by raspberrypi (squid/3.5.23)
>> > -
>> >
>> > The CA is always listed as not known not matter what site I try I
>> always get this error.
>> >
>> > Any ideas?
>> 

Re: [squid-users] HTTPS woes

2017-04-13 Thread Yuri Voinov


13.04.2017 22:57, Olly Lennox wrote:
> Hi There,
>
> I've been battling for the last few days on a little project to setup a 
> Raspberry PI device as a small parental blocking server. I've managed to 
> configure the device to work as a transparent proxy using squid which is 
> assigned as the default gateway via DHCP and after a lot of messing about 
> I've finally got to the point where it's routing traffic correctly, proxying 
> and blocking unwanted websites over HTTP.
>
> The problem I have is that for the life of me I cannot get things to work 
> over HTTPS. It's working over the older, insecure web browsers where anything 
> goes but the more modern browsers will not accept the SSL certificates and 
> fail with insecure messages. I've tried various ways of generating a cert and 
> also generating a CA cert and signing my other cert with it to no avail. I've 
> had a mixture of errors back from the browser from WEAK_ALGORITHM to 
> BAD_AUTHORITY to INVALID_CERT.
>
> I've been using openssl to generate self-signed certificates and create a der 
> file. Below is a recent attempt but I've tried lots of different approaches:
>
> 
> openssl req -x509 -nodes -sha256 -days 3650 -newkey rsa:2048 -keyout 
> squid.key -out squid.crt 
> openssl req -new -x509 -key squid.key -out squid.pem 
> openssl x509 -in squid.pem -inform pem -out squid.der -outform der
> 
>
>
> Then my config in Squid is like this, the dhparams file I generated as per 
> instructions in the squid wiki:
First of all: which version of Squid is this?
>
> 
> http_port 3128 intercept
> https_port 3129 intercept ssl-bump generate-host-certificates=on 
> dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/ssl_cert/squid.crt 
> key=/etc/squid3/ssl_cert/squid.key options=NO_SSLv3 
> dhparams=/etc/squid3/ssl_cert/dhparam.pem 
Is your Squid built with interception support? Show the output of squid -v.
>
> ssl_bump server-first all 
This ^ option is valid only up to Squid 3.4. If you are
using 3.5.x, you should use the new peek-and-splice rules.
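On 3.5.x a minimal peek-and-bump replacement for that line looks like this (the same three lines appear in working configurations elsewhere in this thread):

```
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```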
> sslproxy_cert_error allow all 
> sslproxy_flags DONT_VERIFY_PEER 
Don't do this. Never. This forces
Squid to ignore (and hide) all SSL security issues, both from the user
and from you.
> sslproxy_cipher 
> EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
>  
>
> 
>
> The only routing rules I'm using are to forward port 80/443 to 3128/2129 
> respectively and also a POST_ROUTING "masquerade" rule which I got from a 
> guide (and I'm not sure I 100% understand!)
Ports 80/443 should be NATed to Squid's ports on the Squid box itself.
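A typical rule pair for that redirection, run on the Squid box itself (a sketch; the ingress interface name is an assumption):

```shell
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3129
```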
>  
>
> Can anyone tell me where I'm going wrong? This is only for use on very small 
> networks (home router + 2 or 3 trusted devices and users) so security between 
> the rPI and the client is not a major concern - I just want it to work in the 
> most simple and foolproof way possible.
You are doing only one thing wrong: you have not given the information
needed to resolve the issue.
At the very least: the Squid version and build options.
>
> Any advice would be very welcome.
>
> Thanks,
>
> Olly
> oli...@lennox-it.uk
> lennox-it.uk
> tel: 07900 648 252
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] [squid-dev] [RFC] Changes to http_access defaults

2017-04-13 Thread Yuri Voinov


13.04.2017 21:14, Dan Purgert wrote:
> Quoting Alex Rousskov :
>
>> On 04/12/2017 12:16 PM, Amos Jeffries wrote:
>>
>>> Changes to http_access defaults
>>
>> Clearly stating what you are trying to accomplish with these changes may
>> help others evaluate your proposal. Your initial email focuses on _how_
>> you are going to accomplish some implied/vague goal. What is the goal
>> here?
>>
>>
>>> I have become convinced that Squid always checks those
>>> security rules, then do the custom access rules. All other orderings
>>> seem to have turned out to be problematic and security-buggy in some
>>> edge cases or another.
>>
>> s/Squid always checks/Squid should always check/
>>
>>
>>> What are peoples opinions about making the following items built-in
>>> defaults?
>>>
>>>  acl Safe_ports port 21 80 443
>>>  acl CONNECT_ports port 443
>>>  acl CONNECT method CONNECT
>>>
>>>  http_access deny !Safe_ports
>>>  http_access deny CONNECT !CONNECT_ports
>>
>>> The above change will have some effect on installations that try to use
>>> an empty squid.conf.
>>
>> And on many other existing installations, of course, especially on those
>> with complex access rules which are usually the most difficult to
>> modify/adjust. In other words, this is a pretty serious change.
>>
>>
>
> How would a "built-in default" alter an existing setup? I mean, in
> every other instance that I can think of, if the config file includes
> the directive, the config file's version overrides the default ...
This is normal behaviour. The system administrator should be able to
override ANY default.
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Howto fix X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY Squid error

2017-04-07 Thread Yuri Voinov
;-)

No problem, Raf. This really is a much better solution ;-)

07.04.2017 22:44, Rafael Akchurin wrote:
> Hello Yuri,
>
> Yes this is much better solution!
>
> Best regards,
> Rafael Akchurin
>
> On 7 Apr 2017 at 18:20, Yuri Voinov <yvoi...@gmail.com
> <mailto:yvoi...@gmail.com>> wrote the following:
>
>> #  TAG: sslproxy_foreign_intermediate_certs
>> #Many origin servers fail to send their full server certificate
>> #chain for verification, assuming the client already has or can
>> #easily locate any missing intermediate certificates.
>> #
>> #Squid uses the certificates from the specified file to fill in
>> #these missing chains when trying to validate origin server
>> #certificate chains.
>> #
>> #The file is expected to contain zero or more PEM-encoded
>> #intermediate certificates. These certificates are not treated
>> #as trusted root certificates, and any self-signed certificate in
>> #this file will be ignored.
>> #Default:
>> # none
>>
>> Heh?
>>
>>
>> 07.04.2017 15:13, Rafael Akchurin wrote:
>>>
>>> Hello everyone,
>>>
>>> Added new article for intermediate certificates and
>>> X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error when bumping SSL.
>>> Hopefully will be helpful/interesting for someone
>>> https://docs.diladele.com/faq/squid/fix_unable_to_get_issuer_cert_locally.html
>>>
>>>  
>>>
>>> Best regards,
>>> Rafael Akchurin
>>>
>>> Diladele B.V.
>>>
>>>
>>>
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>>
>> -- 
>> Bugs to the Future
>> <0x613DEC46.asc>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> <mailto:squid-users@lists.squid-cache.org>
>> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Howto fix X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY Squid error

2017-04-07 Thread Yuri Voinov
#  TAG: sslproxy_foreign_intermediate_certs
#Many origin servers fail to send their full server certificate
#chain for verification, assuming the client already has or can
#easily locate any missing intermediate certificates.
#
#Squid uses the certificates from the specified file to fill in
#these missing chains when trying to validate origin server
#certificate chains.
#
#The file is expected to contain zero or more PEM-encoded
#intermediate certificates. These certificates are not treated
#as trusted root certificates, and any self-signed certificate in
#this file will be ignored.
#Default:
# none

Heh?
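In practice, using the directive quoted above comes down to a single squid.conf line; the bundle path below is a hypothetical example.

```
# squid.conf: supply missing intermediates from a local PEM bundle; note that
# certificates in this file are NOT treated as trusted roots
sslproxy_foreign_intermediate_certs /usr/local/squid/etc/intermediate_CA_bundle.pem
```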


07.04.2017 15:13, Rafael Akchurin writes:
>
> Hello everyone,
>
> Added new article for intermediate certificates and
> X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error when bumping SSL.
> Hopefully it will be helpful/interesting to someone
> https://docs.diladele.com/faq/squid/fix_unable_to_get_issuer_cert_locally.html
>
>  
>
> Best regards,
> Rafael Akchurin
>
> Diladele B.V.
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Howto fix X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY Squid error

2017-04-07 Thread Yuri Voinov
I would not install intermediate certificates in the system store. For
one thing, they have a much shorter validity period; for another, Squid
has functionality to add missing intermediate certificates from a
separate file. For security reasons, intermediate certificates require
additional administrator attention, and they should be kept separate.
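A hedged sketch of that separation, with dummy data: fetched intermediates live in their own directory and are concatenated into one PEM bundle that `sslproxy_foreign_intermediate_certs` can point at, so they never mix with the trusted root store.

```shell
# keep fetched intermediates apart from the trusted roots (demo data only)
mkdir -p intermediates
printf -- '-----BEGIN CERTIFICATE-----\ndemo-intermediate\n-----END CERTIFICATE-----\n' \
  > intermediates/demo.pem
# rebuild the single bundle Squid reads
cat intermediates/*.pem > intermediate_CA_bundle.pem
```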


07.04.2017 15:13, Rafael Akchurin writes:
>
> Hello everyone,
>
> Added new article for intermediate certificates and
> X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY error when bumping SSL.
> Hopefully it will be helpful/interesting to someone
> https://docs.diladele.com/faq/squid/fix_unable_to_get_issuer_cert_locally.html
>
>  
>
> Best regards,
> Rafael Akchurin
>
> Diladele B.V.
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Google Captcha, can something be done to help it with squid?

2017-04-03 Thread Yuri Voinov


04.04.2017 3:10, Eliezer Croitoru writes:
> Why is this relevant to BlueCoat and not Squid?
> It happens for users of both systems, and Google clearly states that it's
> related to this specific IP's activity.
Ah. You didn't say that at first. So there may be a bot behind the proxy,
or routing skewed to a Tor-like external IP.
> (while Bing and others work just fine)
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Yuri Voinov
> Sent: Monday, April 3, 2017 10:51 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Google Captcha, can something be done to help it 
> with squid?
>
> I guess the issue is relevant to BlueCoat, not to Squid.
>
> AFAIK BlueCoat ignores the RFCs; Squid does not.
>
>
> 04.04.2017 1:45, Eliezer Croitoru writes:
>> Hey List,
>>
>> I got a couple of complaints from a couple of sysadmins about Google
>> forcing their clients to verify that they are indeed human in some very
>> horrible ways.
>> But when they are logged in as a user it's "all good" and the search
>> is working properly.
>> These networks are using Squid and BlueCoat for 1k-plus users and I am
>> not sure what to do to ease things for the clients.
>> I can verify the clients using a username and password with basic and
>> digest auth, but I am not sure what I can do to make Google services
>> understand that I have no bots in my networks.
>>
>> Has anyone noticed such an issue? Has anyone found a specific
>> solution for this?
> With Squid there is no issue; only when Squid is torified, but that is
> expected and not an issue.
>> Maybe some header?
> I don't think so. It still seems BlueCoat-related, not Squid-related.
>> Thanks,
>> Eliezer
>>
>> 
>> Eliezer Croitoru
>> Linux System Administrator
>> Mobile: +972-5-28704261
>> Email: elie...@ngtech.co.il
>>
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> --
> Bugs to the Future
>



Re: [squid-users] Google Captcha, can something be done to help it with squid?

2017-04-03 Thread Yuri Voinov
I guess the issue is relevant to BlueCoat, not to Squid.

AFAIK BlueCoat ignores the RFCs; Squid does not.


04.04.2017 1:45, Eliezer Croitoru writes:
> Hey List,
>
> I got a couple of complaints from a couple of sysadmins about Google forcing
> their clients to verify that they are indeed human in some very horrible ways.
> But when they are logged in as a user it's "all good" and the search is
> working properly.
> These networks are using Squid and BlueCoat for 1k-plus users and I am not
> sure what to do to ease things for the clients.
> I can verify the clients using a username and password with basic and
> digest auth, but I am not sure what I can do to make Google services
> understand that I have no bots in my networks.
>
> Has anyone noticed such an issue? Has anyone found a specific solution for
> this?
With Squid there is no issue; only when Squid is torified, but that is
expected and not an issue.
> Maybe some header?
I don't think so. It still seems BlueCoat-related, not Squid-related.
>
> Thanks,
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] hsc-dynamic-cache: relied on storeID rules? Removed in 3.5.20?

2017-03-29 Thread Yuri Voinov


29.03.2017 5:55, L A Walsh writes:
> Eliezer Croitoru wrote:
>> Hey Linda,
>>
>> As the pathcer\author of StoreID I will try to clarify what might
>> seems odd.
>> StoreID is a "static" rule which is one of the squid cache fundamentals.
>> The feature is the option to tweak this internal cache object ID.
>> This is a very static feature and will not be changed for a very long
>> time from now on.
>> Most of the public helpers I have seen are very "simple" and rely on
>> very simple things.
> 
> Makes sense, otherwise too prone to breakage.
I don't think so. The more effective (and complex) solutions are simply
not free (nor, of course, open source). And I don't think C++ code is
that easy to break. :)
>
>
>
>>
>> But since many systems these days are aware of the option to predict
>> what would be the next url(think about an exe or other binary that
>> can be replaced on-the-fly\in-transit) the developers changed and
>> change(like a Diffe Hellman) their way of transporting content to the
>> end client.
>> Due to this "Diffe Hellman" feature that many added it makes the more
>> simple scripts useless since the developers became much smarter.
> 
> yeah, my use case was fairly simple -- same
> person w/same browser, watching same vid a 2nd time.
>
> They gave me many "kudos" and noticed that
> youtube was noticeably faster to browse through when I
> implemented the SSL interception on the squid proxy
> that web traffic goes through.  In that case, it was mainly
> the caching of the video-thumbs that noticeably sped up
> moving through YT pages.
YT is now a very different thing. As I have said many times, YT actively
opposes caching (to push ISPs toward Google Global Cache) by shuffling
the underlying CDN URLs and/or encrypting parts of the URL. So it is very
difficult these days to make YT videos cacheable at runtime. Only the
static YT pages can be cached (which is to say, almost nothing).
>
>> Indeed you will see TCP_MISS and it won't cache but this is only
>> since the admin might have the illusion that an encrypted content can
>> be predicted when a Diffie  Helman cryptography is creating the plain
>> url's these days.
> 
> Oh yeah.  Have noted that there are an infinite number
> of ways to access the same URL and have thought about ways I might
> collapse them to 1 URL, but it's just idle thinking as
> other things on the plate.
>
> One good idea that didn't get updated was an
> extension in FF, that tried to store some of the latest
> Javascript libs that sites used so if they asked for the lib
> from a common site (like jquery), it might return the result
> from a local cache. 
> It wouldn't help for those sites that merge
> multiple JS files and minify them.
>
> But many sites have 15-20 different websites that are "included" to
> get different elements (fonts, stylesheets,
> JS libs, etc) from different sources.  They seem to
In this case Store ID is useful. You simply write a regex which combines
these several URLs into one.
> include URL's like a developer would use
> #include files...(and often take forever to load).
>
> multiple elements from different URLs like they would
> use multiple header include files in a local compilation.
>
>
>> Hope It helped,
>> Eliezer
>
> Thanks for the explanation, certainly more useful
> than just telling someone:
>
> "the web broke it"... :-)
It is hardly an explanation that will solve a specific problem by itself.
That takes some effort, often a very big one. And yes, the web actively
opposes caching, or at least does not care about caching at all. Is it
any wonder that those who could solve this problem are in no hurry to
spread the results of their hard work around the world free of charge? :)
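For reference, the Store ID mechanism discussed in this thread is wired up roughly like this with the stock `storeid_file_rewrite` helper shipped with Squid 3.4+; the paths and the CDN pattern are hypothetical examples.

```
# squid.conf: hand URLs to a StoreID helper
store_id_program /usr/local/squid/libexec/storeid_file_rewrite /usr/local/squid/etc/storeid.rules
store_id_children 5 startup=1

# storeid.rules: tab-separated "regex<TAB>replacement" lines, e.g. collapse
# numbered CDN hostnames serving identical objects onto one cache key
^http:\/\/cdn[0-9]+\.example\.com\/(.*)	http://cdn.example.com.squid.internal/$1
```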
>
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] hsc-dynamic-cache: relied on storeID rules? Removed in 3.5.20?

2017-03-27 Thread Yuri Voinov


28.03.2017 1:26, L A Walsh writes:
> This caught my attention as my housemate tends to watch a lot of
> youtube videos, and caching some of them might speed up their
> access, so was trying to understand what was meant in your post:
>
> Yuri Voinov wrote:
>> Things change on the web on a regular basis. Nothing in the world is
>> permanent.
>>
>> So the Store ID rules have lost their relevance and no longer work.
What word is not clear here?
>>   
> 
>Is the problem that "Store ID rules lost relevance" caused by a
> change from squid 3.5.19 -> 3.5.20?
It happened independently; things change from time to time.
>
>That doesn't sound so much like a change on the web, but a change
> in squid.
Heh? Really?
>
>Were storeID rules removed in 3.5.20?  If that's the case,
> what might have replaced them?
Why did it happen? What does the documentation tell us?
>
> @Eduardo: am I to understand that this plugin worked in 3.5.19, but
> not in 3.5.20 and above?  (trying to get the versions right)
>
> Also, Eduardo -- what specific features above 3.5.19 were you hoping
> to include by upgrading?  I.e. is 3.5.19 not working for you for some
> reason?  It might be easier to cherry pick changes that you needed from
> some later version back into 3.5.19 if some major feature
> (like storeID rules?) was removed after that version...
Right now I'm running Squid v5.x. With store ID. So?
>
> Thanks Yuri!
> Linda
>
>



Re: [squid-users] Free Squid helper for dynamic content caching

2017-03-22 Thread Yuri Voinov
I'm afraid that rewriting the rules is a big job. I strongly doubt that
anyone will publish the result in open access for free. Saving traffic
is money.

Here is what I want to say: as far as I know, there are no really
effective helpers in the public domain.

22.03.2017 21:38, Eduardo Carneiro writes:
> Hi Yuri.
>
> The reason I came here is because I've already tried but I didn't succeed. I
> really expected a more specific answer. Not just "You can fix it yourself,
> the code is open."
>
> Anyway, thanks. I'll Keep trying to fix this.
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Free-Squid-helper-for-dynamic-content-caching-tp4670617p4681905.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Free Squid helper for dynamic content caching

2017-03-22 Thread Yuri Voinov
Things change on the web on a regular basis. Nothing in the world is
permanent.

So the Store ID rules have lost their relevance and no longer work.

You can fix it yourself, the code is open.


22.03.2017 20:35, Eduardo Carneiro writes:
> I have been using this helper for a while. It works very well.
> Congratulations!
>
> But I noticed that after squid 3.5.19, this helper doesn't work anymore. Is
> this a known problem? Is there any way to fix this?
>
> Best regards.
> Eduardo Carneiro
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Free-Squid-helper-for-dynamic-content-caching-tp4670617p4681896.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] URL list from a URL

2017-03-21 Thread Yuri Voinov


22.03.2017 2:32, Jason B. Nance writes:
> I'm sorry, this message was sent prematurely. :-\
>
> Completed message follows.
>
>
> Hi Yuri,
>
> I should have mentioned that I'm not caching, I'm only using Squid for 
> whitelisting in this case.  Would you still say this is the right path?  It 
> seems that there is a fair amount of hard coding in this method at least 
> based on:
>
> http://wiki.squid-cache.org/Features/StoreID/DB
>
> I guess a URL regex could also work given that all the URIs are similar.
Mmmm. Maybe. You can write a common regex for all the mirrors, yes.
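A hedged sketch of such a regex, shown as a plain shell function so it can be tried outside Squid. The `centos.squid.internal` key is an arbitrary name following the `.squid.internal` convention for StoreID namespaces, and the pattern only covers the common mirror path layouts seen in this thread.

```shell
# collapse per-mirror CentOS repo URLs onto one normalised cache key
normalize() {
  printf '%s\n' "$1" |
    sed -E 's,^http://[^/]+/(pub/|repos/)?[Cc]ent[Oo][Ss]/,http://centos.squid.internal/,'
}

normalize 'http://linux.mirrors.es.net/centos/7.3.1611/updates/x86_64/'
normalize 'http://ftp.linux.ncsu.edu/pub/CentOS/7.3.1611/updates/x86_64/'
# both print the same key: http://centos.squid.internal/7.3.1611/updates/x86_64/
```

Mirrors whose hostname alone carries the "centos" part (e.g. `centos.host-engine.com`) would need an extra rule.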
> Regards,
>
> j
>
>
> - Original Message -
> From: "Yuri Voinov" <yvoi...@gmail.com>
> To: squid-users@lists.squid-cache.org
> Sent: Tuesday, March 21, 2017 1:19:43 PM
> Subject: Re: [squid-users] URL list from a URL
>
> Yes.
>
> Functionality you required is:
>
> http://wiki.squid-cache.org/Features/StoreID
>
>
> 21.03.2017 21:52, Jason B. Nance writes:
>> Hello,
>>
>> I'm using Squid 3.5.20 and wonder if it is possible to define an ACL which 
>> retrieves the list of URLs from another URL (similar to pointing to a file). 
>>  In this specific use case it is to allow a Foreman server to sync Yum 
>> content from the CentOS mirrors.  I tell Foreman to use the following URL:
>>
>> http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates
>>
>> Which returns a list of URLs, such as:
>>
>> http://repo1.dal.innoscale.net/centos/7.3.1611/updates/x86_64/
>> http://linux.mirrors.es.net/centos/7.3.1611/updates/x86_64/
>> http://reflector.westga.edu/repos/CentOS/7.3.1611/updates/x86_64/
>> http://mirror.jax.hugeserver.com/centos/7.3.1611/updates/x86_64/
>> http://ftp.linux.ncsu.edu/pub/CentOS/7.3.1611/updates/x86_64/
>> http://mirror.nexcess.net/CentOS/7.3.1611/updates/x86_64/
>> http://mirror.web-ster.com/centos/7.3.1611/updates/x86_64/
>> http://centos.host-engine.com/7.3.1611/updates/x86_64/
>> http://mirror.raystedman.net/centos/7.3.1611/updates/x86_64/
>> http://mirror.linux.duke.edu/pub/centos/7.3.1611/updates/x86_64/
>>
>> Foreman then starts a new HTTP connection (not a redirect) to attempt to 
>> connect to those in turn until it works.
>>
>> So I would like to configure Squid to allow the Foreman server access to any 
>> of those URLs (the list changes somewhat often).
>>
>> I started to go down the external_acl_type but am wondering if I'm missing 
>> something obvious.
>>
>> Regards,
>>
>> j
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Assistance with WCCPv2 Setup with Cisco Router

2017-03-21 Thread Yuri Voinov
PS. You have configured a GRE tunnel, as I can see. Check that it is
defined on both sides: on the router and on your proxy box. Also note
that GRE redirection is processed on the router's CPU, unlike L2
redirection, which runs in hardware on the forwarding plane.
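For completeness, the proxy-side GRE endpoint Yuri refers to is typically created like this on a Linux box (a sketch using the addresses from this thread; the interface names and the rp_filter tweak are assumptions about a common setup, and the commands require root, so this is shown as an untested config fragment):

```
# terminate the router's WCCP GRE redirect on the proxy
modprobe ip_gre
ip tunnel add wccp0 mode gre remote 192.168.0.23 local 192.168.0.24 dev eth0
ip addr add 192.168.0.24/32 dev wccp0
ip link set wccp0 up
# do not let reverse-path filtering drop the redirected packets
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
```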


22.03.2017 1:04, Waldon, Cooper writes:
>
> Hello All,
>
>  
>
> I’m trying to set up a transparent proxy for http and https using
> Cisco Routers and Squid.  I have followed the configuration examples
> that are listed under the wccp2 overview section
> (http://wiki.squid-cache.org/Features/Wccp2) of the squid wiki but I’m
> still having some issues.
>
>  
>
> I have a little lab set up with a Cisco 7200 Router and a VM with
> CentOS running the proxy.
>
>  
>
> The “WAN” IP of the Router is 192.168.0.23.  The IP of the Squid Proxy
> is 192.168.0.24 and both have the default gateway of 192.168.0.1 which
> is the “ISP”
>
>  
>
> The Client is sitting on a LAN behind the Router in the 10.10.10.0/24
> subnet and is also sitting behind nat.
>
>  
>
> I believe that the router and proxy are communicating properly based
> on the information in the show ip wccp command on the router as it
> shows clients and routers as well as showing that packets are being
> forwarded:
>
>  
>
> R3#show ip wccp
>
> Global WCCP information:
>
> Router information:
>
> Router Identifier:   192.168.0.23
>
> Configured source-interface: GigabitEthernet5/0
>
>  
>
> Service Identifier: web-cache
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:1079
>
>   Process:   0
>
>   CEF:   1079
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
>GRE tunnel interface:Tunnel1
>
>  
>
> Service Identifier: 70
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:500
>
>   Process:   0
>
>   CEF:   500
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
> GRE tunnel interface:Tunnel0
>
>  
>
> Here is the relevant squid wccp configuration:
>
>  
>
> Output removed
>
> # Squid normally listens to port 3128
>
> http_port 3128
>
> http_port 0.0.0.0:3129
>
>  
>
> # WCCPv2 Parameters
>
> wccp2_router 192.168.0.23
>
> wccp2_forwarding_method 1
>
> wccp2_return_method 1
>
> wccp2_assignment_method hash
>
> wccp2_service standard 0
>
> wccp2_service dynamic 70
>
> wccp2_service_info 70 protocol=tcp
> flags=dst_ip_hash,src_ip_alt_hash,src_port_alt_hash priority=231 ports=443
>
>  
>
> ---Output remove
>
>  
>
> I think that the issue lies with the iptables configuration as I do
> not see any packets been processed in the nat table.  I have tried a
> few different methods such as:
>
>  
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 443 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> or
>
>  
>
> iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination
> 192.168.0.24:3129
>
> iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT
> --to-destination 192.168.0.24:3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> I have also tried adding ACCEPT commands to the PREROUTING zone just
> in case the proxy is dropping the packets right away but that also
> doesn’t work.
>
>  
>
> The proxy functions perfectly when the client is configured to use a
> proxy so there doesn’t appear to be any issues 

Re: [squid-users] Assistance with WCCPv2 Setup with Cisco Router

2017-03-21 Thread Yuri Voinov
Ah, forgot about this:

http://wiki.squid-cache.org/ConfigExamples/Intercept


22.03.2017 1:04, Waldon, Cooper writes:
>
> Hello All,
>
>  
>
> I’m trying to set up a transparent proxy for http and https using
> Cisco Routers and Squid.  I have followed the configuration examples
> that are listed under the wccp2 overview section
> (http://wiki.squid-cache.org/Features/Wccp2) of the squid wiki but I’m
> still having some issues.
>
>  
>
> I have a little lab set up with a Cisco 7200 Router and a VM with
> CentOS running the proxy.
>
>  
>
> The “WAN” IP of the Router is 192.168.0.23.  The IP of the Squid Proxy
> is 192.168.0.24 and both have the default gateway of 192.168.0.1 which
> is the “ISP”
>
>  
>
> The Client is sitting on a LAN behind the Router in the 10.10.10.0/24
> subnet and is also sitting behind nat.
>
>  
>
> I believe that the router and proxy are communicating properly based
> on the information in the show ip wccp command on the router as it
> shows clients and routers as well as showing that packets are being
> forwarded:
>
>  
>
> R3#show ip wccp
>
> Global WCCP information:
>
> Router information:
>
> Router Identifier:   192.168.0.23
>
> Configured source-interface: GigabitEthernet5/0
>
>  
>
> Service Identifier: web-cache
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:1079
>
>   Process:   0
>
>   CEF:   1079
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
>GRE tunnel interface:Tunnel1
>
>  
>
> Service Identifier: 70
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:500
>
>   Process:   0
>
>   CEF:   500
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
> GRE tunnel interface:Tunnel0
>
>  
>
> Here is the relevant squid wccp configuration:
>
>  
>
> Output removed
>
> # Squid normally listens to port 3128
>
> http_port 3128
>
> http_port 0.0.0.0:3129
>
>  
>
> # WCCPv2 Parameters
>
> wccp2_router 192.168.0.23
>
> wccp2_forwarding_method 1
>
> wccp2_return_method 1
>
> wccp2_assignment_method hash
>
> wccp2_service standard 0
>
> wccp2_service dynamic 70
>
> wccp2_service_info 70 protocol=tcp
> flags=dst_ip_hash,src_ip_alt_hash,src_port_alt_hash priority=231 ports=443
>
>  
>
> ---Output remove
>
>  
>
> I think that the issue lies with the iptables configuration as I do
> not see any packets been processed in the nat table.  I have tried a
> few different methods such as:
>
>  
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 443 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> or
>
>  
>
> iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination
> 192.168.0.24:3129
>
> iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT
> --to-destination 192.168.0.24:3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> I have also tried adding ACCEPT commands to the PREROUTING zone just
> in case the proxy is dropping the packets right away but that also
> doesn’t work.
>
>  
>
> The proxy functions perfectly when the client is configured to use a
> proxy so there doesn’t appear to be any issues with routing or
> anything like that, it’s just the transparent proxying that isn’t working.
>
>  
>
> If anyone has any suggestions of what I could try that would 

Re: [squid-users] Assistance with WCCPv2 Setup with Cisco Router

2017-03-21 Thread Yuri Voinov


22.03.2017 1:04, Waldon, Cooper writes:
>
> Hello All,
>
>  
>
> I’m trying to set up a transparent proxy for http and https using
> Cisco Routers and Squid.  I have followed the configuration examples
> that are listed under the wccp2 overview section
> (http://wiki.squid-cache.org/Features/Wccp2) of the squid wiki but I’m
> still having some issues.
>
>  
>
> I have a little lab set up with a Cisco 7200 Router and a VM with
> CentOS running the proxy.
>
>  
>
> The “WAN” IP of the Router is 192.168.0.23.  The IP of the Squid Proxy
> is 192.168.0.24 and both have the default gateway of 192.168.0.1 which
> is the “ISP”
>
>  
>
> The Client is sitting on a LAN behind the Router in the 10.10.10.0/24
> subnet and is also sitting behind nat.
>
>  
>
> I believe that the router and proxy are communicating properly based
> on the information in the show ip wccp command on the router as it
> shows clients and routers as well as showing that packets are being
> forwarded:
>
>  
>
> R3#show ip wccp
>
> Global WCCP information:
>
> Router information:
>
> Router Identifier:   192.168.0.23
>
> Configured source-interface: GigabitEthernet5/0
>
>  
>
> Service Identifier: web-cache
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:1079
>
>   Process:   0
>
>   CEF:   1079
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
>GRE tunnel interface:Tunnel1
>
>  
>
> Service Identifier: 70
>
> Protocol Version:2.00
>
> Number of Service Group Clients: 1
>
> Number of Service Group Routers: 1
>
> Total Packets Redirected:500
>
>   Process:   0
>
>   CEF:   500
>
> Service mode:Open
>
> Service Access-list: -none-
>
> Total Packets Dropped Closed:0
>
> Redirect access-list:100
>
> Total Packets Denied Redirect:   0
>
> Total Packets Unassigned:0
>
> Group access-list:   10
>
> Total Messages Denied to Group:  0
>
> Total Authentication failures:   0
>
> Total GRE Bypassed Packets Received: 0
>
>   Process:   0
>
>   CEF:   0
>
> GRE tunnel interface:Tunnel0
>
>  
>
> Here is the relevant squid wccp configuration:
>
>  
>
> Output removed
>
> # Squid normally listens to port 3128
>
> http_port 3128
>
> http_port 0.0.0.0:3129
>
>  
>
> # WCCPv2 Parameters
>
> wccp2_router 192.168.0.23
>
> wccp2_forwarding_method 1
>
> wccp2_return_method 1
>
> wccp2_assignment_method hash
>
> wccp2_service standard 0
>
> wccp2_service dynamic 70
>
> wccp2_service_info 70 protocol=tcp
> flags=dst_ip_hash,src_ip_alt_hash,src_port_alt_hash priority=231 ports=443
>
>  
>
> ---Output remove
>
>  
>
> I think that the issue lies with the iptables configuration as I do
> not see any packets been processed in the nat table.  I have tried a
> few different methods such as:
>
>  
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 443 -j REDIRECT
> --to-port 3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> or
>
>  
>
> iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination
> 192.168.0.24:3129
>
> iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT
> --to-destination 192.168.0.24:3129
>
> iptables -t nat -A POSTROUTING -j MASQUERADE
>
>  
>
> I have also tried adding ACCEPT commands to the PREROUTING zone just
> in case the proxy is dropping the packets right away but that also
> doesn’t work.
>
1. The ports you use for redirection should be defined in Squid as
'intercept':

http_port 3126 intercept

https_port 3127 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/rootCA2.crt
key=/usr/local/squid/etc/rootCA2.key
tls-cafile=/usr/local/squid/etc/rootCA12.crt
options=SINGLE_DH_USE:SINGLE_ECDH_USE
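With those intercept ports in place, the redirect rules on the proxy would then target them instead of 3129 — a hypothetical sketch matching the port numbers above (requires root, so shown as an untested config fragment):

```
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j REDIRECT --to-port 3126
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 443 -j REDIRECT --to-port 3127
```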

Re: [squid-users] URL list from a URL

2017-03-21 Thread Yuri Voinov
Yes.

Functionality you required is:

http://wiki.squid-cache.org/Features/StoreID


21.03.2017 21:52, Jason B. Nance writes:
> Hello,
>
> I'm using Squid 3.5.20 and wonder if it is possible to define an ACL which 
> retrieves the list of URLs from another URL (similar to pointing to a file).  
> In this specific use case it is to allow a Foreman server to sync Yum content 
> from the CentOS mirrors.  I tell Foreman to use the following URL:
>
> http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates
>
> Which returns a list of URLs, such as:
>
> http://repo1.dal.innoscale.net/centos/7.3.1611/updates/x86_64/
> http://linux.mirrors.es.net/centos/7.3.1611/updates/x86_64/
> http://reflector.westga.edu/repos/CentOS/7.3.1611/updates/x86_64/
> http://mirror.jax.hugeserver.com/centos/7.3.1611/updates/x86_64/
> http://ftp.linux.ncsu.edu/pub/CentOS/7.3.1611/updates/x86_64/
> http://mirror.nexcess.net/CentOS/7.3.1611/updates/x86_64/
> http://mirror.web-ster.com/centos/7.3.1611/updates/x86_64/
> http://centos.host-engine.com/7.3.1611/updates/x86_64/
> http://mirror.raystedman.net/centos/7.3.1611/updates/x86_64/
> http://mirror.linux.duke.edu/pub/centos/7.3.1611/updates/x86_64/
>
> Foreman then starts a new HTTP connection (not a redirect) to attempt to 
> connect to those in turn until it works.
>
> So I would like to configure Squid to allow the Foreman server access to any 
> of those URLs (the list changes somewhat often).
>
> I started to go down the external_acl_type but am wondering if I'm missing 
> something obvious.
>
> Regards,
>
> j
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
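Since the stated goal is to allow a changing list of mirror URLs, another approach worth noting alongside StoreID and external_acl_type is to periodically regenerate a dstdomain ACL file from the mirrorlist. A sketch; the ACL file path, cron wiring, and function name are illustrative, and the real fetch needs network access:

```shell
#!/bin/sh
# Reduce mirror URLs to bare hostnames: Squid's dstdomain ACL matches
# on the destination host, so the scheme and path are discarded.
urls_to_hosts() {
    sed -e 's|^[a-z]*://||' -e 's|/.*$||'
}

# Real use (then reload with: squid -k reconfigure):
#   curl -s 'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates' \
#     | urls_to_hosts | sort -u > /etc/squid/centos-mirrors.txt
# with squid.conf containing:
#   acl centos_mirrors dstdomain "/etc/squid/centos-mirrors.txt"

# Demonstration on two of the sample URLs from the question:
printf '%s\n' \
  'http://repo1.dal.innoscale.net/centos/7.3.1611/updates/x86_64/' \
  'http://linux.mirrors.es.net/centos/7.3.1611/updates/x86_64/' |
    urls_to_hosts
# -> repo1.dal.innoscale.net
# -> linux.mirrors.es.net
```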

-- 
Bugs to the Future


0x613DEC46.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov


11.03.2017 3:47, Yosi Greenfield writes:
> Gentlemen,
>
> Thanks Antony. Yes, we are accounting for everything else. I'm
> talking about port 3128 and 3129 only. 
>
> Any other traffic is being tracked both by netflow and tcpdump and
> they match. What does not match is 3128/9 and squid log.
It can also be because of tunneled traffic.
>
> I'll report back after the weekend if the discrepancy is all
> sslbump traffic.
>
> Thank you all,
> Yosi
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Antony Stone
> Sent: Friday, March 10, 2017 4:31 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Data usage reported in log files
>
> On Friday 10 March 2017 at 22:22:59, Yuri Voinov wrote:
>
>> Of course, there is no stream video from security cams, no voice IP, 
>> no SIP, no torrents, no RDP, no other protocol. They simple does not 
>> exists and we're all believe that's all not above over 1% of overall
> traffic.
>> Yes. Sure. Really.
>>
>> Only web-surfing :) Sure :)
> Thanks for the standard sarcasm.
>
> Has it occurred to you that Yosi might have been measuring traffic to & from
> the IP of the Squid server, so as to ignore everything else he knows is
> happening on his network, so he can compare like with like?
>
> My "not more than 1%" was for the additional traffic to/from the Squid
> server, other than HTTP/S.
>
>
> Antony.
>
>> 11.03.2017 3:19, Yuri Voinov пишет:
>>> 11.03.2017 2:57, Antony Stone пишет:
>>>> On Friday 10 March 2017 at 21:50:19, Yuri Voinov wrote:
>>>>> Gentlemen, and it never occurred to you that there are other types of
>>>>> traffic besides HTTP / HTTPS, right?
>>>>>
>>>>> DNS, ICMP, other protocols?
>>>> I'm assuming Yosi has been measuring only TCP traffic, but even if he's
>>>> been measuring everything, I don't think DNS, ICMP and other protocols
>>>> would add more than 1% on top of HTTP/S, unless (as Marcus suggested)
>>>> there is also totally-non-Squid traffic on the link being measured.
>>> Come on, sure? Even in L7? Really? Cool story, bro!
>>>
>>>> Antony.
>>>>
>>>>> 11.03.2017 2:44, Yosi Greenfield пишет:
>>>>>> Aha! That could be it. I use sslbump, but not for all users. I'll
>>>>>> check that out, although I think that it's a problem even for bumped
>>>>>> users. Even for bumped users we don't bump all sites, so that really
>>>>>> could be it.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>
>>>>>> -Original Message-
>>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>>>> On Behalf Of Marcus Kool
>>>>>> Sent: Friday, March 10, 2017 3:38 PM
>>>>>> To: squid-users@lists.squid-cache.org
>>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>>
>>>>>> On 10/03/17 16:27, Yosi Greenfield wrote:
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Netflow is much larger.
>>>>>>>
>>>>>>> I really want to know exactly what site is costing my users data.
>>>>>>> Many of our users are on metered connections and are paying for
>>>>>>> overage, but I can't tell where that overage is being used. Are they
>>>>>>> using youtube, webmail, wetransfer? I see only a fraction of their
>>>>>>> actual proxy usage in my squid logs.
>>>>>>>
>>>>>>> Data compression would give the opposite result, so that's not what
>>>>>>> I'm seeing.
>>>>>>>
>>>>>>> Any other ideas?
>>>>>> Is there any traffic that is not directed to Squid?
>>>>>>
>>>>>> Do you use ssl-bump in bump mode ?
>>>>>> If not, Squid has no idea how many bytes go through the (HTTPS)
>>>>>> tunnels.
>>>>>>
>>>>>> Marcus
>>>>>>
>>>>>>> -Original Message-
>>>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>>>>> On Behalf Of Antony Stone
>>>>>>> Sent: Friday, March 10, 2017 2:21 PM
>>>>>>> To: squid-users@lists.squid-cache.org
>

Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov


11.03.2017 3:43, Antony Stone writes:
> On Friday 10 March 2017 at 22:33:44, Yuri Voinov wrote:
>
>> We have not seen the network topology and the full configuration of
>> network devices - what are we arguing about and guessing about?
> Nobody is arguing, and we are guessing so that we might be helpful to Yosi 
> who 
> asked the question.
Guessing can be worse than no response at all, since guesses lead people
away from the true picture, especially when you have no facts.
>
> Incidentally, please could you consider putting all of your comments (which 
> are unrelated to further replies from other people) into a single posting, 
> instead of sending, for example, four emails to the list, each replying only 
> to your own previous comment?
>
> That would make things far easier to follow in the conversation.
I'll think about it in the future. I usually do not get into the
discussion here, except for very rare cases.
>
>
> Thanks,
>
>
> Antony.
>

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov
According to the above, NetFlow will always show much more traffic than
Squid does. This is obvious and there is nothing to discuss here. If this
is not clear to someone, deploy a collector that gathers statistics at the
data-link level and compare the counters. I'm not just talking about TCP,
Alex. There is also UDP, and there are many protocols that Squid cannot
see, if only for the simple reason that those packets are never routed to
Squid.

We have not seen the network topology and the full configuration of
network devices - what are we arguing about and guessing about?
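One concrete way to put numbers on this debate is to total what Squid itself accounted for per client and set that against the NetFlow counters for the same hosts; anything outside those totals never passed through Squid. A sketch assuming the default native access.log layout (field 3 = client address, field 5 = bytes sent to the client); the two log lines are fabricated samples:

```shell
#!/bin/sh
# Sum bytes-to-client per client address from a native-format access.log.
sum_by_client() {
    awk '{ bytes[$3] += $5 } END { for (ip in bytes) print ip, bytes[ip] }' "$@"
}

# Demonstration on two fabricated log lines (5120 + 2048 bytes):
printf '%s\n' \
  '1489180000.123 250 192.168.0.10 TCP_MISS/200 5120 GET http://example.com/a - HIER_DIRECT/203.0.113.7 text/html' \
  '1489180001.456 300 192.168.0.10 TCP_HIT/200 2048 GET http://example.com/b - HIER_NONE/- text/html' |
    sum_by_client
# -> 192.168.0.10 7168
```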


11.03.2017 3:27, Yuri Voinov writes:
> Think of one simple thing. Squid does not see and can not see protocols
> that do not support. What do you expect from it? Does it work on L1/L2?
> No? Then what is the discussion about?
>
>
> 11.03.2017 3:22, Yuri Voinov пишет:
>> Of course, there is no stream video from security cams, no voice IP, no
>> SIP, no torrents, no RDP, no other protocol. They simple does not exists
>> and we're all believe that's all not above over 1% of overall traffic.
>> Yes. Sure. Really.
>>
>> Only web-surfing :) Sure :)
>>
>>
>> 11.03.2017 3:19, Yuri Voinov пишет:
>>> 11.03.2017 2:57, Antony Stone пишет:
>>>> On Friday 10 March 2017 at 21:50:19, Yuri Voinov wrote:
>>>>
>>>>> Gentlemen, and it never occurred to you that there are other types of
>>>>> traffic besides HTTP / HTTPS, right?
>>>>>
>>>>> DNS, ICMP, other protocols?
>>>> I'm assuming Yosi has been measuring only TCP traffic, but even if he's 
>>>> been 
>>>> measuring everything, I don't think DNS, ICMP and other protocols would 
>>>> add 
>>>> more than 1% on top of HTTP/S, unless (as Marcus suggested) there is also 
>>>> totally-non-Squid traffic on the link being measured.
>>> Come on, sure? Even in L7? Really? Cool story, bro!
>>>> Antony.
>>>>
>>>>> 11.03.2017 2:44, Yosi Greenfield пишет:
>>>>>> Aha! That could be it. I use sslbump, but not for all users. I'll
>>>>>> check that out, although I think that it's a problem even for bumped
>>>>>> users. Even for bumped users we don't bump all sites, so that really
>>>>>> could be it.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>
>>>>>> -Original Message-
>>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>>>>>> Behalf Of Marcus Kool
>>>>>> Sent: Friday, March 10, 2017 3:38 PM
>>>>>> To: squid-users@lists.squid-cache.org
>>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>>
>>>>>> On 10/03/17 16:27, Yosi Greenfield wrote:
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Netflow is much larger.
>>>>>>>
>>>>>>> I really want to know exactly what site is costing my users data. Many
>>>>>>> of our users are on metered connections and are paying for overage,
>>>>>>> but I can't tell where that overage is being used. Are they using
>>>>>>> youtube, webmail, wetransfer? I see only a fraction of their actual
>>>>>>> proxy usage in my squid logs.
>>>>>>>
>>>>>>> Data compression would give the opposite result, so that's not what
>>>>>>> I'm seeing.
>>>>>>>
>>>>>>> Any other ideas?
>>>>>> Is there any traffic that is not directed to Squid?
>>>>>>
>>>>>> Do you use ssl-bump in bump mode ?
>>>>>> If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.
>>>>>>
>>>>>> Marcus
>>>>>>
>>>>>>> -Original Message-
>>>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>>>>> On Behalf Of Antony Stone
>>>>>>> Sent: Friday, March 10, 2017 2:21 PM
>>>>>>> To: squid-users@lists.squid-cache.org
>>>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>>>
>>>>>>> On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:
>>>>>>>> Hello all,
>>>>>>>>
>>>>>>>> I'm analyzing my squid logs with sarg, and I see that the num

Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov
Consider one simple thing: Squid does not, and cannot, see protocols that
it does not support. What do you expect from it? Does it work at L1/L2?
No? Then what is the discussion about?


11.03.2017 3:22, Yuri Voinov writes:
> Of course, there is no stream video from security cams, no voice IP, no
> SIP, no torrents, no RDP, no other protocol. They simple does not exists
> and we're all believe that's all not above over 1% of overall traffic.
> Yes. Sure. Really.
>
> Only web-surfing :) Sure :)
>
>
> 11.03.2017 3:19, Yuri Voinov пишет:
>> 11.03.2017 2:57, Antony Stone пишет:
>>> On Friday 10 March 2017 at 21:50:19, Yuri Voinov wrote:
>>>
>>>> Gentlemen, and it never occurred to you that there are other types of
>>>> traffic besides HTTP / HTTPS, right?
>>>>
>>>> DNS, ICMP, other protocols?
>>> I'm assuming Yosi has been measuring only TCP traffic, but even if he's 
>>> been 
>>> measuring everything, I don't think DNS, ICMP and other protocols would add 
>>> more than 1% on top of HTTP/S, unless (as Marcus suggested) there is also 
>>> totally-non-Squid traffic on the link being measured.
>> Come on, sure? Even in L7? Really? Cool story, bro!
>>> Antony.
>>>
>>>> 11.03.2017 2:44, Yosi Greenfield пишет:
>>>>> Aha! That could be it. I use sslbump, but not for all users. I'll
>>>>> check that out, although I think that it's a problem even for bumped
>>>>> users. Even for bumped users we don't bump all sites, so that really
>>>>> could be it.
>>>>>
>>>>> Thanks!
>>>>>
>>>>>
>>>>> -Original Message-
>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>>>>> Behalf Of Marcus Kool
>>>>> Sent: Friday, March 10, 2017 3:38 PM
>>>>> To: squid-users@lists.squid-cache.org
>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>
>>>>> On 10/03/17 16:27, Yosi Greenfield wrote:
>>>>>> Thanks!
>>>>>>
>>>>>> Netflow is much larger.
>>>>>>
>>>>>> I really want to know exactly what site is costing my users data. Many
>>>>>> of our users are on metered connections and are paying for overage,
>>>>>> but I can't tell where that overage is being used. Are they using
>>>>>> youtube, webmail, wetransfer? I see only a fraction of their actual
>>>>>> proxy usage in my squid logs.
>>>>>>
>>>>>> Data compression would give the opposite result, so that's not what
>>>>>> I'm seeing.
>>>>>>
>>>>>> Any other ideas?
>>>>> Is there any traffic that is not directed to Squid?
>>>>>
>>>>> Do you use ssl-bump in bump mode ?
>>>>> If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.
>>>>>
>>>>> Marcus
>>>>>
>>>>>> -Original Message-
>>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>>>> On Behalf Of Antony Stone
>>>>>> Sent: Friday, March 10, 2017 2:21 PM
>>>>>> To: squid-users@lists.squid-cache.org
>>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>>
>>>>>> On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:
>>>>>>> Hello all,
>>>>>>>
>>>>>>> I'm analyzing my squid logs with sarg, and I see that the number of
>>>>>>> bytes reported as used by any particular user are often nowhere near
>>>>>>> the bytes reported by netflow and tcpdump.
>>>>>> Which is larger?
>>>>>>
>>>>>>> I'm trying to trace my users' data usage by site, but I'm unable to
>>>>>>> do so from the log files because of this.
>>>>>> Well, what is it you really want to know?
>>>>>>
>>>>>> netflow / tcpdump will give you accurate numbers for the quantity of
>>>>>> data on your Internet link - I assume this is what you're most
>>>>>> interested in?
>>>>>> Squid will show you what quantity of data goes to/from the clients,
>>>>>> but is that really important?
>>>>>>
>>>>>>> Can someone please explain to me what I might be missing? Why does
>>>>>>> squid log report one thing and netflow and tcpdump show something
>>>>>>> else?
>>>>>> Data compression?
>>>>>>
>>>>>> HTTP responses are often gzipped, so if tcpdump is showing you smaller
>>>>>> numbers of bytes than Squid reports, that's what I'd look at first.
>>>>>>
>>>>>>
>>>>>> Antony.

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov
Of course, there is no streaming video from security cams, no voice over
IP, no SIP, no torrents, no RDP, no other protocols. They simply don't
exist, and we all believe they add up to no more than 1% of overall
traffic. Yes. Sure. Really.

Only web-surfing :) Sure :)


11.03.2017 3:19, Yuri Voinov writes:
>
> 11.03.2017 2:57, Antony Stone пишет:
>> On Friday 10 March 2017 at 21:50:19, Yuri Voinov wrote:
>>
>>> Gentlemen, and it never occurred to you that there are other types of
>>> traffic besides HTTP / HTTPS, right?
>>>
>>> DNS, ICMP, other protocols?
>> I'm assuming Yosi has been measuring only TCP traffic, but even if he's been 
>> measuring everything, I don't think DNS, ICMP and other protocols would add 
>> more than 1% on top of HTTP/S, unless (as Marcus suggested) there is also 
>> totally-non-Squid traffic on the link being measured.
> Come on, sure? Even in L7? Really? Cool story, bro!
>>
>> Antony.
>>
>>> 11.03.2017 2:44, Yosi Greenfield пишет:
>>>> Aha! That could be it. I use sslbump, but not for all users. I'll
>>>> check that out, although I think that it's a problem even for bumped
>>>> users. Even for bumped users we don't bump all sites, so that really
>>>> could be it.
>>>>
>>>> Thanks!
>>>>
>>>>
>>>> -Original Message-
>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>>>> Behalf Of Marcus Kool
>>>> Sent: Friday, March 10, 2017 3:38 PM
>>>> To: squid-users@lists.squid-cache.org
>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>
>>>> On 10/03/17 16:27, Yosi Greenfield wrote:
>>>>> Thanks!
>>>>>
>>>>> Netflow is much larger.
>>>>>
>>>>> I really want to know exactly what site is costing my users data. Many
>>>>> of our users are on metered connections and are paying for overage,
>>>>> but I can't tell where that overage is being used. Are they using
>>>>> youtube, webmail, wetransfer? I see only a fraction of their actual
>>>>> proxy usage in my squid logs.
>>>>>
>>>>> Data compression would give the opposite result, so that's not what
>>>>> I'm seeing.
>>>>>
>>>>> Any other ideas?
>>>> Is there any traffic that is not directed to Squid?
>>>>
>>>> Do you use ssl-bump in bump mode ?
>>>> If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.
>>>>
>>>> Marcus
>>>>
>>>>> -Original Message-
>>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>>> On Behalf Of Antony Stone
>>>>> Sent: Friday, March 10, 2017 2:21 PM
>>>>> To: squid-users@lists.squid-cache.org
>>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>>
>>>>> On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:
>>>>>> Hello all,
>>>>>>
>>>>>> I'm analyzing my squid logs with sarg, and I see that the number of
>>>>>> bytes reported as used by any particular user are often nowhere near
>>>>>> the bytes reported by netflow and tcpdump.
>>>>> Which is larger?
>>>>>
>>>>>> I'm trying to trace my users' data usage by site, but I'm unable to
>>>>>> do so from the log files because of this.
>>>>> Well, what is it you really want to know?
>>>>>
>>>>> netflow / tcpdump will give you accurate numbers for the quantity of
>>>>> data on your Internet link - I assume this is what you're most
>>>>> interested in?
>>>>> Squid will show you what quantity of data goes to/from the clients,
>>>>> but is that really important?
>>>>>
>>>>>> Can someone please explain to me what I might be missing? Why does
>>>>>> squid log report one thing and netflow and tcpdump show something
>>>>>> else?
>>>>> Data compression?
>>>>>
>>>>> HTTP responses are often gzipped, so if tcpdump is showing you smaller
>>>>> numbers of bytes than Squid reports, that's what I'd look at first.
>>>>>
>>>>>
>>>>> Antony.

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov


11.03.2017 2:57, Antony Stone writes:
> On Friday 10 March 2017 at 21:50:19, Yuri Voinov wrote:
>
>> Gentlemen, and it never occurred to you that there are other types of
>> traffic besides HTTP / HTTPS, right?
>>
>> DNS, ICMP, other protocols?
> I'm assuming Yosi has been measuring only TCP traffic, but even if he's been 
> measuring everything, I don't think DNS, ICMP and other protocols would add 
> more than 1% on top of HTTP/S, unless (as Marcus suggested) there is also 
> totally-non-Squid traffic on the link being measured.
Come on, sure? Even in L7? Really? Cool story, bro!
>
>
> Antony.
>
>> 11.03.2017 2:44, Yosi Greenfield пишет:
>>> Aha! That could be it. I use sslbump, but not for all users. I'll
>>> check that out, although I think that it's a problem even for bumped
>>> users. Even for bumped users we don't bump all sites, so that really
>>> could be it.
>>>
>>> Thanks!
>>>
>>>
>>> -Original Message-
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
>>> Behalf Of Marcus Kool
>>> Sent: Friday, March 10, 2017 3:38 PM
>>> To: squid-users@lists.squid-cache.org
>>> Subject: Re: [squid-users] Data usage reported in log files
>>>
>>> On 10/03/17 16:27, Yosi Greenfield wrote:
>>>> Thanks!
>>>>
>>>> Netflow is much larger.
>>>>
>>>> I really want to know exactly what site is costing my users data. Many
>>>> of our users are on metered connections and are paying for overage,
>>>> but I can't tell where that overage is being used. Are they using
>>>> youtube, webmail, wetransfer? I see only a fraction of their actual
>>>> proxy usage in my squid logs.
>>>>
>>>> Data compression would give the opposite result, so that's not what
>>>> I'm seeing.
>>>>
>>>> Any other ideas?
>>> Is there any traffic that is not directed to Squid?
>>>
>>> Do you use ssl-bump in bump mode ?
>>> If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.
>>>
>>> Marcus
>>>
>>>> -Original Message-
>>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>>> On Behalf Of Antony Stone
>>>> Sent: Friday, March 10, 2017 2:21 PM
>>>> To: squid-users@lists.squid-cache.org
>>>> Subject: Re: [squid-users] Data usage reported in log files
>>>>
>>>> On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:
>>>>> Hello all,
>>>>>
>>>>> I'm analyzing my squid logs with sarg, and I see that the number of
>>>>> bytes reported as used by any particular user are often nowhere near
>>>>> the bytes reported by netflow and tcpdump.
>>>> Which is larger?
>>>>
>>>>> I'm trying to trace my users' data usage by site, but I'm unable to
>>>>> do so from the log files because of this.
>>>> Well, what is it you really want to know?
>>>>
>>>> netflow / tcpdump will give you accurate numbers for the quantity of
>>>> data on your Internet link - I assume this is what you're most
>>>> interested in?
>>>> Squid will show you what quantity of data goes to/from the clients,
>>>> but is that really important?
>>>>
>>>>> Can someone please explain to me what I might be missing? Why does
>>>>> squid log report one thing and netflow and tcpdump show something
>>>>> else?
>>>> Data compression?
>>>>
>>>> HTTP responses are often gzipped, so if tcpdump is showing you smaller
>>>> numbers of bytes than Squid reports, that's what I'd look at first.
>>>>
>>>>
>>>> Antony.

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Data usage reported in log files

2017-03-10 Thread Yuri Voinov
Gentlemen, has it never occurred to you that there are other types of
traffic besides HTTP/HTTPS?

DNS, ICMP, other protocols?


11.03.2017 2:44, Yosi Greenfield writes:
> Aha! That could be it. I use sslbump, but not for all users. I'll
> check that out, although I think that it's a problem even for bumped
> users. Even for bumped users we don't bump all sites, so that really
> could be it.
>
> Thanks!
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Marcus Kool
> Sent: Friday, March 10, 2017 3:38 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Data usage reported in log files
>
>
>
> On 10/03/17 16:27, Yosi Greenfield wrote:
>> Thanks!
>>
>> Netflow is much larger.
>>
>> I really want to know exactly what site is costing my users data. Many 
>> of our users are on metered connections and are paying for overage, 
>> but I can't tell where that overage is being used. Are they using 
>> youtube, webmail, wetransfer? I see only a fraction of their actual 
>> proxy usage in my squid logs.
>>
>> Data compression would give the opposite result, so that's not what 
>> I'm seeing.
>>
>> Any other ideas?
> Is there any traffic that is not directed to Squid?
>
> Do you use ssl-bump in bump mode ?
> If not, Squid has no idea how many bytes go through the (HTTPS) tunnels.
>
> Marcus
>
>
>> -Original Message-
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>> On Behalf Of Antony Stone
>> Sent: Friday, March 10, 2017 2:21 PM
>> To: squid-users@lists.squid-cache.org
>> Subject: Re: [squid-users] Data usage reported in log files
>>
>> On Friday 10 March 2017 at 20:14:36, Yosi Greenfield wrote:
>>
>>> Hello all,
>>>
>>> I'm analyzing my squid logs with sarg, and I see that the number of 
>>> bytes reported as used by any particular user are often nowhere near 
>>> the bytes reported by netflow and tcpdump.
>> Which is larger?
>>
>>> I'm trying to trace my users' data usage by site, but I'm unable to 
>>> do so from the log files because of this.
>> Well, what is it you really want to know?
>>
>> netflow / tcpdump will give you accurate numbers for the quantity of 
>> data on your Internet link - I assume this is what you're most interested
> in?
>> Squid will show you what quantity of data goes to/from the clients, 
>> but is that really important?
>>
>>> Can someone please explain to me what I might be missing? Why does 
>>> squid log report one thing and netflow and tcpdump show something 
>>> else?
>> Data compression?
>>
>> HTTP responses are often gzipped, so if tcpdump is showing you smaller 
>> numbers of bytes than Squid reports, that's what I'd look at first.
>>
>>
>> Antony.
>>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with SSL-Bump on Debian testing: SSL_ERROR_RX_RECORD_TOO_LONG

2017-03-03 Thread Yuri Voinov


04.03.2017 3:29, C. L. Martinez writes:
> Hi all,
>
>  After installing Squid 3.5.24 in my Debian testing (many thanks Amos for 
> your help), I am trying to configure Squid as https intercept proxy. My 
> config actually is:
>
> http_port 127.0.0.1:8080
> http_port 127.0.0.1:8081 intercept
> http_port 127.0.0.1:8082 ssl-bump cert=/opt/squid/etc/certs/myCA.pem 
> generate-host-certificates=on \
>   dynamic_cert_mem_cache_size=4MB tls-dh=/opt/squid/etc/certs/dhparam.pem
> https_port 127.0.0.1:8083 ssl-bump intercept 
> cert=/opt/squid/etc/certs/myCA.pem generate-host-certificates=on \
>   dynamic_cert_mem_cache_size=4MB tls-dh=/opt/squid/etc/certs/dhparam.pem
> sslcrtd_program /opt/squid/libexec/ssl_crtd -s /var/squid/ssldb -M 4MB
>
> # SSL-Bump
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump splice localhost
> acl exclude_sites ssl::server_name_regex -i "/usr/local/etc/squid/doms.nobump"
> ssl_bump peek step1 all
> ssl_bump splice exclude_sites
> ssl_bump stare step2 all
> ssl_bump bump all
>
>  Content of "/usr/local/etc/squid/doms.nobump" is:
>
> update\.microsoft\.com$
> update\.microsoft\.com\.akadns\.net$
>
>  But every time I have receiving Error code: SSL_ERROR_RX_RECORD_TOO_LONG in 
> Firefox's browsers when I visit any web using https like 
> https://www.debian.org, https://www.redhat.com, etc.. Some time ago, I have 
> setup same config under OpenBSD and all works ok.
>
>  Where am I doing the mistake?
This is hardly your mistake. Most probably it is a platform-specific,
non-Squid bug.
-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2017-03-03 Thread Yuri Voinov


03.03.2017 18:20, --Ahmad-- writes:
> @ eliezer 
> i was using children as 10
> and faced the problem 
>
>
> so i tried to increase children to 1000 to see if this was the reason 
> and unfortunately the same problem .
>
> yes, I’m using debian 6 os .
>
> i appreciate the help from all the replies below but so far i
> haven't got any clear solution .
>
> now i updated to 3.5.24 last one .
> and will see it if comes back …i will update the list with result .
>
> if it failed … I’m forced to create a cron job to remove the certs like
> every 24 hours .
Cron is not the best solution; Logsurfer would be better.
>
> thank you  guys all of you .
> thanks amos , thanks eliezer , thanks yuri
>
> kind regards
>> On Mar 3, 2017, at 1:37 PM, Yuri Voinov <yvoi...@gmail.com
>> <mailto:yvoi...@gmail.com>> wrote:
>>
>>
>>
>> 03.03.2017 6:32, Eliezer Croitoru пишет:
>>> Hey Yuri,
>>>
>>> This issue is not 100% squid but I think it's related to the way
>>> ssl_crtd works.
>>> I am not sure if it has some locking or other things to prevent such
>>> issues.
>>> The first solution is to somehow defend the DB from corruption, like
>>> in a case that more than a dozen identical requests are being done
>>> towards a single site and two ssl_crtd helpers are trying to do the
>>> same things.
>>> I believe that something to fence this should already be inside
>>> squid and ssl_crtd but I am pretty sure this is the main issue.
>> I suggest this could have an external cause: for example, a BlueCoat
>> box upstream at the ISP, TCP packet corruption, etc. I don't know;
>> I'm just guessing.
>>> Alex and his team should know the answer for this subject and if I'm
>>> not wrong theoretically there are couple ways to prevent the
>>> mentioned issues.
>>> I had a plan to try and understand the ssl_crtd code and interface
>>> but yet to do so.
>>>
>>> I hope this issue will be resolved in a way that it can be
>>> backported to 3.5 in the worst case.
>> I hope so too, but what if it is external? Phew.
>>
>> Anyway, a watchdog is a good backstop that avoids manual intervention by the SA.
>>>
>>> Eliezer
>>>
>>> 
>>> http://ngtech.co.il/lmgtfy/
>>> Linux System Administrator
>>> Mobile: +972-5-28704261
>>> Email: elie...@ngtech.co.il
>>>
>>>
>>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
>>> On Behalf Of Yuri Voinov
>>> Sent: Thursday, March 2, 2017 11:46 PM
>>> To: squid-users@lists.squid-cache.org
>>> Subject: Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd
>>> helpers are crashing too rapidly, need help!
>>>
>>> This problem, in principle, is common to all versions of ssl-bumped
>>> Squid from version 3.4 through 5.0 inclusive, and occurs when the
>>> stored certificate is damaged for any reason. The only workaround
>>> I could find is to monitor cache.log and automatically reinitialize
>>> the certificate database with a Squid restart.
>>> In some installations, this problem does not occur over the years.
>>> In other - almost daily. I have no desire to find out why this is
>>> happening exactly. For me it was easier to make the watchdog, which
>>> will follow up on this.
>>> 03.03.2017 3:40, Yuri Voinov пишет:
>>> One hint finally:
>>> '([^ ]*) helper database ([^ ]*) failed: The SSL certificate
>>> database ([^ ]*) is corrupted. Please rebuild' - - - 0 exec
>>> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
>>> 'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - -
>>> 0 exec "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
>>> 'Cannot add certificate to db.' - - - 0 exec
>>> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
>>> PS. This is from logsurfer.conf.
>>>
>>> 03.03.2017 3:34, Yuri Voinov пишет:
>>> This error is usually preceded by another error in cache.log
>>> associated with the certificates.
>>> I will show you the direction. Then go himself.
>>> This software will useful for you to solve:
>>> http://www.crypt.gen.nz/logsurfer/
>>> HTH, Yuri
>>>
>>> 03.03.2017 2:47, --Ahmad-- пишет:
>>> hey folks . 
>>> i have a problem with squid it get crashed after i enabled https !
>>> cache log error => FATAL

Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2017-03-03 Thread Yuri Voinov


03.03.2017 6:32, Eliezer Croitoru writes:
> Hey Yuri,
>
> This issue is not 100% squid but I think it's related to the way ssl_crtd 
> works.
> I am not sure if it has some locking or other things to prevent such issues.
> The first solution is to somehow defend the DB from corruption, like in a 
> case that more than a dozen identical requests are being done towards a 
> single site and two ssl_crtd helpers are trying to do the same things.
> I believe that something to fence this should already be inside squid and 
> ssl_crtd but I am pretty sure this is the main issue.
I suggest there may be an external cause for this issue: for example, a
BlueCoat box upstream at the ISP, TCP packet corruption, etc. I don't
know, just guessing.
> Alex and his team should know the answer on this subject, and if I'm not 
> wrong there are theoretically a couple of ways to prevent the mentioned issues.
> I had a plan to try to understand the ssl_crtd code and interface but have 
> yet to do so.
>
> I hope this issue will be resolved in a way that it can be backported to 3.5 
> in the worst case.
I hope so too, but if it's external... phew.

Anyway, a watchdog is a good backstop that avoids manual intervention by the sysadmin.
>
> Eliezer
>
> 
> http://ngtech.co.il/lmgtfy/
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Yuri Voinov
> Sent: Thursday, March 2, 2017 11:46 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers 
> are crashing too rapidly, need help!
>
> This problem, in principle, is common to all versions of ssl-bumped Squid 
> from version 3.4 to 5.0 inclusive, and occurs when the stored certificate 
> is damaged for any reason. The only workaround that I could find is to 
> monitor cache.log and automatically reinitialize the certificate database 
> with a squid restart.
> In some installations, this problem does not occur for years. In others, 
> it happens almost daily. I have no desire to find out why exactly it happens. 
> For me it was easier to build a watchdog that keeps an eye on this.
> On 03.03.2017 3:40, Yuri Voinov wrote:
> One hint finally:
> '([^ ]*) helper database ([^ ]*) failed: The SSL certificate database ([^ ]*) 
> is corrupted. Please rebuild' - - - 0 exec "/usr/local/bin/crtd_create.sh 
> -r >/dev/null 2>&1"
> 'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - - 0 exec 
> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> 'Cannot add certificate to db.' - - - 0 exec 
> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> PS. This is from logsurfer.conf.
>
> On 03.03.2017 3:34, Yuri Voinov wrote:
> This error is usually preceded by another error in cache.log associated with 
> the certificates.
> I will point you in the right direction; then go on yourself.
> This software will be useful for solving it:
> http://www.crypt.gen.nz/logsurfer/
> HTH, Yuri
>
> On 03.03.2017 2:47, --Ahmad-- wrote:
> hey folks,
> I have a problem with squid: it crashes after I enabled HTTPS!
> cache log error => FATAL: The ssl_crtd helpers are crashing too rapidly, need 
> help!
>
> I googled many topics and relevant pages and couldn't find a clear solution.
>
> The quick fix I made was to remove the certs directory:
> rm -rfv /var/lib/ssl_db/
>
>
> then reinitialized the DB using the commands below:
> /lib/squid/ssl_crtd -c -s /var/lib/ssl_db
> chown -R squid.squid /var/lib/ssl_db
> chown -R squid.squid /var/lib/ssl_db
>
>
> then restarted squid.
>
>
> But this is not a solution, because squid crashes again after a certain time 
> and I don't know why!
> my version is 3.5.2
>
>
> here is squid.conf :
>  /etc/squid/squid.conf
> visible_hostname pcloud
> acl ip1 myip 10.1.0.1
> acl ip2 myip 192.168.10.210
> tcp_outgoing_address 192.168.10.210 ip1
> tcp_outgoing_address 192.168.10.210 ip2
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
> machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> ac

Re: [squid-users] squid-users Digest, Vol 31, Issue 9

2017-03-03 Thread Yuri Voinov


On 03.03.2017 10:24, Adrian Miller wrote:
> Are you creating the database as root or as the squid user? Try as the
> squid user.
It will not work when created as root; you will get "permission denied".
crtd runs as squid, not as root.
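A minimal sketch of that rebuild done as the unprivileged user. The DB and helper paths are assumptions taken from elsewhere in the thread, and the destructive commands are only echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: print the rebuild commands instead of executing them.
# DB and CRTD locations are assumptions; adjust them to your build.
DB=/var/lib/ssl_db
CRTD=/lib/squid/ssl_crtd
echo "rm -rf $DB"
echo "sudo -u squid $CRTD -c -s $DB"
```

Running the initialization under `sudo -u squid` avoids the root-owned files that cause the permission errors described above.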
>
> On 3 March 2017 at 08:46, <squid-users-requ...@lists.squid-cache.org
> <mailto:squid-users-requ...@lists.squid-cache.org>> wrote:
>
> Send squid-users mailing list submissions to
> squid-users@lists.squid-cache.org
> <mailto:squid-users@lists.squid-cache.org>
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.squid-cache.org/listinfo/squid-users
> <http://lists.squid-cache.org/listinfo/squid-users>
> or, via email, send a message with subject or body 'help' to
> squid-users-requ...@lists.squid-cache.org
> <mailto:squid-users-requ...@lists.squid-cache.org>
>
> You can reach the person managing the list at
> squid-users-ow...@lists.squid-cache.org
> <mailto:squid-users-ow...@lists.squid-cache.org>
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of squid-users digest..."
>
>
> Today's Topics:
>
>1. Re: squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are
>   crashing too rapidly, need help! (Yuri Voinov)
>
>
> ------
>
> Message: 1
> Date: Fri, 3 Mar 2017 03:46:10 +0600
> From: Yuri Voinov <yvoi...@gmail.com <mailto:yvoi...@gmail.com>>
> To: squid-users@lists.squid-cache.org
> <mailto:squid-users@lists.squid-cache.org>
> Subject: Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd
> helpers are crashing too rapidly, need help!
> Message-ID: <714528e5-a6d5-e72a-2bc7-9950a8eec...@gmail.com
> <mailto:714528e5-a6d5-e72a-2bc7-9950a8eec...@gmail.com>>
> Content-Type: text/plain; charset="utf-8"
>
> This problem, in principle, is common to all versions of ssl-bumped
> Squid from version 3.4 to 5.0 inclusive, and occurs when the stored
> certificate is damaged for any reason. The only workaround that I
> could find is to monitor cache.log and automatically reinitialize
> the certificate database with a squid restart.
>
> In some installations, this problem does not occur for years. In
> others, it happens almost daily. I have no desire to find out why
> exactly it happens. For me it was easier to build a watchdog that
> keeps an eye on this.
>
> On 03.03.2017 3:40, Yuri Voinov wrote:
> >
> > One hint finally:
> >
> > '([^ ]*) helper database ([^ ]*) failed: The SSL certificate
> database
> > ([^ ]*) is corrupted. Please rebuild' - - - 0 exec
> > "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> > 'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - -
> > 0 exec "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> > 'Cannot add certificate to db.' - - - 0 exec
> > "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> >
> > PS. This is from logsurfer.conf.
> >
> >
> > On 03.03.2017 3:34, Yuri Voinov wrote:
> >>
> >> This error is usually preceded by another error in cache.log
> >> associated with the certificates.
> >>
> >> I will point you in the right direction; then go on yourself.
> >>
> >> This software will be useful for solving it:
> >>
> >> http://www.crypt.gen.nz/logsurfer/
> <http://www.crypt.gen.nz/logsurfer/>
> >>
> >> HTH, Yuri
> >>
> >>
> >> On 03.03.2017 2:47, --Ahmad-- wrote:
> >>> hey folks,
> >>> I have a problem with squid: it crashes after I enabled HTTPS!
> >>> cache log error => FATAL: The ssl_crtd helpers are crashing too
> >>> rapidly, need help!
> >>>
> >>> I googled many topics and relevant pages and couldn't find a
> >>> clear solution.
> >>>
> >>> The quick fix I made was to remove the certs directory:
> >>> *rm -rfv /var/lib/ssl_db/*
> >>> *
> >>> *
> >>> *then reinitialized the DB using the commands below:*
> >>> /lib/squid/ssl_crtd -c -s /var/lib/ssl_db
> >>> chown -R squi

Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2017-03-02 Thread Yuri Voinov
This problem, in principle, is common to all versions of ssl-bumped
Squid from version 3.4 to 5.0 inclusive, and occurs when the stored
certificate is damaged for any reason. The only workaround that I
could find is to monitor cache.log and automatically reinitialize the
certificate database with a squid restart.

In some installations, this problem does not occur for years. In
others, it happens almost daily. I have no desire to find out why
exactly it happens. For me it was easier to build a watchdog that
keeps an eye on this.
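The watchdog idea can be sketched with a plain grep over cache.log lines; the patterns below are the same ones used in the logsurfer rules quoted further down, and the rebuild action is only recorded rather than run (a real watchdog would invoke the rebuild script, e.g. crtd_create.sh -r, at that point):

```shell
#!/bin/sh
# Match the cache.log lines that precede an ssl_crtd crash.
# On a hit, a real watchdog would run the DB rebuild script here.
line='FATAL: The ssl_crtd helpers are crashing too rapidly, need help!'
if printf '%s\n' "$line" |
   grep -Eq 'crashing too rapidly|certificate database .* is corrupted|Cannot add certificate to db'
then
  action=rebuild
else
  action=ok
fi
echo "$action"
```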

On 03.03.2017 3:40, Yuri Voinov wrote:
>
> One hint finally:
>
> '([^ ]*) helper database ([^ ]*) failed: The SSL certificate database
> ([^ ]*) is corrupted. Please rebuild' - - - 0 exec
> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> 'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - -
> 0 exec "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
> 'Cannot add certificate to db.' - - - 0 exec
> "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
>
> PS. This is from logsurfer.conf.
>
>
> On 03.03.2017 3:34, Yuri Voinov wrote:
>>
>> This error is usually preceded by another error in cache.log
>> associated with the certificates.
>>
>> I will point you in the right direction; then go on yourself.
>>
>> This software will be useful for solving it:
>>
>> http://www.crypt.gen.nz/logsurfer/
>>
>> HTH, Yuri
>>
>>
>> On 03.03.2017 2:47, --Ahmad-- wrote:
>>> hey folks,
>>> I have a problem with squid: it crashes after I enabled HTTPS!
>>> cache log error => FATAL: The ssl_crtd helpers are crashing too
>>> rapidly, need help!
>>>
>>> I googled many topics and relevant pages and couldn't find a
>>> clear solution.
>>>
>>> The quick fix I made was to remove the certs directory:
>>> *rm -rfv /var/lib/ssl_db/*
>>> *
>>> *
>>> *then reinitialized the DB using the commands below:*
>>> /lib/squid/ssl_crtd -c -s /var/lib/ssl_db
>>> chown -R squid.squid /var/lib/ssl_db
>>> chown -R squid.squid /var/lib/ssl_db
>>>
>>> then restarted squid.
>>>
>>> But this is not a solution, because squid crashes again after a
>>> certain time and I don't know why!
>>> my version is 3.5.2
>>>
>>> here is squid.conf :
>>>  /etc/squid/squid.conf
>>> visible_hostname pcloud
>>> acl ip1 myip 10.1.0.1
>>> acl ip2 myip 192.168.10.210
>>> tcp_outgoing_address 192.168.10.210 ip1
>>> tcp_outgoing_address 192.168.10.210 ip2
>>> #
>>> # Recommended minimum configuration:
>>> #
>>>
>>> # Example rule allowing access from your local networks.
>>> # Adapt to list your (internal) IP networks from where browsing
>>> # should be allowed
>>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>>> acl localnet src fc00::/7   # RFC 4193 local private network range
>>> acl localnet src fe80::/10  # RFC 4291 link-local (directly
>>> plugged) machines
>>>
>>> acl SSL_ports port 443
>>> acl Safe_ports port 80  # http
>>> acl Safe_ports port 21  # ftp
>>> acl Safe_ports port 443 # https
>>> acl Safe_ports port 70  # gopher
>>> acl Safe_ports port 210 # wais
>>> acl Safe_ports port 1025-65535  # unregistered ports
>>> acl Safe_ports port 280 # http-mgmt
>>> acl Safe_ports port 488 # gss-http
>>> acl Safe_ports port 591 # filemaker
>>> acl Safe_ports port 777 # multiling http
>>> acl CONNECT method CONNECT
>>>
>>> #
>>> # Recommended minimum Access Permission configuration:
>>> #
>>> # Deny requests to certain unsafe ports
>>> http_access deny !Safe_ports
>>>
>>> # Deny CONNECT to other than secure SSL ports
>>> http_access deny CONNECT !SSL_ports
>>> http_access allow  CONNECT 
>>> # Only allow cachemgr access from localhost
>>> http_access allow localhost manager
>>> http_access deny manager
>>>
>>> # We strongly recommend the following be uncommented to protect innocent

>>> # web applications running on the proxy server who think the only
>>> # one who can access services on "localhost" is a lo

Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2017-03-02 Thread Yuri Voinov
One hint finally:

'([^ ]*) helper database ([^ ]*) failed: The SSL certificate database
([^ ]*) is corrupted. Please rebuild' - - - 0 exec
"/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - - 0
exec "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
'Cannot add certificate to db.' - - - 0 exec
"/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"

PS. This is from logsurfer.conf.
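For readers unfamiliar with logsurfer's rule format, here is one of the rules above annotated. If I read the logsurfer documentation correctly, each rule consists of six fields followed by an action; the crtd_create.sh rebuild script itself is site-specific and not shown in this thread:

```
# logsurfer rule fields, in order:
#   match_regex  not_match  stop_regex  not_stop  timeout  action
# '-' marks an unused field; a timeout of 0 means the rule never expires.
'FATAL: ([^ ]*) helpers are crashing too rapidly, need help!' - - - 0 exec "/usr/local/bin/crtd_create.sh -r >/dev/null 2>&1"
```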


On 03.03.2017 3:34, Yuri Voinov wrote:
>
> This error is usually preceded by another error in cache.log
> associated with the certificates.
>
> I will point you in the right direction; then go on yourself.
>
> This software will be useful for solving it:
>
> http://www.crypt.gen.nz/logsurfer/
>
> HTH, Yuri
>
>
> On 03.03.2017 2:47, --Ahmad-- wrote:
>> hey folks,
>> I have a problem with squid: it crashes after I enabled HTTPS!
>> cache log error => FATAL: The ssl_crtd helpers are crashing too
>> rapidly, need help!
>>
>> I googled many topics and relevant pages and couldn't find a
>> clear solution.
>>
>> The quick fix I made was to remove the certs directory:
>> *rm -rfv /var/lib/ssl_db/*
>> *
>> *
>> *then reinitialized the DB using the commands below:*
>> /lib/squid/ssl_crtd -c -s /var/lib/ssl_db
>> chown -R squid.squid /var/lib/ssl_db
>> chown -R squid.squid /var/lib/ssl_db
>>
>> then restarted squid.
>>
>> But this is not a solution, because squid crashes again after a
>> certain time and I don't know why!
>> my version is 3.5.2
>>
>> here is squid.conf :
>>  /etc/squid/squid.conf
>> visible_hostname pcloud
>> acl ip1 myip 10.1.0.1
>> acl ip2 myip 192.168.10.210
>> tcp_outgoing_address 192.168.10.210 ip1
>> tcp_outgoing_address 192.168.10.210 ip2
>> #
>> # Recommended minimum configuration:
>> #
>>
>> # Example rule allowing access from your local networks.
>> # Adapt to list your (internal) IP networks from where browsing
>> # should be allowed
>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>> acl localnet src fc00::/7   # RFC 4193 local private network range
>> acl localnet src fe80::/10  # RFC 4291 link-local (directly
>> plugged) machines
>>
>> acl SSL_ports port 443
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70  # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>>
>> #
>> # Recommended minimum Access Permission configuration:
>> #
>> # Deny requests to certain unsafe ports
>> http_access deny !Safe_ports
>>
>> # Deny CONNECT to other than secure SSL ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow  CONNECT 
>> # Only allow cachemgr access from localhost
>> http_access allow localhost manager
>> http_access deny manager
>>
>> # We strongly recommend the following be uncommented to protect innocent
>> # web applications running on the proxy server who think the only
>> # one who can access services on "localhost" is a local user
>> #http_access deny to_localhost
>>
>> #
>> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
>> #
>>
>> # Example rule allowing access from your local networks.
>> # Adapt localnet in the ACL section to list your (internal) IP networks
>> # from where browsing should be allowed
>> http_access allow localnet
>> http_access allow localhost
>>
>> # And finally deny all other access to this proxy
>> http_access deny all
>>
>> # Squid normally listens to port 3128
>> http_port 3128
>>
>> # Uncomment and adjust the following to add a disk cache directory.
>> #cache_dir ufs /var/cache/squid 100 16 256
>>
>> # Leave coredumps in the first cache dir
>> #coredump_dir /var/cache/squid
>>
>> #
>> # Add any of your own refresh_pattern entries above these.
>> #
>> #
>>
>> http_port 3126
>> #http_port 3128
>> #

Re: [squid-users] squid 3.5.2==> HTTPS FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

2017-03-02 Thread Yuri Voinov
This error is usually preceded by another error in cache.log associated
with the certificates.

I will point you in the right direction; then go on yourself.

This software will be useful for solving it:

http://www.crypt.gen.nz/logsurfer/

HTH, Yuri


On 03.03.2017 2:47, --Ahmad-- wrote:
> hey folks,
> I have a problem with squid: it crashes after I enabled HTTPS!
> cache log error => FATAL: The ssl_crtd helpers are crashing too
> rapidly, need help!
>
> I googled many topics and relevant pages and couldn't find a
> clear solution.
>
> The quick fix I made was to remove the certs directory:
> *rm -rfv /var/lib/ssl_db/*
> *
> *
> *then reinitialized the DB using the commands below:*
> /lib/squid/ssl_crtd -c -s /var/lib/ssl_db
> chown -R squid.squid /var/lib/ssl_db
> chown -R squid.squid /var/lib/ssl_db
>
> then restarted squid.
>
> But this is not a solution, because squid crashes again after a
> certain time and I don't know why!
> my version is 3.5.2
>
> here is squid.conf :
>  /etc/squid/squid.conf
> visible_hostname pcloud
> acl ip1 myip 10.1.0.1
> acl ip2 myip 192.168.10.210
> tcp_outgoing_address 192.168.10.210 ip1
> tcp_outgoing_address 192.168.10.210 ip2
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10  # RFC 4291 link-local (directly
> plugged) machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> http_access allow  CONNECT 
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow localnet
> http_access allow localhost
>
> # And finally deny all other access to this proxy
> http_access deny all
>
> # Squid normally listens to port 3128
> http_port 3128
>
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/cache/squid 100 16 256
>
> # Leave coredumps in the first cache dir
> #coredump_dir /var/cache/squid
>
> #
> # Add any of your own refresh_pattern entries above these.
> #
> #
>
> http_port 3126
> #http_port 3128
> ###
> #cache_swap_low 90
> #cache_swap_high 95
> 
> cache_effective_user squid
> cache_effective_group squid
> memory_replacement_policy lru
> cache_replacement_policy heap LFUDA
> 
> maximum_object_size 1 MB
> #cache_mem 5000 MB
> maximum_object_size_in_memory 10 MB
> #
> logfile_rotate 2
> max_filedescriptors 131072
> ###
> 
> cache_dir aufs /var/cache/squid 60 64 128
> ###
> https_port 3129 intercept ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB
> cert=/usr/local/squid/ssl_cert/myca.pem
> key=/usr/local/squid/ssl_cert/myca.pem
> ssl_bump server-first all
> sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
> sslcrtd_children 1000 startup=1 idle=1
> ###
> minimum_object_size 0 bytes
> #refresh patterns for caching static files
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i .(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
> override-expire ignore-no-cache ignore-no-store ignore-private
> refresh_pattern -i .(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200
> 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
> refresh_pattern -i
> 

Re: [squid-users] [bug 4674] squid 4.0.18 delay_parameters for class 3 assertion failed

2017-02-27 Thread Yuri Voinov


On 28.02.2017 1:39, Vitaly Lavrov wrote:
> [bug 4674] Regression in squid 4.0.18 (4.0.17 does not have this error)
>
> OS: Slackware linux 14.2 / gcc 4.8.2
It may be the ancient compiler; 4.8.2 is not fully C++11-compliant, AFAIK.
Try at least 4.9.x, or 5.4.
>
> Simple config:
>
> delay_pools 1
> delay_class 1 3
> delay_parameters 1 64000/64000 32000/32000 3000/3000
>
> squid -k parse
> 
> 2017/02/20 12:27:48| Processing: delay_pools 1
> 2017/02/20 12:27:48| Processing: delay_class 1 3
> 2017/02/20 12:27:48| assertion failed: CompositePoolNode.h:27: "byteCount == 
> sizeof(CompositePoolNode)"
> Aborted
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-avira-update-cache

2017-02-17 Thread Yuri Voinov
root @ khorne /patch # wget -S
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00050.vdf.lz
--2017-02-17 23:51:22-- 
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00050.vdf.lz
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: Apache
  ETag: "934c1d64b13126c7daada1daf0fe3845:1487280555"
  Last-Modified: Thu, 16 Feb 2017 21:26:06 GMT
  Accept-Ranges: bytes
  Content-Length: 2023
  Content-Type: text/plain
  Cache-Control: max-age=45
  Expires: Fri, 17 Feb 2017 17:52:13 GMT
  Date: Fri, 17 Feb 2017 17:51:28 GMT
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Length: 2023 (2.0K) [text/plain]
Saving to: 'xbv00050.vdf.lz'

xbv00050.vdf.lz 100%[===>]   1.98K  --.-KB/sin
0s 

2017-02-17 23:51:28 (176 MB/s) - 'xbv00050.vdf.lz' saved [2023/2023]

root @ khorne /patch # wget -S
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00050.vdf.lz
--2017-02-17 23:51:40-- 
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00050.vdf.lz
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: Apache
  ETag: "934c1d64b13126c7daada1daf0fe3845:1487280555"
  Last-Modified: Thu, 16 Feb 2017 21:26:06 GMT
  Accept-Ranges: bytes
  Content-Length: 2023
  Content-Type: text/plain
  Cache-Control: max-age=45
  X-Origin-Date: Fri, 17 Feb 2017 17:51:28 GMT
  Date: Fri, 17 Feb 2017 17:51:40 GMT
  X-Origin-Expires: Fri, 17 Feb 2017 17:52:13 GMT
  Expires: Fri, 17 Feb 2017 17:52:25 GMT
  X-Cache-Age: 12
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 2023 (2.0K) [text/plain]
Saving to: 'xbv00050.vdf.lz.1'

xbv00050.vdf.lz.1   100%[===>]   1.98K  --.-KB/sin
0s 

2017-02-17 23:51:40 (233 MB/s) - 'xbv00050.vdf.lz.1' saved [2023/2023]

root @ khorne /patch # wget -S
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00044.vdf.lz
--2017-02-17 23:52:22-- 
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00044.vdf.lz
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: Apache
  ETag: "953d632488104f11aa7252533ce6389b:1487280555"
  Last-Modified: Thu, 16 Feb 2017 21:26:06 GMT
  Accept-Ranges: bytes
  Content-Length: 13443
  Content-Type: text/plain
  Cache-Control: max-age=162
  Expires: Fri, 17 Feb 2017 17:55:04 GMT
  Date: Fri, 17 Feb 2017 17:52:22 GMT
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Length: 13443 (13K) [text/plain]
Saving to: 'xbv00044.vdf.lz'

xbv00044.vdf.lz 100%[===>]  13.13K  --.-KB/sin
0.003s 

2017-02-17 23:52:22 (4.20 MB/s) - 'xbv00044.vdf.lz' saved [13443/13443]

root @ khorne /patch # wget -S
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00044.vdf.lz
--2017-02-17 23:52:23-- 
http://personal.avira-update.com/update/x_vdf_sigver/7.12.155.64_8.12.155.64/xbv00044.vdf.lz
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: Apache
  ETag: "953d632488104f11aa7252533ce6389b:1487280555"
  Last-Modified: Thu, 16 Feb 2017 21:26:06 GMT
  Accept-Ranges: bytes
  Content-Length: 13443
  Content-Type: text/plain
  Cache-Control: max-age=162
  X-Origin-Date: Fri, 17 Feb 2017 17:52:22 GMT
  Date: Fri, 17 Feb 2017 17:52:23 GMT
  X-Origin-Expires: Fri, 17 Feb 2017 17:55:04 GMT
  Expires: Fri, 17 Feb 2017 17:55:05 GMT
  X-Cache-Age: 1
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 13443 (13K) [text/plain]
Saving to: 'xbv00044.vdf.lz.1'

xbv00044.vdf.lz.1   100%[===>]  13.13K  --.-KB/sin
0.03s  

2017-02-17 23:52:23 (441 KB/s) - 'xbv00044.vdf.lz.1' saved [13443/13443]

Well, let's fetch a big Avira file:


root @ khorne /patch # wget -S
http://personal.avira-update.com/update/n_vdf/vbase005.vdf.gz
--2017-02-17 23:54:04-- 
http://personal.avira-update.com/update/n_vdf/vbase005.vdf.gz
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: Apache
  ETag: "5af09f9e8a3d09809ce8005ddb55fa7d:1487352426"
  Last-Modified: Fri, 17 Feb 2017 17:25:36 GMT
  Accept-Ranges: bytes
  Content-Length: 11556885
  Content-Type: application/x-gzip
  Cache-Control: max-age=185
  Expires: Fri, 17 Feb 2017 17:57:09 GMT
  Date: Fri, 17 Feb 2017 17:54:04 GMT
  X-Cache: MISS from khorne
  X-Cache-Lookup: MISS from khorne:3128
  Connection: keep-alive
Length: 11556885 (11M) [application/x-gzip]
Saving to: 'vbase005.vdf.gz'

vbase005.vdf.gz 100%[===>]  11.02M   471KB/sin
24s

2017-02-17 23:54:28 (471 KB/s) - 'vbase005.vdf.gz' saved [11556885/11556885]
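The MISS-then-HIT pairs above show Squid simply honoring the origin's short Cache-Control: max-age. To keep update files longer than the origin allows, a refresh_pattern with overrides is needed. A sketch, with assumed values (note the overrides deliberately violate the origin's caching policy):

```
# squid.conf sketch (assumed values) for Avira update files
acl aviraupdate dstdomain .avira-update.com
range_offset_limit -1 aviraupdate
# escape the dot and match the extensions seen above (.vdf.lz, .vdf.gz)
refresh_pattern -i avira-update\.com/.*\.(lz|gz|vdf)$ 4320 80% 43200 override-expire reload-into-ims
```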

root @ 

Re: [squid-users] squid-avira-update-cache

2017-02-17 Thread Yuri Voinov
Any logs?


On 17.02.2017 17:43, splice...@gmail.com wrote:
> Hi all, I'm trying to cache "avira updates" with squid, but no luck...
>
> my conf:
> acl aviraupdate dstdomain .avira-update.com
> range_offset_limit -1 aviraupdate
> refresh_pattern -i avira-update.com/.*\.* 4320 80% 43200 reload-into-ims
>
> any help ? 10x!
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] The header: HTTP_VIA is present with the value:

2017-02-13 Thread Yuri Voinov
via off
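`via off` removes Squid's own Via header. A slightly fuller anonymizing sketch; the extra directives are assumptions to verify against your Squid version:

```
# squid.conf sketch: reduce proxy fingerprints
via off                  # don't append our own Via header
forwarded_for delete     # strip X-Forwarded-For entirely
# also drop Via/X-Forwarded-For headers arriving from clients
request_header_access Via deny all
request_header_access X-Forwarded-For deny all
```

Note that the header reported below names an upstream carrier proxy (wnsnet.attws.com); if the Via header is injected by that hop rather than by this Squid, no squid.conf change will remove it.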


On 14.02.2017 0:00, --Ahmad-- wrote:
> hi folks,
> I'm checking my proxy on 
>
> whatismyproxy.com 
>
> and it says :
>
> The header: HTTP_VIA is present with the value:HTTP/1.1
> vnnnz01msp2tser1.wnsnet.attws.com
> .
>
> Is there any way to remove that proxy detection?
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] High utilization of CPU squid-3.5.23, squid-3.5.24

2017-02-01 Thread Yuri Voinov
Yes, extended diagnostics are required, including at the system
level.

BTW, it can also be network I/O. And possibly even slow DNS.
You have to search.
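When comparing such samples over time, it helps to script the triage. A small sketch that picks the hottest symbol out of `perf report --stdio`-style lines (sample data inlined from the report quoted below):

```shell
#!/bin/sh
# Rank perf-report-style lines by overhead percentage and
# print the symbol with the highest share of samples.
lines='49.15% squid squid [.] MemObject::dump
25.11% squid squid [.] Mem_hdr::freeDataUpto
20.03% squid squid [.] Mem_hdr::copy'
hottest=$(printf '%s\n' "$lines" | sort -rn | awk 'NR==1 {print $NF}')
echo "$hottest"
```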


On 02.02.2017 3:34, Eliezer Croitoru wrote:
> I believe that the squid manager info page should give some clue about the 
> number of concurrent requests.
>
> If it's above some number (300-400 and up) per second, then removing the 
> cache_dir from the server for a window of a day will answer whether it's a DISK 
> IO bottleneck or something else.
>
> All The Bests,
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Vitaly Lavrov
> Sent: Wednesday, February 1, 2017 10:56 PM
> To: squid-us...@squid-cache.org
> Subject: [squid-users] High utilization of CPU squid-3.5.23,squid-3.5.24
>
> Periodically squid begins to linearly increase the use of the CPU.
> Sometimes this process reaches 100%. At random moment of time the CPU usage 
> is reduced to 5-15%, and in the presence of client requests can again start 
> linearly increasing use of CPU.
>
> In the protocols are no error messages.
>
> CPU consumption does not correlate with the number of requests and traffic.
>
> The increase CPU consumption from 0 to 60% occurs in about 4-5 hours, and to 
> 100% for 6-8 hours.
>
> A typical graph of CPU usage can be viewed on 
> http://devel.aanet.ru/tmp/squid-cpu-x.png
>
> With the "perf record -p` pgrep -f squid-1` - sleep 30" I have received the 
> following information:
>
> At 100% CPU load most of the time took 3 calls
>
>   49.15% squid squid [.] MemObject::dump
>   25.11% squid squid [.] Mem_hdr::freeDataUpto
>   20.03% squid squid [.] Mem_hdr::copy
>
> When loading CPU 30-60% most of the time took 3 calls
>
>   37.26% squid squid [.] Mem_node::dataRange
>   22.61% squid squid [.] Mem_hdr::NodeCompare
>   17.31% squid squid [.] Mem_hdr::freeDataUpto
>
> What is it ? Is it possible to somehow fix it?
>
> System: slackware64 14.2
>
> sslbump not used. http only.
>
> Part of config:
>
> memory_pools off
> memory_pools_limit 512 MB
> cache_mem 768 MB
> maximum_object_size_in_memory 64 KB
> cache_dir ufs   /cache/sq_c1 16312 16 256
> cache_dir ufs   /cache/sq_c2 16312 16 256
> cache_dir ufs   /cache/sq_c3 16312 16 256
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] High utilization of CPU squid-3.5.23, squid-3.5.24

2017-02-01 Thread Yuri Voinov
It looks like an I/O bottleneck at first glance.

On 02.02.2017 2:55, Vitaly Lavrov wrote:
> Periodically squid's CPU usage begins to increase linearly. Sometimes it
> reaches 100%. At a random moment the CPU usage drops back to 5-15%, and
> while client requests are present it can start increasing linearly again.
>
> There are no error messages in the logs.
>
> CPU consumption does not correlate with the number of requests or with traffic.
>
> CPU consumption rises from 0 to 60% in about 4-5 hours, and to 100% in
> 6-8 hours.
>
> A typical graph of CPU usage can be viewed at 
> http://devel.aanet.ru/tmp/squid-cpu-x.png
>
> With "perf record -p $(pgrep -f squid-1) -- sleep 30" I got the 
> following information:
>
> At 100% CPU load, most of the time was spent in these 3 functions:
>
>   49.15% squid squid [.] MemObject::dump
>   25.11% squid squid [.] mem_hdr::freeDataUpto
>   20.03% squid squid [.] mem_hdr::copy
>
> At 30-60% CPU load, most of the time was spent in these 3 functions:
>
>   37.26% squid squid [.] mem_node::dataRange
>   22.61% squid squid [.] mem_hdr::NodeCompare
>   17.31% squid squid [.] mem_hdr::freeDataUpto
>
> What is this? Is it possible to fix it somehow?
>
> System: slackware64 14.2
>
> ssl_bump is not used; HTTP only.
>
> Part of config:
>
> memory_pools off
> memory_pools_limit 512 MB
> cache_mem 768 MB
> maximum_object_size_in_memory 64 KB
> cache_dir ufs   /cache/sq_c1 16312 16 256
> cache_dir ufs   /cache/sq_c2 16312 16 256
> cache_dir ufs   /cache/sq_c3 16312 16 256
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


0x613DEC46.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Buy Certificates for Squid 'man in the middle'

2017-02-01 Thread Yuri Voinov
In three words:

Forget about it.

No one in the world will permit you to run a man-in-the-middle attack
hidden from users.

If a CA ever issued such a certificate, it would immediately be added to
the untrusted lists. And you could face problems up to a long prison term
for violating users' privacy. In other words, users must be aware that
there is a proxy intercepting HTTPS in front of them. All other tricks
are illegal, or at the very least contrary to ethics.

Forget about it.

I'm serious.
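For reference, the CA-generation workflow quoted below can be condensed as follows; the subject and file names are illustrative, and the key is written to its own file:

```shell
# Generate a self-signed signing CA for ssl_bump (illustrative names).
openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
  -subj "/CN=Example Local Proxy CA" \
  -keyout myCA.key -out myCA.pem

# Export a DER copy for importing into the users' browsers:
openssl x509 -in myCA.pem -outform DER -out myCA.der

# Inspect the result:
openssl x509 -in myCA.pem -noout -subject -dates
```

Squid 3.5 can then be pointed at the pair with the `cert=` and `key=` options of `http_port`, so the private key never needs to be concatenated into the certificate file.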

02.02.2017 3:10, Yuri Voinov wrote:
>
>
>
> 02.02.2017 2:58, angelv wrote:
>> Hi,
>>
>> I need your advice.
>>
>> I have a transparent proxy running with the self-generated
>> certificate 'myCA.pem'; as it is not signed by a trusted CA, I
>> have to import the 'myCA.der' certificate into all web browsers...
>>
>> I want to know where I can buy a valid certificate that works in Squid.
> Nowhere. Due to CA's CPS.
>>
>> PS:
>> The proxy is working great
>>
>>
>> --
>> Important information for clarity (FreeBSD, squid-3.5.23 and PF):
>>
>> Create self-signed certificate for Squid server
>>
>> # openssl req -new -newkey rsa:2048 -sha256 -days 36500 -nodes -x509
>> -extensions v3_ca -keyout /usr/local/etc/squid/ssl_cert/myCA.pem -out
>> /usr/local/etc/squid/ssl_cert/myCA.pem -config
>> /usr/local/etc/squid/ssl_cert/openssl.cnf
>>
>> # openssl dhparam -outform PEM -out
>> /usr/local/etc/squid/ssl_cert/dhparam.pem 2048
>>
>> Create a DER-encoded certificate to import into users' browsers
>>
>> # openssl x509 -in /usr/local/etc/squid/ssl_cert/myCA.pem -outform
>> DER -out /usr/local/etc/squid/ssl_cert/myCA.der
>>
>>
>> # edit /usr/local/etc/squid/squid.conf
>> ...
>> # Squid normally listens to port 3128
>> http_port  3128
>>
>> # Intercept HTTPS CONNECT messages with SSL-Bump
>> #
>> http_port  3129 ssl-bump intercept \
>> cert=/usr/local/etc/squid/ssl_cert/myCA.pem \
>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
>> dhparams=/usr/local/etc/squid/ssl_cert/dhparam.pem
>> #
>> https_port 3130 ssl-bump intercept \
>> cert=/usr/local/etc/squid/ssl_cert/myCA.pem \
>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
>> dhparams=/usr/local/etc/squid/ssl_cert/dhparam.pem
>> #
>> sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s
>> /usr/local/etc/squid/ssl_db -M 4MB
>> #
>> acl step1 at_step SslBump1
>> #
>> ssl_bump peek step1
>> ssl_bump stare all
>> ssl_bump bump all
>> always_direct allow all
>> #
>> sslproxy_cert_error allow all
>> sslproxy_flags DONT_VERIFY_PEER
>> ...
>>
>> PF redirect the traffic to the Squid
>>
>> # edit /etc/pf.conf
>> ...
>> # Intercept HTTPS CONNECT messages with SSL-Bump
>> rdr pass on $int_if inet  proto tcp from any to port https \
>> -> 127.0.0.1 port 3130
>> rdr pass on $int_if inet6 proto tcp from any to port https \
>> -> ::1 port 3130
>> ...
>> --
>> -- 
>> Ángel Villa G.
>> US +1 (786) 233-9240 | CO +57 (300) 283-6546
>> ange...@gmail.com
>> https://google.com/+AngelVillaG
>> https://angelcontents.blogspot.com
>>
>> "We are all atheists about most of the gods that societies have ever
>> believed in. Some of us just go one god further" - Richard Dawkins
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
> -- 
> Bugs to the Future

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Buy Certificates for Squid 'man in the middle'

2017-02-01 Thread Yuri Voinov


02.02.2017 2:58, angelv wrote:
> Hi,
>
> I need your advice.
>
> I have a transparent proxy running with the self-generated
> certificate 'myCA.pem'; as it is not signed by a trusted CA, I
> have to import the 'myCA.der' certificate into all web browsers...
>
> I want to know where I can buy a valid certificate that works in Squid.
Nowhere. Due to CA's CPS.
>
> PS:
> The proxy is working great
>
>
> --
> Important information for clarity (FreeBSD, squid-3.5.23 and PF):
>
> Create self-signed certificate for Squid server
>
> # openssl req -new -newkey rsa:2048 -sha256 -days 36500 -nodes -x509
> -extensions v3_ca -keyout /usr/local/etc/squid/ssl_cert/myCA.pem -out
> /usr/local/etc/squid/ssl_cert/myCA.pem -config
> /usr/local/etc/squid/ssl_cert/openssl.cnf
>
> # openssl dhparam -outform PEM -out
> /usr/local/etc/squid/ssl_cert/dhparam.pem 2048
>
> Create a DER-encoded certificate to import into users' browsers
>
> # openssl x509 -in /usr/local/etc/squid/ssl_cert/myCA.pem -outform DER
> -out /usr/local/etc/squid/ssl_cert/myCA.der
>
>
> # edit /usr/local/etc/squid/squid.conf
> ...
> # Squid normally listens to port 3128
> http_port  3128
>
> # Intercept HTTPS CONNECT messages with SSL-Bump
> #
> http_port  3129 ssl-bump intercept \
> cert=/usr/local/etc/squid/ssl_cert/myCA.pem \
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
> dhparams=/usr/local/etc/squid/ssl_cert/dhparam.pem
> #
> https_port 3130 ssl-bump intercept \
> cert=/usr/local/etc/squid/ssl_cert/myCA.pem \
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
> dhparams=/usr/local/etc/squid/ssl_cert/dhparam.pem
> #
> sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s
> /usr/local/etc/squid/ssl_db -M 4MB
> #
> acl step1 at_step SslBump1
> #
> ssl_bump peek step1
> ssl_bump stare all
> ssl_bump bump all
> always_direct allow all
> #
> sslproxy_cert_error allow all
> sslproxy_flags DONT_VERIFY_PEER
> ...
>
> PF redirect the traffic to the Squid
>
> # edit /etc/pf.conf
> ...
> # Intercept HTTPS CONNECT messages with SSL-Bump
> rdr pass on $int_if inet  proto tcp from any to port https \
> -> 127.0.0.1 port 3130
> rdr pass on $int_if inet6 proto tcp from any to port https \
> -> ::1 port 3130
> ...
> --
> -- 
> Ángel Villa G.
> US +1 (786) 233-9240 | CO +57 (300) 283-6546
> ange...@gmail.com 
> https://google.com/+AngelVillaG
> https://angelcontents.blogspot.com
>
> "We are all atheists about most of the gods that societies have ever
> believed in. Some of us just go one god further" - Richard Dawkins
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid reverse proxy (accelerator) for MS Exchange OWA

2017-02-01 Thread Yuri Voinov
I'm sorry to interrupt, gentlemen, but doesn't Microsoft use
certificate pinning in OWA?
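The protocol-fallback behaviour discussed in the quoted thread below can also be checked by hand, one probe per TLS version; the host and port are the placeholders from the quoted examples:

```shell
# Probe which TLS versions a server will negotiate (placeholder address).
# "timeout 5" keeps an unresponsive server from hanging the loop.
host=10.215.144.21
port=443
for v in tls1_2 tls1_1 tls1; do
  if echo | timeout 5 openssl s_client -connect "$host:$port" -"$v" >/dev/null 2>&1; then
    echo "$v: handshake completed"
  else
    echo "$v: handshake failed or no response"
  fi
done
```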


01.02.2017 22:19, Amos Jeffries wrote:
> On 27/01/2017 9:31 p.m., Vieri wrote:
>>
>>
>>
>> - Original Message - From: Alex Rousskov
>> 
>>
 It's interesting to note that the following actually DOES give
 more information (unsupported protocol):
>>> * If the server sent nothing, then Curl gave you potentially
>>> incorrect information (i.e., Curl is just _guessing_ what went
>>> wrong).
>>
>> I never tried telling Squid to use TLS 1.1 ONLY so I never got to see
>> Squid's log when using that protocol. I'm supposing I would have seen
>> the same thing in Squid as I've seen it with CURL. So I'm sure Squid
>> would log useful information for the sys admin but... (see below).
>>
 Maybe if Squid gets an SSL negotiation error with no apparent
 reason then it might need to retry connecting by being more
 explicit, just like in my cURL and openssl binary examples
 above.
>>> Sorry, I do not know what "retry connecting by being more
>>> explicit" means. AFAICT, neither Curl nor s_client tried
>>> reconnecting in your examples. Also, an appropriate default for a
>>> command-line client is often a bad default for a proxy. It is
>>> complicated.
>>
>> Let me rephrase my point but please keep in mind that I have no idea
>> how Squid actually behaves.
> Let me pause you right there.  What you describe is essentially how the
> TLS protocol handshake is actually performed.
>
>> Simply put, when Squid tries to connect
>> for the first time, it will probably (I'm guessing here) try the most
>> secure protcol known today (ie. TSL 1.2), or let OpenSSL decide by
>> default which is probably the same.
> That is exactly what happens.
>
>> In my case, the server replies
>> nothing. That would be like running:
>>
>> # curl -k -v https://10.215.144.21
>> or
>> # openssl s_client -connect 10.215.144.21:443
>>
>> They give me the same information as Squid's log... almost nothing.
>>
>> So my point is, if that first connection fails and gives me nothing
>> for TLS 1.2 (or whatever the default is), two things can happen:
>> either the remote site is failing or it isn't supporting the
>> protocol. Why not "try again" but this time by being more specific?
> Several reasons.
>
> Reason #1 is that the TLS protocol is a security protocol for securing a
> single 'hop' (just one TCP connection). So ideally TLS details would not
> be remembered at all, it's a dangerous thing in security to remember
> details in the middleware.
>
>
> Reason #2 is that Squid has passed on the 'terminate' signal to the
> client (curl).
>
> As far as Squid is concerned, there is no "again" connection. There is a
> connection, which fails. The end.
>
> There is a connection. Which fails. The end.
>
> There is a connection. Which fails. The end.
>
>  ... and so on. These connections may be from the same client, or
> different ones. May be to same or different servers. They are
> independent of each other and only TCP at this point.
>
> The TLS setup/handshake parts never get far enough for there to be
> anything to remember. Except that TCP connection failed.
>
> Well, Squid does remember that and tries a different IP next time. Until
> it runs out of IPs, then it resets its bad-IP memory with a new DNS
> lookup and the cycle begins again.
>
>
> NP: if you are interested you can see the GOOD/BAD flags on IPs in the
> ipcache cache manager report (squidclient mgr:ipcache).
>
>
>> It would be like doing something like this:
>>
>> # openssl s_client -connect 10.215.144.21:443 || openssl s_client
>> -connect 10.215.144.21:443 -tls1_1 || openssl s_client -connect
>> 10.215.144.21:443 -tls1
>>
> Which brings us to reason #3; downgrade attacks.
>
> You may have heard of the POODLE attack. It is basically a middleware
> (like Squid) forcing the other endpoint(s) to re-try with lower TLS
> versions until a version is reached and cipher selected that the
> attacker can decrypt or steal the keys from.
>
> Squid (mostly) avoids the whole class of vulnerabilities by leaving the
> fallback decisions to the client whenever it can.
>
> Since the curl command only did the one request, that is all that
> happened. No retry.
>
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Antivirus for squid

2017-02-01 Thread Yuri Voinov
http://wiki.squid-cache.org/ConfigExamples/ContentAdaptation/C-ICAP

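To turn the "too slow" complaint discussed below into exact data, the transfer rate with and without the proxy can be compared from the command line; the URL and proxy address here are placeholders:

```shell
# Measure download throughput directly and via the proxy (placeholder
# URL and proxy address; --max-time bounds each attempt):
url=http://example.com/file.bin
curl -s --max-time 60 -o /dev/null \
     -w 'direct:    %{speed_download} bytes/s\n' "$url"
curl -s --max-time 60 -o /dev/null \
     -w 'via proxy: %{speed_download} bytes/s\n' \
     -x http://127.0.0.1:3128 "$url"
```

If the two rates differ greatly, the ICAP/clamav chain is the first suspect; if they are both slow, the bottleneck is elsewhere.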

01.02.2017 22:14, Eliezer Croitoru wrote:
> Hey Yuri,
>
> What wiki article?
>
> Thanks,
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Yuri Voinov
> Sent: Wednesday, February 1, 2017 5:52 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Antivirus for squid
>
> The Squid wiki article covers all the required points about performance 
> and tuning.
>
>
> 01.02.2017 21:41, erdosain9 wrote:
>> Hi, again.
>> Well, I installed squidclamav, c-icap, and clamav, and it's all working 
>> fine, but... downloading a file is too slow. Is there a way to speed 
>> this up?
> What do you mean "too slow"? Exact data, please. Subjective and relative 
> adjectives do not say anything of substance.
>
> I mean, e.g.: "Before I installed clamav, download speed was 1 terabit per 
> second for file http://bwah-bwah.com/bwahbwahbwah.tar.gz.
> After - only 10 megabits. That seems too slow."
>> Also, when the file is a virus, the "this is a virus bla 
>> bla" message comes
> That is a different procedure, which is not executed by squid itself.
>> fast... I mean the slow download is for all the other files that 
>> don't have a virus...
>>
>> *This is squid.conf*
>> # c-icap integration
>> icap_enable on
>> icap_send_client_ip on
>> icap_send_client_username on
>> icap_client_username_header X-Authenticated-User
>> icap_preview_enable on
>> icap_preview_size 1024
>> icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
>> adaptation_access service_req allow all
>> icap_service service_resp respmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
>> adaptation_access service_resp allow all
>> # end integration
>>
>>
>> *c-icap.conf*
>> PidFile /var/run/c-icap.pid
>> CommandsSocket /var/run/c-icap.ctl
>> StartServers 1
>> MaxServers 20
>> MaxRequestsPerChild  100
>> Port 1344
>> ServerAdmin yourname@yourdomain
>> TmpDir /tmp
>> MaxMemObject 131072
>> DebugLevel 0
>> ModulesDir /usr/local/c-icap/lib/c_icap/
>> ServicesDir /usr/local/c-icap/lib/c_icap/
>> LoadMagicFile /usr/local/etc/c-icap.magic
>>
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl PERMIT_REQUESTS type REQMOD RESPMOD
>> icap_access allow localhost PERMIT_REQUESTS
>> icap_access deny all
>>
>> ServerLog /var/log/c-icap/server.log
>> AccessLog /var/log/c-icap/access.log
>>
>> Service squidclamav squidclamav.so
>>
>>
>> *CLAMD.CONF*
>> LogFile /var/log/clamd.scan
>> PidFile /var/run/clamd.scan/clamd.pid
>> TemporaryDirectory /var/tmp
>> DatabaseDirectory /var/lib/clamav
>> LocalSocket /var/run/clamd.scan/clamd.sock
>> TCPSocket 3310
>> TCPAddr 127.0.0.1
>> User clamscan
>>
>>
>> *SQUIDCLAMAV.CONF*
>> maxsize 500
>> redirect http://squid.espaciomemoria.lan/cgi-bin/clwarn.cgi.en_EN
>> clamd_ip 127.0.0.1
>> clamd_port 3310
>> trust_cache 0
>> timeout 1
>> logredir 1
>> dnslookup 0
>> safebrowsing 0
>>
>> abortcontent ^video\/x-flv$
>> abortcontent ^video\/mp4$
>> # White list some sites
>>
>> Can somebody give me a hand with this?
>> Thanks to all.
> Telepathy is on vacation. To give you a hand would require root access 
> to your server to run performance diagnostics during the "slow downloads". But 
> you can always do this yourself. Pieces of configs are not enough for 
> diagnostics and, therefore, for tuning.
>>
>>
>> --
>> View this message in context: 
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/Antivirus-for-squid
>> -tp4681323p4681413.html Sent from the Squid - Users mailing list 
>> archive at Nabble.com.
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> --
> Bugs to the Future
>

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-02-01 Thread Yuri Voinov
You're welcome.

I do not understand why the hell you have latched onto me. I have my own
point of view on the problem. Tell your tales to the guy who started this
thread. I know the developers' position.

So let's stop this useless discussion. It is only wasted time.

01.02.2017 21:48, Amos Jeffries wrote:
> On 28/01/2017 1:35 a.m., Yuri wrote:
>> I just want to have a choice and an opportunity to say - "F*ck you, man,
>> I'm the System Administrator".
> Does that go down well in parties or something?
>
>> If you do not want to violate the RFC - remove the HTTP violations mode entirely.
>> If you remember, this mode is now enabled by default.
> That mode does not mean what you seem to think it means.
>
> It means that *some* *specific* things which are known not to cause much
> damage are allowed which violate HTTP _a little bit_ when it helps the
> traffic work better. Most things it does is enabling Squid to detect and
> talk with broken software that are themselves not quite following HTTP
> right.
>  For example, a client forgetting to %20 some whitespace inside a URL.
>
>> You do not have to teach me what I use. I am an administrator and wish to
>> be able to select my tools. And not be in a situation where the choice
>> is made for me.
>>
>
> Have you tried starting regular conversations with your friends and
> family with the words "F*k you, man, I'm the System Administrator" so
> they know that your way is always right no matter what. Then proceeding
> to say everything else in the conversation at the loudest volume your
> mouth can produce while injecting weird words randomly into each
> sentence? just because you were created with those abilities you might
> as well try using them. It definitely will make conversations short and
> efficient (hmm.. just like 100% caching makes HTTP 'quick').
>
>
> Anyhow, my point is all languages have rules and protocols of behaviour
> that have to be followed for the sentences/messages to be called
> "speaking" that language. If you don't follow those rules you are simply
> not speaking that language. You might be speaking some other language or
> just being a weirdo - either way you are not speaking that language.
>
> HTTP is as much a language as any spoken one. It is just for Internet
> software to 'talk' to each other. By not following its rules you are ...
> well ... not using HTTP.
>
> What you keep saying about how you/admin "must" be allowed to violate
> HTTP just because you are administrator and want to. That makes as much
> sense as being proud about shouting at everyone you talk to in real
> life. It's dumb, on a scale that demonstrates one is not worthy of the
> privilege of being a sysadmin and can lead to early retirement in a
> small padded cell.
>
>
 Antonio, have you even once seen me complain about the
 consequences of my own actions?
>>> You seem to continually complain that people are recommending not to
>>> try going
>>> against standards, or trying to defeat the anti-caching directives on
>>> websites
>>> you find.
>>>
>>> It's your choice to try doing that; people are saying "but if you do
>>> that, bad
>>> things will happen, or things will break, or it just won't work the
>>> way you
>>> want it to", and then you say "but I don't like having to follow the
>>> rules".
>>>
>>> That's what I meant about complaining about the consequences of your
>>> actions.
>> It is my right and my choice. Personally, I do not complain of the
>> consequences, having enough tools to solve any problem.
>>
> Hahahahaha "not complain about the consequences", ROFLMAO.
> Thanks dude, I needed a good laugh today.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] QA Pilots

2017-01-31 Thread Yuri Voinov


01.02.2017 3:49, Alex Rousskov пишет:
> Hello,
>
> The Squid Software Foundation plans to hire a part-time remote QA
> engineer to help us address systemic quality problems with Squid
> releases and development snapshots. This position will be funded by your
> donations to the Foundation. Thank you!
Do you mean the volunteers?
>
> Before a regular QA Engineer position is filled, we offer several paid
> pilot projects to select the most suitable candidate. These projects
> focus on building QA infrastructure to enforce existing (but
> underutilized) quality controls. A lot more details are available at
>
> http://wiki.squid-cache.org/QA/Pilots
>
> If you would like to apply, please see that page for the application
> procedure. If you know somebody who should apply, please point them to
> the above page. If you have constructive feedback regarding the planned
> QA improvements, please share it on this mailing list.
>
>
> Thank you,
>
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is it possible to modify cached object?

2017-01-31 Thread Yuri Voinov
Exactly, localhost system administrators can do what they want ;-)


01.02.2017 1:05, boruc wrote:
> Well, basically I'm working on virtual machine with nothing special installed
> on it so I don't have to worry about all of this. I wanted to give squid a
> try, look how it works, learn something new. Being here, reading all your
> answers and suggestions is a great experience for me :)
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Is-it-possible-to-modify-cached-object-tp4681073p4681398.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is it possible to modify cached object?

2017-01-31 Thread Yuri Voinov


01.02.2017 0:34, boruc wrote:
> Thank you for your answers Antony.
>
> On packages.ubuntu.com I searched for "squid3" and here's what I've found:
> 12.04LTS - 3.1.19
> 14.04LTS - 3.3.8
> 16.04LTS - 3.5.12
>
> For now the best option would be to upgrade Ubuntu to 16.04, but I cannot do
> it now. Also Amos has written earlier: "All the newer versions should come
> pre-packaged with eCAP support with no action needed on your part." I'd like
> to know if in squid 3.5.12 eCAP is already enabled, but I cannot find it by
> myself.
What an idea, dude! To upgrade one package, upgrade the whole OS! Wow!
You have unlimited free time and no responsibility for a whole
datacenter, right? :)
> I tried it on Ubuntu 14.04 with "sudo apt-get install squid3"; squid 3.3.8
> was installed (along with the package libecap2 (0.2.0-1ubuntu4)). The output
> of "squid3 -v" was:
>
> configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
> '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
> '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
> '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode'
> '--disable-dependency-tracking' '--disable-silent-rules'
> '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
> '--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
> '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
> '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
> '--enable-icap-client' '--enable-follow-x-forwarded-for'
> '--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
> '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
> '--enable-auth-ntlm=fake,smb_lm'
> '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
> '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
> '--enable-icmp' '--enable-zph-qos' *'--enable-ecap'* '--disable-translation'
> '--with-swapdir=/var/spool/squid3' '--with-logdir=/var/log/squid3'
> '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
> '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter'
> 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wall'
> 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
> 'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector
> --param=ssp-buffer-size=4 -Wformat -Werror=format-security'
>
> So actually this squid release is already configured with "--enable-ecap",
> and all I need to do is set "ecap_enable on" in the configuration file
> (and the other settings mentioned in the documentation) and it'll work fine?
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Is-it-possible-to-modify-cached-object-tp4681073p4681396.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Yuri Voinov


27.01.2017 19:35, Garri Djavadyan wrote:
> On Fri, 2017-01-27 at 17:58 +0600, Yuri wrote:
>> 27.01.2017 17:54, Garri Djavadyan wrote:
>>> On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
 --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
 Connecting to 127.0.0.1:3128... connected.
 Proxy request sent, awaiting response...
 HTTP/1.1 200 OK
 Cache-Control: no-cache, no-store
 Pragma: no-cache
 Content-Type: text/html
 Expires: -1
 Server: Microsoft-IIS/8.0
 CorrelationVector: BzssVwiBIUaXqyOh.1.1
 X-AspNet-Version: 4.0.30319
 X-Powered-By: ASP.NET
 Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept
 Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
 Access-Control-Allow-Credentials: true
 P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD
 TAI
 TELo
 OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
 X-Frame-Options: SAMEORIGIN
 Vary: Accept-Encoding
 Content-Encoding: gzip
 Date: Fri, 27 Jan 2017 09:29:56 GMT
 Content-Length: 13322
 Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
 expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
 Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
 expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
 Strict-Transport-Security: max-age=0; includeSubDomains
 X-CCC: NL
 X-CID: 2
 X-Cache: MISS from khorne
 X-Cache-Lookup: MISS from khorne:3128
 Connection: keep-alive
 Length: 13322 (13K) [text/html]
 Saving to: 'index.html'

 index.html          100%[===================>]  13.01K  --.-KB/s    in 0s

 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved
 [13322/13322]

 Can you explain me - for what static index.html has this:

 Cache-Control: no-cache, no-store
 Pragma: no-cache

 ?

 What can be broken to ignore CC in this page?
>>> Hi Yuri,
>>>
>>>
>>> Why do you think the page returned for URL
>>> [https://www.microsoft.com/ru-kz/] is static and not a dynamically
>>> generated one?
>> And for me, what's the difference? Does it change anything? Besides,
>> it is easy enough to look at the page and, strangely enough, even
>> open its source code. And? What do you see there?
> I see the official Microsoft home page for the KZ region. The page is
> full of javascript and product offers. It makes sense to expect that
> the page could change fairly often.
In essence, what is there to say beyond a general discussion of
particulars or examples? As I said, this is just one example. There are
a lot of them. And I think sometimes it's better to show than to talk.
>
>
>>> The index.html file is default file name for wget.
>> And also the name of the default home page on the web. Imagine that - I
>> know the obvious things. But the question was about something else.
>>> man wget:
>>>--default-page=name
>>> Use name as the default file name when it isn't known
>>> (i.e., for
>>> URLs that end in a slash), instead of index.html.
>>>
>>> In fact the https://www.microsoft.com/ru-kz/index.html is a stub
>>> page
>>> (The page you requested cannot be found.).
>> You are living in the wrong region. This is a geo-dependent page,
>> obviously, yes?
> What I mean is the pages https://www.microsoft.com/ru-kz/ and
> https://www.microsoft.com/ru-kz/index.html are not the same. You can
> easily confirm it.
>
>
>> Again. What is the difference? I open it from different workstations,
>> from different browsers - I see the same thing. The code is identical.
>> So can I cache it? Yes or no?
> I'm a new member of the Squid community (about 1 year). While following
> the community activity I found that you can't grasp the advantages of
> HTTP/1.1 over HTTP/1.0 for caching systems. Especially its ability to
> _safely_ cache and serve the same amount (but I believe even more) of
> objects as HTTP/1.0 compliant caches do (while not breaking the
> internet). The main tool of HTTP/1.1 compliant proxies is the
> _revalidation_ process. HTTP/1.1 compliant caches like Squid tend to
> cache all possible objects but later use revalidation for dubious
> requests. In fact revalidation is not a costly process, especially
> using conditional GET requests.
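The revalidation described above can be observed with a conditional GET from the command line; the URL is a placeholder, and a 304 reply means the cached copy is still fresh:

```shell
# Fetch the headers once, extract the validator, then revalidate
# (placeholder URL; --max-time bounds each attempt):
curl -sI --max-time 30 https://www.example.com/ -o headers.txt
etag=$(grep -i '^etag:' headers.txt | cut -d' ' -f2- | tr -d '\r')
curl -s --max-time 30 -o /dev/null -w '%{http_code}\n' \
     -H "If-None-Match: $etag" https://www.example.com/
```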
Nuff said. Let's stop wasting time. Take a look at the attachment.
>
> I found that most of your complains in the mail list and Bugzilla are
> related to HTTPS scheme. FYI: The primary tool (revalidation) does not
> work for HTTPS scheme using all current Squid branches at the moment.
> See bug 4648.
Forget about it. I've now solved all of my problems.
>
> Try to apply the proposed patch and update all related bug reports.
I have no unresolved problems with caching. For me personally, this
debate is only of academic interest. You can continue to 

Re: [squid-users] Not all html objects are being cached

2017-01-26 Thread Yuri Voinov


27.01.2017 2:44, Matus UHLAR - fantomas wrote:
>> 26.01.2017 2:22, boruc wrote:
>>> After a little bit of analyzing requests and responses with WireShark I
>>> noticed that many sites that weren't cached had different
>>> combination of
>>> below parameters:
>>>
>>> Cache-Control: no-cache, no-store, must-revalidate, post-check,
>>> pre-check,
>>> private, public, max-age, public
>>> Pragma: no-cache
>
> On 26.01.17 02:44, Yuri Voinov wrote:
>> If the webmaster has done this, he had a good reason to. By trying to
>> break the RFC this way, you break the Internet.
>
> Actually, no. If the webmaster has done the above - he has no damn
> idea what
> those mean (private and public?) , and how to provide properly cacheable
> content.
It was sarcasm.
>
> Which is very common and also a reason why many proxy admins tend to
> ignore
> those controls...
>

-- 
Bugs to the Future


0x613DEC46.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Not all html objects are being cached

2017-01-25 Thread Yuri Voinov


26.01.2017 2:22, boruc wrote:
> After a little bit of analyzing requests and responses with WireShark I
> noticed that many sites that weren't cached had different combination of
> below parameters:
>
> Cache-Control: no-cache, no-store, must-revalidate, post-check, pre-check,
> private, public, max-age, public
> Pragma: no-cache
If the webmaster has done this, he had a good reason to. By trying to
break the RFC this way, you break the Internet.
>
> There is a possibility to disable this in squid by using
Don't do it.
> request_header_access and reply_header_access, however it doesn't work for
> me, many pages aren't still in cache. I am currently using lines below:
>
> request_header_access Cache-Control deny all
> request_header_access Pragma deny all
> request_header_access Accept-Encoding deny all
> reply_header_access Cache-Control deny all
> reply_header_access Pragma deny all
> reply_header_access Accept-Encoding deny all
>
> I could also try refresh_pattern, but I don't think that code below will
> work because not every URL ends with .html or .htm (because you visit
> /www.example.com/, not /www.example.com/index.html/)
> refresh_pattern -i \.(html|htm)$  1440   40% 40320 ignore-no-cache
> ignore-no-store ignore-private override-expire reload-into-ims
>
> Thank you in advance.
You're welcome.
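As a side note on the quoted refresh_pattern: a sketch that also matches URLs ending in a slash is below. This is hedged, not a recommendation: ignore-no-cache was removed in later Squid releases, and the remaining ignore-*/override-* options deliberately violate HTTP, so they are shown only to mirror the quoted rule.

```
# Sketch only: mirrors the quoted rule, plus a pattern for "directory"
# URLs that end in a slash. ignore-no-cache is omitted (removed in
# newer Squid releases); the remaining options still violate HTTP.
refresh_pattern -i \.(html|htm)$ 1440 40% 40320 ignore-no-store ignore-private override-expire reload-into-ims
refresh_pattern -i /$            1440 40% 40320 ignore-no-store ignore-private override-expire reload-into-ims
```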
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Not-all-html-objects-are-being-cached-tp4681293p4681326.html
> Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] Antivirus for squid

2017-01-25 Thread Yuri Voinov


26.01.2017 0:03, erdosain9 wrote:
> Hi to all.
> I'm a little confused about this... I just want "antivirus"; I don't
> care about blocking some websites, filtering, etc. (at least no more
> than what I get with squid)... so, just for antivirus, what do you
> recommend?
> clamav
You think you have a choice? All other AVs are commercial.
> squidclamav
squidclamav is not an AV; it is an ICAP adapter for an AV.
> squidguard
This is not an AV at all.
> 
> Does somebody have a tutorial for installing any of these on CentOS 7?
There is a common example on Squid's wiki. There is no tutorial for
each and every OS on Earth. Adapt the wiki example yourself to whatever
OS you want.
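For reference, a hedged sketch of the usual wiring: clamd does the scanning, c-icap hosts the squidclamav module, and Squid talks to it over ICAP. The service names and the port 1344 default below are assumptions from a typical install; check your own c-icap configuration.

```
# Assumes c-icap with the squidclamav module listening on localhost:1344.
icap_enable on
icap_send_client_ip on
icap_service clamav_req reqmod_precache icap://127.0.0.1:1344/squidclamav bypass=off
adaptation_access clamav_req allow all
icap_service clamav_resp respmod_precache icap://127.0.0.1:1344/squidclamav bypass=on
adaptation_access clamav_resp allow all
```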
> Thanks
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Antivirus-for-squid-tp4681323.html
> Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-25 Thread Yuri Voinov


25.01.2017 5:25, Alex Rousskov wrote:
> On 01/24/2017 02:11 PM, Yuri Voinov wrote:
>> 25.01.2017 2:50, Alex Rousskov wrote:
>>> A short-term hack: I have seen folks successfully solving somewhat
>>> similar problems using a localport ACL with an "impossible" value of
>>> zero. Please try this hack and update this thread if it works for you:
>>>
>>>> # Allow ESI, certificate fetching, Cache Digests, etc. internal requests
>>>> # XXX: Sooner or later, this unsupported hack will stop working!
>>>> acl generatedBySquid localport 0
>>>> http_access allow generatedBySquid
>
>> Sadly, with this hack Squid dies on a request:
> That death is unlikely to be related to the ACL hack itself IMO. You can
> test my theory by temporary replacing "generatedBySquid" with "all" on
> the http_access line. If Squid still dies with "all", please consider
> properly reporting the crash; it will probably continue to bite you even
> after the long-term solution (e.g., transaction_initiator ACL) is available.
Yes, it also dies when I change the final deny to allow all. I
understand that is probably another bug, but right now I don't have
enough time to build a debug-enabled Squid and reproduce the crash for
a backtrace. Maybe later.
>
>
>>> A possible long-term solution: Factory is working on adding support for
>>> the following new ACL which may help solve this and a few other problems:
>>>
>>>>acl aclname transaction_initiator initiator...
>>>>  # Matches transaction's initiator [fast]
>> Very good. When will it be ready to use, or at least to test?
> I expect the first public implementation to be posted for squid-dev
> review within a week but this is not a promise.
Excellent, I will wait.
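Based on the proposal quoted above, usage would presumably look like the sketch below. The exact initiator keyword names were not final at the time of this thread, so treat them as placeholders, not documented syntax.

```
# Hypothetical: allow Squid's own certificate-fetching transactions
# before the final deny. Keyword names are assumptions.
acl fromSquidItself transaction_initiator certificate-fetching
http_access allow fromSquidItself
http_access deny all
```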
>
>
>> So personally I'm willing to put up with uncontrollable requests to the
>> certificate stores.
> ... including any site pretending to be a certificate store.
As I said, it's silly to cry about lost virginity while performing a
"man in the middle". In addition, as I understand it, Squid takes the
addresses of the certificate stores not from the Higher Mind and not
from subspace - if my memory does not deceive me, they are encoded in
the certificates themselves, and so can, a priori, be regarded as
relatively safe. Also, it seemed to me that Squid treats the fetched
intermediate certificates as untrusted.
>
> I respect that choice, but it must remain as a choice rather than
> hard-coded behavior IMO. I am sure that sooner or later somebody will
> come up with a "certificate" response that crashes Squid, and we do not
> want to leave the admins without a way to combat that attack vector.
Exactly, I'm not talking about the absence of choice. Everyone should
be able to satisfy their most foolish desire and scratch their paranoia.
>
> We could change Squid to ignore http_access and lots of other existing
> directives while adding an increasing number of new control directives
> dedicated to certificate fetching transactions, but I think that
> segregation approach would be a strategic mistake because there are too
> many potential segregation areas with partially overlapping and complex
> configuration requirements. It will get messy and too error-prone.
No-no-no, no segregation. Aliens are the same as homo primates. :-D

Incidentally, when was configuring Squid ever simple and easy? :-!
>
>
> Cheers,
>
> Alex.
>





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov


25.01.2017 2:50, Alex Rousskov wrote:
> On 01/24/2017 12:20 PM, Yuri Voinov wrote:
>> 25.01.2017 1:10, Alex Rousskov wrote:
>>> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>>>> http_access deny to_localhost
>>> Does not match. The destination is not localhost.
>> Yes, the destination is Squid itself. From Squid to Squid.
> No, not "to squid": The destination of the request is
> http://repository.certum.pl/ca.cer.
>
> As for the "from Squid" part, it is dangerous to think that way because,
> in most Squid contexts, "from" applies to the source of the request
> _received_ by Squid. In this case, there is no received request at all.
>
> So this transaction is _not_ "from Squid to Squid" in a sense that some
> other, regular transaction is "from user X to site Y".
>
>
>
>>>> # And finally deny all other access to this proxy
>>>> http_access deny all
>>> Matches!
>> But! This is the recommended final deny rule; if I omit it, Squid
>> adds it silently by default, like a normal firewall!
> Nobody is suggesting to omit this catch-all rule (which you have
> confirmed as matching in a follow-up email, thank you). The solution is
> to add a new "allow" rule (at least) above this last "deny all" one. In
> other words, your configuration is missing an http_access rule to allow
> internally-generated requests [to benign destinations].
>
>
>> I don't know (at first glance) of a special ACL rule to permit
>> internal access from Squid to Squid.
> A short-term hack: I have seen folks successfully solving somewhat
> similar problems using a localport ACL with an "impossible" value of
> zero. Please try this hack and update this thread if it works for you:
>
>> # Allow ESI, certificate fetching, Cache Digests, etc. internal requests
>> # XXX: Sooner or later, this unsupported hack will stop working!
>> acl generatedBySquid localport 0
>> http_access allow generatedBySquid
Sadly, with this hack Squid dies on a request:

root @ khorne /patch # wget -S https://yandex.com/company/
--2017-01-25 02:53:58--  https://yandex.com/company/
Connecting to 127.0.0.1:3128... connected.

2017/01/25 02:53:51 kid1| Accepting SSL bumped HTTP Socket connections
at local=[::]:3128 remote=[::] FD 76 flags=9
FATAL: Received Segment Violation...dying.
2017/01/25 02:54:00 kid1| Closing HTTP(S) port [::]:3126
2017/01/25 02:54:00 kid1| Closing HTTP(S) port [::]:3127
2017/01/25 02:54:00 kid1| Closing HTTP(S) port [::]:3128
2017/01/25 02:54:00 kid1| storeDirWriteCleanLogs: Starting...
2017/01/25 02:54:00 kid1| 65536 entries written so far.
2017/01/25 02:54:00 kid1| 131072 entries written so far.
2017/01/25 02:54:01 kid1| 196608 entries written so far.
2017/01/25 02:54:01 kid1| 262144 entries written so far.
2017/01/25 02:54:01 kid1| 327680 entries written so far.
2017/01/25 02:54:01 kid1| 393216 entries written so far.
2017/01/25 02:54:01 kid1| 458752 entries written so far.
2017/01/25 02:54:01 kid1| 524288 entries written so far.
2017/01/25 02:54:01 kid1| 589824 entries written so far.
2017/01/25 02:54:01 kid1|   Finished.  Wrote 622687 entries.
2017/01/25 02:54:01 kid1|   Took 0.43 seconds (1438401.76 entries/sec).
CPU Usage: 1762.455 seconds = 1490.509 user + 271.946 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0

t@1 (l@1) program terminated by signal ABRT (Abort)
0xfd7ffe7c362a: __lwp_kill+0x000a:  jae  __lwp_kill+0x18   
[ 0xfd7ffe7c3638, .+0xe ]
(dbx)
where 
current thread: t@1
=>[1] __lwp_kill(0x1, 0x6, 0xd1df63a0, 0xfd7ffe7c3f1e, 0x0,
0xfd7ffe823c80), at 0xfd7ffe7c362a
  [2] _thr_kill(), at 0xfd7ffe7bbf23
  [3] raise(), at 0xfd7ffe7681c9
  [4] abort(), at 0xfd7ffe746bc0
  [5] death(), at 0x6f463d
  [6] __sighndlr(), at 0xfd7ffe7bde26
  [7] call_user_handler(), at 0xfd7ffe7b26f2
  [8] sigacthandler(), at 0xfd7ffe7b291e
   called from signal handler with signal 11 (SIGSEGV) --
  [9] Ip::Qos::doTosLocalHit(), at 0x7d800e
  [10] clientReplyContext::doGetMoreData(), at 0x5d3009
  [11] clientReplyContext::identifyFoundObject(), at 0x5d323c
  [12] clientReplyContext::created(), at 0x5d3a45
  [13] StoreEntry::getPublicByRequest(), at 0x6cae49
  [14] clientStreamRead(), at 0x5edcc4
  [15] ClientHttpRequest::httpStart(), at 0x5d917e
  [16] ClientHttpRequest::processRequest(), at 0x5db570
  [17] ClientHttpRequest::doCallouts(), at 0x5dd1be
  [18] ACLChecklist::checkCallback(), at 0x76224a
  [19] ClientHttpRequest::doCallouts(), at 0x5ddd69
  [20] ClientRequestContext::clientStoreIdDone(), at 0x5e33bf
  [21] 0x5e3810(), at 0x5e3810
  [22] ACLChecklist::checkCallback(), at 0x76224a
  [23] ClientRequestCon

Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
On my setup it is easy to reproduce.

It is enough to execute with wget:

wget -S https://yandex.com/company/

access.log immediately shows

0 - TCP_DENIED/403 3574 GET http://repository.certum.pl/ca.cer -
HIER_NONE/- text/html;charset=utf-8

before request to Yandex destination.

However, it executes:

root @ khorne /patch # wget -S https://yandex.com/company/
--2017-01-25 02:37:52--  https://yandex.com/company/
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Tue, 24 Jan 2017 20:37:54 GMT
  Content-Type: text/html; charset="UTF-8"
  Set-Cookie: yandexuid=15112434331485290274; Domain=.yandex.com; Path=/
  Content-Security-Policy: default-src 'none'; frame-src 'self'
yastatic.net yandex.st music.yandex.ru download.yandex.ru
static.video.yandex.ru video.yandex.ru player.vimeo.com www.youtube.com
*.cdn.yandex.net; script-src 'unsafe-eval' 'unsafe-inline'
clck.yandex.ru pass.yandex.com yastatic.net mc.yandex.ru
api-maps.yandex.ru social.yandex.com
'nonce-728706f8-8c12-4f25-a00f-27d1e8c36f6f'; style-src 'unsafe-inline'
yastatic.net; connect-src 'self' yandex.st mail.yandex.com mc.yandex.ru
mc.yandex.com; font-src yastatic.net; img-src 'self' data:
avatars.yandex.net avatars.mds.yandex.net avatars.mdst.yandex.net
http://avatars.mds.yandex.net jing.yandex-team.ru download.yandex.ru
yandex.st mc.yandex.ru yastatic.net www.tns-counter.ru
yandexgacom.hit.gemius.pl *.cdn.yandex.net api-maps.yandex.ru
static-maps.yandex.ru *.maps.yandex.net i.ytimg.com company.yandex.com
yandex.com http://img-fotki.yandex.ru img-fotki.yandex.ru; media-src
*.cdn.yandex.net
  X-Content-Security-Policy: default-src 'none'; frame-src 'self'
yastatic.net yandex.st music.yandex.ru download.yandex.ru
static.video.yandex.ru video.yandex.ru player.vimeo.com www.youtube.com
*.cdn.yandex.net; script-src 'unsafe-eval' 'unsafe-inline'
clck.yandex.ru pass.yandex.com yastatic.net mc.yandex.ru
api-maps.yandex.ru social.yandex.com
'nonce-728706f8-8c12-4f25-a00f-27d1e8c36f6f'; style-src 'unsafe-inline'
yastatic.net; connect-src 'self' yandex.st mail.yandex.com mc.yandex.ru
mc.yandex.com; font-src yastatic.net; img-src 'self' data:
avatars.yandex.net avatars.mds.yandex.net avatars.mdst.yandex.net
http://avatars.mds.yandex.net jing.yandex-team.ru download.yandex.ru
yandex.st mc.yandex.ru yastatic.net www.tns-counter.ru
yandexgacom.hit.gemius.pl *.cdn.yandex.net api-maps.yandex.ru
static-maps.yandex.ru *.maps.yandex.net i.ytimg.com company.yandex.com
yandex.com http://img-fotki.yandex.ru img-fotki.yandex.ru; media-src
*.cdn.yandex.net
  X-WebKit-CSP: default-src 'none'; frame-src 'self' yastatic.net
yandex.st music.yandex.ru download.yandex.ru static.video.yandex.ru
video.yandex.ru player.vimeo.com www.youtube.com *.cdn.yandex.net;
script-src 'unsafe-eval' 'unsafe-inline' clck.yandex.ru pass.yandex.com
yastatic.net mc.yandex.ru api-maps.yandex.ru social.yandex.com
'nonce-728706f8-8c12-4f25-a00f-27d1e8c36f6f'; style-src 'unsafe-inline'
yastatic.net; connect-src 'self' yandex.st mail.yandex.com mc.yandex.ru
mc.yandex.com; font-src yastatic.net; img-src 'self' data:
avatars.yandex.net avatars.mds.yandex.net avatars.mdst.yandex.net
http://avatars.mds.yandex.net jing.yandex-team.ru download.yandex.ru
yandex.st mc.yandex.ru yastatic.net www.tns-counter.ru
yandexgacom.hit.gemius.pl *.cdn.yandex.net api-maps.yandex.ru
static-maps.yandex.ru *.maps.yandex.net i.ytimg.com company.yandex.com
yandex.com http://img-fotki.yandex.ru img-fotki.yandex.ru; media-src
*.cdn.yandex.net
  Content-Encoding: gzip
  X-XSS-Protection: 1; mode=block
  X-Content-Type-Options: nosniff
  Strict-Transport-Security: max-age=0; includeSubDomains
  X-Cache: MISS from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Transfer-Encoding: chunked
  Connection: keep-alive
Length: unspecified [text/html]
Saving to: 'index.html'

index.html  [ <=>   ]   3.60K  --.-KB/s    in 0s

2017-01-25 02:37:54 (7.44 MB/s) - 'index.html' saved [3688]

because the intermediate certificates file exists and is configured.

25.01.2017 2:09, Yuri Voinov wrote:
> This is the mentioned debug output for this transaction.
>
> I see no anomalies. Just DENIED at the end.
>
>
> 25.01.2017 1:45, Yuri Voinov wrote:
>> Under detailed ACL debug got this transaction:
>>
>> 2017/01/25 01:36:35.772 kid1| 28,3| DomainData.cc(110) match:
>> aclMatchDomainList: checking 'repository.certum.pl'
>> 2017/01/25 01:36:35.772 kid1| 28,3| DomainData.cc(115) match:
>> aclMatchDomainList: 'repository.certum.pl' NOT found
>> 2017/01/25 01:36:35.772 kid1| 28,3| Acl.cc(290) matches: checked:
>> block_tld = 0
>> 2017/01/25 01:36:35.772 kid1| 28,3| Acl.cc(290) matches: checked:
>> http_access#11 = 0
>> 2017/01/25 01:36:35.772 kid1| 28,5| Checklist.cc(397) bannedAction:
>> Action 'ALLOWED/0' is not banned
>> 2017/01/25 01:36:35.772 kid1| 28,5| Acl.cc(263) matches: ch

Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
: checking
http_access#18
2017/01/25 01:36:35.773 kid1| 28,5| Acl.cc(263) matches: checking all
2017/01/25 01:36:35.773 kid1| 28,9| Ip.cc(96) aclIpAddrNetworkCompare:
aclIpAddrNetworkCompare: compare:
[:::::::]/[::] ([::])  vs [::]-[::]/[::]
2017/01/25 01:36:35.773 kid1| 28,3| Ip.cc(540) match: aclIpMatchIp:
'[:::::::]' found
2017/01/25 01:36:35.773 kid1| 28,3| Acl.cc(290) matches: checked: all = 1
2017/01/25 01:36:35.773 kid1| 28,3| Acl.cc(290) matches: checked:
http_access#18 = 1
2017/01/25 01:36:35.773 kid1| 28,3| Acl.cc(290) matches: checked:
http_access = 1
2017/01/25 01:36:35.773 kid1| 28,3| Checklist.cc(63) markFinished:
0x4b781938 answer DENIED for match
2017/01/25 01:36:35.773 kid1| 28,3| Checklist.cc(163) checkCallback:
ACLChecklist::checkCallback: 0x4b781938 answer=DENIED

It seems like a bug.

25.01.2017 1:10, Alex Rousskov wrote:
> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>
>>> 1485279884.648  0 - TCP_DENIED/403 3574 GET
>>> http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8
>
>> http_access deny !Safe_ports
> Probably does not match -- 80 is a safe port.
>
>
>> # Instant messengers include
>> include "/usr/local/squid/etc/acl.im.include"
> I am guessing these do not match or are irrelevant.
>
>
>> # Deny CONNECT to other than SSL ports
>> http_access deny CONNECT !SSL_ports
> Does not match. This is a GET request.
>
>
>> # Only allow cachemgr access from localhost
>> http_access allow localhost manager
>> http_access deny manager
> Probably do not match. This is not a cache manager request although I
> have not checked how Squid identifies those exactly.
>
>
>> http_access deny to_localhost
> Does not match. The destination is not localhost.
>
>
>> # Allow purge from localhost
>> http_access allow PURGE localhost
>> http_access deny PURGE
> Do not match. This is a GET request, not PURGE.
>
>
>> # Block torrent files
>> acl TorrentFiles rep_mime_type mime-type application/x-bittorrent
>> http_reply_access deny TorrentFiles
> Does not match. There was no response [with an application/x-bittorrent
> MIME type].
>
>
>> # Windows updates rules
>> http_access allow CONNECT wuCONNECT localnet
>> http_access allow CONNECT wuCONNECT localhost
> Do not match. This is a GET request, not CONNECT.
>
>
>> http_access allow windowsupdate localnet
>> http_access allow windowsupdate localhost
> Probably do not match. The internal transaction is not associated with a
> to-Squid connection coming from localnet or localhost.
>
>
>> # Rule allowing access from local networks
>> http_access allow localnet
>> http_access allow localhost
> Probably do not match. The internal transaction is not associated with a
> to-Squid connection coming from localnet or localhost.
>
>
>> # And finally deny all other access to this proxy
>> http_access deny all
> Matches!
>
>
>> I have no idea what can block access.
> That much was clear from the time you asked the question. I bet your
> last http_access rule that denies all other connection matches, but I
> would still ask Squid. Squid knows why it blocks (or does not allow)
> access. There are several ways to ask Squid, including increasing
> debugging verbosity when reproducing the problem, adding the matching
> ACL to the error message, using custom error messages for different
> http_access deny lines, etc.
>
> These methods are not easy, pleasant, quick, or human-friendly,
> unfortunately, but you are a very capable sysadmin with more than enough
> Squid knowledge to find the blocking directive/ACL, especially for a
> problem that can be isolated to two HTTP transactions.
>
> Once we know what directive/ACL blocks, we may be able to figure out a
> workaround, propose a bug fix, etc. For example, if my guess is correct
> -- the "deny all" rule has matched -- then you would need to add a rule
> to allow internal requests, including the ones that fetch those missing
> certificates.
>
>
> HTH,
>
> Alex.
>
>
>> 25.01.2017 0:27, Alex Rousskov wrote:
>>> On 01/24/2017 11:19 AM, Yuri Voinov wrote:
>>>
>>>> It downloads directly via the proxy from localhost:
>>>> As I understand, the downloader also accesses via localhost, right?
>>> This is incorrect. Downloader does not have a concept of an HTTP client
>>> which sends the request to Squid so "via localhost" or "via any client
>>> source address" does not apply to Downloader transactions. In other
>>> words, there is no client [source address] for Downloader requests.

Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov


25.01.2017 1:10, Alex Rousskov wrote:
> On 01/24/2017 11:33 AM, Yuri Voinov wrote:
>
>>> 1485279884.648  0 - TCP_DENIED/403 3574 GET
>>> http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8
>
>> http_access deny !Safe_ports
> Probably does not match -- 80 is a safe port.
>
>
>> # Instant messengers include
>> include "/usr/local/squid/etc/acl.im.include"
> I am guessing these do not match or are irrelevant.
Yes, irrelevant.
>
>
>> # Deny CONNECT to other than SSL ports
>> http_access deny CONNECT !SSL_ports
> Does not match. This is a GET request.
Exactly.
>
>
>> # Only allow cachemgr access from localhost
>> http_access allow localhost manager
>> http_access deny manager
> Probably do not match. This is not a cache manager request although I
> have not checked how Squid identifies those exactly.
>
>
>> http_access deny to_localhost
> Does not match. The destination is not localhost.
Yes, the destination is Squid itself. From Squid to Squid.
>
>
>> # Allow purge from localhost
>> http_access allow PURGE localhost
>> http_access deny PURGE
> Do not match. This is a GET request, not PURGE.
>
>
>> # Block torrent files
>> acl TorrentFiles rep_mime_type mime-type application/x-bittorrent
>> http_reply_access deny TorrentFiles
> Does not match. There was no response [with an application/x-bittorrent
> MIME type].
>
>
>> # Windows updates rules
>> http_access allow CONNECT wuCONNECT localnet
>> http_access allow CONNECT wuCONNECT localhost
> Do not match. This is a GET request, not CONNECT.
>
>
>> http_access allow windowsupdate localnet
>> http_access allow windowsupdate localhost
> Probably do not match. The internal transaction is not associated with a
> to-Squid connection coming from localnet or localhost.
>
>
>> # Rule allowing access from local networks
>> http_access allow localnet
>> http_access allow localhost
> Probably do not match. The internal transaction is not associated with a
> to-Squid connection coming from localnet or localhost.
Exactly.
>
>
>> # And finally deny all other access to this proxy
>> http_access deny all
> Matches!
But! This is the recommended final deny rule; if I omit it, Squid adds
it silently by default, like a normal firewall!
>
>
>> I have no idea what can block access.
> That much was clear from the time you asked the question. I bet your
> last http_access rule that denies all other connection matches, but I
> would still ask Squid. Squid knows why it blocks (or does not allow)
> access. There are several ways to ask Squid, including increasing
> debugging verbosity when reproducing the problem, adding the matching
> ACL to the error message, using custom error messages for different
> http_access deny lines, etc.
Yes. I've thought about debugging.
>
> These methods are not easy, pleasant, quick, or human-friendly,
> unfortunately, but you are a very capable sysadmin with more than enough
> Squid knowledge to find the blocking directive/ACL, especially for a
> problem that can be isolated to two HTTP transactions.
>
> Once we know what directive/ACL blocks, we may be able to figure out a
> workaround, propose a bug fix, etc. For example, if my guess is correct
> -- the "deny all" rule has matched -- then you would need to add a rule
> to allow internal requests, including the ones that fetch those missing
> certificates.
Here I am in doubt. I thought I had good knowledge of Squid's ACLs.
But I don't know (at first glance) of a special ACL rule to permit
internal access from Squid to Squid.

I'm reading the documented config - there is no special ACL type to
permit/deny internal access.

Hmm. It looks like Squid blocks its own internal access to itself, but
permits the same request from outside.

Can this be a bug? Or have I missed something?
>
>
> HTH,
>
> Alex.
>
>
>> 25.01.2017 0:27, Alex Rousskov wrote:
>>> On 01/24/2017 11:19 AM, Yuri Voinov wrote:
>>>
>>>> It downloads directly via the proxy from localhost:
>>>> As I understand, the downloader also accesses via localhost, right?
>>> This is incorrect. Downloader does not have a concept of an HTTP client
>>> which sends the request to Squid so "via localhost" or "via any client
>>> source address" does not apply to Downloader transactions. In other
>>> words, there is no client [source address] for Downloader requests.
>>>
>>> Unfortunately, I do not know exactly what effect that lack of info has
>>> on what ACLs (in part because there are too many of them and because
>>> lack of info is often treat

Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
This is a working production server. I've checked the configuration
twice. I see no problem.

Here:


# -
# Access parameters
# -
# Deny requests to unsafe ports
http_access deny !Safe_ports

# Instant messengers include
include "/usr/local/squid/etc/acl.im.include"

# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
http_access deny to_localhost
# Allow purge from localhost
http_access allow PURGE localhost
http_access deny PURGE

# Block torrent files
acl TorrentFiles rep_mime_type mime-type application/x-bittorrent
http_reply_access deny TorrentFiles
deny_info TCP_RESET TorrentFiles

# No cache directives
cache deny dont_cache_url
cache allow all

# 302 loop
acl text_mime rep_mime_type text/html text/plain
acl http302 http_status 302
store_miss deny text_mime http302
send_hit deny text_mime http302

# Windows updates rules
http_access allow CONNECT wuCONNECT localnet
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate localnet
http_access allow windowsupdate localhost

# SSL bump rules
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name_regex
"/usr/local/squid/etc/acl.url.nobump"
ssl_bump peek DiscoverSNIHost
ssl_bump splice DiscoverSNIHost icq
ssl_bump splice DiscoverSNIHost icqip icqport
ssl_bump splice NoSSLIntercept
ssl_bump bump all

# Rule allowing access from local networks
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

is ok.
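For reference, the workaround that emerged in this thread was to insert an allow rule for Squid-generated requests just above the final deny. A hedged sketch using the unsupported localport hack discussed here:

```
# Unsupported hack: internally generated requests have no client
# connection, so localport reports the "impossible" value 0.
acl generatedBySquid localport 0
http_access allow generatedBySquid

# And finally deny all other access to this proxy
http_access deny all
```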

I'm only in doubt about this:

http_access deny to_localhost

but it has been recommended for a long time, as documented:

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

and commenting out this line has no visible effect.

I have no idea what can block access.

25.01.2017 0:27, Alex Rousskov wrote:
> On 01/24/2017 11:19 AM, Yuri Voinov wrote:
>
>> It downloads directly via the proxy from localhost:
>> As I understand, the downloader also accesses via localhost, right?
> This is incorrect. Downloader does not have a concept of an HTTP client
> which sends the request to Squid so "via localhost" or "via any client
> source address" does not apply to Downloader transactions. In other
> words, there is no client [source address] for Downloader requests.
>
> Unfortunately, I do not know exactly what effect that lack of info has
> on what ACLs (in part because there are too many of them and because
> lack of info is often treated inconsistently by various ACLs). Thus, I
> continue to recommend finding out which directive/ACL denied Downloader
> access as the first step.
>
> Alex.
>
>
>> 25.01.2017 0:16, Alex Rousskov wrote:
>>> On 01/24/2017 10:48 AM, Yuri Voinov wrote:
>>>
>>>> It seems 4.0.17 tries to download certs but gives deny somewhere.
>>>> However, same URL with wget via same proxy works
>>>> Why?
>>> Most likely, your http_access or similar rules deny internal download
>>> transactions but allow external ones. This is possible, for example, if
>>> your access rules use client information. Internal transactions (ESI,
>>> missing certificate fetching, Cache Digests, etc.) do not have an
>>> associated client.
>>>
>>> The standard denial troubleshooting procedure applies here: Start with
>>> finding out which directive/ACL denies access. I am _not_ implying that
>>> this is easy to do.





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
Maybe this feature is mutually exclusive with the
sslproxy_foreign_intermediate_certs option?


25.01.2017 0:19, Yuri Voinov wrote:
> Mm, hardly.
>
> It downloads directly via the proxy from localhost:
>
> root @ khorne /patch # http_proxy=localhost:3128 curl
> http://repository.certum.pl/ca.cer
> [binary DER certificate data; garbled in the archive]
>
> root @ khorne /patch #
>
> root @ khorne /patch # wget -S http://repository.certum.pl/ca.cer
> --2017-01-24 23:59:54--  http://repository.certum.pl/ca.cer
> Connecting to 127.0.0.1:3128... connected.
> Proxy request sent, awaiting response...
>   HTTP/1.1 200 OK
>   Content-Type: text/plain; charset=UTF-8
>   Content-Length: 784
>   Last-Modified: Fri, 07 Mar 2014 10:05:14 GMT
>   ETag: "34231-310-63d6aa80"
>   X-Cached: MISS
>   Server: NetDNA-cache/2.2
>   X-Cache: HIT
>   Accept-Ranges: bytes
>   X-Origin-Date: Mon, 23 Jan 2017 06:12:38 GMT
>   Date: Tue, 24 Jan 2017 17:59:54 GMT
>   X-Cache-Age: 128836
>   X-Cache: HIT from khorne
>   X-Cache-Lookup: HIT from khorne:3128
>   Connection: keep-alive
> Length: 784 [text/plain]
> Saving to: 'ca.cer'
>
> ca.cer  100%[==>] 784  --.-KB/sin
> 0s 
>
> 2017-01-24 23:59:54 (86.2 MB/s) - 'ca.cer' saved [784/784]
>
> As I understand it, the downloader also accesses it via localhost, right? So
> it should work.
>
> The download occurs either from localnet or from localhost.
>
>
> On 25.01.2017 0:16, Alex Rousskov wrote:
>> On 01/24/2017 10:48 AM, Yuri Voinov wrote:
>>
>>> It seems 4.0.17 tries to download certs but gives deny somewhere.
>>> However, same URL with wget via same proxy works
>>> Why?
>> Most likely, your http_access or similar rules deny internal download
>> transactions but allow external ones. This is possible, for example, if
>> your access rules use client information. Internal transactions (ESI,
>> missing certificate fetching, Cache Digests, etc.) do not have an
>> associated client.
>>
>> The standard denial troubleshooting procedure applies here: Start with
>> finding out which directive/ACL denies access. I am _not_ implying that
>> this is easy to do.
>>
>>
>> HTH,
>>
>> Alex.
>>

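To make the troubleshooting Alex describes concrete, one common starting point is raising the ACL-tracing debug section in squid.conf. A sketch only (the debug levels are illustrative and should be reverted after diagnosing):

```
# Section 28 traces every ACL match decision, so a TCP_DENIED line in
# access.log can be tied back to the http_access rule that produced it.
debug_options ALL,1 28,3
```

With this in place, cache.log shows each ACL evaluated against the internal request, including the one that finally denies it.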




Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
Mm, hardly.

It downloads directly via the proxy from localhost:

root @ khorne /patch # http_proxy=localhost:3128 curl
http://repository.certum.pl/ca.cer
[binary DER certificate output omitted; the terminal rendered the raw certificate bytes as garbage]

root @ khorne /patch #

root @ khorne /patch # wget -S http://repository.certum.pl/ca.cer
--2017-01-24 23:59:54--  http://repository.certum.pl/ca.cer
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Content-Type: text/plain; charset=UTF-8
  Content-Length: 784
  Last-Modified: Fri, 07 Mar 2014 10:05:14 GMT
  ETag: "34231-310-63d6aa80"
  X-Cached: MISS
  Server: NetDNA-cache/2.2
  X-Cache: HIT
  Accept-Ranges: bytes
  X-Origin-Date: Mon, 23 Jan 2017 06:12:38 GMT
  Date: Tue, 24 Jan 2017 17:59:54 GMT
  X-Cache-Age: 128836
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 784 [text/plain]
Saving to: 'ca.cer'

ca.cer  100%[==>] 784  --.-KB/sin
0s 

2017-01-24 23:59:54 (86.2 MB/s) - 'ca.cer' saved [784/784]

As I understand it, the downloader also accesses it via localhost, right? So
it should work.

The download occurs either from localnet or from localhost.


On 25.01.2017 0:16, Alex Rousskov wrote:
> On 01/24/2017 10:48 AM, Yuri Voinov wrote:
>
>> It seems 4.0.17 tries to download certs but gives deny somewhere.
>> However, same URL with wget via same proxy works
>> Why?
> Most likely, your http_access or similar rules deny internal download
> transactions but allow external ones. This is possible, for example, if
> your access rules use client information. Internal transactions (ESI,
> missing certificate fetching, Cache Digests, etc.) do not have an
> associated client.
>
> The standard denial troubleshooting procedure applies here: Start with
> finding out which directive/ACL denies access. I am _not_ implying that
> this is easy to do.
>
>
> HTH,
>
> Alex.
>





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-24 Thread Yuri Voinov
Hm. Another question.

It seems 4.0.17 tries to download certs:

1485279884.648  0 - TCP_DENIED/403 3574 GET
http://repository.certum.pl/ca.cer - HIER_NONE/- text/html;charset=utf-8

but is denied somewhere.

However, the same URL works with wget via the same proxy:

root @ khorne /patch # wget -S http://repository.certum.pl/ca.cer
--2017-01-24 23:46:37--  http://repository.certum.pl/ca.cer
Connecting to 127.0.0.1:3128... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Content-Type: text/plain; charset=UTF-8
  Content-Length: 784
  Last-Modified: Fri, 07 Mar 2014 10:05:14 GMT
  ETag: "34231-310-63d6aa80"
  X-Cached: MISS
  Server: NetDNA-cache/2.2
  X-Cache: HIT
  Accept-Ranges: bytes
  X-Origin-Date: Mon, 23 Jan 2017 06:12:38 GMT
  Date: Tue, 24 Jan 2017 17:46:37 GMT
  X-Cache-Age: 128039
  X-Cache: HIT from khorne
  X-Cache-Lookup: HIT from khorne:3128
  Connection: keep-alive
Length: 784 [text/plain]
Saving to: 'ca.cer.2'

ca.cer.2100%[==>] 784  --.-KB/sin
0s 

2017-01-24 23:46:37 (95.7 MB/s) - 'ca.cer.2' saved [784/784]

Why? Does the downloader require a special ACL? Or is something else undocumented?
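As it turns out, yes: Squid 4 later grew an ACL type for exactly this case. Assuming a build that has the transaction_initiator ACL (check `squid -v` and the release notes for your exact version), a sketch of an allow rule for the internal fetches, placed before any client-based deny rules:

```
# Sketch only: match requests that Squid itself initiates to fetch
# missing intermediate certificates, and let them through.
acl cert_fetch transaction_initiator certificate-fetching
http_access allow cert_fetch
```

Without such a rule, http_access lists written in terms of client addresses tend to deny these client-less internal transactions.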


On 24.01.2017 5:08, Amos Jeffries wrote:
> On 24/01/2017 7:06 a.m., Marcus Kool wrote:
>>
>> On 23/01/17 15:31, Alex Rousskov wrote:
>>> On 01/23/2017 04:28 AM, Yuri wrote:
>>>
 1. How does it work?
>>> My response below and the following commit message might answer some of
>>> your questions:
>>>
>>> http://bazaar.launchpad.net/~squid/squid/5/revision/14769
>> It seems that the feature only goes to Squid 5.  Will it be ported to
>> Squid 4?
> rev.14769 is from before Squid-5 existed (rev.14932). The commits
> labeled 'trunk' at that time were Squid-4.
>
> Amos
>





Re: [squid-users] Squid 3.5.23 little fixes

2017-01-24 Thread Yuri Voinov
teh TCP :-D teh drama :-D

Nice shoot :-D


On 24.01.2017 14:26, FredB wrote:
> Hello,
>
> FI, I'm reading some parts of code and I found two little spelling errors
>
> FredB
>
> ---
>
> --- src/client_side.cc2016-10-09 21:58:01.0 +0200
> +++ src/client_side.cc2016-12-14 10:57:12.915469723 +0100
> @@ -2736,10 +2736,10 @@ clientProcessRequest(ConnStateData *conn
>  
>  request->flags.internal = http->flags.internal;
>  setLogUri (http, urlCanonicalClean(request.getRaw()));
> -request->client_addr = conn->clientConnection->remote; // XXX: remove 
> reuest->client_addr member.
> +request->client_addr = conn->clientConnection->remote; // XXX: remove 
> request->client_addr member.
>  #if FOLLOW_X_FORWARDED_FOR
>  // indirect client gets stored here because it is an HTTP header result 
> (from X-Forwarded-For:)
> -// not a details about teh TCP connection itself
> +// not a details about the TCP connection itself
>  request->indirect_client_addr = conn->clientConnection->remote;
>  #endif /* FOLLOW_X_FORWARDED_FOR */
>  request->my_addr = conn->clientConnection->local;





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Yuri Voinov


On 24.01.2017 2:25, Marcus Kool wrote:
>
>
> On 23/01/17 17:23, Yuri Voinov wrote:
> [snip]
>
>>> I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
>>> a week ago but there has not been any activity.
>>> Is there someone who has sslproxy_foreign_intermediate_certs
>>> working in Squid 4.0.17 ?
>> Seems to work the same as in 3.5.x, as far as I can see.
>
> 3.5.x works fine but 4.0.17 fails on my servers.
My test 4.0.17 works. It seems the same as 3.5.
>
>>>
>>> Thanks,
>>> Marcus





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Yuri Voinov


On 24.01.2017 0:06, Marcus Kool wrote:
>
>
> On 23/01/17 15:31, Alex Rousskov wrote:
>> On 01/23/2017 04:28 AM, Yuri wrote:
>>
>>> 1. How does it work?
>>
>> My response below and the following commit message might answer some of
>> your questions:
>>
>> http://bazaar.launchpad.net/~squid/squid/5/revision/14769
>
> It seems that the feature only goes to Squid 5.  Will it be ported
> to Squid 4?
>
>>> I.e., where downloaded certs stored, how it
>>> handles, does it saves anywhere to disk?
>>
>> Missing certificates are fetched using HTTP[S]. Certificate responses
>> should be treated as any other HTTP[S] responses with regard to caching.
>> For example, if you have disk caching enabled and your caching rules
>> (including defaults) allow certificate response caching, then the
>> response should be cached. Similarly, the cached certificate will
>> eventually be evicted from the cache following regular cache maintenance
>> rules. When that happens, Squid will try to fetch the certificate again
>> (if it becomes needed again).
>>
>>
>>> 2. How this feature is related to sslproxy_foreign_intermediate_certs,
>>> how it can interfere with it?
>>
>> AFAICT by looking at the code, Squid only downloads certificates that
>> Squid is missing when trying to build a complete certificate chain for a
>> given server connection. Any sslproxy_foreign_intermediate_certs are
>> used as needed during the chain building process (i.e., they are _not_
>> "missing").
>
> I created bug report http://bugs.squid-cache.org/show_bug.cgi?id=4659
> a week ago but there has not been any activity.
> Is there someone who has sslproxy_foreign_intermediate_certs
> working in Squid 4.0.17 ?
Seems to work the same as in 3.5.x, as far as I can see.
>
> Thanks,
> Marcus
>
> [snip]
>
>> HTH,
>>
>> Alex.
>>





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Yuri Voinov


On 24.01.2017 0:06, Alex Rousskov wrote:
> On 01/23/2017 10:41 AM, Yuri Voinov wrote:
>> On 23.01.2017 23:31, Alex Rousskov wrote:
>>> On 01/23/2017 04:28 AM, Yuri wrote:
>>>> I.e., where downloaded certs stored, how it
>>>> handles, does it saves anywhere to disk?
>>> Missing certificates are fetched using HTTP[S]. Certificate responses
>>> should be treated as any other HTTP[S] responses with regard to caching.
>>> For example, if you have disk caching enabled and your caching rules
>>> (including defaults) allow certificate response caching, then the
>>> response should be cached. Similarly, the cached certificate will
>>> eventually be evicted from the cache following regular cache maintenance
>>> rules. When that happens, Squid will try to fetch the certificate again
>>> (if it becomes needed again).
>> I.e., the fetched intermediate certificates are stored only in the memory
>> cache of the
>>
>> sslcrtd_program /usr/local/squid/libexec/security_file_certgen -s
>> /var/lib/ssl_db -M 4MB
>>
>> daemon, right? And never stored anywhere on disk?
> No, this is incorrect -- sslcrtd_program settings are independent from
> fetching missing certificates. The ssl_crtd helper is about fake
> certificate generation. The helper does not use the Squid cache to cache
> its results. The "missing certificates" features are about the virgin
> server certificates that are necessary to complete/validate the server
> chain but absent from the server's ServerHello response.
>
> The only relationship between the ssl_crtd helper and fetching of the
> missing certificates (that I can think of) is that the helper will mimic
> the fetched certificates (in some cases). However, I am not even sure
> whether the helper gets the virgin incomplete certificate chain or the
> completed-by-Squid certificate chain in such cases. I only suspect that
> it is the latter. @Christos, please correct me if my suspicion is wrong.
>
>
>>>> 2. How this feature is related to sslproxy_foreign_intermediate_certs,
>>>> how it can interfere with it?
>>> AFAICT by looking at the code, Squid only downloads certificates that
>>> Squid is missing when trying to build a complete certificate chain for a
>>> given server connection. Any sslproxy_foreign_intermediate_certs are
>>> used as needed during the chain building process (i.e., they are _not_
>>> "missing").
>> OK, so this file is used to complete chains, and it contains only
>> statically (manually) added certs, right?
> Yes, the sslproxy_foreign_intermediate_certs file is maintained by the
> Squid administrator. Squid does not update it.
>
>
>> I.e., the downloader should not save fetched intermediate CAs here,
> Correct.
>
>
>> which would be logical, wouldn't it?
> I believe it is better to use the regular Squid cache for storing the
> fetched missing certificates. I would not call abusing the
> sslproxy_foreign_intermediate_certs file for this purpose completely
> illogical, but such abuse would create more problems than it would solve
> IMO. We have also considered using a dedicated storage for the fetched
> missing certificates, but have decided (for many reasons) that it would
> be worse than reusing the existing caching infrastructure.
>
> FWIW, IMO, storing the generated fake certificates in the regular Squid
> cache would also be better than using an OpenSSL-administered database.
Exactly.
>
>
> HTH,
>
> Alex.
>





Re: [squid-users] Squid 4.x: Intermediate certificates downloader

2017-01-23 Thread Yuri Voinov


On 23.01.2017 23:31, Alex Rousskov wrote:
> On 01/23/2017 04:28 AM, Yuri wrote:
>
>> 1. How does it work? 
> My response below and the following commit message might answer some of
> your questions:
>
> http://bazaar.launchpad.net/~squid/squid/5/revision/14769
>
>> I.e., where downloaded certs stored, how it
>> handles, does it saves anywhere to disk?
> Missing certificates are fetched using HTTP[S]. Certificate responses
> should be treated as any other HTTP[S] responses with regard to caching.
> For example, if you have disk caching enabled and your caching rules
> (including defaults) allow certificate response caching, then the
> response should be cached. Similarly, the cached certificate will
> eventually be evicted from the cache following regular cache maintenance
> rules. When that happens, Squid will try to fetch the certificate again
> (if it becomes needed again).
I.e., the fetched intermediate certificates are stored only in the memory
cache of the

sslcrtd_program /usr/local/squid/libexec/security_file_certgen -s
/var/lib/ssl_db -M 4MB

daemon, right? And never stored anywhere on disk?
>
>
>> 2. How this feature is related to sslproxy_foreign_intermediate_certs,
>> how it can interfere with it?
> AFAICT by looking at the code, Squid only downloads certificates that
> Squid is missing when trying to build a complete certificate chain for a
> given server connection. Any sslproxy_foreign_intermediate_certs are
> used as needed during the chain building process (i.e., they are _not_
> "missing").
OK, so this file is used to complete chains, and it contains only
statically (manually) added certs, right?

I.e., the downloader should not save fetched intermediate CAs here, which
would be logical, wouldn't it?
>
>
>> Release notes contains nothing about this feature. Wiki contains only
>> one mention in passing that this functionality exists in principle.
> I agree that this feature lacks documentation. This is, in part, because
> the feature has no configuration options that normally force developers
> to document at least some of the code logic. We should add a few words
> about it to sslproxy_foreign_intermediate_certs documentation.
>
>
> FWIW, we are also adding an ACL to identify internal transactions that
> fetch missing certificates.
>
>
> HTH,
>
> Alex.
>





Re: [squid-users] A bunch of SSL errors I am not sure why

2017-01-18 Thread Yuri Voinov


On 18.01.2017 23:40, Eliezer Croitoru wrote:
> Thanks for the detail Amos,
>
> I noticed that couple major Root CA certificates was revoked so it could be 
> one thing.
> And can you give some more details on how to fetch the certificated using the 
> openssl tools?
> (Maybe redirect towards an article about it)
There is no article about trivial things.

root @ khorne / # openssl s_client -connect symantec.com:443
CONNECTED(0003)
depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU =
"(c) 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class
3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network,
CN = Symantec Class 3 EV SSL CA - G3
verify return:1
depth=0 1.3.6.1.4.1.311.60.2.1.3 = US, 1.3.6.1.4.1.311.60.2.1.2 =
Delaware, businessCategory = Private Organization, serialNumber =
2158113, C = US, postalCode = 94043, ST = California, L = Mountain View,
street = 350 Ellis Street, O = Symantec Corporation, OU = Symantec Web -
Redir, CN = symantec.com
verify return:1
---
Certificate chain
 0
s:/1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=Delaware/businessCategory=Private
Organization/serialNumber=2158113/C=US/postalCode=94043/ST=California/L=Mountain
View/street=350 Ellis Street/O=Symantec Corporation/OU=Symantec Web -
Redir/CN=symantec.com
   i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
Class 3 EV SSL CA - G3
 1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec
Class 3 EV SSL CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
---
Server certificate
-BEGIN CERTIFICATE-
MIIJ7jCCCNagAwIBAgIQGxlwar89MNsXoPlBKLC9ZjANBgkqhkiG9w0BAQsFADB3
MQswCQYDVQQGEwJVUzEdMBsGA1UEChMUU3ltYW50ZWMgQ29ycG9yYXRpb24xHzAd
BgNVBAsTFlN5bWFudGVjIFRydXN0IE5ldHdvcmsxKDAmBgNVBAMTH1N5bWFudGVj
IENsYXNzIDMgRVYgU1NMIENBIC0gRzMwHhcNMTYwNjEzMDAwMDAwWhcNMTcwNjEz
MjM1OTU5WjCCARsxEzARBgsrBgEEAYI3PAIBAxMCVVMxGTAXBgsrBgEEAYI3PAIB
AgwIRGVsYXdhcmUxHTAbBgNVBA8TFFByaXZhdGUgT3JnYW5pemF0aW9uMRAwDgYD
VQQFEwcyMTU4MTEzMQswCQYDVQQGEwJVUzEOMAwGA1UEEQwFOTQwNDMxEzARBgNV
BAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxGTAXBgNVBAkM
EDM1MCBFbGxpcyBTdHJlZXQxHTAbBgNVBAoMFFN5bWFudGVjIENvcnBvcmF0aW9u
MR0wGwYDVQQLDBRTeW1hbnRlYyBXZWIgLSBSZWRpcjEVMBMGA1UEAwwMc3ltYW50
ZWMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwRqh8lRuQgtO
ZDvGmr2+JKD5dgS8do3CQttE0wUosst5uMBoI0JdWCcD+dBKBMf+5PD2TZie75qY
Dwg4TPWhiJhLVDtriB4xPHIaI3l4HNyiC2QbCYIlNxiYBApEX3xi7V94ZJBiQGhD
jBjVBlWTwYMgcEP+1ivUL0h/ShZOjcJaqdlvLrne7WFQVDzcGcezqXEovgl/63sB
5tL0MDY5lpqUIllNLoMhk+o/NAu19NSQRTqVPmfSQZIQM/aki70LKQWmXzM7yjWk
TYVfoqgj7zE9fwfyEZ3mdohSkxaNKdbnafCLHI6Yzc9t9wnnmYvBWDfTCSE+kdYC
m/hEfFJaTQIDAQABo4IFzjCCBcowggNqBgNVHREEggNhMIIDXYIMc3ltYW50ZWMu
Y29tggpub3J0b24uY29tggt2ZXJpdGFzLmNvbYISYWNjb3VudC5ub3J0b24uY29t
ghRjYXJlZXJzLnN5bWFudGVjLmNvbYIZY3VzdG9tZXJjYXJlLnN5bWFudGVjLmNv
bYIOZGUubm9ydG9uLm1vYmmCGmRvd25sb2Fkcy5ndWFyZGlhbmVkZ2UuY29tghFl
bWVhLnN5bWFudGVjLmNvbYIQZXUuc3RvcmUucGdwLmNvbYIRam9icy5zeW1hbnRl
Yy5jb22CFW1vc3RkYW5nZXJvdXN0b3duLmNvbYITbXlub3J0b25hY2NvdW50LmNv
bYIQbmEuc3RvcmUucGdwLmNvbYIRbm9ydG9uYWNjb3VudC5jb22CFW5vcnRvbmxl
YXJuaW5naHViLmNvbYIKbnVrb25hLmNvbYIRcm93LnN0b3JlLnBncC5jb22CEHNz
bC5zeW1hbnRlYy5jb22CDXN0b3JlLnBncC5jb22CEHVrLnN0b3JlLnBncC5jb22C
Fnd3dy5hY2NvdW50Lm5vcnRvbi5jb22CFXd3dy5lbWVhLnN5bWFudGVjLmNvbYIZ
d3d3Lm1vc3RkYW5nZXJvdXN0b3duLmNvbYIVd3d3Lm5vcnRvbmFjY291bnQuY29t
ghl3d3cubm9ydG9ubGVhcm5pbmdodWIuY29tgg53d3cubnVrb25hLmNvbYILd3d3
LnBncC5jb22CFHd3dy5zc2wuc3ltYW50ZWMuY29tgg93d3cudmVyaXRhcy5jb22C
End3dy5zeW1hbnRlYy5jby5qcIISd3d3LnN5bWFudGVjLmNvLnVrgg93d3cuc3lt
YW50ZWMuZnKCD3d3dy5zeW1hbnRlYy5kZYIPd3d3LnN5bWFudGVjLml0ghN3d3cu
c3ltYW50ZWMuY29tLmF1ghJ3d3cuc3ltYW50ZWMuY28ua3KCE3d3dy5zeW1hbnRl
Yy5jb20uYnKCD3d3dy5zeW1hbnRlYy5teIIPd3d3LnN5bWFudGVjLmVzgg93d3cu
c3ltYW50ZWMuY2GCD3d3dy5zeW1hbnRlYy5oa4ISd3d3LnN5bWFudGVjLmNvLmlu
gg93d3cuc3ltYW50ZWMudHeCD3d3dy5zeW1hbnRlYy5zZzAJBgNVHRMEAjAAMA4G
A1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwbwYD
VR0gBGgwZjAHBgVngQwBATBbBgtghkgBhvhFAQcXBjBMMCMGCCsGAQUFBwIBFhdo
dHRwczovL2Quc3ltY2IuY29tL2NwczAlBggrBgEFBQcCAjAZDBdodHRwczovL2Qu
c3ltY2IuY29tL3JwYTAfBgNVHSMEGDAWgBQBWavn3ToLWaZkY9bPIAdX1ZHnajAr
BgNVHR8EJDAiMCCgHqAchhpodHRwOi8vc3Iuc3ltY2IuY29tL3NyLmNybDBXBggr
BgEFBQcBAQRLMEkwHwYIKwYBBQUHMAGGE2h0dHA6Ly9zci5zeW1jZC5jb20wJgYI
KwYBBQUHMAKGGmh0dHA6Ly9zci5zeW1jYi5jb20vc3IuY3J0MIIBBgYKKwYBBAHW
eQIEAgSB9wSB9ADyAHcA3esdK3oNT6Ygi4GtgWhwfi6OnQHVXIiNPRHEzbbsvswA
AAFVS+V56QAABAMASDBGAiEAlwG/vUrML+CkdGkmUuyjvTHeWMaIvR409GHqmKjC
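[message truncated in the archive]

For the archive: a common recipe for capturing every certificate a server presents, using only stock openssl. The s_client line needs network access, so the extraction/inspection step is demonstrated below on a throwaway self-signed certificate; the file names are purely illustrative:

```shell
# Network step (illustration only; not executed here):
#   openssl s_client -connect symantec.com:443 -showcerts </dev/null \
#     | sed -n '/BEGIN CERT/,/END CERT/p' > /tmp/demo-chain.pem
# Local stand-in: generate a throwaway cert to act as the saved chain.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-chain.pem \
    -subj "/CN=demo-intermediate" 2>/dev/null
# Extract the PEM block(s) and inspect the subject, exactly as you
# would for a downloaded intermediate.
sed -n '/BEGIN CERT/,/END CERT/p' /tmp/demo-chain.pem \
    | openssl x509 -noout -subject
```

Each certificate extracted this way can then be fed to sslproxy_foreign_intermediate_certs or inspected individually with `openssl x509 -text`.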

Re: [squid-users] Help with Certificate validation

2017-01-17 Thread Yuri Voinov
Put your regression server into an SSL Bump splice rule.
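In squid.conf terms that suggestion looks roughly like the following sketch (the hostname is hypothetical, and rule order matters: the splice rule must match before any bump rule):

```
# Peek at step 1 to learn the SNI, then tunnel the regression server
# without decryption so its certificates are never validated by Squid.
acl step1 at_step SslBump1
acl regression_srv ssl::server_name regression.example.com
ssl_bump peek step1
ssl_bump splice regression_srv
ssl_bump bump all
```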


On 18.01.2017 1:27, Mustafa Mohammad wrote:
> I’m using squid proxy to connect to our regression server. When our
> configuration file is doing a CRLCheck, I’m unable to connect to the
> server.  I have tried SSL bump and ssl_proxy option but was unable to
> make it work. When I checked the logs, It says it was unable to
> validate certificate. This is a high priority issue for our company.
> Please respond as soon as possible.
>
> Thanks,
> Mustafa
>
>

-- 
What is the fundamental difference between the programmer and by a fag?
Fag never become five times to free the memory of one object. Fag will
not use two almost identical string libraries in the same project. Fag
will never write to a mixture of C and C ++. Fag will never pass objects
by pointer. Now you know why these two categories so often mentioned
together, and one of them is worse :)




Re: [squid-users] Squid memory leak on ubuntu 14.04

2017-01-10 Thread Yuri Voinov


On 10.01.2017 19:34, vinay wrote:
> Thanks, Amos, for your timely help.
>
> As mentioned by you, I have configured the squid conf file and am able to
> get TCP_HIT in the access logs. Thanks a lot.
> My new issue is: my app has 3 types of users (Normal, Editor and Business).
> The contents are getting cached for the Normal user and I get TCP_HIT for
> the Normal user. When I log in with the other users it still gives TCP_MISS
> even though the same contents are loaded. It should not cache per user.
> Any idea to overcome this issue ? 
This is a feature, not an issue. The "issue" is named "Vary" and (possibly)
"dynamic content".
> Any configurations need to be changed ? 
Yes. The whole configuration needs to be changed.

Start from here:

http://wiki.squid-cache.org/
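To see why Vary produces per-user cache entries, here is a hedged sketch (not Squid's actual implementation) of how an HTTP cache builds its keys: the values of every request header named in the response's Vary header are folded into the cache key, so the same URL requested with different Cookie values yields different entries:

```python
def cache_key(url: str, vary: str, request_headers: dict) -> str:
    """Build a cache key from the URL plus the varying header values."""
    names = sorted(n.strip().lower() for n in vary.split(","))
    headers = {k.lower(): v for k, v in request_headers.items()}
    # Same URL + same varying header values -> same key; any difference
    # in a listed header -> a distinct cache entry.
    return url + "".join(f"|{n}={headers.get(n, '')}" for n in names)

normal = cache_key("http://app/page", "Cookie", {"Cookie": "role=normal"})
editor = cache_key("http://app/page", "Cookie", {"Cookie": "role=editor"})
print(normal == editor)  # prints False: same URL, separate entries
```

If the origin application sends `Vary: Cookie` (or session-dependent content), a shared-cache HIT across users is impossible by design; the fix belongs on the origin server's headers, not in Squid.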

>
> Thank you in advance. 
>
>
>
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-memory-leak-on-ubuntu-14-04-tp4674855p4681108.html
> Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] Is it possible to modify cached object?

2017-01-08 Thread Yuri Voinov


On 08.01.2017 20:49, boruc wrote:
> Thank you for your answer.
>
> Actually I managed to do what I want by simply editing that file and
> changing content length if necessary. I don't know why sometimes I need to
> restart Squid or reopen browser to see changed version of page. Sometimes
> it works fine on regular browser window, sometimes I need to open it in
> private mode, but that's the thing that I want to figure out now.
>
> Would it be possible to NOT cache specific files, like images by using
> refresh_pattern? Or in other words - I'd like to cache only HTML/CSS files.
Sure. You can do it either using refresh_pattern, or, more selectively,
using

# No cache directives
cache deny dont_cache_url
cache allow all

> 2017-01-06 21:33 GMT+01:00 reinerotto [via Squid Web Proxy Cache] <
> ml-node+s1019090n4681075...@n4.nabble.com>:
>
>> Content adaption can also be done without squid. Mod of message body
>> "on-the-fly" can be achieved using commercial product(s).
>>
>>
>
>
>





Re: [squid-users] Squid 3.3.8 is available

2017-01-05 Thread Yuri Voinov


On 05.01.2017 22:43, vinay wrote:
> Hi am using Squid 3.3.8 on Ubuntu 14.04. I have default configuration of
> Squid config file . The request is passing via Squid but its not caching the
> contents/images/css , everytime am getting TCP_MISS/200 for each request
> getting logged in access logs.
>
> ###SQUID CONF entries #
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> http_access deny to_localhost
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow localnet
> http_access allow localhost
> # And finally deny all other access to this proxy
> http_access deny all
> # Squid normally listens to port 3128
> http_port 3128
> # Uncomment and adjust the following to add a disk cache directory.
> cache_dir ufs /var/spool/squid 102400 16 256
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp:   144020% 10080
> refresh_pattern ^gopher:14400%  1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> refresh_pattern .   0   20% 4320
> # Max Object Size Cache
> maximum_object_size 10240 KB
> ##
>
>
> Let me know if Squid 3.3.8 is compatible with Ubuntu 14.04 or not? if not
> what are the compatible versions with Ubuntu 14.04 as i cant change Ubuntu
> version. 
> what changes need to be done to cache the contents ?
You will never be able to achieve this with the default configuration.

To achieve this, you need to work hard. At the same time, no one will
give you a ready-made solution.
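As a hedged starting point only (the numbers must be tuned per site, and origin Cache-Control headers can still override them), caching static objects usually requires refresh_pattern entries placed ahead of the catch-all:

```
# Illustrative values: min-age 1 day, 50% of object age, max-age 1 week.
refresh_pattern -i \.(gif|png|jpe?g|css|js|ico)$ 1440 50% 10080
refresh_pattern .                                0    20% 4320
```

Whether objects are actually stored still depends on the origin's Cache-Control, Expires and Vary headers, so inspecting those for a few TCP_MISS URLs is the next step.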
>
>
>





Re: [squid-users] Missing cache files

2016-12-17 Thread Yuri Voinov
Man, this question has been answered a million times. Use the search.


On 17.12.2016 16:41, Odhiambo Washington wrote:
> Hi,
>
> I keep seeing something that I think is odd. Squid has been exiting on
> signal 6, and I keep seeing this:
>
> root@gw:/usr/local/openssl # tail -f /opt/squid-3.5/var/logs/cache.log
> 2016/12/17 13:38:32| DiskThreadsDiskFile::openDone: (2) No such file
> or directory
> 2016/12/17 13:38:32|/opt/squid-3.5/var/cache/00/26/264D
> 2016/12/17 13:40:24| DiskThreadsDiskFile::openDone: (2) No such file
> or directory
> 2016/12/17 13:40:24|/opt/squid-3.5/var/cache/00/3B/3B56
> 2016/12/17 13:42:34| DiskThreadsDiskFile::openDone: (2) No such file
> or directory
> 2016/12/17 13:42:34|/opt/squid-3.5/var/cache/00/6B/6B0D
> 2016/12/17 13:43:36| DiskThreadsDiskFile::openDone: (2) No such file
> or directory
> 2016/12/17 13:43:36|/opt/squid-3.5/var/cache/00/00/0050
> 2016/12/17 13:44:25| DiskThreadsDiskFile::openDone: (2) No such file
> or directory
> 2016/12/17 13:44:25|/opt/squid-3.5/var/cache/00/AF/AFF1
>
> So, what could be making the files disappear?
>
>
> -- 
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Cats - delicious. You just do not know how to cook them.




Re: [squid-users] Squid Forward Proxy for LDAP

2016-12-15 Thread Yuri Voinov


15.12.2016 20:29, Bryan Peters wrote:
> My Google-fu seems to be coming up short.
>
> We have an application that ties into our users SSO/LDAP servers.  We,
> don't run an LDAP server of our own, we're just making outbound calls
> to their LDAP servers.
>
> I would like to proxy all outbound LDAP calls through Squid to get
> around some limitations of AWS and our customers need to whitelist an
> IP. (AWS load balancers don't have static IPs, some of our customers
> won't whitelist FQDNs in their firewall).
>
> Getting the traffic from our app server(s) to the Squid box hasn't
> been much of a problem.  I'm using Iptables/NAT to accomplish this.  
> TCPdump on the Squid machine sees  traffic coming in on 3128.
>
> I've added 389 as a 'safe port' in the squid config, created ACLs that
> allow the network the traffic is coming in on.  Yet squid never grabs
> the traffic and does anything with it.  The logs don't get updated at all.
>
> Am I incorrect about Squid being able to proxy LDAP traffic?  
Exactly. By design, Squid is an HTTP proxy - that was the starting point.
Modern versions also support HTTPS (with restrictions) and FTP (with
restrictions).
>
> Googling for this is sort of maddening as all forums, mailing lists,
> FAQs and documentation continues to come up for doing LDAP auth on a
> Squid machine, which isn't what I'm looking for at all.
Condolences. What you want is not possible with Squid.
>
> Any help you can give would be appreciated.
Nothing can help when the capability simply does not exist as a class.
Squid is not a proxy for every protocol in the world, although support
for a few more of them would do no harm - and certainly not FTP (FTP, in
the year 2016, really! :))
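For what it's worth, the "single whitelisted IP" goal from the original post needs only a plain TCP forward, not a proxy. A minimal iptables sketch, with made-up addresses (203.0.113.10 standing in for the customer's LDAP server), assuming the forwarding box itself has the static IP the customer whitelists:

```shell
# Illustrative configuration only - addresses are invented.
# Send inbound LDAP connections on to the customer's LDAP server:
iptables -t nat -A PREROUTING -p tcp --dport 389 \
         -j DNAT --to-destination 203.0.113.10:389
# Make replies return via this box (source appears as our static IP):
iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 389 \
         -j MASQUERADE
# Enable kernel forwarding:
sysctl -w net.ipv4.ip_forward=1
```

This keeps Squid out of the picture entirely, which is the point: LDAP is not HTTP, so a layer-3/4 forward is the right tool.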
>
> Thanks
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Cisco ASA with transparent Squid with HTTP/HTTPS filtering

2016-12-14 Thread Yuri Voinov


14.12.2016 21:59, Yuri Voinov wrote:
>
>
>
> 14.12.2016 21:08, Rafael Akchurin wrote:
>>
>> Hello everyone,
>>
>>  
>>
>> After pulling all my hair out and reading every possible howto on the
>> Internet for Cisco ASA integration with Squid using WCCP I have
>> decided to write my own. The how to is at
>> https://docs.diladele.com/tutorials/web_filter_https_squid_cisco_wccp/index.html.
>> Please note it is aimed at those with minimal admin skills and
>> contains every single step thoroughly described (mostly for myself
>> not to forget anything).
>>
Raf, one more note. WCCP will never be easy for junior admins,
especially those with minimal admin skills - and the same goes for the
ASA ;) And (in my own opinion) Squid + WCCP has never been a simple task
for any infrastructure, and it never will be. ;) Warn your readers
rather than misleading them into thinking it is a very simple task.
>>
>>  
>>
>> May I get your opinions/ideas if what is written is good enough for
>> the novice admin?
>>
>>  
>>
>> Moreover several question remain:
>>
>>  
>>
>> 1.  Does Squid perform fake CONNECT requests with SNI info
>> instead of raw IP like I am seeing now?
>>
>> 2.  Why HTTPS redirection only works with “wccp2_service_info 70
>> protocol=tcp flags=*dst_ip_hash* priority=240 ports=443” (all other
>> flags from wccp configuration section in squid.conf do not work).
>>
> Because the ASA is a router, and Cisco routers use HASH as the assignment method.
>>
>> 3.  How to bypass connections from workstations to specific
>> remote sites by FQDN on Cisco ASA?
>>
> In fact this will occur by IP anyway. Cisco devices do a DNS lookup and
> save the IPs in the config instead of the FQDN.
>>
>> 4.  Or maybe it is better to exclude them (3) from SSL bump on
>> Squid using ssl::server_name by splicing?
>>
> That depends on your requirements.
>>
>>  
>>
>> Thanks in advance for everyone who responds.
>>
>>  
>>
>> Best regards,
>>
>> Rafael Akchurin
>>
>> Diladele B.V.
>>
>>  
>>
>> --
>>
>> Please take a look at Web Safety - our ICAP based web filter server
>> for Squid proxy at https://www.diladele.com
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
> -- 
> Cats - delicious. You just do not know how to cook them.



Re: [squid-users] Cisco ASA with transparent Squid with HTTP/HTTPS filtering

2016-12-14 Thread Yuri Voinov


14.12.2016 21:59, Yuri Voinov wrote:
>
>
>
> 14.12.2016 21:08, Rafael Akchurin wrote:
>>
>> Hello everyone,
>>
>>  
>>
>> After pulling all my hair out and reading every possible howto on the
>> Internet for Cisco ASA integration with Squid using WCCP I have
>> decided to write my own. The how to is at
>> https://docs.diladele.com/tutorials/web_filter_https_squid_cisco_wccp/index.html.
>> Please note it is aimed at those with minimal admin skills and
>> contains every single step thoroughly described (mostly for myself
>> not to forget anything).
>>
>>  
>>
>> May I get your opinions/ideas if what is written is good enough for
>> the novice admin?
>>
>>  
>>
>> Moreover several question remain:
>>
>>  
>>
>> 1.  Does Squid perform fake CONNECT requests with SNI info
>> instead of raw IP like I am seeing now?
>>
>> 2.  Why HTTPS redirection only works with “wccp2_service_info 70
>> protocol=tcp flags=*dst_ip_hash* priority=240 ports=443” (all other
>> flags from wccp configuration section in squid.conf do not work).
>>
> Because the ASA is a router, and Cisco routers use HASH as the assignment method.
http://wiki.squid-cache.org/ConfigExamples/Intercept/CiscoIOSv15Wccp2

The differences in configs for switches vs. routers are described there.
>>
>> 3.  How to bypass connections from workstations to specific
>> remote sites by FQDN on Cisco ASA?
>>
> In fact this will occur by IP anyway. Cisco devices do a DNS lookup and
> save the IPs in the config instead of the FQDN.
>>
>> 4.  Or maybe it is better to exclude them (3) from SSL bump on
>> Squid using ssl::server_name by splicing?
>>
> That depends on your requirements.
>>
>>  
>>
>> Thanks in advance for everyone who responds.
>>
>>  
>>
>> Best regards,
>>
>> Rafael Akchurin
>>
>> Diladele B.V.
>>
>>  
>>
>> --
>>
>> Please take a look at Web Safety - our ICAP based web filter server
>> for Squid proxy at https://www.diladele.com
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
> -- 
> Cats - delicious. You just do not know how to cook them.



Re: [squid-users] Cisco ASA with transparent Squid with HTTP/HTTPS filtering

2016-12-14 Thread Yuri Voinov


14.12.2016 21:08, Rafael Akchurin wrote:
>
> Hello everyone,
>
>  
>
> After pulling all my hair out and reading every possible howto on the
> Internet for Cisco ASA integration with Squid using WCCP I have
> decided to write my own. The how to is at
> https://docs.diladele.com/tutorials/web_filter_https_squid_cisco_wccp/index.html.
> Please note it is aimed at those with minimal admin skills and
> contains every single step thoroughly described (mostly for myself not
> to forget anything).
>
>  
>
> May I get your opinions/ideas if what is written is good enough for
> the novice admin?
>
>  
>
> Moreover several question remain:
>
>  
>
> 1.  Does Squid perform fake CONNECT requests with SNI info instead
> of raw IP like I am seeing now?
>
> 2.  Why HTTPS redirection only works with “wccp2_service_info 70
> protocol=tcp flags=*dst_ip_hash* priority=240 ports=443” (all other
> flags from wccp configuration section in squid.conf do not work).
>
Because the ASA is a router, and Cisco routers use HASH as the assignment method.
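For reference, a minimal squid.conf WCCPv2 fragment of the kind that matches the hash-assignment point above. The router address and the GRE choice are illustrative assumptions, not taken from Rafael's setup:

```
# squid.conf (illustrative): WCCPv2 against a hash-assignment device.
wccp2_router 192.0.2.1              # the ASA's address - example only
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash        # ASA and routers: hash, not mask

# Dynamic service 70 for HTTPS redirection:
wccp2_service dynamic 70
wccp2_service_info 70 protocol=tcp flags=dst_ip_hash priority=240 ports=443
```

On mask-assignment platforms (some Catalyst switches) `wccp2_assignment_method mask` with `src_ip_hash`-style flags may apply instead; check your platform's WCCP documentation.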
>
> 3.  How to bypass connections from workstations to specific remote
> sites by FQDN on Cisco ASA?
>
In fact this will occur by IP anyway. Cisco devices do a DNS lookup and
save the IPs in the config instead of the FQDN.
>
> 4.  Or maybe it is better to exclude them (3) from SSL bump on
> Squid using ssl::server_name by splicing?
>
That depends on your requirements.
>
>  
>
> Thanks in advance for everyone who responds.
>
>  
>
> Best regards,
>
> Rafael Akchurin
>
> Diladele B.V.
>
>  
>
> --
>
> Please take a look at Web Safety - our ICAP based web filter server
> for Squid proxy at https://www.diladele.com
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] URL too large??

2016-12-13 Thread Yuri Voinov
It means exactly what it said: URL too long.

Squid's default limit on URL size is 8 KB. That was a reasonable maximum
10 years ago.

Now it seems too small (by at least 4x), because the Internet is now
full of adware junk (referrals/trackers/counters etc.) whose URLs often
exceed 8 KB.

You can fix it easily (if it worries you - to end users it looks like
hangs or broken links) if you build Squid from source. Just change the
value of MAX_URL in src/defines.h and recompile.

But if you do this, beware: some things become slower, and there is a
danger of denial of service.
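A rough sketch of that rebuild, assuming a 3.5-era source tree; the exact spelling and whitespace of the define can differ between versions, so check src/defines.h first:

```shell
# Hypothetical paths and version - adjust to your own tree and options.
tar xf squid-3.5.22.tar.gz && cd squid-3.5.22
grep -n 'MAX_URL' src/defines.h        # confirm the current value first
sed -i -E 's/#define MAX_URL[[:space:]]+8192/#define MAX_URL 32768/' \
    src/defines.h
./configure --prefix=/opt/squid-3.5    # plus your usual configure options
make && sudo make install
```

Raising the value to 32 KB is an arbitrary example; pick the smallest limit that covers the URLs you actually see, given the slowdown and denial-of-service risks mentioned above.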

That's it.

WBR, Yuri


14.12.2016 1:05, Odhiambo Washington wrote:
> I did not dig deep into it; I couldn't scan the access log for it
> because I had no idea what 'too long' meant.
> I will ignore it until someone says they're unable to access a
> website and can give me details of what it is.
>
>
> On 13 December 2016 at 19:51, Eliezer Croitoru  > wrote:
>
> I think that the maximum size was 64k, and this URL is well below that.
> It should not be an issue if this is some weird application
> creating some random url which doesn't have meaning.
> But if you know what is creating such a url it's a whole another
> story.
> Can you reproduce\recreate this url?
>
> Eliezer
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile+WhatsApp: +972-5-28704261
> Email: elie...@ngtech.co.il 
>
>
>
>
> On Tue, Dec 13, 2016 at 11:08 AM, Odhiambo Washington
> > wrote:
>
> Hi,
>
> Saw this on my cache.log (squid-3.5.22, FreeBSD-9.3,):
>
> 2016/12/13 11:47:55| WARNING: no_suid: setuid(0): (1)
> Operation not permitted
> 2016/12/13 11:47:55| WARNING: no_suid: setuid(0): (1)
> Operation not permitted
> 2016/12/13 11:47:55| HTCP Disabled.
> 2016/12/13 11:47:55| Finished loading MIME types and icons.
> 2016/12/13 11:47:55| Accepting NAT intercepted HTTP Socket
> connections at local=[::]:13128 remote=[::] FD 39 flags=41
> 2016/12/13 11:47:55| Accepting HTTP Socket connections at
> local=[::]:13130 remote=[::] FD 40 flags=9
> 2016/12/13 11:47:55| Accepting NAT intercepted SSL bumped
> HTTPS Socket connections at local=[::]:13129 remote=[::] FD 41
> flags=41
> 2016/12/13 11:47:55| Accepting ICP messages on [::]:3130
> 2016/12/13 11:47:55| Sending ICP messages from [::]:3130
> *2016/12/13 11:53:25| urlParse: URL too large (11654 bytes)*
>
>
> -- 
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>
>
>
> -- 
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft."
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] caching videos over https?

2016-11-20 Thread Yuri Voinov
That's not what I'm talking about.

There is a difference between helping people and carrying passengers who
want a turnkey solution without doing anything themselves.

Personally, as help I am quite happy simply to point out the direction,
or to show that something is possible in principle. The rest I do
myself. If I can't - then I buy it.

At the beginning of the thread I did exactly that - gave the direction.
I think that should be enough, shouldn't it?


20.11.2016 18:07, --Ahmad-- wrote:
> lol …. i hope you don’t spend too much time helping people here on
> the mailing list for free .
>
> thanks again for your time .
>
>
>> On Nov 20, 2016, at 2:03 PM, Yuri Voinov <yvoi...@gmail.com
>> <mailto:yvoi...@gmail.com>> wrote:
>>
>> Store-ID is not exactly caching. It is deduplication, and that is just
>> what you need for dynamic content, which is what most video is. Do not
>> forget about the sheer volume of the video itself.
>>
>> As for caching, you should look at which video is served under HTTPS.
>> Modern vanilla Squid in most cases cannot cache that, with Store-ID or
>> without it, because of the videos' HTTP headers and pragmas.
>>
>> In any case, the complete solution is too complex for the majority of
>> ordinary Squid users and too costly in terms of effort to give away.
>> Such solutions you can either buy or write yourself, agreed? I see no
>> reason to give away for free a solution I spent a lot of time on - it
>> is not free.
>>
>> 20.11.2016 17:54, --Ahmad-- wrote:
>>> you are correct .
>>>
>>> but the video cache solution was very very simple when compared to the
>>> store id one .
>>> also it supported a couple of websites without that much effort .
>>>
>>> what i mean here is the simplicity ….. i’m not at the development
>>> level … i’m talking about the normal squid users .
>>>
>>> cheers 
>>>
>>>> On Nov 20, 2016, at 1:47 PM, Yuri Voinov <yvoi...@gmail.com
>>>> <mailto:yvoi...@gmail.com>> wrote:
>>>>
>>>> And there is no need to invent anything. Everything has already been
>>>> invented, and the invention is called Store-ID.
>>>>
>>>> You take it and build whatever is needed on top of it. I do not see
>>>> any problem.
>>>>
>>>> 20.11.2016 17:45, --Ahmad-- wrote:
>>>>> hey guys .
>>>>>
>>>>> as long as the video cache has been opened now  and in past
>>>>>  proved its strength with http other websites for video .
>>>>>
>>>>> ((lets put youtube away now .))
>>>>>
>>>>>
>>>>> why don’t we see development on it to support  the video contents
>>>>> of websites that support http  like daily motion and its sisters
>>>>> websites .
>>>>>
>>>>>
>>>>> and why don’t we use certificates once development for youtube &
>>>>> Facebook ???
>>>>>
>>>>>
>>>>> i saw the development of eleizer of caching windows updates and it
>>>>> was great solution ….. why don’t we combine those 2 solution in 1
>>>>> product ?
>>>>>
>>>>>
>>>>> i  think that continuing on the solution of video cache is better
>>>>> than inventing solution from scratch .
>>>>>
>>>>> thanks again squid users Guys 
>>>>>
>>>>>> On Nov 20, 2016, at 1:10 AM, Eliezer Croitoru
>>>>>> <elie...@ngtech.co.il <mailto:elie...@ngtech.co.il>> wrote:
>>>>>>
>>>>>> The cachevideos solution is not a fake but as Amos mentioned it
>>>>>> might not have been updated\upgraded to match today state of
>>>>>> YouTube and google videos.
>>>>>> I do not know a thing about this product but they offer a trial
>>>>>> period and they have a forums which can be used to get more details.
>>>>>> I believe they still have something really good in their solution
>>>>>> since it's not based on StoreID but on other concepts.
>>>>>>
>>>>>> Eliezer
>>>>>>
>>>>>> 
>>>>>> Eliezer Croitoru
>>>>>> Linux System Administrator
>>>>>> Mobile: +972-5-28704261
>>>>>> Email: elie...@ngtech.co.il <mailto:elie...@ngtech.co.il>
>>>>>>
>>>>>>
>>>>>> -Original Message-
>>>>>> From: Yuri Voinov [mailto:yvoi...@gm

Re: [squid-users] caching videos over https?

2016-11-20 Thread Yuri Voinov
And there is no need to invent anything. Everything has already been
invented, and the invention is called Store-ID.

You take it and build whatever is needed on top of it. I do not see any
problem.
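As a sketch of what building on top of Store-ID looks like: a minimal helper speaking the Store-ID stdin/stdout protocol. The CDN URL pattern below is invented purely for illustration - real sites need their own patterns, and (per the rest of this thread) YouTube's scheme defeats this approach entirely.

```python
#!/usr/bin/env python3
"""Minimal Store-ID helper sketch (Squid 3.4+ store_id_program protocol).

Squid sends one request per line: "[channel-ID] URL [extras...]".
We reply "[channel-ID] OK store-id=<canonical-url>" for URLs we
recognize, or "[channel-ID] ERR" otherwise, so the same video fetched
from different CDN mirrors shares one cache entry.
"""
import re
import sys

# Invented example pattern: numbered mirrors of a video CDN where the
# query string (session tokens, expiry) varies but the content does not.
PATTERN = re.compile(
    r'^https?://cdn\d+\.example-video\.com/videos/(?P<vid>[\w-]+\.mp4)')


def store_id(url):
    """Return a canonical store ID for recognized URLs, else None."""
    m = PATTERN.match(url)
    if m:
        # ".squid.internal" keeps the synthetic key out of real DNS space.
        return ('http://example-video.com.squid.internal/videos/'
                + m.group('vid'))
    return None


def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        # With helper concurrency enabled, the first token is a channel ID.
        if parts[0].isdigit() and len(parts) > 1:
            channel, url = parts[0] + ' ', parts[1]
        else:
            channel, url = '', parts[0]
        sid = store_id(url)
        sys.stdout.write(channel + (f'OK store-id={sid}' if sid else 'ERR')
                         + '\n')
        sys.stdout.flush()


if __name__ == '__main__':
    main()
```

Wired in with something like `store_id_program /usr/local/bin/storeid_helper.py` and `store_id_children 5 startup=1` in squid.conf (directive names per the Squid Store-ID feature).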

20.11.2016 17:45, --Ahmad-- wrote:
> hey guys .
>
> since the video cache topic has been opened now, and in the past it proved its 
> strength with http on other video websites .
>
> ((lets put youtube away for now .))
>
>
> why don’t we see development on it to support the video content of websites 
> that still use http, like dailymotion and its sister websites ?
>
>
> and why don’t we use certificates, once developed, for youtube & Facebook ???
>
>
> i saw eliezer’s development of caching windows updates and it was a great 
> solution ….. why don’t we combine those 2 solutions in 1 product ?
>
>
> i think that continuing with the video cache solution is better than 
> inventing a solution from scratch .
>
> thanks again squid users Guys 
>
>> On Nov 20, 2016, at 1:10 AM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
>>
>> The cachevideos solution is not a fake but as Amos mentioned it might not 
>> have been updated\upgraded to match today state of YouTube and google videos.
>> I do not know a thing about this product but they offer a trial period and 
>> they have a forums which can be used to get more details.
>> I believe they still have something really good in their solution since it's 
>> not based on StoreID but on other concepts.
>>
>> Eliezer
>>
>> 
>> Eliezer Croitoru
>> Linux System Administrator
>> Mobile: +972-5-28704261
>> Email: elie...@ngtech.co.il
>>
>>
>> -Original Message-
>> From: Yuri Voinov [mailto:yvoi...@gmail.com] 
>> Sent: Sunday, November 20, 2016 00:18
>> To: Eliezer Croitoru <elie...@ngtech.co.il>; 
>> squid-users@lists.squid-cache.org
>> Subject: Re: [squid-users] caching videos over https?
>>
>>
>>
>> 20.11.2016 3:59, Eliezer Croitoru wrote:
>>> Yuri,
>>>
>>> I am not the most experienced in life and in security but I can say it's 
>>> possible and I am not selling it
>>> I released the windows update cacher which works in enough places(just by 
>>> seeing how many downloaded it..).
>>> The first rule I have learned from my mentors is that even if you know 
>>> something it might not fit to be in a form that the general public should 
>>> know about.
>>> I am looking for a link to CVE related publication rules of thumb so I 
>>> would be able to understand better what should be published and how.
>>> Any redirections are welcomed..
>>>
>>> A note:
>>> If you have the plain html of a json which contains the next links you 
>>> would be able to predict couple things...
>> I know what are you talking about. I came to this idea two years ago.
>> Unfortunately, I had more important priorities.
>> But I'm not seen open source solutions uses real YT internals yet and really 
>> works.
>>
>> Now I'm working on another squid's thing, but plan to return to YT store-ID 
>> helper later.
>>
>> However, it is only the fact that the "solutions" that are in the public 
>> domain, or obsolete, or are worthless.
>>
>> And for some more money and asking. I would understand if they really 
>> worked. Unfortunately, Google does not idiots work.
>>
>> That's why I said that the development of the Indian - fake.
>>> If you would be able to catch every single fedora\redhat sqlite db file and 
>>> replace it with a malicious sha256 data you would be able to hack each of 
>>> their clients machine when they will be updated.
>>> If you believe you can coordinate such a thing you are way above StoreID 
>>> level of understanding networking and Computer Science.
>>>
>>> Cheers,
>>> Eliezer
>>>
>>> 
>>> Eliezer Croitoru
>>> Linux System Administrator
>>> Mobile: +972-5-28704261
>>> Email: elie...@ngtech.co.il
>>>
>>>
>>> -Original Message-
>>> From: Yuri Voinov [mailto:yvoi...@gmail.com]
>>> Sent: Saturday, November 19, 2016 23:08
>>> To: Eliezer Croitoru <elie...@ngtech.co.il>; 
>>> squid-users@lists.squid-cache.org
>>> Subject: Re: [squid-users] caching videos over https?
>>>
>>> I do not want to waste my and your time and discuss this issue. I know what 
>>> I know, I have seriously studied this issue. None of those who are really 
>>> able to 

Re: [squid-users] caching videos over https?

2016-11-19 Thread Yuri Voinov


20.11.2016 3:59, Eliezer Croitoru wrote:
> Yuri,
>
> I am not the most experienced in life and in security but I can say it's 
> possible and I am not selling it
> I released the windows update cacher which works in enough places(just by 
> seeing how many downloaded it..).
> The first rule I have learned from my mentors is that even if you know 
> something it might not fit to be in a form that the general public should 
> know about.
> I am looking for a link to CVE related publication rules of thumb so I would 
> be able to understand better what should be published and how.
> Any redirections are welcomed..
>
> A note:
> If you have the plain html of a json which contains the next links you would 
> be able to predict couple things...
I know what you are talking about. I came to this idea two years ago.
Unfortunately, I had more important priorities.
But I have not yet seen an open-source solution that uses the real YT
internals and really works.

Now I'm working on another Squid thing, but I plan to return to a YT
Store-ID helper later.

However, the fact remains that the "solutions" in the public domain are
either obsolete or worthless.

And some people even ask money for them. I would understand if they
really worked. Unfortunately, it is not idiots who work at Google.

That's why I said that that Indian development is a fake.
> If you would be able to catch every single fedora\redhat sqlite db file and 
> replace it with a malicious sha256 data you would be able to hack each of 
> their clients machine when they will be updated.
> If you believe you can coordinate such a thing you are way above StoreID 
> level of understanding networking and Computer Science.
>
> Cheers,
> Eliezer 
>
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
>
> -Original Message-
> From: Yuri Voinov [mailto:yvoi...@gmail.com] 
> Sent: Saturday, November 19, 2016 23:08
> To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] caching videos over https?
>
> I do not want to waste my and your time and discuss this issue. I know what I 
> know, I have seriously studied this issue. None of those who are really able 
> to cache Youtube - not only on desktops but also on mobile devices - all 
> without exception - is no solution in the form of open source or blob will 
> not offer free. This is big money. As for Google, and for those who use it. 
> Therefore, I suggest better acquainted with the way Youtube counteracts 
> caching and close useless discussion.
>
> I'm not going to shake the air and talk about what I do not and can not be. 
> If you have a solution - really works, and for absolutely any type of client 
> (Android and iPhone) - show evidence or let's stop blah-blah-blah. I mean, if 
> you really were a solution - you'd sold it for money. But you do not have it, 
> isn't it?
>
> Personally, I do not want anything. This is not the solution I'm looking for. 
>
> For myself, I found a workaround; what I know - I have stated in the wiki. If 
> someone else wants to spend a year or two for new investigations - welcome.
>
> 20.11.2016 2:45, Eliezer Croitoru wrote:
>> Yuri,
>>
>> Let say I can cache youtube videos, what would I get for this?
>> I mean, what would anyone get from this?
>> Let say I will give you a blob that will work, will you try it? Or 
>> would you want only an open source solution?
>>
>> Eliezer
>>
>> ----
>> Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> Linux System 
>> Administrator
>> Mobile: +972-5-28704261
>> Email: elie...@ngtech.co.il
>>  
>>
>> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] 
>> On Behalf Of Yuri Voinov
>> Sent: Saturday, November 19, 2016 17:54
>> To: squid-users@lists.squid-cache.org
>> Subject: Re: [squid-users] caching videos over https?
>>
>> HTTPS is not a problem, if not a problem to install the proxy 
>> certificate to the clients.
>> The problem in combating caching YT by Google.
>>
>> 19.11.2016 21:41, Yuri Voinov wrote:
>>
>>
>> 19.11.2016 21:35, Amos Jeffries wrote:
>> 19.11.2016 20:56, Bakhtiyor Homidov wrote:
>> thanks, yuri,
>>
>> just found https://cachevideos.com/, what do you think about this?
>>
>> On 20/11/2016 4:17 a.m., Yuri Voinov wrote:
>> This is fake.
>>
>> Only for strange definitions of "fake".
>>
>> It is simply an old helper from before YouTube became all-HTTPS. It 
>> should still work okay for any of the video sites that are still using 
>> HTTP

Re: [squid-users] caching videos over https?

2016-11-19 Thread Yuri Voinov
I do not want to waste my time and yours discussing this issue. I know
what I know; I have studied this issue seriously. No one who is really
able to cache YouTube - not only on desktops but also on mobile devices
- will offer a solution for free, whether as open source or as a blob.
No exceptions. This is big money, both for Google and for those who get
around it. Therefore, I suggest getting better acquainted with the way
YouTube counteracts caching, and closing this useless discussion.

I'm not going to stir the air talking about things that do not and
cannot exist. If you have a solution that really works, for absolutely
any type of client (Android and iPhone), show the evidence - or let's
stop the blah-blah-blah. I mean, if you really had a solution, you'd
have sold it for money. But you do not have one, do you?

Personally, I do not want anything. This is not the solution I'm looking
for.

For myself, I found a workaround; what I know, I have stated in the
wiki. If someone else wants to spend a year or two on new
investigations - welcome.

20.11.2016 2:45, Eliezer Croitoru wrote:
> Yuri,
>
> Let say I can cache youtube videos, what would I get for this?
> I mean, what would anyone get from this?
> Let say I will give you a blob that will work, will you try it? Or would
> you want only an open source solution?
>
> Eliezer
>
> 
> Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>  
>
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
> Behalf Of Yuri Voinov
> Sent: Saturday, November 19, 2016 17:54
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] caching videos over https?
>
> HTTPS is not a problem, if not a problem to install the proxy certificate
> to the clients. 
> The problem in combating caching YT by Google.
>
> 19.11.2016 21:41, Yuri Voinov wrote:
>
>
> 19.11.2016 21:35, Amos Jeffries wrote:
> 19.11.2016 20:56, Bakhtiyor Homidov wrote:
> thanks, yuri,
>
> just found https://cachevideos.com/, what do you think about this?
>
> On 20/11/2016 4:17 a.m., Yuri Voinov wrote:
> This is fake.
>
> Only for strange definitions of "fake".
>
> It is simply an old helper from before YouTube became all-HTTPS. It
> should still work okay for any of the video sites that are still using
> HTTP.
> YT uses cache-preventing scheme for videos relatively long time (after they
> finished use Flash videos). So, no one - excluding Google itself - can cache
> it now. Especially for mobile devices. I've spent last two years to learn
> this. So, anyone who talk he can cache YT is lies.
>
> As I explain here why:
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion
>
> All another videos - well, this is a bit difficult - but possible to cache.
>
>
> If you look at the features list it clearly says:
>  "No support for HTTPS (secure HTTP) caching."
> HTTPS itself in most cases can't be easy cached by vanilla squid.
>
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> <mailto:squid-users@lists.squid-cache.org> 
> http://lists.squid-cache.org/listinfo/squid-users
>



Re: [squid-users] caching videos over https?

2016-11-19 Thread Yuri Voinov
HTTPS is not a problem, as long as installing the proxy certificate on
the clients is not a problem.

The problem is Google's fight against YT caching.


19.11.2016 21:41, Yuri Voinov wrote:
>
>
>
> 19.11.2016 21:35, Amos Jeffries пишет:
>>> 19.11.2016 20:56, Bakhtiyor Homidov пишет:
>>>> thanks, yuri,
>>>>
>>>> just found https://cachevideos.com/, what do you think about this?
>>>>
>> On 20/11/2016 4:17 a.m., Yuri Voinov wrote:
>>> This is fake.
>>>
>> Only for strange definitions of "fake".
>>
>> It is simply an old helper from before YouTube became all-HTTPS. It
>> should still work okay for any of the video sites that are still using HTTP.
> YT uses cache-preventing scheme for videos relatively long time (after
> they finished use Flash videos). So, no one - excluding Google itself
> - can cache it now. Especially for mobile devices. I've spent last two
> years to learn this. So, anyone who talk he can cache YT is lies.
>
> As I explain here
> why: 
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion
> <http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion>
>
> All another videos - well, this is a bit difficult - but possible to
> cache.
>
>> If you look at the features list it clearly says:
>>  "No support for HTTPS (secure HTTP) caching."
> HTTPS itself in most cases can't be easy cached by vanilla squid.
>> Amos
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
> -- 
> Cats - delicious. You just do not know how to cook them.



Re: [squid-users] caching videos over https?

2016-11-19 Thread Yuri Voinov


19.11.2016 21:35, Amos Jeffries wrote:
>> 19.11.2016 20:56, Bakhtiyor Homidov wrote:
>>> thanks, yuri,
>>>
>>> just found https://cachevideos.com/, what do you think about this?
>>>
> On 20/11/2016 4:17 a.m., Yuri Voinov wrote:
>> This is fake.
>>
> Only for strange definitions of "fake".
>
> It is simply an old helper from before YouTube became all-HTTPS. It
> should still work okay for any of the video sites that are still using HTTP.
YT has used a cache-preventing scheme for its videos for a relatively
long time now (since they stopped using Flash video). So no one -
excluding Google itself - can cache it now, especially for mobile
devices. I've spent the last two years learning this. So anyone who
claims he can cache YT is lying.

I explain why here:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion

All other videos - well, it is a bit difficult - but possible to cache.

> If you look at the features list it clearly says:
>  "No support for HTTPS (secure HTTP) caching."
HTTPS itself in most cases can't be easily cached by vanilla Squid.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



  1   2   3   4   5   6   7   8   9   10   >