[squid-users] Squid Peek and splice

2016-05-12 Thread Reet Vyas
Hi Amos/Yuri,

Currently my Squid is configured with SSL-Bump; now I want to use peek and
splice. I read in a forum that with peek and splice we don't need to install a
certificate on the client's machine.

As I have asked before on this mailing list, installing the SSL certificate
on Android devices is not working for me.

So my question is: if I use peek and splice, for example to do HTTPS
filtering for proxy websites while not bumping SSL for bank websites,
Facebook, YouTube, and Gmail, how will it work? Do I need to install the SSL
certificate on the clients or not? I am a bit confused by the peek-and-splice
feature.

Please let me know whether it is possible to configure Squid 3.5.19 in such a
way that it bumps only proxy websites, not Facebook, YouTube, etc.
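
For reference, a rough sketch of the kind of configuration being asked about, assuming SNI-based matching with ssl::server_name; the ACL names, file path, and domain lists below are illustrative assumptions, not a tested recommendation:

```
# Sketch only - ACL names and domain lists are illustrative.
acl step1 at_step SslBump1
acl nobump ssl::server_name .facebook.com .youtube.com .google.com
acl bump_sites ssl::server_name "/etc/squid/bump_sites.txt"

ssl_bump peek step1        # peek at the TLS ClientHello to learn the server name
ssl_bump splice nobump     # tunnel banks/Facebook/YouTube/Gmail untouched
ssl_bump bump bump_sites   # bump (decrypt) only the sites to be filtered
ssl_bump splice all        # splice everything else
```

Spliced connections are tunnelled without decryption, so no client-side CA certificate is needed for them; clients reaching a bumped site still need the proxy's CA installed.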
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Regular expressions with dstdom_regex ACL

2016-05-12 Thread Amos Jeffries
On 13/05/2016 3:44 p.m., Walter H. wrote:
> On 12.05.2016 22:20, Walter H. wrote:
>> Hello,
>> can someone please tell me how I can achieve this?
>>
>> the result should be that
>> any URL like this
>> http(s)://ip-address/ should be blocked by the specified error page
>>
>> Thanks and Greetings from Austria,
>> Walter
> p.s.
> the sample here
> http://wiki.squid-cache.org/ConfigExamples/Chat/Skype
> doesn't work either
> 

The Skype pattern is matching the port Skype uses. You need to drop that
part off the pattern. It should then match if you use just the raw-IP part.

Amos



Re: [squid-users] Regular expressions with dstdom_regex ACL

2016-05-12 Thread Walter H.

On 12.05.2016 22:20, Walter H. wrote:

Hello,
can someone please tell me how I can achieve this?

the result should be that
any URL like this
http(s)://ip-address/ should be blocked by the specified error page

Thanks and Greetings from Austria,
Walter

p.s.
the sample here
http://wiki.squid-cache.org/ConfigExamples/Chat/Skype
doesn't work either





Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread joe
Do not worry about Vary; it's not a bug, it's just the way Vary handling is
currently set up. It still needs plenty of work; this is my guess after
looking at the code.

For range_offset_limit, use this setup; I have been using it for a long time
and it works wonderfully:

collapsed_forwarding on
acl range_list_path urlpath_regex \.(mar|msp|esd|pkg\?)
range_offset_limit -1 range_list_path

range_offset_limit 16 KB all !range_list_path   # <--- if you need this
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100

I am caching the above extensions perfectly; you can add to the list.






Re: [squid-users] ACL is used in context without an HTTP response. Assuming mismatch

2016-05-12 Thread Alex Rousskov
On 05/12/2016 04:04 PM, David Touzeau wrote:
> 
> acl CODE_TCP_DENIED http_status 407
> access_log none CODE_TCP_DENIED
> 
>  
> 
> But squid claim :  
> 
> 2016/05/12 23:44:07 kid1| WARNING: CODE_TCP_DENIED ACL is used in
> context without an HTTP response. Assuming mismatch.
>  
> 
> Why is this rule wrong?

Squid attempts to log every access(*). Sometimes, Squid is accessed, but
there is no response to log(**). Your rule assumes that there is always
a response. Squid warns that your assumption is wrong for the specific
access it is logging.

If there is no ACL that can be used to test the presence of a response
(and a request) in a master transaction [without triggering such
warnings], then we should add it.


Also, some Squids have bugs where there _is_ a response but Squid
logging code does not know about it. If you are running a relatively
recent Squid v4 release, you might be hitting one of those bugs
(although I would expect more/different error messages in that case).


Endnotes:

(*) Squid fails to log certain accesses. We are fixing one of those bugs
right now.

(**) Imagine, for example, a client that starts sending an HTTP request
but closes the connection to Squid before finishing. Depending on what
state Squid was in when the connection got closed, there may be no
response created for that unfinished request.


HTH,

Alex.



Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Heiler Bemerguy


Hi guys


I just enabled "collapsed_forwarding" and noticed a lot of "Vary object
loop!" messages that weren't there before..


2016/05/12 19:17:22 kid3| varyEvaluateMatch: Oops. Not a Vary match on 
second attempt, 
'http://ego.globo.com/paparazzo/noticia/2016/05/tulio-maravilha-fala-de-video-intimo-com-mulher-gente-ve-e-deleta.html' 
'accept-encoding="gzip,%20deflate,%20sdch", 
user-agent="Mozilla%2F5.0%20(Windows%20NT%206.1)%20AppleWebKit%2F537.36%20(KHTML,%20like%20Gecko)%20Chrome%2F50.0.2661.102%20Safari%2F537.36"'

2016/05/12 19:17:22 kid3| clientProcessHit: Vary object loop!
2016/05/12 19:17:22 kid3| varyEvaluateMatch: Oops. Not a Vary match on 
second attempt, 'http://ego.globo.com/fonts/typography.css' 
'accept-encoding="gzip,%20deflate,%20sdch", 
user-agent="Mozilla%2F5.0%20(Windows%20NT%206.1)%20AppleWebKit%2F537.36%20(KHTML,%20like%20Gecko)%20Chrome%2F50.0.2661.102%20Safari%2F537.36"'

2016/05/12 19:17:22 kid3| clientProcessHit: Vary object loop!
2016/05/12 19:17:22 kid3| varyEvaluateMatch: Oops. Not a Vary match on 
second attempt, 
'http://ego.globo.com/dynamo/scripts/js/glb.recaptcha.js' 
'accept-encoding="gzip,%20deflate,%20sdch", 
user-agent="Mozilla%2F5.0%20(Windows%20NT%206.1)%20AppleWebKit%2F537.36%20(KHTML,%20like%20Gecko)%20Chrome%2F50.0.2661.102%20Safari%2F537.36"'



I don't know if it's helping with the segmented downloads (which this
thread is about...) though.



Best Regards,


--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751



On 12/05/2016 18:21, Alex Rousskov wrote:

On 05/12/2016 02:36 PM, Amos Jeffries wrote:


Have you given collapsed_forwarding a try? It's supposed to prevent all
the duplicate requests making all those extra upstream connections until
at least the first one has finished getting the object.

For the record, collapsed forwarding collapses requests _before_ there
is any response [header], not after the "first" request got the object
[body]. Once there is a response [header], the usual caching code path
kicks in and no collapsing is needed.

Cache hits on that "usual caching code path" read from a "public" cache
entry. Normally, those public entries are created when Squid receives a
response [header]. Collapsed forwarding creates that entry before Squid
gets the response [header], and, hence, before Squid can know for sure
whether the response is going to be cacheable, with all the risks that
entails.


Please do not misinterpret my email as a recommendation to give (or not
to give) collapsed forwarding a try. I have _not_ analyzed the problems
discussed on this thread. I just wanted to correct the description
above. Nothing more.


HTH,

Alex.



[squid-users] ACL is used in context without an HTTP response. Assuming mismatch

2016-05-12 Thread David Touzeau
Hi,

I do not want Squid to log its TCP_DENIED/407 responses when sending
authentication challenges to browsers.

I think this ACL should work:

acl CODE_TCP_DENIED http_status 407
access_log none CODE_TCP_DENIED

But Squid complains:

2016/05/12 23:44:07 kid1| WARNING: CODE_TCP_DENIED ACL is used in context
without an HTTP response. Assuming mismatch.

Why is this rule wrong?

Best regards

 



Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Alex Rousskov
On 05/12/2016 02:36 PM, Amos Jeffries wrote:

> Have you given collapsed_forwarding a try? It's supposed to prevent all
> the duplicate requests making all those extra upstream connections until
> at least the first one has finished getting the object.

For the record, collapsed forwarding collapses requests _before_ there
is any response [header], not after the "first" request got the object
[body]. Once there is a response [header], the usual caching code path
kicks in and no collapsing is needed.

Cache hits on that "usual caching code path" read from a "public" cache
entry. Normally, those public entries are created when Squid receives a
response [header]. Collapsed forwarding creates that entry before Squid
gets the response [header], and, hence, before Squid can know for sure
whether the response is going to be cacheable, with all the risks that
entails.


Please do not misinterpret my email as a recommendation to give (or not
to give) collapsed forwarding a try. I have _not_ analyzed the problems
discussed on this thread. I just wanted to correct the description
above. Nothing more.


HTH,

Alex.



Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

Amos, you're a genius! I had completely forgotten about this setting.
Silly me.


13.05.16 2:36, Amos Jeffries wrote:
> On 13/05/2016 7:17 a.m., Heiler Bemerguy wrote:
>>
>> I also don't care too much about duplicated cached files.. but trying to
>> cache "ranged" requests is topping my link and in the end it seems it's
>> not caching anything lol
>>
>> EVEN if I only allow range_offset to some urls or file extensions
>>
>
> Have you given collapsed_forwarding a try? It's supposed to prevent all
> the duplicate requests making all those extra upstream connections until
> at least the first one has finished getting the object. Combined with
> the range_offset_limit and quick_abort_max to make that first request be
> a full fetch, it might be able to solve your issue.
>
> Amos
>






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Amos Jeffries
On 13/05/2016 7:17 a.m., Heiler Bemerguy wrote:
> 
> I also don't care too much about duplicated cached files.. but trying to
> cache "ranged" requests is topping my link and in the end it seems it's
> not caching anything lol
> 
> EVEN if I only allow range_offset to some urls or file extensions
> 

Have you given collapsed_forwarding a try? It's supposed to prevent all
the duplicate requests making all those extra upstream connections until
at least the first one has finished getting the object. Combined with
the range_offset_limit and quick_abort_max to make that first request be
a full fetch, it might be able to solve your issue.
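
The combination described above might look like the following sketch; the ACL name and the file extensions are illustrative assumptions, not part of the original advice:

```
# Sketch only - the fullfetch ACL and its extensions are illustrative.
collapsed_forwarding on

acl fullfetch urlpath_regex -i \.(iso|exe|msu|cab)$

# Turn a range request for these objects into a full fetch ...
range_offset_limit none fullfetch
# ... and keep fetching even if the client aborts early
quick_abort_min -1 KB
```

Note the trade-off: range_offset_limit none makes Squid download the whole object on a range request, which costs bandwidth up front in exchange for a cacheable full copy.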

Amos



[squid-users] Regular expressions with dstdom_regex ACL

2016-05-12 Thread Walter H.

Hello,

can someone please tell me which regular expression(s) would reliably block
domains which are raw IP hosts?

for IPv4 this is my regexp:
^[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}$
and this works as expected:

acl block_domains_iphost dstdom_regex ^[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}$

deny_info ERR_IPHOST_BLOCKED block_domains_iphost
http_access deny block_domains_iphost

BUT, I have tried and tried and failed with IPv6.

this section in squid.conf:

acl block_domains_ip6host dstdomain [ipv6]
deny_info ERR_IPHOST_BLOCKED block_domains_ip6host
http_access deny block_domains_ip6host

doesn't work even for exactly this given IPv6 address ...

I want to block any IPv6 address.

can someone please tell me how I can achieve this?

the result should be that
any URL like this
http(s)://ip-address/ should be blocked by the specified error page

Thanks and Greetings from Austria,
Walter
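
For reference, one possible sketch of the IPv6 case, assuming Squid sees IPv6 literal hosts in bracketed form (e.g. https://[2001:db8::1]/); both the bracket form and the pattern are assumptions worth verifying against access.log before relying on them:

```
# Sketch only - assumes the host appears as a bracketed IPv6 literal.
acl block_domains_ip6host dstdom_regex -i ^\[[0-9a-f:.]+\]$
deny_info ERR_IPHOST_BLOCKED block_domains_ip6host
http_access deny block_domains_ip6host
```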






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

When all you have is a hammer, everything looks like a nail, of course :)

13.05.16 2:08, Yuri Voinov wrote:
>
> In comparison, a cache of thousands of Linux distributions, regardless
> of the purpose, is of course a trifle :)
>
> 13.05.16 2:07, Yuri Voinov wrote:
>
> > I recently expressed the idea of caching torrents using Squid. :)
> > What an idea! I'm still impressed! :)
>
> > 13.05.16 2:02, Yuri Voinov wrote:
>
> > > Updates, in conjunction with hundreds of OSes and distros, are
> > > better done with a separate dedicated update server. IMHO.
>
> > > 13.05.16 1:56, Hans-Peter Jansen wrote:
> > > > On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
> > > >> I suggest it is a very bad idea to transform a caching proxy
> > > >> into an archive of Linux distros or anything else.
>
> > > > Yuri, if I wanted an archive, I would mirror all the stuff and
> > > > use local repos.
> > > It was sarcasm. And yes, a local mirror is the best approach.
>
> > > > I went that route for a long time - it's a lot of work to keep
> > > > up everywhere, and generates an awful amount of traffic (and I
> > > > did it the sanest way possible - with a custom script that was
> > > > using rsync..)
> > > My condolences.
>
> > > >> As Amos said, "Squid is a cache, not an archive".
>
> > > > Yes, updating 20 similar machines makes a significant difference
> > > > with squid as a deduplicated cache - with no recurring work at all.
> > > Agree. Partially. With Solaris I did this with one JumpStart
> > > network server. Of course, the same technology is a rara avis in
> > > the modern world :)
>
> > > I now wonder: do you also use a millstone to sharpen your
> > > pencils? :) My condolences again.
>
> > > > Pete






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

In comparison, a cache of thousands of Linux distributions, regardless
of the purpose, is of course a trifle :)

13.05.16 2:07, Yuri Voinov wrote:
>
> I recently expressed the idea of caching torrents using Squid. :)
> What an idea! I'm still impressed! :)
>
> 13.05.16 2:02, Yuri Voinov wrote:
>
> > Updates, in conjunction with hundreds of OSes and distros, are
> > better done with a separate dedicated update server. IMHO.
>
> > 13.05.16 1:56, Hans-Peter Jansen wrote:
> > > On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
> > >> I suggest it is a very bad idea to transform a caching proxy
> > >> into an archive of Linux distros or anything else.
>
> > > Yuri, if I wanted an archive, I would mirror all the stuff and
> > > use local repos.
> > It was sarcasm. And yes, a local mirror is the best approach.
>
> > > I went that route for a long time - it's a lot of work to keep
> > > up everywhere, and generates an awful amount of traffic (and I
> > > did it the sanest way possible - with a custom script that was
> > > using rsync..)
> > My condolences.
>
> > >> As Amos said, "Squid is a cache, not an archive".
>
> > > Yes, updating 20 similar machines makes a significant difference
> > > with squid as a deduplicated cache - with no recurring work at all.
> > Agree. Partially. With Solaris I did this with one JumpStart
> > network server. Of course, the same technology is a rara avis in
> > the modern world :)
>
> > I now wonder: do you also use a millstone to sharpen your
> > pencils? :) My condolences again.
>
> > > Pete






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

I recently expressed the idea of caching torrents using Squid. :) What
an idea! I'm still impressed! :)

13.05.16 2:02, Yuri Voinov wrote:
>
> Updates, in conjunction with hundreds of OSes and distros, are
> better done with a separate dedicated update server. IMHO.
>
> 13.05.16 1:56, Hans-Peter Jansen wrote:
> > On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
> >> I suggest it is a very bad idea to transform a caching proxy
> >> into an archive of Linux distros or anything else.
>
> > Yuri, if I wanted an archive, I would mirror all the stuff and
> > use local repos.
> It was sarcasm. And yes, a local mirror is the best approach.
>
> > I went that route for a long time - it's a lot of work to keep
> > up everywhere, and generates an awful amount of traffic (and I
> > did it the sanest way possible - with a custom script that was
> > using rsync..)
> My condolences.
>
> >> As Amos said, "Squid is a cache, not an archive".
>
> > Yes, updating 20 similar machines makes a significant difference
> > with squid as a deduplicated cache - with no recurring work at all.
> Agree. Partially. With Solaris I did this with one JumpStart
> network server. Of course, the same technology is a rara avis in
> the modern world :)
>
> I now wonder: do you also use a millstone to sharpen your
> pencils? :) My condolences again.
>
> > Pete






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

Updates, in conjunction with hundreds of OSes and distros, are better
done with a separate dedicated update server. IMHO.


13.05.16 1:56, Hans-Peter Jansen wrote:
> On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
>> I suggest it is a very bad idea to transform a caching proxy into
>> an archive of Linux distros or anything else.
>
> Yuri, if I wanted an archive, I would mirror all the stuff and use
> local repos.
It was sarcasm. And yes, a local mirror is the best approach.
>
> I went that route for a long time - it's a lot of work to keep up
> everywhere, and generates an awful amount of traffic (and I did it
> the sanest way possible - with a custom script that was using rsync..)
My condolences.
>
>> As Amos said, "Squid is a cache, not an archive".
>
> Yes, updating 20 similar machines makes a significant difference with
> squid as a deduplicated cache - with no recurring work at all.
Agree. Partially. With Solaris I did this with one JumpStart network
server. Of course, the same technology is a rara avis in the modern
world :)

I now wonder: do you also use a millstone to sharpen your pencils? :)
My condolences again.
>
> Pete






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Hans-Peter Jansen
On Friday, 13 May 2016 01:09:39 Yuri Voinov wrote:
> I suggest it is a very bad idea to transform a caching proxy into an
> archive of Linux distros or anything else.

Yuri, if I wanted an archive, I would mirror all the stuff and use local repos.
I went that route for a long time - it's a lot of work to keep up everywhere,
and generates an awful amount of traffic (and I did it the sanest way possible
- with a custom script that was using rsync..)

> As Amos said, "Squid is a cache, not an archive".

Yes, updating 20 similar machines makes a significant difference with
squid as a deduplicated cache - with no recurring work at all.

Pete
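
As an aside, Squid's built-in Store-ID feature aims at the same deduplication problem as the mirror rewriting discussed above; a minimal sketch, assuming the stock storeid_file_rewrite helper and an illustrative mapping-file path:

```
# squid.conf sketch - helper path and mapping rule are illustrative.
store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_rules.txt
store_id_children 5 startup=1

# /etc/squid/storeid_rules.txt (tab-separated: regex <TAB> replacement), e.g.:
# ^http://[^/]+/opensuse/(.*)    http://opensuse.squid.internal/$1
```

The idea is that all mirror URLs matching the pattern are stored and looked up under one canonical cache key, so a file already fetched from one mirror is a hit when requested from another.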


[squid-users] squid, squidguard and elk - simply combined as docker containers

2016-05-12 Thread d...@muenchhausen.de
Dear Squid enthusiasts!

Squid is fine – it has simply worked for years at home.
SquidGuard helps me block malicious websites.
Kibana visualizes where my browser retrieves data from,
… and Docker combines everything in a simple way :)

I published a small Docker Compose project on GitHub. Feel free to try it –
feedback is very welcome!
https://github.com/muenchhausen/docker-squidguard-elk

Best regards,
Derk





Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

And I did not promise a silver bullet :) This is just a small
workaround, which does not work in all cases. :)

13.05.16 1:17, Heiler Bemerguy wrote:
>
>
> I also don't care too much about duplicated cached files.. but trying
to cache "ranged" requests is topping my link and in the end it seems
it's not caching anything lol
>
> EVEN if I only allow range_offset to some urls or file extensions
>
>
> Best Regards,
>
>
> --
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751
>
> On 12/05/2016 16:09, Yuri Voinov wrote:
> I suggest it is a very bad idea to transform a caching proxy into an
> archive of Linux distros or anything else.
>
> As Amos said, "Squid is a cache, not an archive".
>
>
> 13.05.16 0:57, Hans-Peter Jansen wrote:
> >>> Hi Heiler,
> >>>
> >>> On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:
>  Hi Pete, thanks for replying... let me see if I got it right..
> 
>  Will I need to specify every url/domain I want it to act on ? I want
>  squid to do it for every range-request downloads that should/would be
>  cached (based on other rules, pattern_refreshs etc)
> >>> Yup, that's right. At least, that's the common approach to deal
> >>> with CDNs. I think that disallowing range requests is too drastic
> >>> to work fine in the long run, but let us know if you get to a
> >>> satisfactory solution this way.
> >>>
>  It doesn't need to delay any downloads as long as it isn't a dupe of
>  what's already being downloaded.
> >>> You can set the delay to zero, of course.
> >>>
> >>> This is only one side of the issues with CDNs. The other, more
> >>> problematic side is that many servers with different URLs provide
> >>> the same files. Every new address will result in a new download of
> >>> otherwise identical content.
> >>>
> >>> Here's an example of openSUSE:
> >>>
> >>> #
> >>> # this file was generated by gen_openSUSE_dedups
> >>> # from http://mirrors.opensuse.org/list/all.html
> >>> # with timestamp Thu, 12 May 2016 05:30:18 +0200
> >>> #
> >>> [openSUSE]
> >>> match:
> >>> # openSUSE Headquarter
> >>> http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
> >>> # South Africa (za)
> >>> http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
> >>> # Bangladesh (bd)
> >>> http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
> >>> # China (cn)
> >>> http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
> >>> http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
> >>> http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
> >>> http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
> >>> # Hong Kong (hk)
> >>> http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
> >>> # Indonesia (id)
> >>> http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
> >>> http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
> >>> http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
> >>> http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
> >>> http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
> >>> http\:\/\/download\.opensuse\.or\.id\/(.*)
> >>> http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
> >>> http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
> >>> # Israel (il)
> >>> http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)
> >>>   
> >>> [...] -> this list contains about 180 entries
> >>>
> >>> replace: http://download.opensuse.org.%(intdomain)s/\1
> >>> # fetch all redirected objects explicitly
> >>> fetch: true
> >>>
> >>>
> >>> This is how CDNs work, but it's a nightmare for caching proxies.
> >>> In such scenarios squid_dedup comes to the rescue.
> >>>
> >>> Cheers,
> >>> Pete


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Heiler Bemerguy


I also don't care too much about duplicated cached files.. but trying to 
cache "ranged" requests is topping my link and in the end it seems it's 
not caching anything lol


EVEN if I only allow range_offset to some urls or file extensions


Best Regards,


--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751


On 12/05/2016 16:09, Yuri Voinov wrote:

I suggest it is a very bad idea to transform a caching proxy into an
archive of Linux distros or anything else.

As Amos said, "Squid is a cache, not an archive".


13.05.16 0:57, Hans-Peter Jansen wrote:

Hi Heiler,

On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:

Hi Pete, thanks for replying... let me see if I got it right..

Will I need to specify every url/domain I want it to act on ? I want
squid to do it for every range-request downloads that should/would be
cached (based on other rules, pattern_refreshs etc)

Yup, that's right. At least, that's the common approach to deal with CDNs.
I think that disallowing range requests is too drastic to work fine in the
long run, but let us know if you get to a satisfactory solution this way.


It doesn't need to delay any downloads as long as it isn't a dupe of
what's already being downloaded.

You can set the delay to zero, of course.

This is only one side of the issues with CDNs. The other, more problematic
side is that many servers with different URLs provide the same files.
Every new address will result in a new download of otherwise identical
content.

Here's an example of openSUSE:

#
# this file was generated by gen_openSUSE_dedups
# from http://mirrors.opensuse.org/list/all.html
# with timestamp Thu, 12 May 2016 05:30:18 +0200
#
[openSUSE]
match:
 # openSUSE Headquarter
 http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
 # South Africa (za)
 http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
 # Bangladesh (bd)
 http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
 http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
 # China (cn)
 http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
 http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
 http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
 http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
 http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
 http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
 http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
 http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
 http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
 # Hong Kong (hk)
 http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
 # Indonesia (id)
 http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
 http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
 http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
 http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
 http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
 http\:\/\/download\.opensuse\.or\.id\/(.*)
 http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
 http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
 # Israel (il)
 http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)

 [...] -> this list contains about 180 entries


replace: http://download.opensuse.org.%(intdomain)s/\1
# fetch all redirected objects explicitly
fetch: true


This is how CDNs work, but it's a nightmare for caching proxies.
In such scenarios, squid_dedup comes to the rescue.

Cheers,
Pete
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users








Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

Moreover, sometimes it is not an archive but a cemetery.

But some see no difference.

So watch what you're caching.

13.05.16 1:09, Yuri Voinov пишет:
>
> I suggest it is a very bad idea to turn a caching proxy into an archive
> for a Linux distro or anything else.
>
> As Amos said, "Squid is a cache, not an archive".
>
>
> 13.05.16 0:57, Hans-Peter Jansen пишет:
> > Hi Heiler,
>
> > On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:
> >> Hi Pete, thanks for replying... let me see if I got it right..
> >>
> >> Will I need to specify every url/domain I want it to act on ? I want
> >> squid to do it for every range-request downloads that should/would be
> >> cached (based on other rules, pattern_refreshs etc)
>
> > Yup, that's right. At least, that's the common approach to deal with
CDNs.
> > I think, that disallowing range requests is too drastic to work fine
> on the
> > long run, but let us know, if you get to satisfactory solution this way.
>
> >> It doesn't need to delay any downloads as long as it isn't a dupe of
> >> what's already being downloaded.
>
> > You can set to delay to zero of course.
>
> > This is only one side of the issues with CDNs. The other, more
> problematic
> > side of it is, that many server with different URLs provide the same
> files.
> > Every new address will result in a new download of otherwise identical
> > content.
>
> > Here's an example of openSUSE:
>
> > #
> > # this file was generated by gen_openSUSE_dedups
> > # from http://mirrors.opensuse.org/list/all.html
> > # with timestamp Thu, 12 May 2016 05:30:18 +0200
> > #
> > [openSUSE]
> > match:
> > # openSUSE Headquarter
> > http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
> > # South Africa (za)
> > http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
> > # Bangladesh (bd)
> > http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
> > http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
> > # China (cn)
> > http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
> > http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
> > http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
> > http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
> > http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
> > http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
> > http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
> > http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
> > http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
> > # Hong Kong (hk)
> > http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
> > # Indonesia (id)
> > http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
> > http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
> > http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
> > http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
> > http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
> > http\:\/\/download\.opensuse\.or\.id\/(.*)
> > http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
> > http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
> > # Israel (il)
> > http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)
>
> > [...] -> this list contains about 180 entries
>
> > replace: http://download.opensuse.org.%(intdomain)s/\1
> > # fetch all redirected objects explicitly
> > fetch: true
>
>
> > This is, how CDNs work, but it's a nightmare for caching proxies.
> > In such scenarios squid_dedup comes to rescue.
>
> > Cheers,
> > Pete
>
>






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

I suggest it is a very bad idea to turn a caching proxy into an archive
for a Linux distro or anything else.

As Amos said, "Squid is a cache, not an archive".


13.05.16 0:57, Hans-Peter Jansen пишет:
> Hi Heiler,
>
> On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:
>> Hi Pete, thanks for replying... let me see if I got it right..
>>
>> Will I need to specify every url/domain I want it to act on ? I want
>> squid to do it for every range-request downloads that should/would be
>> cached (based on other rules, pattern_refreshs etc)
>
> Yup, that's right. At least, that's the common approach to deal with CDNs.
> I think, that disallowing range requests is too drastic to work fine
on the
> long run, but let us know, if you get to satisfactory solution this way.
>
>> It doesn't need to delay any downloads as long as it isn't a dupe of
>> what's already being downloaded.
>
> You can set to delay to zero of course.
>
> This is only one side of the issues with CDNs. The other, more
problematic
> side of it is, that many server with different URLs provide the same
files.
> Every new address will result in a new download of otherwise identical
> content.
> 
> Here's an example of openSUSE:
>
> #
> # this file was generated by gen_openSUSE_dedups
> # from http://mirrors.opensuse.org/list/all.html
> # with timestamp Thu, 12 May 2016 05:30:18 +0200
> #
> [openSUSE]
> match:
> # openSUSE Headquarter
> http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
> # South Africa (za)
> http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
> # Bangladesh (bd)
> http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
> http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
> # China (cn)
> http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
> http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
> http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
> http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
> http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
> http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
> http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
> http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
> http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
> # Hong Kong (hk)
> http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
> # Indonesia (id)
> http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
> http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
> http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
> http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
> http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
> http\:\/\/download\.opensuse\.or\.id\/(.*)
> http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
> http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
> # Israel (il)
> http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)
>
> [...] -> this list contains about 180 entries
>
> replace: http://download.opensuse.org.%(intdomain)s/\1
> # fetch all redirected objects explicitly
> fetch: true
>
>
> This is, how CDNs work, but it's a nightmare for caching proxies.
> In such scenarios squid_dedup comes to rescue.
>
> Cheers,
> Pete






Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Hans-Peter Jansen
Hi Heiler,

On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:
> Hi Pete, thanks for replying... let me see if I got it right..
> 
> Will I need to specify every url/domain I want it to act on ? I want
> squid to do it for every range-request downloads that should/would be
> cached (based on other rules, pattern_refreshs etc)

Yup, that's right. At least, that's the common approach to dealing with CDNs.
I think that disallowing range requests is too drastic to work well in the
long run, but let us know if you reach a satisfactory solution this way.

> It doesn't need to delay any downloads as long as it isn't a dupe of
> what's already being downloaded.

You can set the delay to zero, of course.

This is only one side of the issues with CDNs. The other, more problematic
side is that many servers with different URLs provide the same files.
Every new address will result in a new download of otherwise identical
content.
 
Here's an example of openSUSE:

#
# this file was generated by gen_openSUSE_dedups
# from http://mirrors.opensuse.org/list/all.html
# with timestamp Thu, 12 May 2016 05:30:18 +0200
#
[openSUSE]
match:
# openSUSE Headquarter
http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
# South Africa (za)
http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
# Bangladesh (bd)
http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
# China (cn)
http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
# Hong Kong (hk)
http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
# Indonesia (id)
http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
http\:\/\/download\.opensuse\.or\.id\/(.*)
http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
# Israel (il)
http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)

[...] -> this list contains about 180 entries

replace: http://download.opensuse.org.%(intdomain)s/\1
# fetch all redirected objects explicitly
fetch: true


This is how CDNs work, but it's a nightmare for caching proxies.
In such scenarios, squid_dedup comes to the rescue.
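The deduplication idea above can be sketched as a tiny StoreID-style helper. Note the rule, the `squid.internal` pseudo-domain, and the function names below are illustrative assumptions, not the actual squid_dedup code:

```python
import re
import sys

# Illustrative rule only -- NOT the real squid_dedup rule set: collapse any
# http://<mirror>.opensuse.org/... URL onto one canonical internal StoreID,
# so Squid caches a single copy regardless of which mirror served it.
RULES = [
    (re.compile(r'^http://[a-z0-9]+\.opensuse\.org/(.*)$'),
     'http://download.opensuse.org.{intdomain}/{path}'),
]

def store_id(url, intdomain='squid.internal'):
    """Return the canonical StoreID for url, or None if no rule matches."""
    for pattern, template in RULES:
        m = pattern.match(url)
        if m:
            return template.format(intdomain=intdomain, path=m.group(1))
    return None

def helper_loop(stream_in=sys.stdin, stream_out=sys.stdout):
    """Speak Squid's StoreID helper protocol: one URL (plus optional
    extras) per input line, one 'OK store-id=...' or 'ERR' reply each."""
    for line in stream_in:
        fields = line.split()
        if not fields:
            continue
        sid = store_id(fields[0])
        stream_out.write('OK store-id=%s\n' % sid if sid else 'ERR\n')
        stream_out.flush()
```

Such a helper would be wired into squid.conf via store_id_program and a store_id_access rule; squid_dedup itself then adds the delayed full-object fetcher on top of this rewriting.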

Cheers,
Pete


Re: [squid-users] Windows Squid with AD authentication

2016-05-12 Thread Yuri Voinov
> > fetch: true
>
> > The fetch parameter is unique among the other StoreID helper (AFAIK):
> it is
> > fetching the object after a certain delay with a pool of fetcher
threads.
>
> > The idea is: after the first access for an object, wait a bit (global
> setting,
> > default: 15 secs), and then fetch the whole thing once. It won't solve
> > anything for the first client, but for all subsequent accesses.
>
> > The fetcher avoids fetching anything more than once by checking the http
> > headers.
>
> > This is a pretty new project, but be assured, that the basic
functions are
> > working fine, and I will do my best to solve any upcoming issues. It is
> > implemented with Python3 and prepared for supporting additional features
> > easily, while keeping a good part of an eye on efficiency.
>
> > Let me know, if you're going to try it.
>
> > Pete
>
>
> > --
>
> > Message: 4
> > Date: Thu, 12 May 2016 17:46:36 +0100
> > From: Nilesh Gavali 
> > To: squid-users@lists.squid-cache.org
> > Subject: [squid-users] Windows Squid with AD authentication
> > Message-ID:
>
> 
> > Content-Type: text/plain; charset="utf-8"
>
> > Team;
> > we have squid running on Windows and need to integrate it with
Windows AD
> > Can anyone help me with the steps to perform to get this done?
>
> > Thanks & Regards
> > Nilesh Suresh Gavali
> > =-=-=
> > Notice: The information contained in this e-mail
> > message and/or attachments to it may contain
> > confidential or privileged information. If you are
> > not the intended recipient, any dissemination, use,
> > review, distribution, printing or copying of the
> > information contained in this e-mail message
> > and/or attachments to it are strictly prohibited. If
> > you have received this communication in error,
> > please notify us by reply e-mail or telephone and
> > immediately and permanently delete the message
> > and any attachments. Thank you
>
>
>
> > --
>
> > Message: 5
> > Date: Thu, 12 May 2016 13:28:00 -0300
> > From: Heiler Bemerguy 
> > To: squid-users@lists.squid-cache.org
> > Subject: Re: [squid-users] Getting the full file content on a range
> > request, but not on EVERY get ...
> > Message-ID: <61bf3ff3-c8b2-647f-9b5e-3112b2f43...@cinbesa.com.br>
> > Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
> > Hi Pete, thanks for replying... let me see if I got it right..
>
> > Will I need to specify every url/domain I want it to act on ? I want
> > squid to do it for every range-request downloads that should/would be
> > cached (based on other rules, pattern_refreshs etc)
>
> > It doesn't need to delay any downloads as long as it isn't a dupe of
> > what's already being downloaded.
>
>
> > Best Regards,
>
>
> > --
> > Heiler Bemerguy - (91) 98151-4894
> > Assessor Técnico - CINBESA (91) 3184-1751
>
>
> > Em 12/05/2016 11:06, Hans-Peter Jansen escreveu:
> > > On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> > >> Hey guys,
> > >>
> > >> First take a look at the log:
> > >>
> > >> root@proxy:/var/log/squid# tail -f access.log |grep
> > >>
>
http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt->
> BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236
TCP_MISS/206
> > >> 300520 GET
> > > [...]
> > >> Now think: An user is just doing a segmented/ranged download, right?
> > >> Squid won't cache the file because it is a range-download, not a full
> > >> file download.
> > >> But I WANT squid to cache it. So I decide to use "range_offset_limit
> > >> -1", but then on every GET squid will re-download the file from the
> > >> beginning, opening LOTs of simultaneous connections and using too
much
> > >> bandwidth, doing just the OPPOSITE it's meant to!
> > >>
> > >> Is there a smart way to allow squid to download it from the
> beginning to
> > >> the end (to actually cache it), but only on the FIRST
request/get? Even

Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Yuri Voinov

IMO it is better to use

range_offset_limit none !dont_cache_url all

to improve selectivity between cached and non-cached URLs with ACLs.
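Spelled out in squid.conf, that selective policy could look like the sketch below; the dont_cache_url ACL name comes from the line above, but its pattern here is a hypothetical example, not a recommended list:

```
# URLs whose partial (Range) requests should NOT trigger a full fetch
acl dont_cache_url url_regex -i ^http://streaming\.example\.com/
# For everything else, ignore the range offset limit so Squid fetches
# the whole object once and can cache it
range_offset_limit none !dont_cache_url
```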


12.05.16 23:02, Heiler Bemerguy пишет:
>
>
> Hi Garri,
>
> That bug report is mine.. lol
>
> But I couldn't keep testing it to confirm if the problem was about
ABORTING downloads or just trying to download what's already being
downloaded...
>
> When you use quick_abort_min -1, it seems to "fix" the caching issue
itself, but it won't prevent the concurrent downloads, which sucks up
the link..
>
> I don't know if it won't happen with aufs/ufs, I'm using only rock
store.
>
>
> --
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751
>
> Em 12/05/2016 01:01, Garri Djavadyan escreveu:
>> On Wed, 2016-05-11 at 21:37 -0300, Heiler Bemerguy wrote:
>>> Hey guys,
>>> First take a look at the log:
>>> root@proxy:/var/log/squid# tail -f access.log |grep http://download.c
>>> dn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar
>>> 1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9
>>> application/octet-stream
>>> 1463011851.008   9347 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463011920.683   9645 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9
>>> application/octet-stream
>>> 1463012000.144  19154 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463012072.276  12121 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463012145.643  13358 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463012217.472  11772 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463012294.676  17148 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> 1463012370.131  15272 10.1.3.236 TCP_MISS/206 300520 GET http://downl
>>> oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>>> BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
>>> application/octet-stream
>>> Now think: An user is just doing a segmented/ranged download, right?
>>> Squid won't cache the file because it is a range-download, not a full
>>> file download.
>>> But I WANT squid to cache it. So I decide to use "range_offset_limit
>>> -1", but then on every GET squid will re-download the file from the
>>> beginning, opening LOTs of simultaneous connections and using too
>>> much bandwidth, doing just the OPPOSITE it's meant to!
>>>
>>> Is there a smart way to allow squid to download it from the beginning
>>> to the end (to actually cache it), but only on the FIRST request/get?
>>> Even if it makes the user wait for the full download, or cancel it
>>> temporarily, or.. whatever!! Anything!!
>>>
>>> Best Regards,
>>> --
>>> Heiler Bemerguy - (91) 98151-4894
>>> Assessor Técnico - CINBESA (91) 3184-1751
>> Hi, I believe, you describe the bug http://bugs.squid-cache.org/show_bu
>> g.cgi?id=4469
>>
>> I tried to reproduce the problem and have found that the problem
>> appears only with rock storage configurations. Can you try with
>> ufs/aufs storage?
>
>
>


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Heiler Bemerguy


Hi Garri,

That bug report is mine.. lol

But I couldn't keep testing it to confirm whether the problem was about
ABORTING downloads or just about re-downloading what's already being
downloaded...


When you use quick_abort_min -1, it seems to "fix" the caching issue
itself, but it won't prevent the concurrent downloads, which saturate
the link.


I don't know whether it also happens with aufs/ufs; I'm using only the
rock store.
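For the concurrent-duplicate-download side, a directive worth testing (it returned in Squid 3.5) is collapsed_forwarding, which merges simultaneous requests for the same URL into a single upstream fetch. Combined with the quick_abort setting discussed above, a sketch:

```
# Merge concurrent client requests for the same object into a single
# server-side fetch instead of opening one connection per client
collapsed_forwarding on
# Never abort the server-side fetch when a client disconnects, so the
# object still arrives in the cache in full
quick_abort_min -1 KB
```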



--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751


Em 12/05/2016 01:01, Garri Djavadyan escreveu:

On Wed, 2016-05-11 at 21:37 -0300, Heiler Bemerguy wrote:

Hey guys,
First take a look at the log:
root@proxy:/var/log/squid# tail -f access.log |grep http://download.c
dn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar
1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9
application/octet-stream
1463011851.008   9347 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463011920.683   9645 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9
application/octet-stream
1463012000.144  19154 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463012072.276  12121 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463012145.643  13358 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463012217.472  11772 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463012294.676  17148 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
1463012370.131  15272 10.1.3.236 TCP_MISS/206 300520 GET http://downl
oad.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32
application/octet-stream
Now think: An user is just doing a segmented/ranged download, right?
Squid won't cache the file because it is a range-download, not a full
file download.
But I WANT squid to cache it. So I decide to use "range_offset_limit
-1", but then on every GET squid will re-download the file from the
beginning, opening LOTs of simultaneous connections and using too
much bandwidth, doing just the OPPOSITE it's meant to!

Is there a smart way to allow squid to download it from the beginning
to the end (to actually cache it), but only on the FIRST request/get?
Even if it makes the user wait for the full download, or cancel it
temporarily, or.. whatever!! Anything!!

Best Regards,
--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751

Hi, I believe, you describe the bug http://bugs.squid-cache.org/show_bu
g.cgi?id=4469

I tried to reproduce the problem and have found that the problem
appears only with rock storage configurations. Can you try with
ufs/aufs storage?




[squid-users] Windows Squid with AD authentication

2016-05-12 Thread Nilesh Gavali
> Message-ID:
> 

> Content-Type: text/plain; charset="utf-8"
>
> Team;
> we have squid running on Windows and need to integrate it with Windows 
AD
> Can anyone help me with the steps to perform to get this done?
>
> Thanks & Regards
> Nilesh Suresh Gavali
>
>
>
> --
>
> Message: 5
> Date: Thu, 12 May 2016 13:28:00 -0300
> From: Heiler Bemerguy 
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Getting the full file content on a range
> request, but not on EVERY get ...
> Message-ID: <61bf3ff3-c8b2-647f-9b5e-3112b2f43...@cinbesa.com.br>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
> Hi Pete, thanks for replying... let me see if I got it right..
>
> Will I need to specify every url/domain I want it to act on ? I want
> squid to do it for every range-request downloads that should/would be
> cached (based on other rules, pattern_refreshs etc)
>
> It doesn't need to delay any downloads as long as it isn't a dupe of
> what's already being downloaded.
>
>
> Best Regards,
>
>
> --
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751
>
>
> Em 12/05/2016 11:06, Hans-Peter Jansen escreveu:
> > On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> >> Hey guys,
> >>
> >> First take a look at the log:
> >>
> >> root@proxy:/var/log/squid# tail -f access.log |grep
> >>
http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-
>
BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236 
TCP_MISS/206
> >> 300520 GET
> > [...]
> >> Now think: An user is just doing a segmented/ranged download, right?
> >> Squid won't cache the file because it is a range-download, not a full
> >> file download.
> >> But I WANT squid to cache it. So I decide to use "range_offset_limit
> >> -1", but then on every GET squid will re-download the file from the
> >> beginning, opening LOTs of simultaneous connections and using too 
much
> >> bandwidth, doing just the OPPOSITE it's meant to!
> >>
> >> Is there a smart way to allow squid to download it from the
beginning to
> >> the end (to actually cache it), but only on the FIRST request/get? 
Even
> >> if it makes the user wait for the full download, or cancel it
> >> temporarily, or.. whatever!! Anything!!
> > Well, this is exactly, what my squid_dedup helper was created for!
> >
> > See my announcement:
> >
> >  Subject: [squid-users] New StoreID helper: 
squid_dedup
> >  Date: Mon, 09 May 2016 23:56:45 +0200
> >
> > My openSUSE environment is fetching _all_ updates with byte-ranges
from many
> > servers. Therefor, I created squid_dedup.
> >
> > Your specific config could look like this:
> >
> > /etc/squid/dedup/mozilla.conf:
> > [mozilla]
> > match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
> > replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
> > fetch: true
> >
> > The fetch parameter is unique among the other StoreID helper
(AFAIK): it is
> > fetching the object after a certain delay with a pool of fetcher
threads.
> >
> > The idea is: after the first access for an object, wait a bit
(global setting,
> > default: 15 secs), and then fetch the whole thing once. It won't solve
> > anything for the first client, but for all subsequent accesses.
> >
> > The fetcher avoids fetching anything more than once by checking the 
http
> > headers.
> >
> > This is a pretty new project, but be assured, that the basic
functions are
> > working fine, and I will do my best to solve any upcoming issues. It 
is
> 

Re: [squid-users] squid-users Digest, Vol 21, Issue 54

2016-05-12 Thread Yuri Voinov
> https://wiki.freebsd.org/LibreSSL
> https://wiki.freebsd.org/OpenSSL
>
>
> --
>
> Message: 3
> Date: Thu, 12 May 2016 16:06:40 +0200
> From: Hans-Peter Jansen 
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Getting the full file content on a range
> request, but not on EVERY get ...
> Message-ID: <2575073.4c7f0552JP@xrated>
> Content-Type: text/plain; charset="us-ascii"
>
> On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> > Hey guys,
> >
> > First take a look at the log:
> >
> > root@proxy:/var/log/squid# tail -f access.log |grep
> > http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
> > 1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET
> [...]
> > Now think: A user is just doing a segmented/ranged download, right?
> > Squid won't cache the file because it is a range-download, not a full
> > file download.
> > But I WANT squid to cache it. So I decide to use "range_offset_limit
> > -1", but then on every GET squid will re-download the file from the
> > beginning, opening LOTs of simultaneous connections and using too much
> > bandwidth, doing just the OPPOSITE it's meant to!
> >
> > Is there a smart way to allow squid to download it from the beginning to
> > the end (to actually cache it), but only on the FIRST request/get? Even
> > if it makes the user wait for the full download, or cancel it
> > temporarily, or.. whatever!! Anything!!
>
> Well, this is exactly, what my squid_dedup helper was created for!
>
> See my announcement:
>
> Subject: [squid-users] New StoreID helper: squid_dedup
> Date: Mon, 09 May 2016 23:56:45 +0200
>
> My openSUSE environment is fetching _all_ updates with byte-ranges from many
> servers. Therefore, I created squid_dedup.
>
> Your specific config could look like this:
>
> /etc/squid/dedup/mozilla.conf:
> [mozilla]
> match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
> replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
> fetch: true
>
> The fetch parameter is unique among the other StoreID helpers (AFAIK): it is
> fetching the object after a certain delay with a pool of fetcher threads.
>
> The idea is: after the first access for an object, wait a bit (global setting,
> default: 15 secs), and then fetch the whole thing once. It won't solve
> anything for the first client, but for all subsequent accesses.
>
> The fetcher avoids fetching anything more than once by checking the http
> headers.
>
> This is a pretty new project, but be assured, that the basic functions are
> working fine, and I will do my best to solve any upcoming issues. It is
> implemented with Python3 and prepared for supporting additional features
> easily, while keeping a good part of an eye on efficiency.
>
> Let me know, if you're going to try it.
>
> Pete
>
>
> --
>
> Message: 4
> Date: Thu, 12 May 2016 17:46:36 +0100
> From: Nilesh Gavali 
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Windows Squid with AD authentication
> Message-ID:
>

> Content-Type: text/plain; charset="utf-8"
>
> Team;
> we have squid running on Windows and need to integrate it with Windows AD.
> Can anyone help me with the steps to perform to get this done?
>
> Thanks & Regards
> Nilesh Suresh Gavali
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> -- next part --
> An HTML attachment was scrubbed...
> URL:
<http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/327a38cb/attachment-0001.html>
>
> --
>
> Message: 5
> Date: Thu, 12 May 2016 13:28:00 -0300
> From: Heiler Bemerguy 
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Getting the full file content on a range
> request, but not on EVERY get ...
> Message-ID: <61bf3ff3-

Re: [squid-users] squid-users Digest, Vol 21, Issue 54

2016-05-12 Thread Nilesh Gavali
> But I WANT squid to cache it. So I decide to use "range_offset_limit
> -1", but then on every GET squid will re-download the file from the
> beginning, opening LOTs of simultaneous connections and using too much
> bandwidth, doing just the OPPOSITE it's meant to!
> 
> Is there a smart way to allow squid to download it from the beginning to
> the end (to actually cache it), but only on the FIRST request/get? Even
> if it makes the user wait for the full download, or cancel it
> temporarily, or.. whatever!! Anything!!

Well, this is exactly, what my squid_dedup helper was created for!

See my announcement: 

 Subject: [squid-users] New StoreID helper: squid_dedup
 Date: Mon, 09 May 2016 23:56:45 +0200

My openSUSE environment is fetching _all_ updates with byte-ranges from many
servers. Therefore, I created squid_dedup.

Your specific config could look like this:

/etc/squid/dedup/mozilla.conf:
[mozilla]
match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
fetch: true

The fetch parameter is unique among the other StoreID helper (AFAIK): it 
is 
fetching the object after a certain delay with a pool of fetcher threads.

The idea is: after the first access for an object, wait a bit (global 
setting, 
default: 15 secs), and then fetch the whole thing once. It won't solve 
anything for the first client, but for all subsequent accesses. 

The fetcher avoids fetching anything more than once by checking the http 
headers.

This is a pretty new project, but be assured that the basic functions are
working fine, and I will do my best to solve any upcoming issues. It is
implemented with Python3 and prepared for supporting additional features 
easily, while keeping a good part of an eye on efficiency.

Let me know, if you're going to try it.

Pete


--

Message: 4
Date: Thu, 12 May 2016 17:46:36 +0100
From: Nilesh Gavali 
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Windows Squid with AD authentication
Message-ID:
 
Content-Type: text/plain; charset="utf-8"

Team;
we have squid running on Windows and need to integrate it with Windows AD.
Can anyone help me with the steps to perform to get this done?

Thanks & Regards
Nilesh Suresh Gavali



--

Message: 5
Date: Thu, 12 May 2016 13:28:00 -0300
From: Heiler Bemerguy 
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Getting the full file content on a range
 request, but not on EVERY get ...
Message-ID: <61bf3ff3-c8b2-647f-9b5e-3112b2f43...@cinbesa.com.br>
Content-Type: text/plain; charset="utf-8"; Format="flowed"


Hi Pete, thanks for replying... let me see if I got it right..

Will I need to specify every URL/domain I want it to act on? I want
squid to do it for every range-request download that should/would be
cached (based on other rules, refresh_patterns etc)

It doesn't need to delay any downloads as long as it isn't a dupe of 
what's already being downloaded.


Best Regards,


-- 
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751


On 12/05/2016 11:06, Hans-Peter Jansen wrote:
> On Wednesday, 11 May 2016 21:37:17 Heiler Bemerguy wrote:
>> Hey guys,
>>
>> First take a look at the log:
>>
>> root@proxy:/var/log/squid# tail -f access.log |grep
>> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
>> 1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET
> [...]
>> Now think: A user is just doing a segmented/ranged download, right?
>> Squid won't cache the file because it is a range-download, not a full
>> file download.
>> But I WANT squid to cache it. So I decide to use "range_offset_limit
>> -1", but then on every GET squid will re-download the file from the
>> beginning, opening LOTs of simultaneous connections and using too much
>> bandwidth, doing just the OPPOSITE it's meant to!
>>
>> Is there 

Re: [squid-users] Windows Squid with AD authentication

2016-05-12 Thread Antony Stone
On Thursday 12 May 2016 at 18:46:36, Nilesh Gavali wrote:

> Team;
> we have squid running on Windows and need to integrate it with Windows AD.
> Can anyone help me with the steps to perform to get this done?

This specific question has appeared a few times on this list only recently.

Have you so far:

 - searched the list archives for likely answers to your question?

http://lists.squid-cache.org/pipermail/squid-users/

 - consulted the Squid documentation for guidance?

http://www.squid-cache.org/Doc/

 - looked for any independent HOWTOs etc which show how people have done this 
in the past?

http://www.google.com/search?q=squid+active+directory+authentication


Here's some friendly advice:

1. The more information you give us (such as: which version of Squid are you 
using, which version of Windows are you running under, which form of 
authentication are you using?), the easier it is for people here to help.

2. If you have tried something already and run into problems, tell us what you 
have tried and what problems (log file extracts, complete client error message, 
etc) you encountered, so we can offer specific suggestions.

3. If you haven't yet tried to implement anything, at least let us know what 
documentation you have looked up and what problems you encountered when 
following it, so we can try to fill in the gaps.


Regards,


Antony.

-- 
Most people have more than the average number of legs.

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Heiler Bemerguy


Hi Pete, thanks for replying... let me see if I got it right..

Will I need to specify every URL/domain I want it to act on? I want
squid to do it for every range-request download that should/would be
cached (based on other rules, refresh_patterns etc)


It doesn't need to delay any downloads as long as it isn't a dupe of 
what's already being downloaded.



Best Regards,


--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751


On 12/05/2016 11:06, Hans-Peter Jansen wrote:

On Wednesday, 11 May 2016 21:37:17 Heiler Bemerguy wrote:

Hey guys,

First take a look at the log:

root@proxy:/var/log/squid# tail -f access.log |grep
http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET

[...]

Now think: A user is just doing a segmented/ranged download, right?
Squid won't cache the file because it is a range-download, not a full
file download.
But I WANT squid to cache it. So I decide to use "range_offset_limit
-1", but then on every GET squid will re-download the file from the
beginning, opening LOTs of simultaneous connections and using too much
bandwidth, doing just the OPPOSITE it's meant to!

Is there a smart way to allow squid to download it from the beginning to
the end (to actually cache it), but only on the FIRST request/get? Even
if it makes the user wait for the full download, or cancel it
temporarily, or.. whatever!! Anything!!

Well, this is exactly, what my squid_dedup helper was created for!

See my announcement:

Subject: [squid-users] New StoreID helper: squid_dedup
Date: Mon, 09 May 2016 23:56:45 +0200

My openSUSE environment is fetching _all_ updates with byte-ranges from many
servers. Therefore, I created squid_dedup.

Your specific config could look like this:

/etc/squid/dedup/mozilla.conf:
[mozilla]
match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
fetch: true

The fetch parameter is unique among the other StoreID helper (AFAIK): it is
fetching the object after a certain delay with a pool of fetcher threads.

The idea is: after the first access for an object, wait a bit (global setting,
default: 15 secs), and then fetch the whole thing once. It won't solve
anything for the first client, but for all subsequent accesses.

The fetcher avoids fetching anything more than once by checking the http
headers.

This is a pretty new project, but be assured, that the basic functions are
working fine, and I will do my best to solve any upcoming issues. It is
implemented with Python3 and prepared for supporting additional features
easily, while keeping a good part of an eye on efficiency.

Let me know, if you're going to try it.

Pete




[squid-users] Windows Squid with AD authentication

2016-05-12 Thread Nilesh Gavali
Team;
we have squid running on Windows and need to integrate it with Windows AD.
Can anyone help me with the steps to perform to get this done?

Thanks & Regards
Nilesh Suresh Gavali




Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-12 Thread Hans-Peter Jansen
On Wednesday, 11 May 2016 21:37:17 Heiler Bemerguy wrote:
> Hey guys,
> 
> First take a look at the log:
> 
> root@proxy:/var/log/squid# tail -f access.log |grep
> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
> 1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET
[...] 
> Now think: A user is just doing a segmented/ranged download, right?
> Squid won't cache the file because it is a range-download, not a full
> file download.
> But I WANT squid to cache it. So I decide to use "range_offset_limit
> -1", but then on every GET squid will re-download the file from the
> beginning, opening LOTs of simultaneous connections and using too much
> bandwidth, doing just the OPPOSITE it's meant to!
> 
> Is there a smart way to allow squid to download it from the beginning to
> the end (to actually cache it), but only on the FIRST request/get? Even
> if it makes the user wait for the full download, or cancel it
> temporarily, or.. whatever!! Anything!!

Well, this is exactly what my squid_dedup helper was created for!

See my announcement: 

Subject: [squid-users] New StoreID helper: squid_dedup
Date: Mon, 09 May 2016 23:56:45 +0200

My openSUSE environment is fetching _all_ updates with byte-ranges from many 
servers. Therefore, I created squid_dedup.

Your specific config could look like this:

/etc/squid/dedup/mozilla.conf:
[mozilla]
match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
fetch: true

The fetch parameter is unique among the other StoreID helper (AFAIK): it is 
fetching the object after a certain delay with a pool of fetcher threads.

The idea is: after the first access for an object, wait a bit (global setting, 
default: 15 secs), and then fetch the whole thing once. It won't solve 
anything for the first client, but for all subsequent accesses. 

The fetcher avoids fetching anything more than once by checking the http 
headers.

This is a pretty new project, but be assured, that the basic functions are 
working fine, and I will do my best to solve any upcoming issues. It is 
implemented with Python3 and prepared for supporting additional features 
easily, while keeping a good part of an eye on efficiency.

Let me know, if you're going to try it.

Pete
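For readers unfamiliar with StoreID helpers, the match/replace rule above can be sketched as a tiny helper. This is an illustrative sketch only, not Pete's actual squid_dedup implementation (it omits the config loader and the fetcher thread pool); the `squid.internal` suffix stands in for the `%(intdomain)s` placeholder, and it assumes Squid's basic store_id helper line protocol (URL in, `OK store-id=...` or `ERR` out):

```python
import re
import sys

# Hypothetical single rule mirroring the mozilla.conf example above;
# the ".squid.internal" suffix is a stand-in for %(intdomain)s.
MATCH = re.compile(r'http\:\/\/download\.cdn\.mozilla\.net/(.*)')
REPLACE = r'http://download.cdn.mozilla.net.squid.internal/\1'

def store_id(url: str) -> str:
    """Return the normalized store ID for a URL, or the URL unchanged."""
    return MATCH.sub(REPLACE, url) if MATCH.match(url) else url

def main() -> None:
    # Squid sends one request per line: "<URL> [extras...]".
    # The helper replies "OK store-id=<id>" when it rewrote the URL,
    # or "ERR" to keep the original URL as the cache key.
    for line in sys.stdin:
        parts = line.split()
        url = parts[0] if parts else ''
        sid = store_id(url)
        sys.stdout.write('OK store-id=%s\n' % sid if sid != url else 'ERR\n')
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

With a rule like this, range requests arriving via differing mirror hostnames collapse onto one store ID, which is what makes the single background fetch worthwhile.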
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Linking with *SSL

2016-05-12 Thread Spil Oss
> Hi!
> When we worked on squid port on FreeBSD one of the FreeBSD user
> (Bernard Spil) noticed:
>
> When working on this, I ran into another issue. Perhaps the maintainer can
> fix that upstream. I've now added LIBOPENSSL_LIBS="-lcrypto
> -lssl" because of configure failing in configure.ac line 1348.
>
> > AC_CHECK_LIB(ssl,[SSL_library_init],[LIBOPENSSL_LIBS="-lssl $LIBOPENSSL_LIBS"],[AC_MSG_ERROR([library 'ssl' is required for OpenSSL])
>
> You cannot link against libssl without also linking libcrypto, which
> leads to an error with LibreSSL. This check should add -lcrypto in
> addition to -lssl to pass.
>
> Is this something someone could take a look at?

Hi All,

Sorry for replying out-of-thread.

What happens is that the check for SSL_library_init fails as -lcrypto
is missing.

Output from configure

> checking for CRYPTO_new_ex_data in -lcrypto... yes
> checking for SSL_library_init in -lssl... no
> configure: error: library 'ssl' is required for OpenSSL
> ===>  Script "configure" failed unexpectedly.

What I usually see in autoconf scripts is that temp CFLAGS etc are set
before the test for SSL libs and reversed after the test.

Adding LIBOPENSSL_LIBS="-lcrypto -lssl" to configure works as well

Would be great if you can fix this!

Thanks,

Bernard Spil.
https://wiki.freebsd.org/BernardSpil
https://wiki.freebsd.org/LibreSSL
https://wiki.freebsd.org/OpenSSL
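One possible shape for such a fix, sketched against the configure.ac fragment quoted above (untested assumption, not the actual Squid patch): AC_CHECK_LIB takes an optional fifth "other-libraries" argument that is appended to the link test, so -lcrypto can be supplied there.

```m4
dnl Sketch only -- assumes the configure.ac context quoted above.
dnl The fifth argument adds -lcrypto to the link test, so
dnl SSL_library_init can be found with LibreSSL as well as OpenSSL.
AC_CHECK_LIB(ssl, [SSL_library_init],
  [LIBOPENSSL_LIBS="-lssl -lcrypto $LIBOPENSSL_LIBS"],
  [AC_MSG_ERROR([library 'ssl' is required for OpenSSL])],
  [-lcrypto])
```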
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav (SOLVED)

2016-05-12 Thread Amos Jeffries
On 12/05/2016 11:13 p.m., C. L. Martinez wrote:
> 
> But when squid sends an OPTIONS request to ICAP, why does it work when I use
> 127.0.0.1 and not localhost? Maybe it is a problem with OpenBSD's package ...
> 

It is quite possible. 127.0.0.1 is not the only address modern computers
use for localhost. Double check what your hosts file contains.

Amos



Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav (SOLVED)

2016-05-12 Thread C. L. Martinez
On Thu 12.May'16 at 22:20:47 +1200, Amos Jeffries wrote:
> On 12/05/2016 8:42 p.m., C. L. Martinez wrote:
> > On Wed 11.May'16 at 21:14:08 +0600, Yuri Voinov wrote:
> >>
> >> -BEGIN PGP SIGNED MESSAGE-
> >> Hash: SHA256
> >>  
> >>
> >> On 11.05.16 21:04, L.P.H. van Belle wrote:
> >>>
> >>> Hai,
> >>>
> >>>
> >>>
> >>> I reviewed your config; here is what's different in c-icap.conf compared
> >>> to mine.
> >>>
> >> Obviously, mindlessly copying and pasting the config is a very bad
> >> practice, is it not?
> >>>
> >>> RemoteProxyUsers off ( for you ) on for me.
> >>>
> >> # TAG: RemoteProxyUsers
> >> # Format: RemoteProxyUsers onoff
> >> # Description:
> >> #Set it to on if you want to use username provided by the proxy server.
> >> #This is the recomended way to use users in c-icap.
> >> #If the RemoteProxyUsers is off and c-icap configured to use users or
> >> #groups the internal authentication mechanism will be used.
> >> # Default:
> >> #RemoteProxyUsers off
> >> RemoteProxyUsers off
> >>
> >> This depends on the proxy configuration, and is irrelevant to the current case.
> >>>
> >>>
> >>>
> >>> Whats the content of /etc/c-icap/squidclamav.conf ?
> >>>
> >>> The important part for me of the file :
> >>>
> >>> #clamd_local /var/run/clamd.socket ! change/check this
> >>>
> >> This is OS-dependent, as obvious.
> >>>
> >>> clamd_ip 127.0.0.1
> >>>
> >>> clamd_port 3310
> >>>
> >>>
> >>>
> >>> If you use socket make sure your rights are correct and icap is added
> >> to the clamav group.
> >>>
> >> Wrong. Squid group, not clamav.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> And my c-icap part of the squid.conf
> >>>
> >>> ## Tested with Squid 3.4.8 and 3.5.x + squidclamav 6.14 and 6.15
> >>>
> >>> icap_enable on
> >>>
> >>> icap_send_client_ip on
> >>>
> >>> icap_send_client_username on
> >>>
> >>> icap_client_username_header X-Authenticated-User
> >>>
> >>> icap_persistent_connections on
> >>>
> >>> icap_preview_enable on
> >>>
> >>> icap_preview_size 1024
> >>>
> >>> icap_service service_req reqmod_precache bypass=1
> >> icap://127.0.0.1:1344/squidclamav
> >>>
> >>> adaptation_access service_req allow all
> >>>
> >>> icap_service service_resp respmod_precache bypass=1
> >> icap://127.0.0.1:1344/squidclamav
> >>>
> >>> adaptation_access service_resp allow all
> >>>
> >>>
> >>>
> >>> I think you changed too much in the example.
> >>>
> >>>
> >>>
> >>> I'm referring to these in the squid.conf
> >>>
>  adaptation_access service_avi_resp allow all
> >>>
> >>> service_avi_resp?
> >>>
> >>>
> >>>
> >> Complete squid.conf fragment:
> >>
> >> icap_service service_avi_req reqmod_precache
> >> icap://localhost:1344/squidclamav bypass=off
> >> adaptation_access service_avi_req allow all
> >> icap_service service_avi_resp respmod_precache
> >> icap://localhost:1344/squidclamav bypass=on
> >> adaptation_access service_avi_resp allow all
> >>
> >> Please, PLEASE, do not make recommendations when you do not understand what
> >> the config lines mean!
> >>  
> > 
> > Ok, problem is solved. Seems there is some problem between squid and my 
> > unbound DNS server. Changing the following lines:
> > 
> > icap_service service_avi_req reqmod_precache 
> > icap://localhost:1344/squidclamav bypass=off
> > icap_service service_avi_resp respmod_precache 
> > icap://localhost:1344/squidclamav bypass=on
> > 
> > to:
> > 
> > icap_service service_avi_req reqmod_precache 
> > icap://127.0.0.1:1344/squidclamav bypass=off
> > icap_service service_avi_resp respmod_precache 
> > icap://127.0.0.1:1344/squidclamav bypass=on
> > 
> > all works as expected. As you can see I have changed "localhost" for 
> > "127.0.0.1" ... localhost entry exists inside my /etc/hosts file, and 
> > OpenBSD resolves correctly, but under unbound's config I have enabled 
> > "do-not-query-localhost: no" because unbound is configured to work with 
> > dnscrypt-proxy service...
> > 
> > I am not sure about this, but it is the only answer that explains this 
> > problem ... or it is a bug (but I don't think so).
> > 
> > What do you think??
> > 
> 
> I think that Squid told you it was sending an OPTIONS request to ICAP
> service, which failed. So it marked the service down. The service was
> not allowed to be bypassed (bypass=off), so cannot cope with being down.
> 
> It is possible "localhost" had to be resolved to do that OPTIONS
> request. However, if as you say it already has an entry in your
> /etc/hosts file then Squid should have loaded that entry as a permanent
> record and never be looking it up in DNS.
> 
> Amos

But when squid sends an OPTIONS request to ICAP, why does it work when I use 127.0.0.1
and not localhost? Maybe it is a problem with OpenBSD's package ...

-- 
Greetings,
C. L. Martinez


Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav (SOLVED)

2016-05-12 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Hm. Rare case.

In general, any OS TCP stack can resolve localhost itself to 127.0.0.1
with /etc/hosts or whatever.


On 12.05.16 14:42, C. L. Martinez wrote:
> On Wed 11.May'16 at 21:14:08 +0600, Yuri Voinov wrote:
>>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>> 
>>
>> On 11.05.16 21:04, L.P.H. van Belle wrote:
>>>
>>> Hai,
>>>
>>>
>>>
>>> I reviewed your config; here is what's different in c-icap.conf compared
>>> to mine.
>>>
>> Obviously, mindlessly copying and pasting the config is a very bad
>> practice, is it not?
>>>
>>> RemoteProxyUsers off ( for you ) on for me.
>>>
>> # TAG: RemoteProxyUsers
>> # Format: RemoteProxyUsers onoff
>> # Description:
>> #Set it to on if you want to use username provided by the proxy server.
>> #This is the recomended way to use users in c-icap.
>> #If the RemoteProxyUsers is off and c-icap configured to use users or
>> #groups the internal authentication mechanism will be used.
>> # Default:
>> #RemoteProxyUsers off
>> RemoteProxyUsers off
>>
>> This depends on the proxy configuration, and is irrelevant to the current case.
>>>
>>>
>>>
>>> Whats the content of /etc/c-icap/squidclamav.conf ?
>>>
>>> The important part for me of the file :
>>>
>>> #clamd_local /var/run/clamd.socket ! change/check this
>>>
>> This is OS-dependent, as obvious.
>>>
>>> clamd_ip 127.0.0.1
>>>
>>> clamd_port 3310
>>>
>>>
>>>
>>> If you use socket make sure your rights are correct and icap is added
>> to the clamav group.
>>>
>> Wrong. Squid group, not clamav.
>>>
>>>
>>>
>>>
>>>
>>> And my c-icap part of the squid.conf
>>>
>>> ## Tested with Squid 3.4.8 and 3.5.x + squidclamav 6.14 and 6.15
>>>
>>> icap_enable on
>>>
>>> icap_send_client_ip on
>>>
>>> icap_send_client_username on
>>>
>>> icap_client_username_header X-Authenticated-User
>>>
>>> icap_persistent_connections on
>>>
>>> icap_preview_enable on
>>>
>>> icap_preview_size 1024
>>>
>>> icap_service service_req reqmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>>>
>>> adaptation_access service_req allow all
>>>
>>> icap_service service_resp respmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>>>
>>> adaptation_access service_resp allow all
>>>
>>>
>>>
>>> I think you changed too much in the example.
>>>
>>>
>>>
>>> I'm referring to these in the squid.conf
>>>
 adaptation_access service_avi_resp allow all
>>>
>>> service_avi_resp?
>>>
>>>
>>>
>> Complete squid.conf fragment:
>>
>> icap_service service_avi_req reqmod_precache
>> icap://localhost:1344/squidclamav bypass=off
>> adaptation_access service_avi_req allow all
>> icap_service service_avi_resp respmod_precache
>> icap://localhost:1344/squidclamav bypass=on
>> adaptation_access service_avi_resp allow all
>>
>> Please, PLEASE, do not make recommendations when you do not understand what
>> the config lines mean!
>> 
>
> Ok, problem is solved. Seems there is some problem between squid and
> my unbound DNS server. Changing the following lines:
>
> icap_service service_avi_req reqmod_precache
> icap://localhost:1344/squidclamav bypass=off
> icap_service service_avi_resp respmod_precache
> icap://localhost:1344/squidclamav bypass=on
>
> to:
>
> icap_service service_avi_req reqmod_precache
> icap://127.0.0.1:1344/squidclamav bypass=off
> icap_service service_avi_resp respmod_precache
> icap://127.0.0.1:1344/squidclamav bypass=on
>
> all works as expected. As you can see I have changed "localhost" for
> "127.0.0.1" ... localhost entry exists inside my /etc/hosts file, and
> OpenBSD resolves correctly, but under unbound's config I have enabled
> "do-not-query-localhost: no" because unbound is configured to work with
> dnscrypt-proxy service...
>
> I am not sure about this, but it is the only answer that explains this
> problem ... or it is a bug (but I don't think so).
>
> What do you think??
>
>






Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav (SOLVED)

2016-05-12 Thread Amos Jeffries
On 12/05/2016 8:42 p.m., C. L. Martinez wrote:
> On Wed 11.May'16 at 21:14:08 +0600, Yuri Voinov wrote:
>>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>  
>>
>> On 11.05.16 21:04, L.P.H. van Belle wrote:
>>>
>>> Hai,
>>>
>>>
>>>
>>> I reviewed your config; here is what's different in c-icap.conf compared
>>> to mine.
>>>
>> Obviously, mindlessly copying and pasting the config is a very bad
>> practice, is it not?
>>>
>>> RemoteProxyUsers off ( for you ) on for me.
>>>
>> # TAG: RemoteProxyUsers
>> # Format: RemoteProxyUsers onoff
>> # Description:
>> #Set it to on if you want to use username provided by the proxy server.
>> #This is the recomended way to use users in c-icap.
>> #If the RemoteProxyUsers is off and c-icap configured to use users or
>> #groups the internal authentication mechanism will be used.
>> # Default:
>> #RemoteProxyUsers off
>> RemoteProxyUsers off
>>
>> This depends on the proxy configuration, and is irrelevant to the current case.
>>>
>>>
>>>
>>> Whats the content of /etc/c-icap/squidclamav.conf ?
>>>
>>> The important part for me of the file :
>>>
>>> #clamd_local /var/run/clamd.socket ! change/check this
>>>
>> This is OS-dependent, as obvious.
>>>
>>> clamd_ip 127.0.0.1
>>>
>>> clamd_port 3310
>>>
>>>
>>>
>>> If you use socket make sure your rights are correct and icap is added
>> to the clamav group.
>>>
>> Wrong. Squid group, not clamav.
>>>
>>>
>>>
>>>
>>>
>>> And my c-icap part of the squid.conf
>>>
>>> ## Tested with Squid 3.4.8 and 3.5.x + squidclamav 6.14 and 6.15
>>>
>>> icap_enable on
>>>
>>> icap_send_client_ip on
>>>
>>> icap_send_client_username on
>>>
>>> icap_client_username_header X-Authenticated-User
>>>
>>> icap_persistent_connections on
>>>
>>> icap_preview_enable on
>>>
>>> icap_preview_size 1024
>>>
>>> icap_service service_req reqmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>>>
>>> adaptation_access service_req allow all
>>>
>>> icap_service service_resp respmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>>>
>>> adaptation_access service_resp allow all
>>>
>>>
>>>
>>> I think you changed too much in the example.
>>>
>>>
>>>
>>> I'm referring to these in the squid.conf
>>>
 adaptation_access service_avi_resp allow all
>>>
>>> service_avi_resp?
>>>
>>>
>>>
>> Complete squid.conf fragment:
>>
>> icap_service service_avi_req reqmod_precache
>> icap://localhost:1344/squidclamav bypass=off
>> adaptation_access service_avi_req allow all
>> icap_service service_avi_resp respmod_precache
>> icap://localhost:1344/squidclamav bypass=on
>> adaptation_access service_avi_resp allow all
>>
>> Please, PLEASE, do not make recommendations when you do not understand what
>> the config lines mean!
>>  
> 
> Ok, problem is solved. Seems there is some problem between squid and my 
> unbound DNS server. Changing the following lines:
> 
> icap_service service_avi_req reqmod_precache 
> icap://localhost:1344/squidclamav bypass=off
> icap_service service_avi_resp respmod_precache 
> icap://localhost:1344/squidclamav bypass=on
> 
> to:
> 
> icap_service service_avi_req reqmod_precache 
> icap://127.0.0.1:1344/squidclamav bypass=off
> icap_service service_avi_resp respmod_precache 
> icap://127.0.0.1:1344/squidclamav bypass=on
> 
> all works as expected. As you can see I have changed "localhost" for 
> "127.0.0.1" ... localhost entry exists inside my /etc/hosts file, and OpenBSD 
> resolves correctly, but under unbound's config I have enabled 
> "do-not-query-localhost: no" because unbound is configured to work with 
> dnscrypt-proxy service...
> 
> I am not sure about this, but it is the only answer that explains this 
> problem ... or it is a bug (but I don't think so).
> 
> What do you think??
> 

I think that Squid told you it was sending an OPTIONS request to the ICAP
service, which failed. So it marked the service down. The service was
not allowed to be bypassed (bypass=off), so it cannot cope with being down.

It is possible "localhost" had to be resolved to make that OPTIONS
request. However, if, as you say, it already has an entry in your
/etc/hosts file, then Squid should have loaded that entry as a permanent
record and never looked it up in DNS.

Amos
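Amos's point about the OPTIONS probe can be checked by hand. Below is a minimal sketch (Python, purely illustrative; the host, port, and service name are assumptions matching the config discussed above) that builds the kind of ICAP OPTIONS request Squid sends when it probes a service:

```python
# Build a raw ICAP OPTIONS request like the one Squid uses to probe a service.
# Host, port, and service name below are assumptions; adjust to your setup.

def icap_options_request(host: str, port: int, service: str) -> bytes:
    """Return the raw bytes of an ICAP OPTIONS request for the given service."""
    lines = [
        f"OPTIONS icap://{host}:{port}/{service} ICAP/1.0",
        f"Host: {host}:{port}",
        "User-Agent: manual-icap-check",
        "Encapsulated: null-body=0",  # OPTIONS carries no encapsulated body
        "",  # blank line terminates the ICAP header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

req = icap_options_request("127.0.0.1", 1344, "squidclamav")
print(req.decode("ascii"))
```

Sending these bytes over a TCP connection to the c-icap port should yield an `ICAP/1.0 200 OK` response when the service is healthy; if the hostname in the URI cannot be resolved, the probe fails before any bytes are sent, and Squid marks the service down.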

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav (SOLVED)

2016-05-12 Thread C. L. Martinez
On Wed 11.May'16 at 21:14:08 +0600, Yuri Voinov wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>  
> 
> 11.05.16 21:04, L.P.H. van Belle пишет:
> >
> > Hai,
> >
> > 
> >
> > I reviewd your config, thing whats different in c-icap.conf compared
> to me.
> >
> Obviously, mindlessly copying and pasting the config is a very bad
> practice, is it not?
> >
> > RemoteProxyUsers off (for you), on (for me).
> >
> # TAG: RemoteProxyUsers
> # Format: RemoteProxyUsers onoff
> # Description:
> #Set it to on if you want to use username provided by the proxy server.
> #This is the recomended way to use users in c-icap.
> #If the RemoteProxyUsers is off and c-icap configured to use users or
> #groups the internal authentication mechanism will be used.
> # Default:
> #RemoteProxyUsers off
> RemoteProxyUsers off
> 
> This depends on the proxy configuration, and is irrelevant to the current case.
> >
> > 
> >
> > Whats the content of /etc/c-icap/squidclamav.conf ?
> >
> > The important part for me of the file :
> >
> > #clamd_local /var/run/clamd.socket ! change/check this
> >
> This is OS-dependent, obviously.
> >
> > clamd_ip 127.0.0.1
> >
> > clamd_port 3310
> >
> > 
> >
> > If you use a socket, make sure the permissions are correct and the icap user
> is added to the clamav group.
> >
> Wrong. Squid group, not clamav.
> >
> > 
> >
> > 
> >
> > And my c-icap part of the squid.conf
> >
> > ## Tested with Squid 3.4.8 and 3.5.x + squidclamav 6.14 and 6.15
> >
> > icap_enable on
> >
> > icap_send_client_ip on
> >
> > icap_send_client_username on
> >
> > icap_client_username_header X-Authenticated-User
> >
> > icap_persistent_connections on
> >
> > icap_preview_enable on
> >
> > icap_preview_size 1024
> >
> > icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
> >
> > adaptation_access service_req allow all
> >
> > icap_service service_resp respmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
> >
> > adaptation_access service_resp allow all
> >
> > 
> >
> > I think you changed too much in the example.
> >
> > 
> >
> > I'm referring to these in the squid.conf
> >
> > > adaptation_access service_avi_resp allow all
> >
> > service_avi_resp?
> >
> > 
> >
> Complete squid.conf fragment:
> 
> icap_service service_avi_req reqmod_precache icap://localhost:1344/squidclamav bypass=off
> adaptation_access service_avi_req allow all
> icap_service service_avi_resp respmod_precache icap://localhost:1344/squidclamav bypass=on
> adaptation_access service_avi_resp allow all
> 
> Please, PLEASE, do not make recommendations when you do not understand what
> the config lines mean!
>  

Ok, the problem is solved. It seems there is some problem between Squid and my 
unbound DNS server. Changing the following lines:

icap_service service_avi_req reqmod_precache icap://localhost:1344/squidclamav bypass=off
icap_service service_avi_resp respmod_precache icap://localhost:1344/squidclamav bypass=on

to:

icap_service service_avi_req reqmod_precache icap://127.0.0.1:1344/squidclamav bypass=off
icap_service service_avi_resp respmod_precache icap://127.0.0.1:1344/squidclamav bypass=on

all works as expected. As you can see, I have changed "localhost" to 
"127.0.0.1" ... the localhost entry exists in my /etc/hosts file, and OpenBSD 
resolves it correctly, but in unbound's config I have enabled 
"do-not-query-localhost: no" because unbound is configured to work with the 
dnscrypt-proxy service...

I am not sure about this, but it is the only answer that explains this problem 
... or it is a bug (but I don't think so).

What do you think??
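The localhost-vs-127.0.0.1 theory is easy to test outside Squid. Here is a small sketch (Python, for illustration only) that asks the system resolver for both names, so you can see whether "localhost" really maps to the loopback address on the box running c-icap:

```python
import socket

def resolve_ipv4(name: str) -> list:
    """Return the sorted, de-duplicated IPv4 addresses the resolver yields for a name."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

# If these two disagree, or the first lookup stalls or fails, Squid's OPTIONS
# probe to icap://localhost:1344/... can fail while icap://127.0.0.1:1344/... works.
print("localhost ->", resolve_ipv4("localhost"))
print("127.0.0.1 ->", resolve_ipv4("127.0.0.1"))
```

Running this on the proxy host would show whether the resolver (unbound with "do-not-query-localhost: no", in this case) is the component mishandling the "localhost" name.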


-- 
Greetings,
C. L. Martinez


Re: [squid-users] Problems configuring Squid with C-ICAP+Squidclamav

2016-05-12 Thread C. L. Martinez
On Wed 11.May'16 at 21:53:13 +0600, Yuri Voinov wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>  
> Try to increase debug level in c_icap.conf:
> 
> # TAG: DebugLevel
> # Format: DebugLevel level
> # Description:
> #The level of debugging information to be logged.
> #The acceptable range of levels is between 0 and 10.
> # Default:
> #DebugLevel 1
> DebugLevel 3
> 
> and look at c_icap server log again.
> 
> 
Thanks Yuri. I have enabled debug, but nothing:

root@obsd:/var/log/c-icap# c-icap -N -D -d 10
Setting parameter :-d=10
Searching 0x109580a21bb8 for default value
Setting parameter :PidFile=/var/run/c-icap/c-icap.pid
Searching 0x109580a21bc0 for default value
Setting parameter :CommandsSocket=/var/run/c-icap/c-icap.ctl
Searching 0x109580a21b38 for default value
Setting parameter :Timeout=300
Searching 0x109580a21b40 for default value
Setting parameter :MaxKeepAliveRequests=100
Searching 0x109580a21b3c for default value
Setting parameter :KeepAliveTimeout=600
Searching 0x109580a21c10 for default value
Setting parameter :StartServers=3
Searching 0x109580a21c14 for default value
Setting parameter :MaxServers=10
Searching 0x109580a21c1c for default value
Setting parameter :MinSpareThreads=10
Searching 0x109580a21c20 for default value
Setting parameter :MaxSpareThreads=20
Searching 0x109580a21c18 for default value
Setting parameter :ThreadsPerChild=10
Searching 0x109580a21b4c for default value
Setting parameter :MaxRequestsPerChild=0
Searching 0x109580a21ba8 for default value
Setting parameter :Port=1344
Searching 0x109580a21c00 for default value
Setting parameter :ServerAdmin=sq...@domain.com
Searching 0x109580a21bb0 for default value
Setting parameter :TmpDir=/var/tmp
Searching 0x1097e5919290 for default value
Setting parameter :MaxMemObject=131072
Searching 0x109580a21b60 for default value
Setting parameter :Pipelining=1
Searching 0x109580a21b64 for default value
Setting parameter :SupportBuggyClients=0
Searching 0x109580a21bf8 for default value
Setting parameter :ModulesDir=/usr/local/lib/c_icap
Searching 0x109580a21bf0 for default value
Setting parameter :ServicesDir=/usr/local/lib/c_icap
Searching 0x1097e591a698 for default value
Setting parameter :TemplateDir=/usr/local/share/c_icap/templates/
Searching 0x1097e591a6c0 for default value
Setting parameter :TemplateDefaultLanguage=en
The db file /etc/c-icap/c-icap.magic is the same as default. Ignoring...
Searching 0x109580a22408 for default value
Setting parameter :RemoteProxyUsers=0
Searching 0x109580a22440 for default value
Setting parameter :RemoteProxyUserHeader=X-Authenticated-User
Searching 0x109580a2240c for default value
Setting parameter :RemoteProxyUserHeaderEncoded=1
Adding to acl localhost the data 127.0.0.1/255.255.255.255
In search specs list 0x0,name localhost
New ACL with name:localhost and  ACL Type: src
Adding to acl ALLREQUESTS the data RESPMOD
In search specs list 0x10978ca90b00,name ALLREQUESTS
Checking name:ALLREQUESTS with specname localhost
Adding to acl ALLREQUESTS the data REQMOD
In search specs list 0x10978ca90b00,name ALLREQUESTS
Checking name:ALLREQUESTS with specname localhost
Checking name:ALLREQUESTS with specname ALLREQUESTS
New ACL with name:ALLREQUESTS and  ACL Type: type
Creating new access entry as allow with specs:
In search specs list 0x10978ca90b00,name localhost
Checking name:localhost with specname localhost
In search specs list 0x10978ca90b00,name localhost
Checking name:localhost with specname localhost
Adding acl spec: localhost
In search specs list 0x10978ca90b00,name ALLREQUESTS
Checking name:ALLREQUESTS with specname localhost
Checking name:ALLREQUESTS with specname ALLREQUESTS
In search specs list 0x10978ca90b00,name ALLREQUESTS
Checking name:ALLREQUESTS with specname localhost
Checking name:ALLREQUESTS with specname ALLREQUESTS
Adding acl spec: ALLREQUESTS
Creating new access entry as deny with specs:
In search specs list 0x10978ca90b00,name all
Checking name:all with specname localhost
Checking name:all with specname ALLREQUESTS
In search specs list 0x10978ca90b00,name all
Checking name:all with specname localhost
Checking name:all with specname ALLREQUESTS
The acl spec all does not exists!
Adding acl spec: all
Adding the logformat myFormat: %tl, %a %im %iu %is %I %O %Ib %Ob %{10}bph
Searching 0x109580a22678 for default value
Setting parameter :ServerLog=/var/log/c-icap/server.log
Adding the access logfile /var/log/c-icap/access.log
Setting parameter :Logger=file_logger
Loading service :echo path srv_echo.so
Found handler C_handler for service with extension:.so
Initialization of echo module..
Registering conf table:echo
Warning, alias is the same as service_name, not adding
Loading service :logger path sys_logger.so
Registering conf table:sys_logger
Loading service :squidclamav path squidclamav.so
Found handler C_handler for service with extension:.so
squidclamav.c(183) squidclamav_init_service: DEBUG Going to initialize 
squidclamav
squidclamav.