Re: [squid-users] Host header forgery policy in service provider environment

2016-01-06 Thread Garri Djavadyan
>On 2015-12-31 00:01, Garri Djavadyan wrote:
>> Hello Squid members and developers!
>> 
>> First of all, I wish you a Happy New Year 2016!
>> 
>> The current Host header forgery policy effectively prevents cache
>> poisoning. However, I noticed that it also deletes previously verified
>> cached objects. Is it possible to implement a more careful algorithm as
>> an option? For example, if Squid did not delete an earlier successfully
>> verified and still-valid cached object, and instead served the forged
>> request from the cache, that would be a more effective and equally
>> secure behavior.
>
>
>This seems to be describing 
><http://bugs.squid-cache.org/show_bug.cgi?id=3940>
>
>So far we don't have a solution. Patches very welcome.
>
>Amos

Amos, can you recheck the bug report? I found the root cause of the
problem and presented a possible prototype solution, which solves the
problem in my environment. Thank you in advance!


[squid-users] Host header forgery policy in service provider environment

2015-12-30 Thread Garri Djavadyan
Hello Squid members and developers!

First of all, I wish you a Happy New Year 2016!

The current Host header forgery policy effectively prevents cache
poisoning. However, I noticed that it also deletes previously verified
cached objects. Is it possible to implement a more careful algorithm as
an option? For example, if Squid did not delete an earlier successfully
verified and still-valid cached object, and instead served the forged
request from the cache, that would be a more effective and equally
secure behavior.

For example, in a service provider TPROXY environment, it is almost
impossible to effectively optimize content delivery from sophisticated
CDNs, such as appldnld.apple.com and iosapps.itunes.apple.com. For the
latter domain, DNS servers return different pairs of A records for the
same host every 15 seconds, regardless of geo location. For the former
domain, local DNS servers and public DNS servers (Google) return
different records. And since this is an SP environment, it is not
possible to control DNS settings on subscriber systems.
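
For illustration, the rotation can be observed with a couple of plain
DNS queries (a minimal shell sketch, assuming dig is available; the
domain is the one mentioned above):

for i in 1 2 3; do
    dig +short A iosapps.itunes.apple.com   # print the current A records
    echo ---
    sleep 15
done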

Thank you for your attention!

-- 
Garri Djavadyan
iPlus LLC, TM Comnet, Technical Department
Phone: +99871 235 (ext. 27)
http://comnet.uz




Re: [squid-users] Internet Browsing very slow after implementing Squid peek & splice + Access log not tracing full URL

2016-05-18 Thread Garri Djavadyan
On Thu, 2016-05-19 at 05:27 +1200, Amos Jeffries wrote:
> On 19/05/2016 2:21 a.m., Garri Djavadyan wrote:
> > 
> > On Thu, 2016-05-19 at 00:39 +1200, Amos Jeffries wrote:
> > > 
> > > Using ignore-private and ignore-must-revalidate on the same
> > > refresh_pattern is *extremely* dangerous. Just asking to get your
> > > cache pwned.
> > I have also been using both options on the same refresh_pattern for
> > several years. Can you explain the consequences? I couldn't find
> > enough information in Squid's reference or RFC 2616. Thanks in
> > advance!
> > 
> The 'private' cache-control is supposed to only be used when the
> response contains sensitive credentials or private data.
> 
> ignore-private has a long history of causing (not allowing,
> *causing*) people to log in to other people's accounts on various
> services. One might have heard about the recent Steam account login
> having "an issue with our proxy settings". I'd bet a lot it was
> somebody turning on "ignore-private" or the equivalent in their
> systems.
> 
> With the HTTP/1.1 changes, I made Squid treat 'private' the same as
> 'must-revalidate', so that private stuff could still be forced into
> the cache, but much more safely.
> 
> Ignoring both brings back all the security and privacy breach
> problems.
> 
> One should not be afraid of revalidation. It is the backbone of most
> of
> the mechanisms that make HTTP/1.1 more performant than 1.0.
> 
> So IMO, stay away from ignore-private like it was the plague. If you
> really have a reason to use it, at least don't use
> ignore-must-revalidate on the same traffic.
> 
> (I've similar advice for ignore-no-store. But at least no-store does
> not
> have the same security/privacy/credentials tie-in as private.)
> 
> > 
> > 
> > > 
> > > Also ignore-auth makes things *not* be cacheable in all the auth
> > > related cases when it would normally be stored by Squid.
> > I always thought that the purpose of the option was the exact
> > opposite. Squid's reference and any trivial test confirmed my
> > thoughts. Sorry, but maybe I understood the quote incorrectly?
> > 
> It tells Squid to ignore the auth headers in a request.
> 
> In HTTP/1.0 messages the presence of auth meant the object was
> non-cacheable due to sensitive credentials. So the control let people
> make that traffic cacheable.
> 
> In HTTP/1.1 messages the presence of auth is often equivalent to
> must-revalidate. So ignoring the headers makes the alternative
> controls in the headers kick in and force non-caching. The opposite of
> what is usually intended.
> 
> 
> (FYI: both ignore-auth and ignore-must-revalidate are gone in Squid-4,
> for the above reasons.)
> 
> Amos

Amos, thank you very much for the clarification!
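
For reference, a hedged squid.conf sketch of keeping forced caching
while avoiding ignore-private and ignore-must-revalidate (the pattern
and lifetimes below are illustrative placeholders, not settings taken
from this thread):

# Force-cache obviously static objects; revalidation still applies
# to anything marked private or must-revalidate.
refresh_pattern -i \.(jpg|png|gif|css|js)$  1440  20%  10080
refresh_pattern .                              0  20%   4320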


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-11 Thread Garri Djavadyan
On Wed, 2016-05-11 at 21:37 -0300, Heiler Bemerguy wrote:
> 
> Hey guys,
> First take a look at the log:
> root@proxy:/var/log/squid# tail -f access.log | grep http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
> 1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9 application/octet-stream
> 1463011851.008   9347 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463011920.683   9645 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9 application/octet-stream
> 1463012000.144  19154 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463012072.276  12121 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463012145.643  13358 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463012217.472  11772 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463012294.676  17148 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> 1463012370.131  15272 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
> Now think: a user is just doing a segmented/ranged download, right?
> Squid won't cache the file because it is a range download, not a full
> file download.
> But I WANT squid to cache it. So I decide to use "range_offset_limit
> -1", but then on every GET squid will re-download the file from the
> beginning, opening LOTs of simultaneous connections and using too
> much bandwidth, doing just the OPPOSITE of what it's meant to!
> 
> Is there a smart way to allow squid to download it from the beginning
> to the end (to actually cache it), but only on the FIRST request/get?
> Even if it makes the user wait for the full download, or cancel it
> temporarily, or.. whatever!! Anything!!
> 
> Best Regards,
> -- 
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751

Hi, I believe you are describing the bug
http://bugs.squid-cache.org/show_bug.cgi?id=4469

I tried to reproduce the problem and found that it appears only with
rock storage configurations. Can you try with ufs/aufs storage?


[squid-users] Squid transfers much not requested data from uplink in specific cases

2016-05-17 Thread Garri Djavadyan
Hello Squid community,

According to bug report 4511 [1], Squid may transfer a large amount of
useless, unrequested data from the uplink after a specific sequence of
actions.

For example, a slow client (access rate 128Kb/s) may begin a transfer
of a big cacheable object (4GB). After some time, another client
(access rate 1Mb/s) may begin to transfer the same object. Depending on
the time interval between the requests (or the volume of data cached to
disk by the first transfer), after some time the transfer rate of the
second client will be limited to 128Kb/s too [2]. If the first client
then aborts the transfer, the second client's rate recovers to 1Mb/s
and Squid begins to transfer the object from the origin server to the
disk cache at the maximum rate the uplink permits (let's assume
100Mb/s).

If the second client also aborts the transfer after some time, this may
result in a large amount of unrequested data transferred from the
uplink. Also, if Squid does not finish transferring the object to the
disk cache, the object will not be cached in the end.

For example, if the first client aborts the transfer after downloading
100MB, Squid begins to transfer the object at 100Mb/s. If, after 4
minutes, the second client also aborts the transfer, Squid would
terminate the transfer from the origin server and we would get the
following statistics:

Client #1 - 100MB
Client #2 - 100MB + 1Mb/s / 8 bits * 240 seconds = 130MB
Squid - 100MB + 100Mb/s / 8 bits * 240 seconds = 3100MB
Not requested / useless data = 3100MB - (100MB + 130MB) = 2870MB

In that scenario, Squid would fetch 2870MB of useless data that would
not be cached (the object's size is 4GB). During these 4 minutes, the
'Hits as % of bytes sent' counter would show an extremely low value.
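
For convenience, a small shell sketch of the arithmetic above (values
in MB, rates in Mb/s, 240 seconds = 4 minutes):

#!/bin/bash
client1=100                          # aborted after downloading 100MB
client2=$(( 100 + 1 * 240 / 8 ))     # 100MB + 1Mb/s for 240s   = 130MB
squid=$(( 100 + 100 * 240 / 8 ))     # 100MB + 100Mb/s for 240s = 3100MB
echo "useless data: $(( squid - client1 - client2 ))MB"   # prints 2870MB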

[1] http://bugs.squid-cache.org/show_bug.cgi?id=4511
[2] http://bugs.squid-cache.org/show_bug.cgi?id=4520



So, I want to ask the community to share ideas and best practices for
coping with the problem. Many thanks in advance!

-- 
Garri Djavadyan <gar...@comnet.uz>
Comnet ISP



Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-13 Thread Garri Djavadyan
On Thu, 2016-05-12 at 14:02 -0300, Heiler Bemerguy wrote:
> 
> Hi Garri,
> That bug report is mine.. lol

Hi Heiler,
Yes, I know. I just tried to answer the following question.

> > > Is there a smart way to allow squid to download it from the
> > > beginning
> > > to the end (to actually cache it), but only on the FIRST
> > > request/get?
> > > Even if it makes the user wait for the full download, or cancel
> > > it
> > > temporarily, or.. whatever!! Anything!!

The config option 'range_offset_limit none' (or -1) forces exactly that
behavior. It fetches the whole object from beginning to end on the
first range request. The problems you have encountered are consequences
of the bug 4469 you reported. I encountered the same problems using the
Rock store; UFS/AUFS stores are not affected by the bug.
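
For reference, a hedged squid.conf sketch of the relevant directives
(the ACL name and domain are placeholders; quick_abort_min -1 keeps
fetching even after the client disconnects):

acl bigfiles dstdomain .cdn.mozilla.net
range_offset_limit none bigfiles
quick_abort_min -1 KB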

So, to get the problem fixed more quickly, the best approach is to
provide more information to the developers, for example a full debug
log (ALL,9) for isolated requests.

Thanks. Garri


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-13 Thread Garri Djavadyan
On Fri, 2016-05-13 at 08:36 +1200, Amos Jeffries wrote:
> Have you given collapsed_forwarding a try? It's supposed to prevent
> all the duplicate requests from making all those extra upstream
> connections until at least the first one has finished getting the
> object.

Amos, I believe that the above quote describes Squid's default action,
which does not require collapsed_forwarding. The details of my
experiments can be found here:
http://bugs.squid-cache.org/show_bug.cgi?id=4511#c0. Thanks.

Garri


Re: [squid-users] Getting the full file content on a range request, but not on EVERY get ...

2016-05-13 Thread Garri Djavadyan
On Sat, 2016-05-14 at 01:52 +1200, Amos Jeffries wrote:
> The default action should be to fetch each range request separately
> and in parallel, not caching the results.
> 
> When the admin has set only the range offset & quick-abort to force
> full object retrieval, the behaviour Heiler mentions happens - lots of
> upstream bandwidth used for N copies.
> The first one to complete starts to be used as a HIT for future
> requests. But as each of the initial transfers completes, it replaces
> the previously cached object as the one being hit on.
> 
> So timing is critical: if Squid happens to delay any of the parallel
> requests just long enough in its TCP accept() queue, auth, or ACL
> processing, they could become HITs on an already finished object.
> 
> Amos

Yes, you were right! Timing is very critical. I've simulated two
concurrent transfers (without 'collapsed_forwarding on' and with
'range_offset_limit none') using this code:

---
#!/bin/bash

export http_proxy="127.0.0.1:3128"
curl --range $((1024 * 1024 * 1))-$((1024 * 1024 * 2)) http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img > /dev/null &
curl --range $((1024 * 1024 * 3))-$((1024 * 1024 * 4)) http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img > /dev/null
---

And got two MISSes:

1463150025.340    987 127.0.0.1 TCP_MISS/206 1048943 GET http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img - HIER_DIRECT/91.196.76.102 application/octet-stream
1463150026.315   1963 127.0.0.1 TCP_MISS/206 1048943 GET http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img - HIER_DIRECT/91.196.76.102 application/octet-stream

Then I purged the object, repeated the test with 'collapsed_forwarding
on', and got a MISS and a HIT:

1463150169.010    370 127.0.0.1 TCP_MISS/206 1048943 GET http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img - HIER_DIRECT/91.196.76.102 application/octet-stream
1463150169.476    836 127.0.0.1 TCP_HIT/206 1048950 GET http://mirror.comnet.uz/centos/7/os/x86_64/images/efiboot.img - HIER_NONE/- application/octet-stream


Amos, thank you very much for the detailed explanation.
collapsed_forwarding is a really useful option for this situation.
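
For reference, the combination tested above boils down to two
squid.conf lines (a minimal sketch, not a general recommendation):

collapsed_forwarding on
range_offset_limit none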


Re: [squid-users] Internet Browsing very slow after implementing Squid peek & splice + Access log not tracing full URL

2016-05-18 Thread Garri Djavadyan
On Thu, 2016-05-19 at 00:39 +1200, Amos Jeffries wrote:
> Using ignore-private and ignore-must-revalidate on the same
> refresh_pattern is *extremely* dangerous. Just asking to get your
> cache pwned.

I have also been using both options on the same refresh_pattern for
several years. Can you explain the consequences? I couldn't find enough
information in Squid's reference or RFC 2616. Thanks in advance!


> Also ignore-auth makes things *not* be cacheable in all the auth
> related cases when it would normally be stored by Squid.

I always thought that the purpose of the option was the exact opposite.
Squid's reference and any trivial test confirmed my thoughts. Sorry,
but maybe I understood the quote incorrectly?


Garri


Re: [squid-users] High utilization of CPU squid-3.5.23, squid-3.5.24

2017-02-01 Thread Garri Djavadyan
On Wed, 2017-02-01 at 23:55 +0300, Vitaly Lavrov wrote:
> Periodically squid begins to linearly increase its use of the CPU.
> Sometimes this process reaches 100%. At a random moment in time the
> CPU usage drops back to 5-15%, and in the presence of client requests
> it can again start linearly increasing CPU use.
> 
> There are no error messages in the logs.
> 
> CPU consumption does not correlate with the number of requests or
> with traffic.
> 
> The increase in CPU consumption from 0 to 60% occurs in about 4-5
> hours, and to 100% in 6-8 hours.
> 
> A typical graph of CPU usage can be viewed at
> http://devel.aanet.ru/tmp/squid-cpu-x.png
> 
> With the "perf record -p` pgrep -f squid-1` - sleep 30" I have
> received the following information:
> 
> At 100% CPU load most of the time took 3 calls
> 
>   49.15% squid squid [.] MemObject :: dump
>   25.11% squid squid [.] Mem_hdr :: freeDataUpto
>   20.03% squid squid [.] Mem_hdr :: copy
> 
> When loading CPU 30-60% most of the time took 3 calls
> 
>   37.26% squid squid [.] Mem_node :: dataRange
>   22.61% squid squid [.] Mem_hdr :: NodeCompare
>   17.31% squid squid [.] Mem_hdr :: freeDataUpto
> 
> What is it ? Is it possible to somehow fix it?
> 
> System: slackware64 14.2
> 
> sslbump not used. http only.
> 
> Part of config:
> 
> memory_pools off
> memory_pools_limit 512 MB
> cache_mem 768 MB
> maximum_object_size_in_memory 64 KB
> cache_dir ufs   /cache/sq_c1 16312 16 256
> cache_dir ufs   /cache/sq_c2 16312 16 256
> cache_dir ufs   /cache/sq_c3 16312 16 256


Hi Vitaly,

It seems you have hit a known issue related to linear search through
in-memory nodes. See bug report 4477 [1].

[1] http://bugs.squid-cache.org/show_bug.cgi?id=4477


Garri


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
> --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
> Connecting to 127.0.0.1:3128... connected.
> Proxy request sent, awaiting response...
>    HTTP/1.1 200 OK
>    Cache-Control: no-cache, no-store
>    Pragma: no-cache
>    Content-Type: text/html
>    Expires: -1
>    Server: Microsoft-IIS/8.0
>    CorrelationVector: BzssVwiBIUaXqyOh.1.1
>    X-AspNet-Version: 4.0.30319
>    X-Powered-By: ASP.NET
>    Access-Control-Allow-Headers: Origin, X-Requested-With, Content-
> Type, 
> Accept
>    Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
>    Access-Control-Allow-Credentials: true
>    P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI
> TELo 
> OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
>    X-Frame-Options: SAMEORIGIN
>    Vary: Accept-Encoding
>    Content-Encoding: gzip
>    Date: Fri, 27 Jan 2017 09:29:56 GMT
>    Content-Length: 13322
>    Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com; 
> expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
>    Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com; 
> expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
>    Strict-Transport-Security: max-age=0; includeSubDomains
>    X-CCC: NL
>    X-CID: 2
>    X-Cache: MISS from khorne
>    X-Cache-Lookup: MISS from khorne:3128
>    Connection: keep-alive
> Length: 13322 (13K) [text/html]
> Saving to: 'index.html'
> 
> index.html  100%[==>]  13.01K --.-KB/sin
> 0s
> 
> 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved [13322/13322]
> 
> Can you explain me - for what static index.html has this:
> 
> Cache-Control: no-cache, no-store
> Pragma: no-cache
> 
> ?
> 
> What can be broken to ignore CC in this page?

Hi Yuri,


Why do you think the page returned for the URL
[https://www.microsoft.com/ru-kz/] is static and not a dynamically
generated one?

The index.html file is the default file name for wget.

man wget:
  --default-page=name
   Use name as the default file name when it isn't known (i.e., for
   URLs that end in a slash), instead of index.html.

In fact, https://www.microsoft.com/ru-kz/index.html is a stub page
("The page you requested cannot be found.").


Garri


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 17:58 +0600, Yuri wrote:
> 
> 27.01.2017 17:54, Garri Djavadyan пишет:
> > On Fri, 2017-01-27 at 15:47 +0600, Yuri wrote:
> > > --2017-01-27 15:29:54--  https://www.microsoft.com/ru-kz/
> > > Connecting to 127.0.0.1:3128... connected.
> > > Proxy request sent, awaiting response...
> > > HTTP/1.1 200 OK
> > > Cache-Control: no-cache, no-store
> > > Pragma: no-cache
> > > Content-Type: text/html
> > > Expires: -1
> > > Server: Microsoft-IIS/8.0
> > > CorrelationVector: BzssVwiBIUaXqyOh.1.1
> > > X-AspNet-Version: 4.0.30319
> > > X-Powered-By: ASP.NET
> > > Access-Control-Allow-Headers: Origin, X-Requested-With,
> > > Content-
> > > Type,
> > > Accept
> > > Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
> > > Access-Control-Allow-Credentials: true
> > > P3P: CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD
> > > TAI
> > > TELo
> > > OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI"
> > > X-Frame-Options: SAMEORIGIN
> > > Vary: Accept-Encoding
> > > Content-Encoding: gzip
> > > Date: Fri, 27 Jan 2017 09:29:56 GMT
> > > Content-Length: 13322
> > > Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.1; domain=.microsoft.com;
> > > expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
> > > Set-Cookie: MS-CV=BzssVwiBIUaXqyOh.2; domain=.microsoft.com;
> > > expires=Sat, 28-Jan-2017 09:29:56 GMT; path=/
> > > Strict-Transport-Security: max-age=0; includeSubDomains
> > > X-CCC: NL
> > > X-CID: 2
> > > X-Cache: MISS from khorne
> > > X-Cache-Lookup: MISS from khorne:3128
> > > Connection: keep-alive
> > > Length: 13322 (13K) [text/html]
> > > Saving to: 'index.html'
> > > 
> > > index.html  100%[==>]  13.01K --.-
> > > KB/sin
> > > 0s
> > > 
> > > 2017-01-27 15:29:57 (32.2 MB/s) - 'index.html' saved
> > > [13322/13322]
> > > 
> > > Can you explain me - for what static index.html has this:
> > > 
> > > Cache-Control: no-cache, no-store
> > > Pragma: no-cache
> > > 
> > > ?
> > > 
> > > What can be broken to ignore CC in this page?
> > 
> > Hi Yuri,
> > 
> > 
> > Why do you think the page returned for the URL
> > [https://www.microsoft.com/ru-kz/] is static and not a dynamically
> > generated one?
> 
> And for me, what's the difference? Does it change anything? Besides,
> it is easy to look at the page and, strangely enough, even to open its
> code. And? What do you see there?

I see the official Microsoft home page for the KZ region. The page is
full of JavaScript and product offers. It makes sense to expect that
the page could change quite intensively.


> > The index.html file is default file name for wget.
> 
> And also the name of the default home page in the web. Imagine - I
> know 
> the obvious things. But the question was about something else.
> > 
> > man wget:
> >    --default-page=name
> > Use name as the default file name when it isn't known
> > (i.e., for
> > URLs that end in a slash), instead of index.html.
> > 
> > In fact the https://www.microsoft.com/ru-kz/index.html is a stub
> > page
> > (The page you requested cannot be found.).
> 
> You are living in the wrong region. This is a geo-dependent page,
> obviously, yes?

What I mean is that the pages https://www.microsoft.com/ru-kz/ and
https://www.microsoft.com/ru-kz/index.html are not the same. You can
easily confirm it.


> Again. What is the difference? I open it from different workstations,
> from different browsers - I see the same thing. The code is identical.
> Can I cache it? Yes or no?

I'm a new member of the Squid community (about 1 year). While tracking
community activity I found that you don't grasp the advantages of
HTTP/1.1 over HTTP/1.0 for caching systems, especially its ability to
_safely_ cache and serve the same amount (and I believe even more) of
objects as HTTP/1.0 compliant caches do, without breaking the internet.
The main tool of HTTP/1.1 compliant proxies is the _revalidation_
process. HTTP/1.1 compliant caches like Squid tend to cache all
possible objects but later use revalidation for dubious requests. In
fact, revalidation is not a costly process, especially when using
conditional GET requests.
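
As an illustration of how cheap revalidation can be, here is a hedged
curl sketch of a conditional GET (the URL, proxy address and validator
date are placeholders; a 304 reply carries only headers, no body):

# First fetch: note the validators (Last-Modified / ETag) in the reply.
curl -s -D - -o /dev/null -x http://127.0.0.1:3128 http://example.com/page.html

# Revalidation: a conditional GET using the validator from the first reply.
curl -s -D - -o /dev/null -x http://127.0.0.1:3128 \
     -H 'If-Modified-Since: Wed, 31 Aug 2016 19:00:00 GMT' \
     http://example.com/page.html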

I found that most of your complaints on the mailing list and in
Bugzilla are related to the HTTPS scheme. FYI: the primary tool
(revalidation) does not currently work for the HTTPS scheme in any of
the current Squid branches. See bug 4648.

Try applying the proposed patch and update all related bug reports.

HTH


Garri


Re: [squid-users] Not all html objects are being cached

2017-01-27 Thread Garri Djavadyan
On Fri, 2017-01-27 at 06:15 -0800, joseph wrote:
> hi its not about https scheme its about evrything

Hi,

First of all, I can't brag about my own English and writing style, but
your writing style is _very_ offensive to other members. Please try to
do better. In particular, it is very difficult to catch the idea of
many of your sentences; I believe punctuation marks could help a lot.
Thanks in advance.

> i decide not to involve with arg...
> but why not its the last one i should say it once
> they ar right most of the ppl admin have no knwoleg so its ok to baby
> sit
> them as its
> but
> --enable-http-violations should be fully ignore cache control and in
> refresh
> pattern  admin shuld control the behavior of his need else they
> should  take
> of  —enable-http-violations or alow us to do so
> controlling the 
> Pragma: no-cache and  Cache-Control: no-cache + + ++ +
> in both request reply

Squid, as an HTTP/1.1 compliant cache, successfully caches and serves
CC: no-cache replies. Below is an excerpt from RFC 7234:

5.2.2.2.  no-cache

   The "no-cache" response directive indicates that the response MUST
   NOT be used to satisfy a subsequent request without successful
   validation on the origin server.

The key word is _validation_. There is nothing bad about revalidation.
It is inexpensive but saves us from possible problems. The log entry
'TCP_REFRESH_UNMODIFIED' should be as welcome as TCP_HIT or
TCP_MEM_HIT.

Example:

$ curl -v -s -x http://127.0.0.1:3128 http://sandbox.comnet.local/test.bin >/dev/null

< HTTP/1.1 200 OK
< Last-Modified: Wed, 31 Aug 2016 19:00:00 GMT
< Accept-Ranges: bytes
< Content-Length: 262146
< Content-Type: application/octet-stream
< Expires: Thu, 01 Dec 1994 16:00:00 GMT
< Date: Fri, 27 Jan 2017 14:55:09 GMT
< Server: Apache
< ETag: "ea0cd5-40002-53b62b438ac00"
< Cache-Control: no-cache
< Age: 3
< X-Cache: HIT from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/3.5.23-BZR)
< Connection: keep-alive

1485528912.222     18 127.0.0.1 TCP_REFRESH_UNMODIFIED/200 262565 GET http://sandbox.comnet.local/test.bin - HIER_DIRECT/192.168.24.5 application/octet-stream


As you can see, there are no problems with the no-cache reply.


I advise you to consider every specific case where you believe Squid's
transition to HTTP/1.1 compliance prevents you from caching something.


Garri


[squid-users] Objects with values below 60 second for Cache-Control max-age are not cached

2016-08-22 Thread Garri Djavadyan
Hello Squid users,

Can anyone explain why Squid doesn't cache objects with max-age
values below 60 seconds? For example:

$ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.local/cgi-bin/hello.cgi" && date
HTTP/1.1 200 OK
Date: Mon, 22 Aug 2016 11:31:16 GMT
Server: Apache
Cache-Control: max-age=60
Content-Type: text/plain
X-Cache: MISS from gentoo.comnet.uz
Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
Connection: keep-alive

Mon Aug 22 16:31:19 UZT 2016

---

$ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.local/cgi-bin/hello.cgi" && date
HTTP/1.1 200 OK
Date: Mon, 22 Aug 2016 11:31:23 GMT
Server: Apache
Cache-Control: max-age=60
Content-Type: text/plain
X-Cache: MISS from gentoo.comnet.uz
Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
Connection: keep-alive

Mon Aug 22 16:31:26 UZT 2016


No problems with values above 60 seconds. For example:

$ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.local/cgi-bin/hello.cgi" && date
HTTP/1.1 200 OK
Date: Mon, 22 Aug 2016 11:36:06 GMT
Server: Apache
Cache-Control: max-age=70
Content-Type: text/plain
X-Cache: MISS from gentoo.comnet.uz
Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
Connection: keep-alive

Mon Aug 22 16:36:09 UZT 2016

---

$ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.local/cgi-bin/hello.cgi" && date
HTTP/1.1 200 OK
Date: Mon, 22 Aug 2016 11:36:06 GMT
Server: Apache
Cache-Control: max-age=70
Content-Type: text/plain
Age: 5
X-Cache: HIT from gentoo.comnet.uz
Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
Connection: keep-alive

Mon Aug 22 16:36:11 UZT 2016


As you can see, the time difference between the origin server and
localhost is 3 seconds (UZT is a +5 offset).

Configuration is minimal:

# diff -u etc/squid.conf.default etc/squid.conf
--- etc/squid.conf.default  2016-08-12 17:21:48.877474780 +0500
+++ etc/squid.conf  2016-08-22 16:41:47.759766991 +0500
@@ -71,3 +71,5 @@
 refresh_pattern ^gopher:          1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?)    0    0%      0
 refresh_pattern .                    0   20%      4320
+
+cache_mem 64 MB


Thanks in advance!
Garri


Re: [squid-users] Objects with values below 60 second for Cache-Control max-age are not cached

2016-08-24 Thread Garri Djavadyan
On Mon, 2016-08-22 at 16:46 +0500, Garri Djavadyan wrote:
> Hello Squid users,
> 
> Can anyone explain, why Squid doesn't cache the objects with max-age
> values below 60 seconds? For example:
> 
> $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.loca
> l/
> cgi-bin/hello.cgi" && date
> HTTP/1.1 200 OK
> Date: Mon, 22 Aug 2016 11:31:16 GMT
> Server: Apache
> Cache-Control: max-age=60
> Content-Type: text/plain
> X-Cache: MISS from gentoo.comnet.uz
> Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> Connection: keep-alive
> 
> Mon Aug 22 16:31:19 UZT 2016
> 
> ---
> 
> $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.loca
> l/
> cgi-bin/hello.cgi" && date
> HTTP/1.1 200 OK
> Date: Mon, 22 Aug 2016 11:31:23 GMT
> Server: Apache
> Cache-Control: max-age=60
> Content-Type: text/plain
> X-Cache: MISS from gentoo.comnet.uz
> Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> Connection: keep-alive
> 
> Mon Aug 22 16:31:26 UZT 2016
> 
> 
> No problems with values above 60 seconds. For example:
> 
> $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.loca
> l/
> cgi-bin/hello.cgi" && date
> HTTP/1.1 200 OK
> Date: Mon, 22 Aug 2016 11:36:06 GMT
> Server: Apache
> Cache-Control: max-age=70
> Content-Type: text/plain
> X-Cache: MISS from gentoo.comnet.uz
> Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> Connection: keep-alive
> 
> Mon Aug 22 16:36:09 UZT 2016
> 
> ---
> 
> $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.loca
> l/
> cgi-bin/hello.cgi" && date
> HTTP/1.1 200 OK
> Date: Mon, 22 Aug 2016 11:36:06 GMT
> Server: Apache
> Cache-Control: max-age=70
> Content-Type: text/plain
> Age: 5
> X-Cache: HIT from gentoo.comnet.uz
> Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> Connection: keep-alive
> 
> Mon Aug 22 16:36:11 UZT 2016
> 
> 
> As you can see, time difference between origin server and localhost
> is
> 3 seconds (UZT is +5 offset).
> 
> Configuration is minimal:
> 
> # diff -u etc/squid.conf.default etc/squid.conf
> --- etc/squid.conf.default2016-08-12 17:21:48.877474780 +0500
> +++ etc/squid.conf2016-08-22 16:41:47.759766991 +0500
> @@ -71,3 +71,5 @@
>  refresh_pattern ^gopher:          1440    0%      1440
>  refresh_pattern -i (/cgi-bin/|\?)    0    0%      0
>  refresh_pattern .                    0   20%      4320
> +
> +cache_mem 64 MB
> 
> 
> Thanks in advance!
> Garri

Dear Squid developers,

Is the situation described above intended behaviour, or a bug which
should be reported? Thanks.

Garri


Re: [squid-users] Objects with values below 60 second for Cache-Control max-age are not cached

2016-10-26 Thread Garri Djavadyan
Sorry, Amos, it seems my latest reply was ambiguous. I meant that,
while debugging the issue, I found the cause: the default value of
'minimum_expiry_time'.


On Wed, 2016-10-26 at 23:58 +1300, Amos Jeffries wrote:
> On 26/10/2016 7:21 p.m., Garri Djavadyan wrote:
> > 
> > On Wed, 2016-08-24 at 19:09 +0500, Garri Djavadyan wrote:
> > > 
> > > On Mon, 2016-08-22 at 16:46 +0500, Garri Djavadyan wrote:
> > > > 
> > > > 
> > > > Hello Squid users,
> > > > 
> > > > Can anyone explain, why Squid doesn't cache the objects with
> > > > max-
> > > > age
> > > > values below 60 seconds?
> 
> Several possible reasons...
> 
> > 
> > For example:
> > > 
> > > > 
> > > > 
> > > > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comne
> > > > t.lo
> > > > ca
> > > > l/
> > > > cgi-bin/hello.cgi" && date
> > > > HTTP/1.1 200 OK
> > > > Date: Mon, 22 Aug 2016 11:31:16 GMT
> > > > Server: Apache
> > > > Cache-Control: max-age=60
> > > > Content-Type: text/plain
> > > > X-Cache: MISS from gentoo.comnet.uz
> > > > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > > > Connection: keep-alive
> > > > 
> > > > Mon Aug 22 16:31:19 UZT 2016
> > > > 
> 
> 1) This is not a GET request.
> 
> There is no object data returned on a HEAD request. So Squid does not
> have anything to cache.
> 
> If you did a GET before this request, then the caching time is
> relative
> to that request, not this one.

It is not true. Squid successfully caches HEAD requests.

$ for i in 1 2 ; do http_proxy="127.0.0.1:3128" \
curl --head http://sandbox.comnet.local/cgi-bin/5mb.cgi \
2>/dev/null | grep X-Cache; done

X-Cache: MISS from gentoo.comnet.uz
X-Cache: HIT from gentoo.comnet.uz


> 2) There is no Last-Modified header.
> 
> Squid older than 3.5.22 does not revalidate properly with only a Date
> header, meaning new content requires fetching if the cached object
> was stale.

'Date' + 'Cache-Control: max-age=70' worked as expected.
'Date' + 'Cache-Control: max-age=60' did not work.


> 3) The response to a HEAD request is supposed to be the headers that
> would be sent in an equivalent GET. So the servers upstream response
> headers are the right output here in light of (2) and/or (1).
> 
> > 
> > > 
> > > > 
> > > > ---
> > > > 
> > > > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comne
> > > > t.lo
> > > > ca
> > > > l/
> > > > cgi-bin/hello.cgi" && date
> > > > HTTP/1.1 200 OK
> > > > Date: Mon, 22 Aug 2016 11:31:23 GMT
> > > > Server: Apache
> > > > Cache-Control: max-age=60
> > > > Content-Type: text/plain
> > > > X-Cache: MISS from gentoo.comnet.uz
> > > > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > > > Connection: keep-alive
> > > > 
> > > > Mon Aug 22 16:31:26 UZT 2016
> > > > 
> > > > 
> > > > No problems with values above 60 seconds. For example:
> > > > 
> > > > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comne
> > > > t.lo
> > > > ca
> > > > l/
> > > > cgi-bin/hello.cgi" && date
> > > > HTTP/1.1 200 OK
> > > > Date: Mon, 22 Aug 2016 11:36:06 GMT
> > > > Server: Apache
> > > > Cache-Control: max-age=70
> > > > Content-Type: text/plain
> > > > X-Cache: MISS from gentoo.comnet.uz
> > > > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > > > Connection: keep-alive
> > > > 
> > > > Mon Aug 22 16:36:09 UZT 2016
> > > > 
> > > > ---
> > > > 
> > > > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comne
> > > > t.lo
> > > > ca
> > > > l/
> > > > cgi-bin/hello.cgi" && date
> > > > HTTP/1.1 200 OK
> > > > Date: Mon, 22 Aug 2016 11:36:06 GMT
> > > > Server: Apache
> > > > Cache-Control: max-age=70
> > > > Content-Type: text/plain
> > > > Age: 5
> > > > X-Cache: HIT from gentoo.comnet.uz
> > > > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > > > Connection: keep-alive
> > > > 
> > > > Mon Aug 22 16:36:11 UZT 2016
> > > > 
> > > > 
> > > > As you can see, time difference between origin server and
> > > > localhost
> > > > is
> > > > 3 seconds (UZT is +5 offset).
> 
> Your interpretation of the timestamps is flawed.
> 
> The message header contains the timestamp at which the server
> generated the message. The 'date' tool produces the timestamp at the
> time the transaction delivering it was completed.
> 
> All that is evident is that the transaction took ~5 seconds from
> message generation to completion of delivery. That may contain any
> amount of +N or -N difference in the three machines' clocks (server,
> proxy, and client).

I showed the output of the 'date' tool to confirm that Squid received a
fresh object. The client and Squid ran on the same machine.

Garri


Re: [squid-users] Objects with values below 60 second for Cache-Control max-age are not cached

2016-10-26 Thread Garri Djavadyan
On Wed, 2016-08-24 at 19:09 +0500, Garri Djavadyan wrote:
> On Mon, 2016-08-22 at 16:46 +0500, Garri Djavadyan wrote:
> > 
> > Hello Squid users,
> > 
> > Can anyone explain, why Squid doesn't cache the objects with max-
> > age
> > values below 60 seconds? For example:
> > 
> > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.lo
> > ca
> > l/
> > cgi-bin/hello.cgi" && date
> > HTTP/1.1 200 OK
> > Date: Mon, 22 Aug 2016 11:31:16 GMT
> > Server: Apache
> > Cache-Control: max-age=60
> > Content-Type: text/plain
> > X-Cache: MISS from gentoo.comnet.uz
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > Connection: keep-alive
> > 
> > Mon Aug 22 16:31:19 UZT 2016
> > 
> > ---
> > 
> > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.lo
> > ca
> > l/
> > cgi-bin/hello.cgi" && date
> > HTTP/1.1 200 OK
> > Date: Mon, 22 Aug 2016 11:31:23 GMT
> > Server: Apache
> > Cache-Control: max-age=60
> > Content-Type: text/plain
> > X-Cache: MISS from gentoo.comnet.uz
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > Connection: keep-alive
> > 
> > Mon Aug 22 16:31:26 UZT 2016
> > 
> > 
> > No problems with values above 60 seconds. For example:
> > 
> > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.lo
> > ca
> > l/
> > cgi-bin/hello.cgi" && date
> > HTTP/1.1 200 OK
> > Date: Mon, 22 Aug 2016 11:36:06 GMT
> > Server: Apache
> > Cache-Control: max-age=70
> > Content-Type: text/plain
> > X-Cache: MISS from gentoo.comnet.uz
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > Connection: keep-alive
> > 
> > Mon Aug 22 16:36:09 UZT 2016
> > 
> > ---
> > 
> > $ http_proxy="127.0.0.1:3128" curl --head "http://sandbox.comnet.lo
> > ca
> > l/
> > cgi-bin/hello.cgi" && date
> > HTTP/1.1 200 OK
> > Date: Mon, 22 Aug 2016 11:36:06 GMT
> > Server: Apache
> > Cache-Control: max-age=70
> > Content-Type: text/plain
> > Age: 5
> > X-Cache: HIT from gentoo.comnet.uz
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.20)
> > Connection: keep-alive
> > 
> > Mon Aug 22 16:36:11 UZT 2016
> > 
> > 
> > As you can see, time difference between origin server and localhost
> > is
> > 3 seconds (UZT is +5 offset).
> > 
> > Configuration is minimal:
> > 
> > # diff -u etc/squid.conf.default etc/squid.conf
> > --- etc/squid.conf.default  2016-08-12 17:21:48.877474780
> > +0500
> > +++ etc/squid.conf  2016-08-22 16:41:47.759766991 +0500
> > @@ -71,3 +71,5 @@
> >  refresh_pattern ^gopher:          1440    0%      1440
> >  refresh_pattern -i (/cgi-bin/|\?)    0    0%      0
> >  refresh_pattern .                    0   20%      4320
> > +
> > +cache_mem 64 MB
> > 
> > 
> > Thanks in advance!
> > Garri
> 
> Dear Squid developers,
> 
> Is the situation described above intended behaviour, or a bug which
> should be reported? Thanks.
> 
> Garri

Squid debugging led me to:
http://www.squid-cache.org/Doc/config/minimum_expiry_time/
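
A hedged squid.conf sketch of the resulting workaround (lowering the
60-second default; whether this is desirable depends on the site):

# Allow caching of objects that are fresh for less than 60 seconds.
minimum_expiry_time 0 seconds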

Garri


Re: [squid-users] Default state for the option generate-host-certificates

2016-10-28 Thread Garri Djavadyan

On 2016-10-28 18:39, Yuri Voinov wrote:

It seems a bug.



On 2016-10-28 19:53, Alex Rousskov wrote:

Is it a bug, a documentation error, or did I simply miss something?


It is a bug IMO. The documented intent sounds worth supporting to me.



Thanks. I've opened the report [1].

[1] http://bugs.squid-cache.org/show_bug.cgi?id=4627

Garri


Re: [squid-users] Squid communications proxy dilemma

2016-10-29 Thread Garri Djavadyan

On 2016-10-29 20:40, paul.greene...@verizon.net wrote:

I've inherited a squid proxy at work; I'm new to squid, so this is
still on the learning curve. Unfortunately no one else in the office
is very good with squid either, so I'm attempting to  be the resident
guru.

Our network is all in private IP address space. A MS WSUS server and a
Symantec Endpoint Protection Manager server need to get through the
squid proxy to get out to MS and Symantec respectively for their
updates. Some other servers are coming online in the near future that
will also need to get out to their respective vendors to get updates,
including a Redhat Satellite server.

For these WSUS and SEPM servers, they have to go through the proxy I'm
working with, through a Cisco firewall, upstream to a McAfee web
gateway, and through another gateway after that. After traffic gets
past that Cisco firewall, a different networking group is responsible
for any upstream configuration.

None of our other servers, except these specialty servers that need to
get out to their respective vendors for updates, have direct access to
the internet.

Our firewall guy says what he's seeing in his logs is that traffic
destined for port 443, after it goes through the proxy, is trying to
go straight to the vendor over the internet, rather than go through
the upstream McAfee gateway as required, and thus, the traffic is
getting dropped by the Cisco firewall. I did a packet capture test
with the McAfee gateway guy, and he confirmed that no traffic coming
from either the WSUS or the SEPM is reaching his gateway.

I thought this line in the squid.conf file should send traffic from
our proxy to the upstream McAfee gateway, but maybe I'm
misunderstanding the intent of the cache_peer parent parameter.

cache_peer   parent 8080  3130
proxy-only no-query no-netdb-exchange default login=username:password

(if placement of this cache_peer parameter matters, its currently near
the end of the squid.conf file)
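
For comparison, a minimal hedged sketch of forcing all requests through
an upstream parent ('upstream.example' is only a placeholder for the
gateway host that is blank above; this is a sketch, not the poster's
actual configuration):

cache_peer upstream.example parent 8080 3130 proxy-only no-query no-netdb-exchange default login=username:password
never_direct allow all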

As a test, I configured Internet Explorer on the WSUS server to use
the proxy for internet access. Without the proxy configured, IE
can't go anywhere except the local network. IE can hit http websites
(e.g. www.cnn.com) when it's configured to use the proxy, but not
https websites.

The Safe_ports and SSL_ports list is the same as the squid.conf
defaults.

This is squid 3.3 running on Redhat 7.

Any suggestions or pointers?

PG


Please use plain text (not HTML) for messages next time, as HTML hurts
people reading messages on the web archive [1]. Also, IMO, plain text
increases the chances that a message will be answered. Thanks.


[1] 
http://lists.squid-cache.org/pipermail/squid-users/2016-October/013308.html


Garri


[squid-users] flickr.com redirect error

2016-10-30 Thread Garri Djavadyan
>Can you test if the details at bug 4253:
>
>http://bugs.squid-cache.org/show_bug.cgi?id=4253#c13
>
>Helps you to resolve the issue?
>
>Eliezer

The above bug is not related to the issue.

The issue is actually on the origin servers' side. Details can be found
here:

http://bugs.squid-cache.org/show_bug.cgi?id=4537#c3

Garri


[squid-users] Squid doesn't use domain name as a request URL in access.log when splice at step 3 occurs

2016-11-05 Thread Garri Djavadyan

On 2016-11-05 09:22, Amos Jeffries wrote:

On 5/11/2016 6:56 a.m., Garri Djavadyan wrote:

On 2016-11-04 19:42, Amos Jeffries wrote:

On 5/11/2016 1:43 a.m., Garri Djavadyan wrote:

The configuration for splice at step 3:

# diff etc/squid.conf.default etc/squid.conf
73a74,78

https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem generate-host-certificates

acl StepSplice at_step SslBump3
ssl_bump splice StepSplice
ssl_bump peek all
logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %%[un

%Sh/%sni


The result:
1478256303.420    574 172.16.0.21 TCP_TUNNEL/200 6897 CONNECT 104.124.119.14:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org


Is it a bug or intended behavior? Thanks.



The person (Christos) who designed that behaviour is not reading this
mailing list very often.


Does that mean a bug report would have a better chance of getting noticed?


Sorry, squid-dev is the best place for that.

Amos


Thank you for the information!

Garri


[squid-users] Squid 4.0.16 still signed by old key

2016-11-05 Thread Garri Djavadyan

On 2016-11-02 06:43, Amos Jeffries wrote:

On 2/11/2016 8:31 a.m., Garri Djavadyan wrote:

According to the announcement [1], Squid 4.0.16 and later should be
signed by the new key B06884EDB779C89B044E64E3CD6DBF8EF3B17D3E, but it
is still signed by the old Squid 3 key
EA31CC5E9488E5168D2DCC5EB268E706FF5CF463:


$ gpg2 --verify squid-4.0.16.tar.xz.asc squid-4.0.16.tar.xz
gpg: Signature made Sun 30 Oct 2016 07:45:12 PM UZT
gpg:                using RSA key B268E706FF5CF463
gpg: Good signature from "Amos Jeffries <a...@treenet.co.nz>" 
[ultimate]

gpg: aka "Amos Jeffries (Squid 3.0 Release Key)
<squ...@treenet.co.nz>" [ultimate]
gpg: aka "Amos Jeffries (Squid 3.1 Release Key)
<squ...@treenet.co.nz>" [ultimate]
gpg: aka "Amos Jeffries <squ...@treenet.co.nz>" 
[ultimate]



[1]
http://lists.squid-cache.org/pipermail/squid-users/2016-October/013299.html


Darn. I missed one parameter in the script. Sorry.

New .asc files are now uploaded with the correct signatures. They
should be visible in the next few hours.

Amos


Thank you, it is OK now!

Garri


Re: [squid-users] squid HIT and Cisco ACL

2016-11-07 Thread Garri Djavadyan

On 2016-11-07 20:11, Juan C. Crespo R. wrote:

Hi, Thanks for your response and help


1. Cache: Version 3.5.19
Service Name: squid
configure options:  '--prefix=/usr/local/squid'
'--enable-storeio=rock,diskd,ufs,aufs'
'--enable-removal-policies=lru,heap' '--disable-pf-transparent'
'--enable-ipfw-transparent' '--with-large-files'
'--enable-delay-pools' '--localstatedir=/usr/local/squid/var/run'
'--disable-select' '--enable-ltdl-convenience' '--enable-zph-qos'

2. The only intermediate device its a Cisco 3750G12 switch with no
policy or special configuration between the Squid Box and the Cisco
CMTS.


If 'mls qos' is enabled on your Catalyst, it will clear any QoS marks
by default. If that is not the case, you can mirror Squid's traffic
(monitor session on the Catalyst) to a packet analyzer to check whether
the QoS marks are applied as expected.
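
For illustration, a hedged IOS sketch (assuming a Catalyst 3750 and a
placeholder interface name) of telling the switch to keep DSCP marks
arriving from the Squid box when 'mls qos' is enabled globally:

interface GigabitEthernet1/0/1
 description Link towards the Squid box
 mls qos trust dscp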



Garri


Re: [squid-users] squid HIT and Cisco ACL

2016-11-07 Thread Garri Djavadyan
On Mon, 2016-11-07 at 06:25 -0400, Juan C. Crespo R. wrote:
> Good Morning Guys
> 
> 
>  I've been trying to make a few ACL to catch and then improve the
> BW 
> of the HITS sent from my Squid Box to my CMTS and I can't find any
> way 
> to doit
> 
> 
> Squid.conf: qos_flows tos local-hit=0x30
> 
> Cisco CMTS: ip access-list extended JC
> 
> Int giga0/1
> 
> ip address 172.25.25.30 255.255.255.0
> 
> ip access-group JC in
> 
> show access-list JC
> 
>  10 permit ip any any tos 12
>  20 permit ip any any dscp af12
>  30 permit ip any any (64509 matches)
> 
> Thanks

Hi,

1. What version of Squid are you using? Also, please provide configure
options (squid -v).

2. Are you sure that intermediate devices don't clear DSCP bits before
reaching the router?


I've tested the feature using 4.0.16-20161104-r14917 with almost
default configure options:

# sbin/squid -v
Squid Cache: Version 4.0.16-20161104-r14917
Service Name: squid
configure options:  '--prefix=/usr/local/squid40' '--disable-
optimizations' '--with-openssl' '--enable-ssl-crtd'


And with almost default configuration:

# diff etc/squid.conf.default etc/squid.conf
76a77
> qos_flows tos local-hit=0x30


Using tcpdump I see that HIT reply has DSCP AF12:

17:14:56.837675 IP (tos 0x30, ttl 64, id 41134, offset 0, flags [DF],
proto TCP (6), length 2199)
127.0.0.1.3128 > 127.0.0.1.42848: Flags [P.], cksum 0x068c
(incorrect -> 0x478b), seq 1:2148, ack 161, win 350, options
[nop,nop,TS val 607416387 ecr 607416387], length 2147


Re: [squid-users] No valid signing SSL certificate configured for HTTPS_port

2016-11-05 Thread Garri Djavadyan

On 2016-11-05 21:24, Konrad Kaluszynski wrote:

Hi All,

My goal is to configure a reverse proxy for Outlook Anywhere clients
using squid.
http://wiki.squid-cache.org/ConfigExamples/Reverse/ExchangeRpc

This will replace existing TMG that my client is currently using.

However, when I run squid I get an error  "No valid signing SSL
certificate configured for HTTPS_port".

Before, I was able to get OWA and HTTPS traffic using NGINX as reverse
proxy but was getting connection errors when trying to use
OutlookAnywhere.

So now I have been testing Squid but cannot get past the certificate
installation which was painless under Nginx.

Configuration is based on an article below:

https://sysadminfixes.wordpress.com/2013/01/25/exchanging-squids/

I have been trying for several days now without much success to
configure SSL certificate on my squid server.

Getting the " ...no valid signing certificate" every time.

I found few posts saying that it was not possible to use SSL
certificates signed by public CA and self-signed certs must be used.

Can anyone confirm if this is a case?

Logs and config files below.

My domain name has been replaced with _contoso.com [1]_ for
confidentiality sake.

squid server- srv-_squid.contoso.com [2]_ / 3.3.3.201

uname -a
Linux srv-squid 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

exchange server - exch.contoso.com [3] / 10.2.2.30

SSL certificate:

obtained from StartSSL for mail.contoso.com [4]

SQUID.CONF

 START

visible_hostname mail.contoso.com [4]
redirect_rewrites_host_header off
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
#logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %h"
"%{User-Agent}>h" %Ss:%Sh ###this causes an error
access_log /var/log/squid3/access.log
cache_log /var/log/squid3/cache.log
cache_store_log none
cache_mgr nomail_address_given
forwarded_for transparent
### ignore_expect_100 ## not available in version 3.5
ssl_unclean_shutdown on
### The most important line
 ### "cert" should contain Exchange certificate and key
 ### "sslproxy_cafile" contains CA of root servers - StartSSL ?!
https_port mail.contoso.com:443 [5] accel
cert=/home/kk/ssl/cert-mail/mail.contoso.com.pem
defaultsite=mail.contoso.com [4]
key=/home/kk/ssl/cert-mail/mail.contoso.com.key

cache_peer exch.kk1.tech parent 443 0 proxy-only no-digest no-query
originserver front-end-https=on login=PASS sslflags=DONT_VERIFY_PEER
connection-auth=on name=Exchange

acl exch_url url_regex -i mail.contoso.com/owa [6]
acl exch_url url_regex -i mail.contoso.com/microsoft-server-activesync
[7]
acl exch_url url_regex -i mail.contoso.com/rpc [8]

cache_peer_access Exchange allow exch_url
cache_peer_access Exchange deny all
never_direct allow exch_url
http_access allow exch_url
http_access deny all
miss_access allow exch_url
miss_access deny all
deny_info https://mail.contoso.com/owa all

###END

ERROR

cache.log
2016/11/05 08:52:13| storeDirWriteCleanLogs: Starting...
2016/11/05 08:52:13|   Finished.  Wrote 0 entries.
2016/11/05 08:52:13|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: No valid signing SSL certificate configured for HTTPS_port
3.3.3.201:443 [9]
Squid Cache (Version 3.5.22): Terminated abnormally.
CPU Usage: 0.004 seconds = 0.000 user + 0.004 sys
Maximum Resident Size: 46624 KB
Page faults with physical i/o: 0

SQUID - compiled from sources

squid -v

Squid Cache: Version 3.5.22
Service Name: squid
configure options:  '--prefix=/usr' '--localstatedir=/var'
'--libexecdir=/lib/squid3' '--srcdir=.' '--datadir=/share/squid3'
'--sysconfdir=/etc/squid3' '--with-logdir=/var/log'
'--with-pidfile=/var/run/squid3.pid' '--enable-inline'
'--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd'
'--enable-removal-policies=lru,heap' '--enable-delay-pools'
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=' '--enable-arp-acl' '--enable-esi'
'--enable-ssl' '--enable-zph-qos' '--enable-wccpv2'
'--disable-translation' '--with-logdir=/var/log/squid3'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--with-ssl' '--disable-ipv6'
'--with-openssl' --enable-ltdl-convenience

Appreciate any feedback

Cheers

Konrad



Links:
--
[1] http://contoso.com
[2] http://squid.contoso.com
[3] http://exch.contoso.com
[4] http://mail.contoso.com
[5] http://mail.contoso.com:443
[6] http://mail.contoso.com/owa
[7] http://mail.contoso.com/microsoft-server-activesync
[8] http://mail.contoso.com/rpc
[9] http://3.3.3.201:443


Hi,

Sorry if my questions appear naive, but:

1. Does your certificate signed by the StartSSL CA
(/home/kk/ssl/cert-mail/mail.contoso.com.pem) correspond to your
private key (/home/kk/ssl/cert-mail/mail.contoso.com.key)?

Re: [squid-users] No valid signing SSL certificate configured for HTTPS_port

2016-11-05 Thread Garri Djavadyan

On 2016-11-05 22:09, Garri Djavadyan wrote:

1. Does your certificate signed by the StartSSL CA
(/home/kk/ssl/cert-mail/mail.contoso.com.pem) correspond to your
private key (/home/kk/ssl/cert-mail/mail.contoso.com.key)?


By 'correspond' I mean: was the CSR for StartSSL generated using
exactly the same key [/home/kk/ssl/cert-mail/mail.contoso.com.key]?


You can check whether the certificate and private key correspond to
each other by inspecting the modulus. The moduli should be identical.
For example, you can use the following openssl commands:


# openssl x509 -in /home/kk/ssl/cert-mail/mail.contoso.com.pem -modulus -noout
# openssl rsa -in /home/kk/ssl/cert-mail/mail.contoso.com.key -modulus -noout
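
A small optional convenience (same hypothetical paths as above) is to
hash the modulus output so the two values are easy to compare by eye:

# openssl x509 -noout -modulus -in /home/kk/ssl/cert-mail/mail.contoso.com.pem | openssl md5
# openssl rsa -noout -modulus -in /home/kk/ssl/cert-mail/mail.contoso.com.key | openssl md5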



Garri


Re: [squid-users] No valid signing SSL certificate configured for HTTPS_port

2016-11-05 Thread Garri Djavadyan

On 2016-11-05 23:10, konradka wrote:

Hi Garri,

Thanks for your responses mate !

I did not realize that the squid was compiled with the proxy user. Well
spotted!

It looks like a permissions issue, but the squid error message is not
giving away any more details.

I will configure debug_options to see what is failing exactly.

The modulus check is a good idea too, so I will get this checked and
post the results.


Actually, there should not be problems with DAC rights for the user 'proxy'; 
I found that Squid reads the keys as root. But there may be problems 
with MAC rights for Squid, if any are enabled by default. As you use Ubuntu, 
you should check the AppArmor logs for indications of problems.

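For example (a rough sketch, assuming a default Ubuntu setup; adjust the
log file name if your syslog layout differs):

# aa-status | grep -i squid
# dmesg | grep -i 'apparmor.*denied'
# grep -i 'apparmor="DENIED".*squid' /var/log/syslog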

The same error may also appear if the path or filename is misspelled.


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error DiskThreadsDiskFile::openDone: (2) No such file or directory

2016-10-19 Thread Garri Djavadyan
On Tue, 2016-10-18 at 06:37 -0700, erdosain9 wrote:
> Hi.
> squid 3.5.20
> 
> Im having a lot of these in cache.log
> 
> 2016/10/18 10:36:11 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:11 kid1|   /var/spool/squid/00/92/92E9
> 2016/10/18 10:36:14 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:14 kid1|   /var/spool/squid/00/AA/AA46
> 2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA48
> 2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA49
> 2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA4B
> 2016/10/18 10:36:16 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:16 kid1|   /var/spool/squid/00/AA/AA4C
> 2016/10/18 10:36:20 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:20 kid1|   /var/spool/squid/00/AA/AA60
> 2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA67
> 2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA66
> 2016/10/18 10:36:21 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:21 kid1|   /var/spool/squid/00/AA/AA65
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA10
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA8C
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA98
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA18
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA93
> 2016/10/18 10:36:33 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:33 kid1|   /var/spool/squid/00/AA/AA9A
> 2016/10/18 10:36:34 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> 2016/10/18 10:36:34 kid1|   /var/spool/squid/00/70/704B
> 
> What can i do?? thanks

Hi,

You may find an answer for your case here:

* http://lists.squid-cache.org/pipermail/squid-users/2014-December/0012
39.html
* http://lists.squid-cache.org/pipermail/squid-users/2015-September/005
502.html
* http://bugs.squid-cache.org/show_bug.cgi?id=4367

In my case I get the errors after Squid's crash.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] CentOS 6.x and SELinux enforcing with Squid 3.5.x (thanks to Eliezer Croitoru for the RPM)

2016-10-18 Thread Garri Djavadyan
On Tue, 2016-10-18 at 13:02 +0200, Walter H. wrote:
> Hello,
> 
> just in case anybody wants to run Squid 3.5.x on CentOS
> with SELinux enforcing,
> 
> here is the semodule
> 
> 
> module squid_update 1.0;
> 
> require {
> type squid_conf_t;
> type squid_t;
> type var_t;
> class file { append open read write getattr lock
> execute_no_trans };
> }
> 
> #= squid_t ==
> allow squid_t squid_conf_t:file execute_no_trans;
> allow squid_t var_t:file { append open read write getattr lock };
> 
> 
> and do the following:
> 
> checkmodule -M -m -o squid_update.mod squid_update.tt
> semodule_package -o squid_update.pp -m squid_update.mod
> semodule -i squid_update.pp

Hi,

Have you tried using the default policy and relabeling the target dirs/files
with the types dedicated to squid? For example:

# semanage fcontext -l | grep squid
/etc/squid(/.*)?                  all files      system_u:object_r:squid_conf_t:s0
/var/run/squid.*                  all files      system_u:object_r:squid_var_run_t:s0
/var/log/squid(/.*)?              all files      system_u:object_r:squid_log_t:s0
/usr/share/squid(/.*)?            all files      system_u:object_r:squid_conf_t:s0
/var/cache/squid(/.*)?            all files      system_u:object_r:squid_cache_t:s0
/var/spool/squid(/.*)?            all files      system_u:object_r:squid_cache_t:s0
/usr/sbin/squid                   regular file   system_u:object_r:squid_exec_t:s0
/etc/rc\.d/init\.d/squid          regular file   system_u:object_r:squid_initrc_exec_t:s0
/usr/lib/squid/cachemgr\.cgi      regular file   system_u:object_r:httpd_squid_script_exec_t:s0
/usr/lib64/squid/cachemgr\.cgi    regular file   system_u:object_r:httpd_squid_script_exec_t:s0

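For example, a hypothetical custom cache directory (/data/squid-cache is
only an illustration) can be mapped to the default squid type and relabeled
like this:

# semanage fcontext -a -t squid_cache_t "/data/squid-cache(/.*)?"
# restorecon -Rv /data/squid-cache
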
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Lots of "Vary object loop!"

2016-10-20 Thread Garri Djavadyan
On Thu, 2016-10-20 at 13:07 +0200, Anton Kornexl wrote:
> Hello,
>  
> i also had many of these messages in cache.log
>  
> we do filtering with squidguard (redirect http://www..xx )
>  
> It is possible that the same url is redirected for one user but not
> for another (different filter rules per user)
>  
> Are the redirected objects saved in cache:dir ?
> Can i control which variables are used for vary checking?
>  
> Now i have disabled caching and the messages are gone but i would
> like to reactivate caching.
> 
> Anton Kornexl

Hi,

There are many possible reasons that have been discussed on the list. For example:

http://lists.squid-cache.org/pipermail/squid-users/2015-August/005132.html

Also, if you use collapsed_forwarding, you may be affected by the false
positives described here:

http://bugs.squid-cache.org/show_bug.cgi?id=4619

Garri 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 19:03 +1300, Amos Jeffries wrote:
> On 24/10/2016 6:28 a.m., gar...@comnet.uz wrote:
> > 
> > On 2016-10-23 18:31, Amos Jeffries wrote:
> > > 
> > > On 23/10/2016 2:32 a.m., garryd wrote:
> > > > 
> > > > Since I started use Squid, it's configuration always RFC
> > > > compliant by
> > > > default, _but_ there were always knobs for users to make it
> > > > HTTP
> > > > violent. It was in hands of users to decide how to handle a web
> > > > resource. Now it is not always possible, and the topic is an
> > > > evidence.
> > > > For example, in terms of this topic, users can't violate this
> > > > RFC
> > > > statement [1]:
> > > > 
> > > >    A Vary field value of "*" signals that anything about the
> > > > request
> > > >    might play a role in selecting the response representation,
> > > > possibly
> > > >    including elements outside the message syntax (e.g., the
> > > > client's
> > > >    network address).  A recipient will not be able to determine
> > > > whether
> > > >    this response is appropriate for a later request without
> > > > forwarding
> > > >    the request to the origin server.  A proxy MUST NOT generate
> > > > a Vary
> > > >    field with a "*" value.
> > > > 
> > > > [1] https://tools.ietf.org/html/rfc7231#section-7.1.4
> > > 
> > > 
> > > Please name the option in any version of Squid which allowed
> > > Squid to
> > > cache those "Vary: *" responses.
> > > 
> > > No such option ever existed. For the 20+ years Vary has existed
> > > Squid
> > > has behaved in the same way it does today. For all that time you
> > > did not
> > > notice these responses.
> > 
> > You are absolutely right, but there were not such abuse vector in
> > the
> > past (at least in my practice). There were tools provided by devs
> > to
> > admins to protect against trending abuse cases.
> 
> What trend? There is exactly one mentioned URL that I'm aware of, the
> Chrome browser download URL. I've posted two reasons why Chrome uses
> the
> Vary:* header. Just opinions of mine, but formed after actual
> discussions with the Chrome developers some years back.
> 
> 
> [I very much dislike writing this. But you seem to have been sucked
> in
> and deserve to know the history.]
> 
> All the fuss that is going on AFAICS was started by Yuri. His comment
> history here and in bugzilla, and in private responses range from
> non-compromising "cache everything no matter what - do what I say,
> now!"
> (repeatedy in unrelated bugzilla reports), "f*ck the RFCs and anyone
> following them, just store everything I dont care about what happens"
> (this mornings post), to personal attacks against anyone who mentions
> the previous stance might have problems (all the "Squid developers
> believe/say/..." comments - none of which match what the team we have
> actually said to him or believe).
> 
> There is one other email address which changes its name occasionally
> and
> posts almost exactly the same words as Yuri's. So it looks to me as
> Yuri
> and some sock puppets performing a campaign to spread lies and FUD
> about
> Squid and hurt the people doing work on it.
> 
> Not exactly a good way to get people to do things for free. But it
> seems
> to have worked on getting you and a few others now doing the coding
> part
> for him at no cost, and I have now wasted time responding to you and
> thinking of a solution for it that might get accepted for merge.
> 
> 
> This particular topic is not the first to have such behaviour by
> Yuri.
> There have been other things where someone made a mistake (overlooked
> something) and all hell full of insults broke loose at them. And
> several
> other cases where missing features in Squid did not get instant
> obedience to quite blunt and insulting demands. Followed by weeks of
> insults until the bug was fixed by other people - then suddenly
> polite
> Yuri comes back overnight.
> 
> 
> As a developer, I personally decided not to write the requested code.
> Not in the way demanded. This seems to have upset Yuri who has taken
> to
> insulting me and the rest of the dev team as a whole. I'm not sure if
> he
> is trolling to intentionally cause the above mentioned effects, or
> really in need of medical assistance to deal with work related
> stress.
> 
> [/history]
> 
> 
> > 
> > So, the question arised,
> > what changed in Squid development policy?
> 
> In policy: Nothing I'm aware of in the past 10 years.
> 
> What changed on the Internet? a new bunch of RFCs came out, the
> server
> and clients Squid talks to all got updated to follow those documents
> more closely.
> 
> What changed in Squid? the dev team have been slowly adding the new
> abilities to Squid. One by one, its only ~90% (maybe less) compliant
> withe the MUST conditions, not even close to that on the SHOULDs,
> MAYs,
> and implied processing abilities.
> 
> 
> What do you think should happen to Squid when all the software it
> talks
> to speaks and expects what the RFCs say they should expect from
> 

Re: [squid-users] CentOS 6.x and SELinux enforcing with Squid 3.5.x (thanks to Eliezer Croitoru for the RPM)

2016-10-18 Thread Garri Djavadyan
On Tue, 2016-10-18 at 14:56 +0200, Walter H. wrote:
> with the 3.1.x there is no problem with
> 
> url_rewrite_program /etc/squid/url-rewrite-program.pl
> url_rewrite_children 8
> url_rewrite_host_header on
> url_rewrite_access allow all
> 
> but with the 3.5.x there is access denied (shown in
> /var/log/audit/audit.log)
> and squid doesn't start;
> 
> specific to the 3.5.x release, I added a certificate validator
> helper,
> which has also problems ...
> 
> 
> Greetings,
> Walter

Hi Walter,

Have you tried to move helpers to '/usr/lib64/squid/' and ensure that
the label for them is 'lib_t'?

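Roughly, using the helper path from your earlier config (verify the
resulting label afterwards with ls -Z, and remember to update
url_rewrite_program in squid.conf to point at the new location):

# mv /etc/squid/url-rewrite-program.pl /usr/lib64/squid/
# restorecon -v /usr/lib64/squid/url-rewrite-program.pl
# ls -Z /usr/lib64/squid/url-rewrite-program.pl
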
Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 21:05 +0500, Garri Djavadyan wrote:
> On 2016-10-24 19:40, Garri Djavadyan wrote:
> > 
> > So, the big G sends 304 only to HEAD requests, although it is a
> > violation [1], AIUI:
> > 
> > curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT'
> > -H
> > 'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-
> > chro
> > me-stable_current_amd64.deb
> > HTTP/1.1 304 Not Modified
> > ETag: "101395"
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 14:36:32 GMT
> > Connection: keep-alive
> > 
> > ---
> > 
> > $ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09
> > GMT'
> > -H 'If-None-Match: "101395"' http://dl.google.com/linux/direct/goog
> > le-c
> > hrome-stable_current_amd64.deb > /dev/null
> > > 
> > > GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
> > > Host: dl.google.com
> > > User-Agent: curl/7.50.3
> > > Accept: */*
> > > If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
> > > If-None-Match: "101395"
> > > 
> > < HTTP/1.1 200 OK
> > < Accept-Ranges: bytes
> > < Content-Type: application/x-debian-package
> > < ETag: "101395"
> > < Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > < Server: downloads
> > < Vary: *
> > < X-Content-Type-Options: nosniff
> > < X-Frame-Options: SAMEORIGIN
> > < X-Xss-Protection: 1; mode=block
> > < Date: Mon, 24 Oct 2016 14:38:19 GMT
> > < Content-Length: 45532350
> > < Connection: keep-alive
> > 
> > [1] https://tools.ietf.org/html/rfc7234#section-4.3.5
> 
> Actually I mixed SHOULD agains MUST. The RFC 7231, section 4.3.2
> states 
> [1]:
> ...
> The server SHOULD send the same header fields in response to a HEAD 
> request as it would have sent if
> the request had been a GET, except that the payload header fields 
> (Section 3.3) MAY be omitted.
> ...
> 
> So, big G does not follow the recommendation, but does not violate
> the 
> standard.
> 
> [1] https://tools.ietf.org/html/rfc7231#section-4.3.2
> 
> Garri

I had overlooked that the statement applies to header _fields_, not to the
reply code. The full paragraph states:

   The HEAD method is identical to GET except that the server MUST NOT
   send a message body in the response (i.e., the response terminates
   at the end of the header section).  The server SHOULD send the same
   header fields in response to a HEAD request as it would have sent if
   the request had been a GET, except that the payload header fields
   (Section 3.3) MAY be omitted.  This method can be used for obtaining
   metadata about the selected representation without transferring the
   representation data and is often used for testing hypertext links  
   for validity, accessibility, and recent modification.

Nevertheless, the last sentence in the above excerpt uses the word 'can',
as does the following excerpt from section 4.3.5 [1]:

   A response to the HEAD method is identical to what an equivalent
   request made with a GET would have been, except it lacks a body.
   This property of HEAD responses can be used to invalidate or update
   a cached GET response if the more efficient conditional GET request
   mechanism is not available (due to no validators being present in  
   the stored response) or if transmission of the representation body  
   is not desired even if it has changed.

So, a HEAD request _can_ be used as a reliable source for object
revalidation. How should that 'can' be interpreted? RFC 2119 [2] does
not specify that.


AIUI, that exact case leaves two choices:

* Implement something like 'revalidate_using_head [[!]acl]'
* Contact Google and inform them about the behavior

The former is an RFC-compliant way to solve that particular case, but it
requires costly development effort and may become useless after some time.
The latter may break HEAD revalidation as well, but gives hope that the
GET conditionals may be fixed.

[1] https://tools.ietf.org/html/rfc7234#section-4.3.5
[2] https://tools.ietf.org/html/rfc2119
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:
> On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
> > Nevertheless, the topic surfaced new details regarding the Vary and
> > I
> > tried conditional requests on same URL (Google Chrome) from
> > different
> > machines/IPs. Here results:
> > 
> > $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
> > 08:29:09
> > GMT" https://dl.google.com/linux/direct/google-chrome-stable_curren
> > t_am
> > d64.deb
> > HTTP/1.1 304 Not Modified
> > Etag: "101395"
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 08:53:44 GMT
> > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > 
> > 
> > 
> > $ curl --head --header 'If-None-Match: "101395"' https://dl.google.
> > com/
> > linux/direct/google-chrome-stable_current_amd64.deb 
> > HTTP/1.1 304 Not Modified
> > Etag: "101395"
> > Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > Server: downloads
> > Vary: *
> > X-Content-Type-Options: nosniff
> > X-Frame-Options: SAMEORIGIN
> > X-Xss-Protection: 1; mode=block
> > Date: Mon, 24 Oct 2016 08:54:18 GMT
> > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > 
> 
> Sweet! Far better than I was expecting. That means this patch should
> work:
> 
> === modified file 'src/http.cc'
> --- src/http.cc 2016-10-08 22:19:44 +
> +++ src/http.cc 2016-10-24 10:50:16 +
> @@ -593,7 +593,7 @@
>  while (strListGetItem(, ',', , , )) {
>  SBuf name(item, ilen);
>  if (name == asterisk) {
> -vstr.clear();
> +vstr = asterisk;
>  break;
>  }
>  name.toLower();
> @@ -947,6 +947,12 @@
>  varyFailure = true;
>  } else {
>  entry->mem_obj->vary_headers = vary;
> +
> +// RFC 7231 section 7.1.4
> +// Vary:* can be cached, but has mandatory revalidation
> +static const SBuf asterisk("*");
> +if (vary == asterisk)
> +EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
>  }
>  }
> 
> 
> Amos

I have applied the patch. Below my results.

In access.log I see:

1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786 GET http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb - HIER_DIRECT/173.194.222.136 application/x-debian-package

In the packet capture, I see that Squid doesn't use a conditional request:

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
User-Agent: curl/7.50.3
Accept: */*
Host: dl.google.com
Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan
On Tue, 2016-10-25 at 01:22 +1300, Amos Jeffries wrote:
> On 25/10/2016 12:32 a.m., Garri Djavadyan wrote:
> > 
> > On Mon, 2016-10-24 at 23:51 +1300, Amos Jeffries wrote:
> > > 
> > > On 24/10/2016 9:59 p.m., Garri Djavadyan wrote:
> > > > 
> > > > Nevertheless, the topic surfaced new details regarding the Vary
> > > > and
> > > > I
> > > > tried conditional requests on same URL (Google Chrome) from
> > > > different
> > > > machines/IPs. Here results:
> > > > 
> > > > $ curl --head --header "If-Modified-Since: Thu, 22 Oct 2016
> > > > 08:29:09
> > > > GMT" https://dl.google.com/linux/direct/google-chrome-stable_cu
> > > > rren
> > > > t_am
> > > > d64.deb
> > > > HTTP/1.1 304 Not Modified
> > > > Etag: "101395"
> > > > Server: downloads
> > > > Vary: *
> > > > X-Content-Type-Options: nosniff
> > > > X-Frame-Options: SAMEORIGIN
> > > > X-Xss-Protection: 1; mode=block
> > > > Date: Mon, 24 Oct 2016 08:53:44 GMT
> > > > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > > > 
> > > > 
> > > > 
> > > > $ curl --head --header 'If-None-Match: "101395"' https://dl.goo
> > > > gle.
> > > > com/
> > > > linux/direct/google-chrome-stable_current_amd64.deb 
> > > > HTTP/1.1 304 Not Modified
> > > > Etag: "101395"
> > > > Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
> > > > Server: downloads
> > > > Vary: *
> > > > X-Content-Type-Options: nosniff
> > > > X-Frame-Options: SAMEORIGIN
> > > > X-Xss-Protection: 1; mode=block
> > > > Date: Mon, 24 Oct 2016 08:54:18 GMT
> > > > Alt-Svc: quic=":443"; ma=2592000; v="36,35,34"
> > > > 
> > > 
> > > Sweet! Far better than I was expecting. That means this patch
> > > should
> > > work:
> > > 
> > > === modified file 'src/http.cc'
> > > --- src/http.cc 2016-10-08 22:19:44 +
> > > +++ src/http.cc 2016-10-24 10:50:16 +
> > > @@ -593,7 +593,7 @@
> > >  while (strListGetItem(, ',', , , )) {
> > >  SBuf name(item, ilen);
> > >  if (name == asterisk) {
> > > -vstr.clear();
> > > +vstr = asterisk;
> > >  break;
> > >  }
> > >  name.toLower();
> > > @@ -947,6 +947,12 @@
> > >  varyFailure = true;
> > >  } else {
> > >  entry->mem_obj->vary_headers = vary;
> > > +
> > > +// RFC 7231 section 7.1.4
> > > +// Vary:* can be cached, but has mandatory
> > > revalidation
> > > +static const SBuf asterisk("*");
> > > +if (vary == asterisk)
> > > +EBIT_SET(entry->flags, ENTRY_REVALIDATE_ALWAYS);
> > >  }
> > >  }
> > > 
> > > 
> > > Amos
> > 
> > I have applied the patch. Below my results.
> > 
> > In access.log I see:
> > 
> > 1477307991.672  49890 127.0.0.1 TCP_REFRESH_MODIFIED/200 45532786
> > GET h
> > ttp://dl.google.com/linux/direct/google-chrome-
> > stable_current_amd64.deb
> >  - HIER_DIRECT/173.194.222.136 application/x-debian-package
> > 
> > In packet capture, I see that Squid doesn't use conditional
> > request:
> > 
> > GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
> > User-Agent: curl/7.50.3
> > Accept: */*
> > Host: dl.google.com
> > Via: 1.1 gentoo.comnet.uz (squid/3.5.22)
> > X-Forwarded-For: 127.0.0.1
> > Cache-Control: max-age=259200
> > Connection: keep-alive
> 
> Hmmm. That looks to me like the new patch is working (log says
> REFRESH
> being done) but there is some bug in the revalidate logic not adding
> the
> required headers.
>  If thats right, then that bug might be causing other revalidate
> traffic
> to have major /200 issues.
> 
> I'm in need of sleep right now. If you can grab a ALL,9 cache.log
> trace
> and mail it to me I will take a look in the morning. Otherwise I will
> try to replicate the case myself and track it down in the next few
> days.
> 
> Amos

Sorry, I probably analysed the header of first request. I tried again,
and found that Squid sends the header corr

Re: [squid-users] Caching Google Chrome googlechromestandaloneenterprise64.msi

2016-10-24 Thread Garri Djavadyan

On 2016-10-24 19:40, Garri Djavadyan wrote:

So, the big G sends 304 only to HEAD requests, although it is a
violation [1], AIUI:

curl --head -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT' -H
'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-chro
me-stable_current_amd64.deb
HTTP/1.1 304 Not Modified
ETag: "101395"
Server: downloads
Vary: *
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Date: Mon, 24 Oct 2016 14:36:32 GMT
Connection: keep-alive

---

$ curl --verbose -H 'If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT'
-H 'If-None-Match: "101395"' http://dl.google.com/linux/direct/google-c
hrome-stable_current_amd64.deb > /dev/null

GET /linux/direct/google-chrome-stable_current_amd64.deb HTTP/1.1
Host: dl.google.com
User-Agent: curl/7.50.3
Accept: */*
If-Modified-Since: Thu, 20 Oct 2016 08:29:09 GMT
If-None-Match: "101395"


< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Type: application/x-debian-package
< ETag: "101395"
< Last-Modified: Thu, 20 Oct 2016 08:29:09 GMT
< Server: downloads
< Vary: *
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Xss-Protection: 1; mode=block
< Date: Mon, 24 Oct 2016 14:38:19 GMT
< Content-Length: 45532350
< Connection: keep-alive

[1] https://tools.ietf.org/html/rfc7234#section-4.3.5


Actually, I mixed up SHOULD and MUST. RFC 7231, section 4.3.2 states
[1]:

...
The server SHOULD send the same header fields in response to a HEAD 
request as it would have sent if
the request had been a GET, except that the payload header fields 
(Section 3.3) MAY be omitted.

...

So, big G does not follow the recommendation, but does not violate the 
standard.


[1] https://tools.ietf.org/html/rfc7231#section-4.3.2

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP : Squid sending private IP in PASV response

2016-10-20 Thread Garri Djavadyan
On Thu, 2016-10-20 at 14:07 +, Gael Ancelin wrote:
> Hello,
>  
> I have searched in maillist archives but have not seen so far someone
> with the
> same problem.
>  
> My Squid's objective is to foward FTP & HTTP requests to a distant
> server.
>  
> Squid is running on CentOS 7.2.
> uname -r : 3.10.0-327.28.3.el7.x86_64
> squid -v : Version 3.5.20
>  
>  
> I don't have the choice to use anything but Squid, and I can't use
> firewalling
> rules for forwarding directly ports.
>  
>  
> WAN_1stPublic_IP [FIREWALL_1] ---
> --[FTP_SERVER]
>  
> WAN_2ndPublic_IP ---[FIREWALL_2]--[SQUID]-[VPN]-[FTP_SERVER]
>  
>  
> Here's my problem :
> When I'm connecting in FTP on the 2nd Public IP, everything is ok,
> but when I
> want to switch to passive mode, Squid is sending his own private ip
> instead of
> the 2nd public IP. So the connexion timed out.
>  
>  
> ftp> open 
> Connected to  ().
> 220 Service ready
> Name (:): 
> ---> USER 
> 331 Please specify the password.
> Password:
> ---> PASS 
> 230 Login successful.
> ---> SYST
> 215 UNIX Type: L8
> Remote system type is UNIX.
> Using binary mode to transfer files.
> ftp> pwd
> ---> PWD
> 257 "/"
> ftp> ls
> ---> PASV
> 227 Entering Passive Mode (,).
> ftp: connect: Connexion terminée par expiration du délai d'attente
>  
>  
> Is there a way to "force" Squid to resend his public IP ?
> I'm thinking of something like "pasv_address" option in vsftpd, but
> for squid.
>  
> Gaël Ancelin

Hi,

Can you provide the configuration options related to FTP?
I can't reproduce the problem using the following method:

# diff etc/squid.conf.default etc/squid.conf
73a74,75
> 
> ftp_port 21

---

$ ftp 127.0.0.1
Connected to 127.0.0.1.
220 Service ready
Name (127.0.0.1:user): anonym...@mirror.yandex.ru
530 Must login first
530 Must login first
SSL not available
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.
ftp> ls
227 Entering Passive Mode (127,0,0,1,229,181).
150 Here comes the directory listing.
drwxr-xr-x   19 ftp  ftp  4096 Oct 21 05:00 altlinux
...
drwxr-xr-x   11 ftp  ftp  4096 Oct 21 03:16 ubuntu-releases
226 Transfer complete

---

The example showed that Squid returned the IP address of the interface
facing the client, not the IP address of my interface facing the
origin.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP Outgoing Address ACL Problem

2016-11-11 Thread Garri Djavadyan

On 2016-11-11 21:51, jarrett+squid-us...@jarrettgraham.com wrote:

Can anyone point out what I'm doing wrong in my config?

Squid config:
https://bpaste.net/show/796dda70860d

I'm trying to use ACLs to direct incoming traffic on assigned ports to
assigned outgoing addresses.  But, squid uses the first IP address
assigned to the interface not listed in the config instead.

IP/Ethernet Interface Assignment:
https://bpaste.net/show/5cf068a4ce9a


Hi,

Your ACLs ipv4-{1..10} are invalid; you combined the ACL types 'myportname'
and 'src' together. I believe you want:


acl ipv4-1 localport 3128
acl ipv4-2 localport 3129
acl ipv4-3 localport 3130
acl ipv4-4 localport 3131
acl ipv4-5 localport 3132
acl ipv4-6 localport 3133
acl ipv4-7 localport 3134
acl ipv4-8 localport 3135
acl ipv4-9 localport 3136
acl ipv4-10 localport 3137

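Each of those ACLs can then be paired with the outgoing address you want,
if such lines are not already present; the 192.0.2.x addresses below are
placeholders for your real ones:

tcp_outgoing_address 192.0.2.11 ipv4-1
tcp_outgoing_address 192.0.2.12 ipv4-2
# ... and so on for ipv4-3 .. ipv4-10
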
HTH

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP Outgoing Address ACL Problem

2016-11-11 Thread Garri Djavadyan

On 2016-11-11 22:28, Antony Stone wrote:
On Friday 11 November 2016 at 17:51:04, 
jarrett+squid-us...@jarrettgraham.com

wrote:


I'm trying to use ACLs to direct incoming traffic on assigned ports to
assigned outgoing addresses.  But, squid uses the first IP address
assigned to the interface not listed in the config instead.


See http://lists.squid-cache.org/pipermail/squid-users/2016-
October/013270.html

Specifically "IP addressing on the outgoing connections is an operating 
system
choice.  Squid does not have any direct control over outgoing 
connections

besides their destination IP:port."


Hi,

The following configuration works for me on Linux.

1. I set a second /32 IP address on the Internet-facing interface.
# ip addr show wlp3s0 | fgrep 'inet '
inet 192.168.2.102/24 brd 192.168.2.255 scope global dynamic wlp3s0
inet 192.168.2.108/32 scope global wlp3s0

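For completeness, that second address was added beforehand with a plain ip
command along these lines:

# ip addr add 192.168.2.108/32 dev wlp3s0
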

2. I added a second http_port, an ACL for the second http_port, and a rule 
to use the second IP address when the connection arrives on the second http_port.

# diff -u etc/squid.conf.default etc/squid.conf
--- etc/squid.conf.default  2016-10-28 15:54:53.851704360 +0500
+++ etc/squid.conf  2016-11-11 23:18:48.654385840 +0500
@@ -23,6 +23,7 @@
 acl Safe_ports port 591# filemaker
 acl Safe_ports port 777# multiling http
 acl CONNECT method CONNECT
+acl port3129 localport 3129

 #
 # Recommended minimum Access Permission configuration:
@@ -57,6 +58,7 @@

 # Squid normally listens to port 3128
 http_port 3128
+http_port 3129

 # Uncomment and adjust the following to add a disk cache directory.
 #cache_dir ufs /usr/local/squid35/var/cache/squid 100 16 256
@@ -71,3 +73,4 @@
 refresh_pattern ^gopher:   14400%  1440
 refresh_pattern -i (/cgi-bin/|\?) 00%  0
 refresh_pattern .  0   20% 4320
+tcp_outgoing_address 192.168.2.108 port3129


3. I initiated two requests on different http ports:
$ curl -x http://127.0.0.1:3128 -H 'Cache-Control: no-cache' 
http://mirror.comnet.uz/centos/2/readme.txt > /dev/null
$ curl -x http://127.0.0.1:3129 -H 'Cache-Control: no-cache' 
http://mirror.comnet.uz/centos/2/readme.txt > /dev/null



4. Using tcpdump I confirmed that the rule is working.
# tcpdump -i wlp3s0 dst host mirror.comnet.uz
...
23:42:02.230713 IP 192.168.2.102.40506 > mirror.comnet.uz.http: Flags 
[P.], seq 0:218, ack 1, win 229, options [nop,nop,TS val 845937144 ecr 
1281004287], length 218: HTTP: GET /centos/2/readme.txt HTTP/1.1

...
23:42:15.166311 IP 192.168.2.108.48575 > mirror.comnet.uz.http: Flags 
[P.], seq 0:218, ack 1, win 229, options [nop,nop,TS val 845950080 ecr 
1281016928], length 218: HTTP: GET /centos/2/readme.txt HTTP/1.1

...


Thanks for attention!

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error negotiating SSL

2016-11-14 Thread Garri Djavadyan
On Mon, 2016-11-14 at 16:12 +, piequiex wrote:
> What mean this error and how to fix it?
> Error negotiating SSL on FD 29:
> error::lib(0):func(0):reason(0) (5/-1/104)
> Error negotiating SSL on FD 30:
> error::lib(0):func(0):reason(0) (5/-1/104)

Hi,

Please provide more information next time (squid.conf at least).

One common cause is configuring ssl-bump on an https_port for user agents
that use the explicit proxy service. You should use http_port for those
cases.

Similar issue:
http://www.squid-cache.org/mail-archive/squid-users/201209/0294.html

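For illustration only (the certificate path and the bumping policy below
are placeholders, not a recommendation), an explicit-proxy setup bumps on
an http_port, roughly like this:

http_port 3128 ssl-bump cert=/etc/squid/bump-ca.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
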

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] is ACL conditional directive possible ?

2016-11-15 Thread Garri Djavadyan
On Tue, 2016-11-15 at 22:48 +1300, Amos Jeffries wrote:
> Then you integrate Squid with those system QoS controls by using the
> tcp_outgoing_tos directive with ACLs to send the appropriate TOS
> label for the client IP.

Hi Amos,

AFAIK, the directive 'tcp_outgoing_tos' is applied only for traffic
from Squid to origin servers.

The reference [1] and my quick test confirmed my expectations:

  Allows you to select a TOS/Diffserv value for packets outgoing
  on the server side, based on an ACL.


Nevertheless, the directive 'qos_flows' [2] could be used to set ToS
for traffic from Squid to client.

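For example, to mark cache hits delivered to clients with an arbitrary ToS
value (0x30 here is only an illustration):

qos_flows tos local-hit=0x30
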

[1] http://www.squid-cache.org/Doc/config/tcp_outgoing_tos/
[2] http://www.squid-cache.org/Doc/config/qos_flows/

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP interrupted

2016-11-22 Thread Garri Djavadyan
On Wed, 2016-11-23 at 07:17 +0100, ludek_coufal wrote:
> Hello Garri,
> client FTP - Total Commander (I test WinSCP, FileZilla with same
> result - after 15 min connection interrupted) with proxy server -
> proxy server HTTP with FTP support:
> part of squid.conf:
> *
> **
> acl SSL_ports port 21
> acl SSL_ports port 1024-65535
> acl SSL_ports port 443
> acl SSL_ports port 8443
> acl SSL_ports port 6400
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443  # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210  # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280  # http-mgmt
> acl Safe_ports port 488  # gss-http
> acl Safe_ports port 591  # filemaker
> acl Safe_ports port 777  # multiling http
> acl CONNECT method CONNECT
> acl FTP proto FTP
> always_direct allow FTP
> 
> http_access deny !Safe_ports
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
> 
> ###
> # http_access deny localnet !bandwidth_auth
> ###
> http_access allow localhost
> 
> # And finally deny all other access to this proxy
> http_access deny all
> # Squid normally listens to port 3128
> #http_port 3128 transparent
> http_port 3128
> ftp_port 21
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/log/squid/cache 100 16 256
> # Leave coredumps in the first cache dir
> coredump_dir /var/log/squid/cache
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp:  1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern .  0 20% 4320
> logfile_rotate 2
> *
> 
> When I add ftp_port 21 in squid.conf and proxy.reload I get this
> message:
> /etc/squid/squid.conf:129 unrecognized: 'ftp_port'
> I found this: http://www.squid-cache.org/Doc/config/ftp_port/
> Our version is  Squid Cache ver. 3.3.8

Hi Ludek,

With the above config, your FTP clients use the CONNECT method. Squid
simply tunnels connections from the FTP client to the FTP server. When you
upload a file over the FTP data channel, the FTP control channel is idle and
Squid terminates the control connection after 15 minutes [1] by default.
This is because Squid does not know about the relation between the tunneled
control channel and the data channel. You can try to increase the default
timeout, but a more elegant solution is to use the FTP relay function
(ftp_port).

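If you only want the quick workaround, raising the timeout is a single
squid.conf directive (the value below is arbitrary):

read_timeout 60 minutes
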
The ftp_port directive is only available in Squid-3.5 and above. You
should upgrade Squid to the latest 3.5.22, as Eliezer already advised.

When you configure ftp_port, in FileZilla you should disable
Connection->Generic proxy and enable Connection->FTP->FTP proxy with the
following settings:

Type: custom
---
USER %u@%h
PASS %p
---
Proxy host: Squid's IP adress


[1] http://www.squid-cache.org/Doc/config/read_timeout/


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid restarts too often.

2016-11-26 Thread Garri Djavadyan

On 2016-11-26 22:28, piequiex wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

In cache.log I have found "assertion failed: support.cc:1781: "0""
Squid Cache: Version 3.5.22


AIUI, your Squid binary was built against a buggy openssl library (1.0.1d 
or 1.0.1e). How did you get the binary?

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] CentOS 6, Squid 3.5.20, Error message in /var/log/squid/cache.log

2016-11-23 Thread Garri Djavadyan

On 2016-11-23 23:20, Walter H. wrote:

Hello,

can someone tell me, especially the maintainer of the binary packages 
for CentOS


what this message

2016/11/23 19:08:58 kid1| Error negotiating SSL on FD 39:
error::lib(0):func(0):reason(0) (5/0/0)

should say to me ...


Hi,

It was already discussed this month. You can find an answer in the 
following thread:

http://lists.squid-cache.org/pipermail/squid-users/2016-November/013470.html


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bad Connection & Round Robin DNS

2016-11-22 Thread Garri Djavadyan

On 2016-11-22 21:07, Jiann-Ming Su wrote:

Is there a way to set the timeout on a bad connection?


Yes, you can use the 'connect_timeout' [1] directive.

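For example (the 15 seconds here is only an illustration):

connect_timeout 15 seconds
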


When watching
tcpdump on the two IPs, I did not see my squid instance try the other
IP automatically.  I had to refresh my web browser connection multiple
times.  This also indicates some DNS caching persistence.  Are there
other DNS settings that can improve this behavior?


I believe Squid is configured for interception in your environment. In 
this case, DNS resolution is performed on the client side and Squid uses 
the destination IP address resolved by the client to connect to the origin. 
In interception mode, Squid performs DNS resolution only to prevent the 
Host header forgery attack [2].


If you configure the clients explicitly, Squid will mark bad IP 
addresses and avoid using them. In this case, you can use 
'squidclient mgr:ipcache' [3] to monitor the IP addresses resolved by Squid 
and their status.



[1] http://www.squid-cache.org/Doc/config/connect_timeout/
[2] http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
[3] http://wiki.squid-cache.org/Features/CacheManager/IpCache

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP interrupted

2016-11-22 Thread Garri Djavadyan

On 2016-11-22 17:05, ludek_coufal wrote:

Hello,
Squid Cache ver. 3.3.8 on CentOs Linux 7.2.1511

FTP connection from local net over linux server CentOs firewall with
Squid proxy to internet FTP server is interrupted every 15 min (900
sec).
Large file upload is interrupted.
Direct connection without Squid proxy work OK.


Hi,

The issue may occur if the FTP client uses the CONNECT method to connect to 
remote FTP servers. You can find details in the following thread:


http://www.squid-cache.org/mail-archive/squid-users/200609/0111.html


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid logs TCP_MISS/200 for a served cached object requested with If-None-Match

2016-11-28 Thread Garri Djavadyan
On Sat, 2016-11-19 at 01:12 +0500, Garri Djavadyan wrote:
> Hello,
> 
> I noticed that Squid logs TCP_MISS/200 when it serves previously
> cached 
> object in return to non-matched conditional request with If-None-
> Match. 
> For example:
> 
> 1. Non-conditional request to the previously cached object.
> 
> $ curl -v -x http://127.0.0.1:3128 
> http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null
> 
> < HTTP/1.1 200 OK
> < Server: nginx
> < Date: Fri, 18 Nov 2016 19:58:38 GMT
> < Content-Type: application/octet-stream
> < Content-Length: 18009
> < Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
> < ETag: "5668acc2-4659"
> < Accept-Ranges: bytes
> < Age: 383
> < X-Cache: HIT from gentoo.comnet.uz
> < Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
> < Connection: keep-alive
> 
> 
> 2. Conditional request with non-matching entity to the same object.
> 
> $ curl -v -x http://127.0.0.1:3128 -H 'If-None-Match: "5668acc2-
> 4658"' 
> http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null
> 
> < HTTP/1.1 200 OK
> < Server: nginx
> < Date: Fri, 18 Nov 2016 19:58:38 GMT
> < Content-Type: application/octet-stream
> < Content-Length: 18009
> < Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
> < ETag: "5668acc2-4659"
> < Accept-Ranges: bytes
> < X-Cache: MISS from gentoo.comnet.uz
> < Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
> < Connection: keep-alive
> 
> 
> I found that the behavior is related to the following code 
> (client_side_reply.cc):
> 
>  if (!e->hasIfNoneMatchEtag(r)) {
>  // RFC 2616: ignore IMS if If-None-Match did not match
>  r.flags.ims = false;
>  r.ims = -1;
>  r.imslen = 0;
>  r.header.delById(Http::HdrType::IF_MODIFIED_SINCE);
> --->http->logType = LOG_TCP_MISS;
>  sendMoreData(result);
>  return true;
>  }
> 
> 
> So, it seems like intended behavior, but I can't understand the
> reasons.
> Or maybe it is a bug?
> 
> Thanks.
> 
> Garri


Any comments will be much appreciated. Thanks.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid restarts too often.

2016-11-27 Thread Garri Djavadyan

On 2016-11-27 19:44, piequiex wrote:

> In cache.log I have found "assertion failed: support.cc:1781: "0""
> Squid Cache: Version 3.5.22

AIUI, your Squid binary was build against buggy openssl library 
(1.0.1d or

1.0.1e). How did you get the binary?


I build them with libressl.


The configure script detected that the function SSL_get_certificate() is 
broken in your SSL library (you can check this in config.log). Can you 
upgrade the library and rebuild Squid?

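To see what the configure test actually found, something like this against
the config.log in your build directory should show the relevant check and
its result:

$ grep -B2 -A10 SSL_get_certificate config.log
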


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid logs TCP_MISS/200 for a served cached object requested with If-None-Match

2016-11-28 Thread Garri Djavadyan

On 2016-11-28 17:39, Garri Djavadyan wrote:

On Sat, 2016-11-19 at 01:12 +0500, Garri Djavadyan wrote:

Hello,

I noticed that Squid logs TCP_MISS/200 when it serves previously
cached 
object in return to non-matched conditional request with If-None-
Match. 
For example:

1. Non-conditional request to the previously cached object.

$ curl -v -x http://127.0.0.1:3128 
http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null

< HTTP/1.1 200 OK
< Server: nginx
< Date: Fri, 18 Nov 2016 19:58:38 GMT
< Content-Type: application/octet-stream
< Content-Length: 18009
< Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
< ETag: "5668acc2-4659"
< Accept-Ranges: bytes
< Age: 383
< X-Cache: HIT from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
< Connection: keep-alive


2. Conditional request with non-matching entity to the same object.

$ curl -v -x http://127.0.0.1:3128 -H 'If-None-Match: "5668acc2-
4658"' 
http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null

< HTTP/1.1 200 OK
< Server: nginx
< Date: Fri, 18 Nov 2016 19:58:38 GMT
< Content-Type: application/octet-stream
< Content-Length: 18009
< Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
< ETag: "5668acc2-4659"
< Accept-Ranges: bytes
< X-Cache: MISS from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
< Connection: keep-alive


I found that the behavior is related to the following code 
(client_side_reply.cc):

 if (!e->hasIfNoneMatchEtag(r)) {
 // RFC 2616: ignore IMS if If-None-Match did not match
 r.flags.ims = false;
 r.ims = -1;
 r.imslen = 0;
 r.header.delById(Http::HdrType::IF_MODIFIED_SINCE);
--->http->logType = LOG_TCP_MISS;
 sendMoreData(result);
 return true;
 }


So, it seems like intended behavior, but I can't understand the
reasons.
Or maybe it is a bug?

Thanks.

Garri



Any comments will be much appreciated. Thanks.

Garri


I found that the behavior is a blocker for bug report 4169 [1].

[1] http://bugs.squid-cache.org/show_bug.cgi?id=4169


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid restarts too often.

2016-11-26 Thread Garri Djavadyan

On 2016-11-26 23:42, Ralf Hildebrandt wrote:

* piequiex :

> In cache.log I have found "assertion failed: support.cc:1781: "0""
> Squid Cache: Version 3.5.22
After rebuild:
assertion failed: Read.cc:69: "fd_table[conn->fd].halfClosedReader != 
NULL"


http://lists.squid-cache.org/pipermail/squid-users/2015-June/003977.html

But hey, 3.5.22 is the most recent 3.5.x version.



Bug report 4270 [1] is still open. The assertion test is in a basic 
function, so many different and unrelated issues may lead to the failure. 
The number of duplicate reports confirms that.



[1] http://bugs.squid-cache.org/show_bug.cgi?id=4270

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bad Connection & Round Robin DNS

2016-11-21 Thread Garri Djavadyan
On Tue, 2016-11-22 at 03:59 +, Jiann-Ming Su wrote:
> If a website has two (or more) IP addresses, and the TCP connection
> to one of them fails, can squid3 be configured to try the other IP
> address(es)?

Hi,

The behavior you described is the default for Squid. For example, you can
set 'debug_options ALL,1 14,9' to confirm it:

2016/11/22 09:47:50.426 kid1| 14,3| ipcache.cc(433) ipcacheParse:
ipcacheParse: mail.ru #0 [2a00:1148:db00:0:b0b0::1]
2016/11/22 09:47:50.426 kid1| 14,3| ipcache.cc(422) ipcacheParse:
ipcacheParse: mail.ru #1 94.100.180.199
2016/11/22 09:47:50.426 kid1| 14,3| ipcache.cc(422) ipcacheParse:
ipcacheParse: mail.ru #2 94.100.180.201
2016/11/22 09:47:50.426 kid1| 14,3| ipcache.cc(422) ipcacheParse:
ipcacheParse: mail.ru #3 217.69.139.199
2016/11/22 09:47:50.426 kid1| 14,3| ipcache.cc(422) ipcacheParse:
ipcacheParse: mail.ru #4 217.69.139.200
2016/11/22 09:47:50.426 kid1| 14,9| comm.cc(646) comm_connect_addr:
connecting to: [2a00:1148:db00:0:b0b0::1]:80
2016/11/22 09:47:50.427 kid1| 14,2| ipcache.cc(924) ipcacheMarkBadAddr:
ipcacheMarkBadAddr: mail.ru [2a00:1148:db00:0:b0b0::1]:80
2016/11/22 09:47:50.427 kid1| 14,3| ipcache.cc(889) ipcacheCycleAddr:
ipcacheCycleAddr: mail.ru now at 94.100.180.199 (2 of 5)
2016/11/22 09:47:50.427 kid1| 14,9| comm.cc(646) comm_connect_addr:
connecting to: 94.100.180.199:80
2016/11/22 09:48:50.036 kid1| 14,2| ipcache.cc(924) ipcacheMarkBadAddr:
ipcacheMarkBadAddr: mail.ru 94.100.180.199:80
2016/11/22 09:48:50.036 kid1| 14,3| ipcache.cc(889) ipcacheCycleAddr:
ipcacheCycleAddr: mail.ru now at 94.100.180.201 (3 of 5)
2016/11/22 09:48:50.036 kid1| 14,9| comm.cc(646) comm_connect_addr:
connecting to: 94.100.180.201:80
2016/11/22 09:49:50.091 kid1| 14,2| ipcache.cc(924) ipcacheMarkBadAddr:
ipcacheMarkBadAddr: mail.ru 94.100.180.201:80
2016/11/22 09:49:50.091 kid1| 14,3| ipcache.cc(889) ipcacheCycleAddr:
ipcacheCycleAddr: mail.ru now at 217.69.139.199 (4 of 5)
2016/11/22 09:49:50.092 kid1| 14,9| comm.cc(646) comm_connect_addr:
connecting to: 217.69.139.199:80
2016/11/22 09:50:50.152 kid1| 14,2| ipcache.cc(924) ipcacheMarkBadAddr:
ipcacheMarkBadAddr: mail.ru 217.69.139.199:80
2016/11/22 09:50:50.153 kid1| 14,3| ipcache.cc(889) ipcacheCycleAddr:
ipcacheCycleAddr: mail.ru now at 217.69.139.200 (5 of 5)
2016/11/22 09:50:50.153 kid1| 14,9| comm.cc(646) comm_connect_addr:
connecting to: 217.69.139.200:80


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FTP interrupted

2016-11-22 Thread Garri Djavadyan

On 2016-11-22 22:24, Garri Djavadyan wrote:

On 2016-11-22 17:05, ludek_coufal wrote:

Hello,
Squid Cache ver. 3.3.8 on CentOs Linux 7.2.1511

FTP connection from local net over linux server CentOs firewall with
Squid proxy to internet FTP server is interrupted every 15 min (900
sec).
Large file upload is interrupted.
Direct connection without Squid proxy work OK.


Hi,

The issue may occur, if FTP client uses CONNECT method to connect to
remote FTP servers. You can find details in the following thread:

http://www.squid-cache.org/mail-archive/squid-users/200609/0111.html


If your FTP client connects to Squid's http_port, then it uses the CONNECT 
method. To solve the problem, try using ftp_port and disable the proxy 
settings in the FTP client.

For example:

1. Configure ftp_port.
# diff etc/squid.conf.default etc/squid.conf
59a60

ftp_port 21


2. Connect from FTP client, where:
${squid_ip} - Squid's IP address
${squid_ftp_port} - configured ftp_port
${username} - username on remote FTP server
${ftp_server} - remote FTP server name/IP
${password} - password for remote FTP server

$ ftp ${squid_ip} ${squid_ftp_port}
Connected to localhost.localdomain.
220 Service ready
Name (localhost:garry): ${username}@${ftp_server}
530 Must login first
530 Must login first
SSL not available
331 Please specify the password.
Password: ${password}
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] is ACL conditional directive possible ?

2016-11-15 Thread Garri Djavadyan

On 2016-11-15 22:31, AUBERT Thibaud wrote:

Hi Guys,

Ok, QoS might help to control traffic on the internet access side, but
it won't help between the source, client on a small remote
office/output, and the proxy.

It might also be difficult to split this traffic between what is
intended to internet or just internal.

Example : 1Gb/sec internet link used at 50%, a user on a remote site
with a 15 mbits/sec link used at 80% launch a download. There's pretty
much no impact on the internet link... but on the second, it add an
extra 3mbits/sec that saturate the network.

If I add a restriction with a small value for the max size of file, I
can hope that user won't bother others people on site too long. But it
also mean that every people that use the internet link won't be able
to download big file, even if they normally could.

I also think to use Delay Pool, but, it will penalize people that do
not download big files. If anyone have some XP about individual delay
pool, don't hesitate to share, because I'm not sure that it will fit
my current needs.


As Amos already wrote, the only viable solution for your case is QoS 
policies configured on the routers facing the limited links to the branches. 
You can't control downstream traffic to the branches with HTTP(S) proxy 
servers alone. I believe many other network protocols 
(FTP/Bittorrent/POP3/IMAP ...) are also used on the limited links to the 
branches.


If Squid generates most of the traffic passing through the slow links, 
you also have the option of applying QoS policies at the operating system 
level. For example, if Squid is installed on Linux, you can use the very 
flexible HTB queuing discipline [1]. Below is an excerpt describing the 
link-sharing scenario:


  HTB ensures that the amount of service provided to each class is at
  least the minimum of the amount it requests and the amount assigned
  to it. When a class requests less than the amount assigned, the
  remaining (excess) bandwidth is distributed to other classes which
  request service.

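A very rough HTB sketch for a 15 Mbit/s branch link (the interface name,
rates and class split are placeholders, and a real deployment also needs
tc filters to classify traffic into the classes):

# tc qdisc add dev eth1 root handle 1: htb default 20
# tc class add dev eth1 parent 1: classid 1:1 htb rate 15mbit
# tc class add dev eth1 parent 1:1 classid 1:10 htb rate 10mbit ceil 15mbit
# tc class add dev eth1 parent 1:1 classid 1:20 htb rate 5mbit ceil 15mbit
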
Similar methods can be used on other operating systems.

[1] http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] unexpected debug output

2016-11-18 Thread Garri Djavadyan

On 2016-11-17 22:01, Alex Rousskov wrote:

On 11/17/2016 12:15 AM, senor wrote:

I discovered that 'squid -k rotate' toggles cache.log output into full
debug mode as if I had done 'squid -k debug'.  Execute a second rotate
and it toggles debug off. This only happens when I have an ecap 
adapter

configured. Comment out those lines and everything works as expected.

My question is about the debug behavior. If this isn't a bug


Sounds like a bug to me. If you can reproduce this behavior with one of
the official sample eCAP adapters, then this is a Squid bug. Otherwise,
it is probably your adapter bug.


I can't reproduce the issue with the official sample adapter using 
Squid-3.5.22. I used the following configuration:


# diff etc/squid.conf.default etc/squid.conf
73a74,80

loadable_modules /usr/local/lib/ecap_adapter_modifying.so
ecap_enable on
ecap_service ecapModifier respmod_precache \
uri=ecap://e-cap.org/ecap/services/sample/modifying \
victim= \
replacement=eCAP_works
adaptation_access ecapModifier allow all



Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid logs TCP_MISS/200 for a served cached object requested with If-None-Match

2016-11-18 Thread Garri Djavadyan

Hello,

I noticed that Squid logs TCP_MISS/200 when it serves previously cached 
object in return to non-matched conditional request with If-None-Match. 
For example:


1. Non-conditional request to the previously cached object.

$ curl -v -x http://127.0.0.1:3128 
http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null


< HTTP/1.1 200 OK
< Server: nginx
< Date: Fri, 18 Nov 2016 19:58:38 GMT
< Content-Type: application/octet-stream
< Content-Length: 18009
< Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
< ETag: "5668acc2-4659"
< Accept-Ranges: bytes
< Age: 383
< X-Cache: HIT from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
< Connection: keep-alive


2. Conditional request with non-matching entity to the same object.

$ curl -v -x http://127.0.0.1:3128 -H 'If-None-Match: "5668acc2-4658"' 
http://mirror.comnet.uz/centos/7/os/x86_64/GPL >/dev/null


< HTTP/1.1 200 OK
< Server: nginx
< Date: Fri, 18 Nov 2016 19:58:38 GMT
< Content-Type: application/octet-stream
< Content-Length: 18009
< Last-Modified: Wed, 09 Dec 2015 22:35:46 GMT
< ETag: "5668acc2-4659"
< Accept-Ranges: bytes
< X-Cache: MISS from gentoo.comnet.uz
< Via: 1.1 gentoo.comnet.uz (squid/5.0.0-BZR)
< Connection: keep-alive


I found that the behavior is related to the following code 
(client_side_reply.cc):


if (!e->hasIfNoneMatchEtag(r)) {
// RFC 2616: ignore IMS if If-None-Match did not match
r.flags.ims = false;
r.ims = -1;
r.imslen = 0;
r.header.delById(Http::HdrType::IF_MODIFIED_SINCE);
--->http->logType = LOG_TCP_MISS;
sendMoreData(result);
return true;
}


So, it seems like intended behavior, but I can't understand the reasons.
Or maybe it is a bug?

Thanks.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [SOLVED] Re: TCP Outgoing Address ACL Problem

2016-11-13 Thread Garri Djavadyan

On 2016-11-13 23:09, jarrett+squid-us...@jarrettgraham.com wrote:

My problem is solved.


The solution may be useful to other users as well. Please post it, if 
possible. Thanks!


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP Outgoing Address ACL Problem

2016-11-11 Thread Garri Djavadyan

On 2016-11-12 07:55, Amos Jeffries wrote:

On 12/11/2016 7:44 a.m., Garri Djavadyan wrote:


2. I added second http_port, ACL for the second http_port and the rule
to use second IP address if connection is for second http_port.
# diff -u etc/squid.conf.default etc/squid.conf
--- etc/squid.conf.default2016-10-28 15:54:53.851704360 +0500
+++ etc/squid.conf2016-11-11 23:18:48.654385840 +0500
@@ -23,6 +23,7 @@
 acl Safe_ports port 591# filemaker
 acl Safe_ports port 777# multiling http
 acl CONNECT method CONNECT
+acl port3129 localport 3129



FYI Garri, "localport" value varies depending on the traffic mode. It 
is

not necessarily the Squid receiving port.


Yes, you are right. I used it for simplicity's sake and the 
configuration permits it.




'jarret+squid-users' is already using "myportname" ACL which is the
better one to use for this.


I thought the string 'acl ipv4-1 myportname 3128 src 10.99.0.0/24' was 
interpreted as:


acl ipv4-1 myportname "3128 src 10.99.0.0/24"

So, I wrongly assumed that the ACL was not matched. In fact, it matches.
Thanks for pointing out my mistake!
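
For the archives, here is a minimal sketch of the myportname-based variant
discussed above (the port names and outgoing addresses are placeholders,
not taken from the original setup):

http_port 3128 name=port3128
http_port 3129 name=port3129

acl via3129 myportname port3129

# first matching tcp_outgoing_address line wins; the bare line is the fallback
tcp_outgoing_address 192.0.2.2 via3129
tcp_outgoing_address 192.0.2.1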



Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] NCSA-auth don't work for file contain too many passswords

2016-11-11 Thread Garri Djavadyan

Hi Amos,

Thanks for the comments!

On 2016-11-12 07:48, Amos Jeffries wrote:

I can't reproduce the problem using Squid 3.5.22. I used following
method to verify the case:


Unfortunately your test uses the 'openssl' tool below instead of
htpasswd to create the password file. There are some big differences in
the security algorithms each uses to create the password file.


My primary task was to confirm that a 20k-password DB file is not an
issue for Squid.


I used the htpasswd-compatible MD5 algorithm (-apr1); it is equivalent to
'htpasswd -m'.

The openssl option -crypt is equivalent to 'htpasswd -d'.
You are right, I missed the '-d' flag that was specified.
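
For the record, a small sketch of that equivalence (the file path is the
one from this thread; the user names and passwords are placeholders):

# MD5-based entries ('htpasswd -m' / 'openssl passwd -apr1');
# these keep working with basic_ncsa_auth for passwords longer than 8 chars
htpasswd -b -m /usr/local/squid35/etc/passwd user1 longpassword1
echo "user2:$(openssl passwd -apr1 longpassword2)" >> /usr/local/squid35/etc/passwd

# DES/crypt entries ('htpasswd -d' / 'openssl passwd -crypt');
# crypt only uses the first 8 characters, so longer passwords are truncated
htpasswd -b -d /usr/local/squid35/etc/passwd user3 longpassword3
echo "user4:$(openssl passwd -crypt longpassword4)" >> /usr/local/squid35/etc/passwd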



2. Create ncsa passwords db for 20k users.
# for i in {1..20000}; do echo "user${i}:$(openssl passwd -apr1 pass${i})" >> /usr/local/squid35/etc/passwd; done



This test *will* fail when "htpasswd -db" is used to generate the
password file from those password strings. Notice that the test 'i'
values of 10000+ create passwords like "pass10000" which are 9
characters long.

The htpasswd -d uses DES encryption which has an 8 character limit on
password length. It will *silently* truncate the password to the first
8 characters.

Recent basic_ncsa_auth helper versions will detect and reject
authentication using DES algorithm when password is longer than 8
characters.


Thanks. I found the relevant commit 11632 [1] and the associated bug
report 3107 [2] discussion.
I have a question: maybe there should be an optional argument that could
be used to permit the old behavior? For example, the Apache HTTP server
still permits passwords longer than 8 characters.



[1] http://bazaar.launchpad.net/~squid/squid/5/revision/11632
[2] http://bugs.squid-cache.org/show_bug.cgi?id=3107


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 4.0.16 still signed by old key

2016-11-01 Thread Garri Djavadyan
According to the announcement [1], Squid 4.0.16 and later should be signed
by the new key B06884EDB779C89B044E64E3CD6DBF8EF3B17D3E, but it is still
signed by the old Squid 3 key EA31CC5E9488E5168D2DCC5EB268E706FF5CF463:


$ gpg2 --verify squid-4.0.16.tar.xz.asc squid-4.0.16.tar.xz
gpg: Signature made Sun 30 Oct 2016 07:45:12 PM UZT
gpg: using RSA key B268E706FF5CF463
gpg: Good signature from "Amos Jeffries " [ultimate]
gpg: aka "Amos Jeffries (Squid 3.0 Release Key) " [ultimate]
gpg: aka "Amos Jeffries (Squid 3.1 Release Key) " [ultimate]
gpg: aka "Amos Jeffries " [ultimate]
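
For reference, a sketch of fetching the announced Squid-4 key and repeating
the check (assuming the key is available on the default keyserver):

$ gpg2 --recv-keys B06884EDB779C89B044E64E3CD6DBF8EF3B17D3E
$ gpg2 --verify squid-4.0.16.tar.xz.asc squid-4.0.16.tar.xz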



[1] 
http://lists.squid-cache.org/pipermail/squid-users/2016-October/013299.html

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Default state for the option generate-host-certificates

2016-10-28 Thread Garri Djavadyan
Hello list,

The last sentence of the generate-host-certificates[=] option's
paragraph states:

  This option is enabled by default when ssl-bump is used. See the
  ssl-bump option above for more information.

But a client can't negotiate a secure connection and times out when the
option is not specified explicitly. For example, with the following config
I get a negotiation timeout:

# diff etc/squid.conf.default etc/squid.conf
59c59
< http_port 3128
---
> http_port 3128 ssl-bump cert=/usr/local/squid35/etc/ssl_cert/myCA.pem
73a74,76
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all

-
$ https_proxy="127.0.0.1:3128" curl -v -k https://ya.ru/ > /dev/null
*   Trying 127.0.0.1...
* TCP_NODELAY set
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0)
* Establish HTTP proxy tunnel to ya.ru:443
> CONNECT ya.ru:443 HTTP/1.1
> Host: ya.ru:443
> User-Agent: curl/7.50.3
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Connection established
< 
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: none
  0     0    0     0    0     0      0      0 --:--:--  0:00:59 --:--:--     0
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Curl_http_done: called premature == 1
  0     0    0     0    0     0      0      0 --:--:--  0:01:00 --:--:--     0
* Closing connection 0
curl: (35) Encountered end of file



No problems, if the option specified explicitly:

# diff etc/squid.conf.default etc/squid.conf
59c59,61
< http_port 3128
---
> http_port 3128 ssl-bump \
> cert=/usr/local/squid35/etc/ssl_cert/myCA.pem \
> generate-host-certificates
73a76,78
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all


Is it a bug, a documentation error, or have I simply missed something?

Thanks.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid doesn't use domain name as a request URL in access.log when splice at step 3 occurs

2016-11-04 Thread Garri Djavadyan
On Fri, 2016-11-04 at 17:43 +0500, Garri Djavadyan wrote:
> I noticed that Squid doesn't use gathered domain name information for
> %ru in access.log when splice action is performed at step 3 for
> intercepted traffic. The format code ssl::>sni is available at both
> steps. Below are examples used to verify the behavior using Squid
> 3.5.22, but the results are same for Squid 4.0.16.
> 
> The request used on client:
> 
> $ curl https://www.openssl.org/ > /dev/null
> 
> 
> The configuration for splice at step 2:
> 
> # diff etc/squid.conf.default etc/squid.conf
> 73a74,78
> > 
> > https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
> generate-host-certificates
> > 
> > acl StepSplice at_step SslBump2
> > ssl_bump splice StepSplice
> > ssl_bump peek all
> > logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni
> 
> 
> The result:
> 
> 1478256091.609   1028 172.16.0.21 TAG_NONE/200 0 CONNECT
> 104.124.119.14:443 - HIER_NONE/- - www.openssl.org
> 1478256091.609   1026 172.16.0.21 TCP_TUNNEL/200 9807 CONNECT
> www.openssl.org:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org
> 
> 
> -
> The configuration for splice at step 3:
> 
> # diff etc/squid.conf.default etc/squid.conf
> 73a74,78
> > 
> > https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
> generate-host-certificates
> > 
> > acl StepSplice at_step SslBump3
> > ssl_bump splice StepSplice
> > ssl_bump peek all
> > logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni
> 
> 
> The result:
> 1478256303.420    574 172.16.0.21 TCP_TUNNEL/200 6897 CONNECT
> 104.124.119.14:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org
> 
> 
> Is it a bug or intended behavior? Thanks.
> 
> Garri

This behavior prevents domain name identification when SNI is not provided
by the client. For example:

Request:
$ echo -e "HEAD / HTTP/1.1\nHost: www.openssl.org\n\n" | openssl
s_client -quiet -no_ign_eof -connect www.openssl.org:443

Config:
# diff etc/squid.conf.default etc/squid.conf
73a74,78
> https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
generate-host-certificates
> acl StepSplice at_step SslBump3
> ssl_bump splice StepSplice
> ssl_bump peek all
> logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni

Result:
1478267428.070    347 172.16.0.21 TCP_TUNNEL/200 235 CONNECT
104.124.119.14:443 - ORIGINAL_DST/104.124.119.14 - -
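
For comparison, a sketch of the same test with SNI supplied via s_client's
-servername option; in that case the ssl::>sni field should be populated
again, as in the curl-based results above:

$ echo -e "HEAD / HTTP/1.1\nHost: www.openssl.org\n\n" | openssl \
s_client -quiet -no_ign_eof -servername www.openssl.org -connect www.openssl.org:443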
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid doesn't use domain name as a request URL in access.log when splice at step 3 occurs

2016-11-04 Thread Garri Djavadyan
I noticed that Squid doesn't use gathered domain name information for
%ru in access.log when splice action is performed at step 3 for
intercepted traffic. The format code ssl::>sni is available at both
steps. Below are examples used to verify the behavior using Squid
3.5.22, but the results are same for Squid 4.0.16.

The request used on client:

$ curl https://www.openssl.org/ > /dev/null


The configuration for splice at step 2:

# diff etc/squid.conf.default etc/squid.conf
73a74,78
> https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
generate-host-certificates
> acl StepSplice at_step SslBump2
> ssl_bump splice StepSplice
> ssl_bump peek all
> logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni


The result:

1478256091.609   1028 172.16.0.21 TAG_NONE/200 0 CONNECT
104.124.119.14:443 - HIER_NONE/- - www.openssl.org
1478256091.609   1026 172.16.0.21 TCP_TUNNEL/200 9807 CONNECT
www.openssl.org:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org


-
The configuration for splice at step 3:

# diff etc/squid.conf.default etc/squid.conf
73a74,78
> https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
generate-host-certificates
> acl StepSplice at_step SslBump3
> ssl_bump splice StepSplice
> ssl_bump peek all
> logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni


The result:
1478256303.420    574 172.16.0.21 TCP_TUNNEL/200 6897 CONNECT
104.124.119.14:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org


Is it a bug or intended behavior? Thanks.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid doesn't use domain name as a request URL in access.log when splice at step 3 occurs

2016-11-04 Thread Garri Djavadyan

On 2016-11-04 19:42, Amos Jeffries wrote:

On 5/11/2016 1:43 a.m., Garri Djavadyan wrote:

The configuration for splice at step 3:

# diff etc/squid.conf.default etc/squid.conf
73a74,78

https_port 3129 intercept ssl-bump cert=etc/ssl_cert/myCA.pem
generate-host-certificates

acl StepSplice at_step SslBump3
ssl_bump splice StepSplice
ssl_bump peek all
logformat squid  %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni


The result:
1478256303.420    574 172.16.0.21 TCP_TUNNEL/200 6897 CONNECT
104.124.119.14:443 - ORIGINAL_DST/104.124.119.14 - www.openssl.org


Is it a bug or intended behavior? Thanks.



The person (Christos) who designed that behaviour is not reading this
mailing list very often.


Does that mean a bug report would have a better chance of getting noticed?



AFAIK, it depends on what the SubjectAltName field in the certificate
provided by 104.124.119.14 contains.


The SubjectAltName field's value in the certificate is:

Not Critical
DNS Name: www.openssl.org
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] r14088 crash on FreeBSD 11

2016-12-16 Thread Garri Djavadyan
On Fri, 2016-12-16 at 06:34 +, k simon wrote:
> Hi,lists,
>    r14087 is quite stable on FB 11. But r14088 crashed frequently
> with 
> "2016/12/16 09:00:59 kid1| assertion failed: MemBuf.cc:216: "0 <= 
> tailSize && tailSize <= cSize" ". The config file is almost the
> default 
> except listening port and http_access modification.

Hi,

I believe you have hit bug 4606 [1]. Do you use the 'collapsed_forwarding'
option? If you have any new details, please add a comment to the bug
report.

[1] http://bugs.squid-cache.org/show_bug.cgi?id=4606


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] r14088 crash on FreeBSD 11

2016-12-16 Thread Garri Djavadyan
On Fri, 2016-12-16 at 14:38 +0500, Garri Djavadyan wrote:
> On Fri, 2016-12-16 at 06:34 +, k simon wrote:
> > Hi,lists,
> >    r14087 is quite stable on FB 11. But r14088 crashed frequently
> > with 
> > "2016/12/16 09:00:59 kid1| assertion failed: MemBuf.cc:216: "0 <= 
> > tailSize && tailSize <= cSize" ". The config file is almost the
> > default 
> > except listening port and http_access modification.
> 
> Hi,
> 
> I believe you faced bug 4606 [1]. Do you use 'collapsed_forwarding'
> option? If you have any new details please add a comment to the bug
> report.
> 
> [1] http://bugs.squid-cache.org/show_bug.cgi?id=4606

Sorry, actually 'collapsed_forwarding' does not need to be enabled to face the bug.

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] URL too large??

2016-12-13 Thread Garri Djavadyan

On 2016-12-13 22:03, Alex Rousskov wrote:

On 12/13/2016 09:51 AM, Eliezer Croitoru wrote:

I think that the maximum size was 64k


The maximum appears to be 8KB:

  v3.5/src/defines.h:#define MAX_URL  8192
  v4/src/defines.h:#define MAX_URL  8192
  v5/src/defines.h:#define MAX_URL  8192

IIRC, there are many emails discussing this limit and what to do about 
it.


Alex.


On Tue, Dec 13, 2016 at 11:08 AM, Odhiambo Washington wrote:

Hi,

Saw this on my cache.log (squid-3.5.22, FreeBSD-9.3,):

2016/12/13 11:47:55| WARNING: no_suid: setuid(0): (1) Operation 
not

permitted
2016/12/13 11:47:55| WARNING: no_suid: setuid(0): (1) Operation 
not

permitted
2016/12/13 11:47:55| HTCP Disabled.
2016/12/13 11:47:55| Finished loading MIME types and icons.
2016/12/13 11:47:55| Accepting NAT intercepted HTTP Socket
connections at local=[::]:13128 remote=[::] FD 39 flags=41
2016/12/13 11:47:55| Accepting HTTP Socket connections at
local=[::]:13130 remote=[::] FD 40 flags=9
2016/12/13 11:47:55| Accepting NAT intercepted SSL bumped HTTPS
Socket connections at local=[::]:13129 remote=[::] FD 41 flags=41
2016/12/13 11:47:55| Accepting ICP messages on [::]:3130
2016/12/13 11:47:55| Sending ICP messages from [::]:3130
*2016/12/13 11:53:25| urlParse: URL too large (11654 bytes)*


--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254 7 3200 0004/+254 7 2274 3223
"Oh, the cruft."



Details can be found in the following bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=4422#c1

Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing cache files

2016-12-17 Thread Garri Djavadyan

On 2016-12-17 15:41, Odhiambo Washington wrote:

Hi,

I keep seeing something that I think is odd. Squid has been exiting on
signal 6, and I keep seeing this:

root@gw:/usr/local/openssl # tail -f /opt/squid-3.5/var/logs/cache.log
2016/12/17 13:38:32| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2016/12/17 13:38:32|/opt/squid-3.5/var/cache/00/26/264D
2016/12/17 13:40:24| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2016/12/17 13:40:24|/opt/squid-3.5/var/cache/00/3B/3B56
2016/12/17 13:42:34| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2016/12/17 13:42:34|/opt/squid-3.5/var/cache/00/6B/6B0D
2016/12/17 13:43:36| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2016/12/17 13:43:36|/opt/squid-3.5/var/cache/00/00/0050
2016/12/17 13:44:25| DiskThreadsDiskFile::openDone: (2) No such file
or directory
2016/12/17 13:44:25|/opt/squid-3.5/var/cache/00/AF/AFF1

So, what could be making the files disappear?



Hi,

(Reply from Amos Jeffries from 
http://bugs.squid-cache.org/show_bug.cgi?id=4367#c2)

This is Squid *detecting* complete absence of disk files. Not causing
corruption.

Please check if you have multiple Squid instances running and accessing the
same cache_dir. That includes multiple workers using the same ufs/aufs/diskd
cache_dir configuration line.

Also whether swap.state for that cache_dir is being correctly and completely
written out to disk on shutdown or restart. Using an outdated swap.state
file can also lead to these warnings.


The last paragraph explains your issue. Signal 6 (abort) forces the
Squid worker to terminate immediately (skipping the normal shutdown
procedures) and leave a core dump. You can find the reason for the abort
in cache.log.
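
A quick way to check the two points Amos mentions (a sketch, using the
paths from your log):

# more than one running instance/worker pointing at the same cache_dir?
ps ax | grep '[s]quid'

# is swap.state present and rewritten after the last clean shutdown/restart?
ls -l /opt/squid-3.5/var/cache/swap.state*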



Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing cache files

2016-12-17 Thread Garri Djavadyan

On 2016-12-17 18:39, Odhiambo Washington wrote:

Also whether swap.state for that cache_dir is being correctly and
completely
written out to disk on shutdown or restart. Using an outdated
swap.state
file can also lead to these warnings.


The last paragraph explains your issue. The signal 6 (abort) forces
Squid worker to terminate immediately (to avoid all required
shutdown procedures) and leave core dump. You can find a reason for
abort in cache.log.

Garri


Hi Garri,

So, checking, I don't see swap.state being written to disk and there
is no core dump either.


swap.state description [1]:
This index file holds
the metadata of objects saved on disk.  It is used to rebuild
the cache during startup.  Normally this file resides in each
'cache_dir' directory, but you may specify an alternate
pathname here.

You can learn how to get a core dump on the wiki [2].
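
In short (a sketch based on that wiki page; the directory below is only an
example, adjust it to your setup):

# squid.conf: directory where core files should be written
coredump_dir /opt/squid-3.5/var/cache

# shell: remove the core file size limit before starting Squid
ulimit -c unlimited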



There is no directive in my squid.conf to suppress the two.


What do you mean?

AIUI, your Squid instance faced an unexpected event and aborted. The
abort produces a core dump, which could be useful for developers when
investigating that event. As a side effect of the abort, the swap.state
file was not updated correctly. The errors you see in cache.log are
harmless and just confirm that swap.state and the cache_dir objects are
out of sync because of the abort.


You should concentrate on the event which led to the abort. Usually,
Squid reports the unexpected event in cache.log. Look for the lines just
before 'Starting Squid Cache version'.



[1] http://www.squid-cache.org/Doc/config/cache_swap_state/
[2] 
http://wiki.squid-cache.org/SquidFaq/BugReporting#crashes_and_core_dumps



Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl_bump with intermediate CA

2017-01-05 Thread Garri Djavadyan
On Thu, 2017-01-05 at 23:40 +, senor wrote:
> Hello All.
> I'd like clarification of the documentation at
> http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpWithIntermediateCA
> 
> In section "CA certificate preparation" it is stated that a file
> should
> be created with "intermediate CA2 followed by root CA1 in PEM
> format".
> CA1 is the cert trusted by the clients. CA2 is used to sign the
> mimicked
> certs. And finally the statement "Now Squid can send the intermediate
> CA2 public key with root CA1 to client and does not need to install
> intermediate CA2 to clients."
> 
> The specification states that the clients MUST NOT use CA1 provided
> in
> the TLS exchange. CA1 must be (and in this scenario is) already
> included
> in its trusted store of CAs.
> 
> As I understand it, the TLS exchange with the client for a bumped
> connection should have the mimicked server cert followed by the
> intermediate cert (CA2) and that's all. The client completes the
> chain
> with the already trusted CA1.
> 
> The example file created is used for cafile= option to http_port
> which
> is supposed to be for verifying client certs which is not part of
> this
> scenario.
> 
> This is getting a little long-winded so I'll wait to see what anyone
> has
> to say about my assumptions or understanding.
> 
> Thanks,
> Senor

Hi Senor,

You are right, it is not required to send the root CA cert to a client;
it is already installed in the client's cert store. You can find more
details in bug report 3426 [1] (comments 11 and 13).
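
As a quick cross-check of what the proxy actually serves, the presented
chain can be dumped with s_client from a client whose traffic is
intercepted (a sketch; the host name is just an example). With the setup
described above it should show the mimicked server certificate plus CA2,
and the client completes the chain with its locally trusted CA1:

$ openssl s_client -connect www.example.com:443 -servername www.example.com -showcerts </dev/null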

[1] http://bugs.squid-cache.org/show_bug.cgi?id=3426


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid freeze each hour.

2016-12-20 Thread Garri Djavadyan

On 2016-12-20 21:42, David Touzeau wrote:
Is there any way to disable Cache Digests without needing to recompile
squid?


Hi,

Use "digest_generation off".

http://www.squid-cache.org/Doc/config/digest_generation/


Garri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users