Re: [squid-users] SSL handshake

2021-07-28 Thread Vieri
Hi,

I don't know if my situation is the same as Nishant's, but today my issues went
away without any intervention on my part.
I'm guessing the cause was on the remote server's side, or in some in-between
SSL inspection...

Thanks,

Vieri


[squid-users] SSL handshake

2021-07-27 Thread Vieri
Hi,

Just recently I've noticed that LAN clients going through Squid with sslbump 
are all of a sudden unable to access certain HTTPS sites such as 
login.yahoo.com.
The squid log has lines like:

kid1| 4,3| Error.cc(22) update: recent: ERR_SECURE_CONNECT_FAIL/SQUID_ERR_SSL_HANDSHAKE+TLS_LIB_ERR=1423506E+TLS_IO_ERR=1

and the client error page shows a line like this:

SQUID_TLS_ERR_CONNECT+TLS_LIB_ERR=14094410+TLS_IO_ERR=1

I'm not sure why the lib error code is different. I might not have tracked down 
the right connection in the log.

I have not changed anything in the OS, so it might be due to a change in the
remote web service.
Could it be that my OpenSSL version (1.1.1g) is already too old, and that the
web site now forces the use of a cipher it doesn't support?
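
One way to check, assuming the OpenSSL command-line tools are installed on the
proxy host: decode the library error code from the error page, then try the
same handshake by hand.

# decode the TLS_LIB_ERR value shown on the error page (here 14094410)
openssl errstr 14094410
# attempt the handshake Squid would make
openssl s_client -connect login.yahoo.com:443 -servername login.yahoo.com

If s_client fails too, the problem lies between the local OpenSSL and the
remote server rather than in Squid itself.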

Regards,

Vieri


Re: [squid-users] kswapd0 and memory usage

2021-03-31 Thread Vieri
On Tuesday, March 30, 2021, 8:01:30 AM GMT+2, Amos Jeffries wrote:

>> If this were to happen again (not sure when or if) what should I try to 
>> search for?
>
> Output of the "squidclient mgr:mem", "top" and "ps waux" commands would 
> be good.
>
> Those will show how Squid is using the memory it has, what processes are 
> using the most memory, and what processes are running. Most memory 
> issues can be triaged with that info.

Will do, thanks. I have a script that tries to "predict" when these problems 
are about to happen. It runs something like
timeout 30 squidclient mgr:info
and if it actually times out then it restarts both squid and c-icap.
So I'm afraid I might not get anything out of "squidclient mgr:mem", but I will 
run top -b -n 1 and ps waux.
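
In case it's useful to anyone, here is a minimal sketch of what that watchdog
could capture before restarting (the paths and init scripts are assumptions
from my setup, not a reference implementation):

#!/bin/sh
# If mgr:info hangs, save diagnostics first, then restart squid and c-icap.
if ! timeout 30 squidclient mgr:info > /dev/null 2>&1; then
    ts=$(date +%Y%m%d-%H%M%S)
    # mgr:mem may hang for the same reason, so time-limit it as well
    timeout 10 squidclient mgr:mem > /var/log/squid/mem-$ts.log 2>&1
    top -b -n 1 > /var/log/squid/top-$ts.log
    ps waux > /var/log/squid/ps-$ts.log
    /etc/init.d/c-icap restart
    /etc/init.d/squid restart
fi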

Thanks,

Vieri





[squid-users] kswapd0 and memory usage

2021-03-29 Thread Vieri
Hi,

I've been running squid & c-icap for years, and only recently have I had a 
severe system slowdown.

My kswapd0 process was at a constant 100% CPU usage level until I forced 
restarting of both squid and c-icap.

I've been using several Squid versions over the years, but the only differences
I know of between my previous setup, which worked, and the current setup, which
has "failed" once so far, are:

- upgraded from 5.0.4-20201125-r5fadc09ee to Version 5.0.5-20210223-r4af19cc24

- set cgroups for both squid and c-icap services with just one setting: 
cpu.shares 512

- upgraded to c-icap 0.5.8

Given the stressful situation I only had time to notice that kswapd0 was at 
100%, that top reported that all swap space was being used, and that the whole 
server was very sluggish. The additional problem is that the system is a router 
and uses TPROXY with squid sslbump so I don't think I can virtualize the web 
proxying services. Hence the use of cgroups to try to contain squid, c-icap and 
clamav. I have yet to define a cgroup for memory usage.
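
For the record, here is a rough sketch of what I have in mind for the memory
side, using the same cgroup-v1 layout as the cpu.shares setting above (the
group name and the 8 GB ceiling are just placeholders):

# create a memory cgroup for squid and cap it at 8 GB
mkdir -p /sys/fs/cgroup/memory/squid
echo $((8 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/squid/memory.limit_in_bytes
# move the running squid master process into the group
echo "$(cat /run/squid.pid)" > /sys/fs/cgroup/memory/squid/cgroup.procs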

Restarting Squid and c-icap alone (not clamd) immediately solved the kswapd0 
"gone wild" issue.
Mem usage went back to something like:

# free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       9.2Gi        21Gi        48Mi       1.0Gi        21Gi
Swap:           35Gi       1.7Gi        33Gi

I only have debug_options rotate=1 ALL,1 in my squid config file, and sifting 
through cache.log doesn't give me any clues.

If this were to happen again (not sure when or if) what should I try to search 
for?

Regards,

Vieri


[squid-users] Why some traffic is TCP_DENIED

2021-02-16 Thread Vieri
Hi,

I'm trying to understand why Squid denies access to some sites, eg:

[Tue Feb 16 10:15:36 2021].044      0 - TCP_DENIED/302 0 GET http://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt - HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].050     46 10.215.248.160 TCP_DENIED/403 3352 - 52.109.12.25:443 - HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].050      0 10.215.248.160 NONE_NONE/000 0 - error:transaction-end-before-headers - HIER_NONE/- -
[Tue Feb 16 10:15:36 2021].052    140 10.215.246.144 TCP_MISS/200 193311 GET https://outlook.office.com/mail/ - ORIGINAL_DST/52.97.168.210 text/html
[Tue Feb 16 10:15:36 2021].053     49 10.215.248.74 TCP_MISS/200 2037 GET https://puk1-collabhubrtc.officeapps.live.com/rtc2/signalr/negotiate? - ORIGINAL_DST/52.108.88.1 application/json
[Tue Feb 16 10:15:36 2021].057      0 10.215.247.159 NONE_NONE/000 0 - error:invalid-request - HIER_NONE/- -
[Tue Feb 16 10:15:36 2021].057      0 10.215.247.159 TCP_DENIED/403 3353 - 40.67.251.132:443 - HIER_NONE/- text/html
[Tue Feb 16 10:15:36 2021].057      0 10.215.247.159 NONE_NONE/000 0 - error:transaction-end-before-headers - HIER_NONE/- -


If I take the first line in the log and I open the URL from a client I use then 
the site opens as expected, and the corresponding Squid log is:

[Tue Feb 16 10:45:50 2021].546    628 10.215.111.210 TCP_MISS/200 2134 GET https://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt - ORIGINAL_DST/23.210.36.30 application/octet-stream
[Tue Feb 16 10:45:52 2021].668     49 10.215.111.210 NONE_NONE/000 0 CONNECT 216.58.215.138:443 - ORIGINAL_DST/216.58.215.138 -

In this log I see my host's IP addr. 10.215.111.210.
However, in the first log I do not see a source IP address. Why?

Other clients seem to be denied access with errors in the log such as 
"NONE_NONE/000"  followed by error:invalid-request or 
error:transaction-end-before-headers. How can I find out why I get "invalid 
requests"? Would a tcpdump on the server or client help? Or should I enable 
verbose debugging in Squid?
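
For instance, something along these lines on the Squid box (the interface name
and client address are placeholders) should show whether the denied requests
even contain a parseable HTTP request:

tcpdump -n -i eth0 -s 0 -w denied.pcap host 10.215.247.159

The resulting pcap could then be inspected in Wireshark alongside the
corresponding access.log entries.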

BTW this might be irrelevant but these messages seem to come up when accessing 
office 365 sites.

Thanks,

Vieri



[squid-users] c-icap, clamav and squid

2021-02-12 Thread Vieri
Hi,

I don't know whether this question should be asked here or on the c-icap or 
clamav lists.

I've had a c-icap/squid failure and noticed that it was because my tmpfs on 
/var/tmp was full (12 GB).

It was filled with files such as these:

# lsof +D /var/tmp/
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF       NODE NAME
c-icap     773 root   31u   REG   0,48     1204 2169779504 /var/tmp/CI_TMP_xqWE8B
c-icap    3080 root   29u   REG   0,48     1204 2169784571 /var/tmp/CI_TMP_pE6B76

The fact that these files build up and are not deleted might be a side-effect 
of something that's failing.

Do you think that the c-icap process is the only one responsible for cleaning 
these files up?
Or is there some Squid configuration option or a cache log event I should check 
regarding this?
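
As a stopgap until the root cause is found, I'm considering a cron job along
these lines (the one-day age threshold is arbitrary):

# purge c-icap temp files that have not been touched for a day
find /var/tmp -name 'CI_TMP_*' -mmin +1440 -delete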

Thanks,

Vieri



Re: [squid-users] Squid 5 service stops after assertion failure

2021-01-25 Thread Vieri

On Sunday, January 24, 2021, 11:08:49 PM GMT+1, Alex Rousskov wrote:

> Filing a bug report with Squid Bugzilla may increase chances of this problem 
> getting fixed.

Done here:

https://bugs.squid-cache.org/show_bug.cgi?id=5100

Thanks,

Vieri


Re: [squid-users] Squid 5 service stops after assertion failure

2021-01-25 Thread Vieri


On Sunday, January 24, 2021, 11:03:19 PM GMT+1, Amos Jeffries wrote:

>> The external script "bllookup" is probably responsible for bad output,
>
> That is a certainty.
>
>> but maybe Squid could handle it without crashing.
> 
> As you noticed, Squid halts service only after the helper fails 10 
> multiple times in a row. Before that Squid is restarting the helper to 
> see if it was a temporary issue.

OK, the external script is definitely guilty. However, it is buggy and triggers
the Squid assertion failure only in specific circumstances, so the problem is
transaction-specific. In my use case I would definitely prefer that only those
few transactions were "killed" and that the proxy service as a whole kept
working.
Of course, I would still need to identify these cases and fix them, but in the
meantime I would not get a general crash.
On the other hand, a general failure does force me to look into the issue with
greater celerity. ;-)

Thanks,

Vieri



[squid-users] Squid 5 service stops after assertion failure

2021-01-24 Thread Vieri
Hi,

My Squid web proxy crashed as shown in this log:

2021/01/24 13:18:13 kid1| helperHandleRead: unexpected reply on channel 0 from bllookup #Hlpr21 '43 ERR message=[...]
    current master transaction: master65
2021/01/24 13:18:13 kid1| assertion failed: helper.cc:1066: "skip == 0 && eom == NULL"
    current master transaction: master65
2021/01/24 13:18:13 kid1| Set Current Directory to /var/cache/squid
2021/01/24 13:18:13 kid1| Starting Squid Cache version 5.0.4-20201125-r5fadc09ee for x86_64-pc-linux-gnu...
2021/01/24 13:18:13 kid1| Service Name: squid
[...]
REPEATS (assertion failure & squid restart)
[...]
2021/01/24 13:18:27 kid1| helperHandleRead: unexpected reply on channel 0 from bllookup #Hlpr21 '2 ERR message=[...]
    current master transaction: master76
2021/01/24 13:18:27 kid1| assertion failed: helper.cc:1066: "skip == 0 && eom == NULL"
    current master transaction: master76
2021/01/24 13:18:27| Removing PID file (/run/squid.pid)
2021/01/24 13:18:34| Pinger exiting.
2021/01/24 13:18:37| Pinger exiting.

After the assertion failure Squid tries to restart a few times (assertion 
failures seen again) and finally exits.
A manual restart works, but I don't know for how long.

The external script "bllookup" is probably responsible for bad output, but 
maybe Squid could handle it without crashing.
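
For reference, the reply format the helper has to respect is one line per
lookup, starting with the channel-ID. A minimal skeleton (a sketch only, not
my actual script) that always fails closed without ever producing a malformed
reply would be:

#!/bin/sh
# external_acl_type helper skeleton: read "channel-ID arguments...",
# always answer with a single well-formed line for that channel.
while read -r id rest; do
    echo "$id ERR message=\"lookup failed\""
done

Wrapping the real lookup logic so that any internal failure still ends in a
clean one-line reply might at least keep the damage to single transactions.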

Regards,

Vieri


Re: [squid-users] websockets through Squid

2020-11-19 Thread Vieri

On Wednesday, November 4, 2020, 3:27:25 AM GMT+1, Alex Rousskov wrote:
>   https://bugs.squid-cache.org/show_bug.cgi?id=5084

Hi,

I added a comment to that bug report.
I cannot reproduce the problem anymore, at least not with the latest version of 
Squid 5.

Thanks,

Vieri


[squid-users] squid restart

2020-11-02 Thread Vieri
Just in case anyone else has this problem, or if anyone would like to comment 
on this, here's the solution I've found.

Running '/etc/init.d/squid restart' from cron (setting it up in crontab) does 
not honor ulimits.

Configuring /etc/crontab with something like 'bash -l /etc/init.d/squid 
restart' does not work either (it doesn't seem to run at all).

However, creating a custom.sh script somewhere which calls /etc/init.d/squid 
restart, and then configuring crontab with 'bash -l -c /somewhere/custom.sh' 
actually works. I now see:

# squidclient mgr:info
[...]
File descriptor usage for squid:
    Maximum number of file descriptors:   65535
    Largest file desc currently in use:   1583
    Number of file desc currently in use: 1576
    Files queued for open:   0
    Available number of file descriptors: 63959
    Reserved number of file descriptors:   100
    Store Disk files open:   0

I'm not sure why, but it works.
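
For completeness, this is roughly what the wrapper amounts to (the explicit
ulimit line is my own extra safety net, not something the init script requires):

#!/bin/sh
# /somewhere/custom.sh -- invoked from crontab as: bash -l -c /somewhere/custom.sh
ulimit -n 65536
exec /etc/init.d/squid restart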

Vieri


Re: [squid-users] squid restart

2020-11-02 Thread Vieri


On Saturday, October 31, 2020, 4:08:23 PM GMT+1, Amos Jeffries wrote:

>> However, I set the following directive in squid.conf:
>> 
>> max_filedescriptors 65536
>> 
> Are you using systemd, SysV or another init ?

I'm using SysV on Gentoo Linux.

> It doesn't seem to be honored here unless I stop and restart the squid 
> service again (/etc/init.d/squid restart from command line):
> 
> File descriptor usage for squid:
>      Maximum number of file descriptors:   65535
> 
> It seems that if I run the same command (/etc/init.d/squid restart) from 
> crontab, that ulimit is not honored. I guess that's the root cause of my 
> issue because I am asking cron to restart Squid once daily. I'll try not to, 
> but I was hoping to see if there was a reliable way to fully restart the 
> Squid process.
> 
> Vieri

> 

> The init system restart command is the preferred one - it handles any
> system details that need updating. Alternatively, "squid -k restart" can
> be used.

The SysV init script works fine when run from command line or at boot time (and 
probably from a custom inittab script -- cannot confirm it yet). The problem 
shows up when running it from cron (I have cronie-1.5.4).
I'll take a look at the '-k restart' alternative.

Thanks,

Vieri


[squid-users] squid restart

2020-10-31 Thread Vieri
  15.07 KB
    Requests given to unlinkd:  4657
Median Service Times (seconds)  5 min    60 min:
    HTTP Requests (All):   0.05046  0.05046
    Cache Misses:  0.06286  0.06286
    Cache Hits:    0.0  0.0
    Near Hits: 0.15048  0.15048
    Not-Modified Replies:  0.0  0.0
    DNS Lookups:   0.0  0.0
    ICP Queries:   0.0  0.0
Resource usage for squid:
    UP Time:    108.639 seconds
    CPU Time:   10.588 seconds
    CPU Usage:  9.75%
    CPU Usage, 5 minute avg:    12.90%
    CPU Usage, 60 minute avg:   12.90%
    Maximum Resident Size: 462736 KB
    Page faults with physical i/o: 0
Memory accounted for:
    Total accounted:    37879 KB
    memPoolAlloc calls:   1256976
    memPoolFree calls:    1307898
File descriptor usage for squid:
    Maximum number of file descriptors:   4096
    Largest file desc currently in use:    567
    Number of file desc currently in use:  559
    Files queued for open:   0
    Available number of file descriptors: 3537
    Reserved number of file descriptors:   100
    Store Disk files open:   0
Internal Data Structures:
   997 StoreEntries
   997 StoreEntries with MemObjects
   683 Hot Object Cache Items
   683 on-disk objects

This did not happen with Squid 4, or maybe it wasn't as obvious.


I guess the reason could be this:

    Maximum number of file descriptors:   4096
    Largest file desc currently in use:   4009
    Number of file desc currently in use: 3997

However, I set the following directive in squid.conf:

max_filedescriptors 65536

It doesn't seem to be honored here unless I stop and restart the squid service 
again (/etc/init.d/squid restart from command line):

File descriptor usage for squid:
    Maximum number of file descriptors:   65535

It seems that if I run the same command (/etc/init.d/squid restart) from 
crontab, that ulimit is not honored. I guess that's the root cause of my issue 
because I am asking cron to restart Squid once daily. I'll try not to, but I 
was hoping to see if there was a reliable way to fully restart the Squid 
process.

Vieri





[squid-users] sslbump https intercepted or tproxy

2020-10-19 Thread Vieri
Hi,

It's unclear to me if I can use TPROXY for HTTPS traffic.

If I divert traffic and use tproxy in the Linux kernel and then set this in 
squid:

https_port 3130 tproxy ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem

it seems to be working fine, just as if I were to REDIRECT https traffic and 
then use this in Squid:

https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem

So, does anyone know if it's not recommended / not supported to use tproxy with 
https traffic?
I'm asking because I don't see any issues with tproxy, with the added advantage 
of being able to route on the gateway per source IP addr. (in intercepted mode, 
the source is always Squid).
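
For reference, the kernel-side diversion I'm using is essentially the standard
recipe from the Squid wiki (the mark, table number and port are of course
specific to my setup):

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3130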

Are there any reasons for which one would not use TPROXY with HTTPS?

Vieri


Re: [squid-users] websockets through Squid

2020-10-18 Thread Vieri

On Saturday, October 17, 2020, 10:36:47 PM GMT+2, Alex Rousskov wrote:

> or due to some TLS error.
> I filed bug #5084 

 Hi again,

Thanks for opening a bug report.

I don't want to add anything there myself because I wouldn't want to confuse 
whoever might take this issue into account, but I'd like to comment on this 
list that I've captured the traffic between Squid and the destination server.
It's here:

https://drive.google.com/file/d/1WS7Y62Fng5ggXryzKGW1JOsJ16cyR0mg/view?usp=sharing

I can see a client hello, Server Hello Done, Cipher Spec, etc, but then it 
starts over and over again.
So, could it be a TLS issue as you hinted?

I also captured the client console regarding the wss messages (Firefox).
It won't reveal much, but here it is anyway:

https://drive.google.com/file/d/1u4uXW0TCTwClE2kt2nbJSGt5VLdKC03t/view?usp=sharing
NB: the destination server is not the same one as in the packet trace, but 
that's what the client gets each time (it keeps showing '101 Switching 
Protocols' over and over).

Please let me know if I should add something to the bug report, or if you see 
anything of interest in the data I've sent.

Thanks,

Vieri



Re: [squid-users] websockets through Squid

2020-10-16 Thread Vieri

On Friday, October 16, 2020, 4:48:55 PM GMT+2, Alex Rousskov wrote:

> tcp_outgoing_address.


OK, I fixed the "local" address issue, but I'm still seeing the same behavior.

I pinpointed one particular request that's failing:

2020/10/16 16:56:37.250 kid1| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET https://ed1lncb62601.webex.com/direct?type=websocket=binary=1602860196950=G7609603-81A2-4B8D-A1C0-C379CC9B12G9=PUB_IPv4_ADDR_2 is ALLOWED; last ACL checked: all

It is in this log:

https://drive.google.com/file/d/1OrB42Cvom2PNmV-dnfLVrnMY5IhJkcpS/view?usp=sharing

I see a lot of '101 Switching Protocols' and references to upgrade to 
websockets, but I'm not sure where it is actually failing.

I don't know how to narrow this down further, but if someone could give it
another peek I'd be most grateful.

Vieri



[squid-users] websockets through Squid

2020-10-16 Thread Vieri
BTW how does Squid decide which IP address to use for "local" in the line below?

sendRequest: HTTP Server conn* local=

I tried specifying a bind address in http_port and https_port as well as 
routing traffic from that address out through just one ppp interface, but that 
doesn't seem to change the way "local" is assigned an address.

Is there a way to keep "local" always the same?

Vieri


[squid-users] websockets through Squid

2020-10-16 Thread Vieri
Hi,

I think I found something in the cache.log I posted before.

sendRequest: HTTP Server conn* local=PUB_IPv4_ADDR_3
...
sendRequest: HTTP Server conn* local=PUB_IPv4_ADDR_2

It seems that Squid sometimes connects to the remote HTTP server with either 
one of the available addresses on the Squid box (eg. PUB_IPv4_ADDR_2, 
PUB_IPv4_ADDR_3, etc). These addresses are on ppp interfaces. In fact, I 
noticed that if the Firefox client shows this error message in its console as 
in my previous post:

The connection to wss://ed1lncb62801.webex.com/direct?type=websocket=binary=1602830016480=5659FGE6-DF29-47A7-859A-G4D5FDC937A2=PUB_IPv4_ADDR_2 was interrupted while the page was loading.

then I see a corresponding 'sendRequest: HTTP Server conn* 
local=PUB_IPv4_ADDR_3' when trying to connect to the same origin. So I'm 
deducing that the remote websocket server is expecting a client connection from 
PUB_IPv4_ADDR_2 when in fact Squid is trying to connect from PUB_IPv4_ADDR_3 -- 
hence the "interruption" message.

My test Squid instance is running on a multi-ISP router, so I guess I have to 
figure out how to either force connections out one interface only for the Squid 
cache or tell Squid to only bind to one interface.

It's only a wild guess though.
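
If I've read the documentation correctly, pinning the upstream side to a single
known source address should be possible with something like this (the address
is a placeholder):

# squid.conf: force all outgoing server connections from one address
tcp_outgoing_address 203.0.113.10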

Vieri



Re: [squid-users] websockets through Squid

2020-10-16 Thread Vieri

On Thursday, October 15, 2020, 5:28:03 PM GMT+2, Alex Rousskov wrote:

>> In other words, I do not need to be specific with
>> 'http_upgrade_request_protocols WebSocket allow all' unless I want
>> to, right?
>
> Just in case somebody else starts copy-pasting the above rule into their
> configurations: The standard (RFC 6455) WebSocket protocol name in HTTP
> Upgrade requests is "websocket". Squid uses case-sensitive comparison
> for those names so you should use "websocket" in squid.conf.

OK, good to know because:

squid-5.0.4-20200825-rf4ade365f/src/cf.data.pre contains:
    Usage: http_upgrade_request_protocols  allow|deny [!]acl ...

    The required "protocol" parameter is either an all-caps word OTHER or an
    explicit protocol name (e.g. "WebSocket") optionally followed by a slash
    and a version token (e.g. "HTTP/3"). Explicit protocol names and
    versions are case sensitive.

That's why I used "WebSocket" instead of "websocket" in my example. To avoid
confusion, cf.data.pre could be updated to make this clearer.
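
So, per RFC 6455 and Squid's case-sensitive comparison, the explicit form
should presumably be:

http_upgrade_request_protocols websocket allow all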


> The important part here is the existence of those extra transactions.
> They may be related to SslBump if you are bumbing this traffic, but then
> I would expect a slightly different access.log composition.

Hmm, I'm supposed to be sslbumping, yes. I can share my full squid config & 
iptables redirection entries if you wish.

> https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction

 I enabled debugging on a test system where I was the only client (one Firefox 
instance).

The access log is here:

https://drive.google.com/file/d/1jryX5BW4yxLTSBe0QDavPSiKLBpOvtnV/view?usp=sharing

The only odd thing I see is a few ABORTED entries, but they are all WOFF fonts,
which should be unimportant, except for
https://join-test.webex.com/mw3300/mywebex/header.do, which is only a TCP
refresh "abort".

The overwhelming cache log is here (I've sed'ed a few strings for privacy 
reasons):

https://drive.google.com/file/d/1QYRr-0F-DGnCZtyuuAw8RsEgcHICN_0c/view?usp=sharing

I can see the upgrade messages are parsed:

HttpHeader.cc(1548) parse: parsed HttpHeaderEntry: 'Upgrade: WebSocket'

I suppose that adding the "Upgrade[66]" entry is as expected.

Then, I get lost. I can see that Squid is trying to open ed1lncb62801.webex.com
with https, but it is unclear to me why the client complains that the
connection to the wss:// site is being interrupted:

The connection to wss://ed1lncb62801.webex.com/direct?type=websocket=binary=1602830016480=5659FGE6-DF29-47A7-859A-G4D5FDC937A2=PUB_IPv4_ADDR_2 was interrupted while the page was loading.

Thanks for all the help you can give me.

Vieri



Re: [squid-users] websockets through Squid

2020-10-15 Thread Vieri
On Tuesday, October 13, 2020, 6:14:18 PM GMT+2, Alex Rousskov wrote:

> You should probably follow up with Gentoo folks responsible for this Squid 
> customization.

Squid 5 now builds and installs perfectly on Gentoo Linux with a few custom 
changes to the distro's package installation script. I hope the devs will 
include these changes so Squid 5 can be readily available to everyone.
BTW it "makes" in parallel fine with -jx where x > 1, so no issues there either.

So, coming back to the original post: websockets.

I added this to Squid 5:

http_upgrade_request_protocols OTHER allow all

Am I right if I state that this should allow any protocol, not just WebSockets?
In other words, I do not need to be specific with 
'http_upgrade_request_protocols WebSocket allow all' unless I want to, right?

Unfortunately, after reloading Squid 5 the client browser still states the same:

The connection to wss://ed1lncb65702.webex.com/direct?type=websocket=binary=1602769907574=9E73C14G-1580-43B4-B8D4-91453FCF1939=MY_IP_ADDR was interrupted while the page was loading.

And in access.log I can see this:

[Thu Oct 15 15:52:27 2020].411  29846 10.215.144.48 TCP_TUNNEL/101 0 GET https://ed1lncb65702.webex.com/direct? - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:27 2020].831    125 10.215.144.48 NONE_NONE/000 0 CONNECT 62.109.225.174:443 - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:28 2020].786     11 10.215.111.210 NONE_NONE_ABORTED/000 0 CONNECT 44.233.111.149:443 - HIER_NONE/- -
[Thu Oct 15 15:52:37 2020].414  29870 10.215.144.48 TCP_TUNNEL/101 0 GET https://ed1lncb65702.webex.com/direct? - ORIGINAL_DST/62.109.225.174 -
[Thu Oct 15 15:52:37 2020].919    107 10.215.144.48 NONE_NONE/000 0 CONNECT 62.109.225.174:443 - ORIGINAL_DST/62.109.225.174 -

What does NONE_NONE/000 mean?

Where can I go from here?
What can I try to debug this further?

Vieri


Re: [squid-users] websockets through Squid

2020-10-13 Thread Vieri

On Tuesday, October 13, 2020, 3:55:56 PM GMT+2, Alex Rousskov wrote:

> The beginning of the above log appears to show some unofficial bootstrapping 
> steps.


Yes, I was looking into this today, and I saw that the actual difference
between a manual build and a Gentoo Linux build lies in the following:

1) the build fails as mentioned earlier in this thread when running 
Gentoo-specific "configure" scripts. Bootstrapping makes no real difference.

econf: updating squid-5.0.4-20200825-rf4ade365f/cfgaux/config.sub with /usr/share/gnuconfig/config.sub
econf: updating squid-5.0.4-20200825-rf4ade365f/cfgaux/config.guess with /usr/share/gnuconfig/config.guess
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --disable-dependency-tracking --disable-silent-rules --docdir=/usr/share/doc/squid-5.0.4 --htmldir=/usr/share/doc/squid-5.0.4/html --with-sysroot=/ --libdir=/usr/lib64

Correct me if I'm wrong, but I don't see anything wrong with the third line and 
the parameters passed to configure (unless disable-dependency-tracking could 
have some side-effects).
So I guess the problem might be with the first and second lines where some 
config scripts seem to be replaced.
The timestamps in /usr/share/gnuconfig/config.{sub,guess} are more recent than 
the ones distributed in the Squid tarball.

2) the build succeeds even when using the Gentoo build environment just as long 
as I do not run the Gentoo-specific econf (configure) script but "./configure" 
instead.

I guess I will need to bring this up on the Gentoo forum to see what's going 
on. I am not instructing the build system to "patch" cfgaux so I guess "econf" 
automatically detects something in the squid tarball that makes it patch the 
config.* files.

Thanks for your time.

Vieri


[squid-users] websockets through Squid

2020-10-12 Thread Vieri
On a Gentoo Linux system, I'm compiling the tarball taken from
http://www.squid-cache.org/Versions/v5/squid-5.0.4.tar.gz.

The build log (failed) is here (notice the call to make -j1):

https://drive.google.com/file/d/1no0uV3Ti1ILZavAaiOyFIY9W0eLRv87q/view?usp=sharing

If I build from git f4ade36 all's well:

https://drive.google.com/file/d/1y-3wlDT_OrwSp7epvDq63xpkYv8gu9Pq/view?usp=sharing

So now I'm just going to have to spot the difference.

Thanks,

Vieri


Re: [squid-users] websockets through Squid

2020-10-11 Thread Vieri

Just a quick test and question.

If I manually create the tests subdirs and run make then I get an error such as:

/bin/sh ../../libtool  --tag=CXX   --mode=link x86_64-pc-linux-gnu-g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Woverloaded-virtual -pipe -D_REENTRANT -O2 -pipe  -Wl,-O1 -Wl,--as-needed -o libdiskio.la  DiskIOModule.lo ReadRequest.lo WriteRequest.lo libtests.la AIO/libAIO.la -lrt Blocking/libBlocking.la DiskDaemon/libDiskDaemon.la DiskThreads/libDiskThreads.la -lpthread IpcIo/libIpcIo.la Mmapped/libMmapped.la
libtool:   error: cannot find the library 'libtests.la' or unhandled argument 'libtests.la'
make[4]: *** [Makefile:868: libdiskio.la] Error 1
make[4]: Leaving directory '/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4/src/DiskIO'


This may be a dumb question, but where are the build instructions for 
libtests.la?


[squid-users] websockets through Squid

2020-10-10 Thread Vieri
I'm also getting this other file that can't be copied:

cp ../../src/tests/stub_debug.cc tests/stub_debug.cc
cp: cannot create regular file 'tests/stub_debug.cc': No such file or directory
make[3]: *** [Makefile:1490: tests/stub_debug.cc] Error 1

Tried "make" and "make -j1", but the error message is the same.

Are you using a specific version of automake?




Re: [squid-users] websockets through Squid

2020-10-10 Thread Vieri
On Friday, October 9, 2020, 3:28:01 AM GMT+2, Amos Jeffries wrote:

 > I advise explicitly using -j1 for the workaround build.


Well, I'm running with -j1, but I'm still getting the same error message.

Here's a snippet of the build log:

make -j1
Making all in compat
make[1]: Entering directory '/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/compat'
/bin/sh ../libtool  --tag=CXX   --mode=compile x86_64-pc-linux-gnu-g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\" -DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\" -DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\" -I.. -I../include -I../lib -I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Woverloaded-virtual -pipe -D_REENTRANT -O2 -pipe -c -o assert.lo assert.cc

It finally ends with:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory

Would you like to review the full build log?

Regards,

Vieri


[squid-users] websockets through Squid

2020-10-08 Thread Vieri
> As a workaround, try sequential build ("make" instead of "make -j...")

I removed -j, but I'm still getting a similar error:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory
make[3]: *** [Makefile:1402: tests/stub_fd.cc] Error 1
make[3]: Leaving directory '/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src/icmp'
make[2]: *** [Makefile:6667: all-recursive] Error 1
make[2]: Leaving directory '/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src'
make[1]: *** [Makefile:5662: all] Error 2
make[1]: Leaving directory '/var/tmp/portage/net-proxy/squid-5.0.4/work/squid-5.0.4-20200825-rf4ade365f/src'
make: *** [Makefile:591: all-recursive] Error 1

Thanks for the suggestion. I'll try a few other things. Which version of 
automake do you use?


[squid-users] websockets through Squid

2020-10-08 Thread Vieri
OK, so I'm now trying to compile Squid 5 instead of backporting to V 4, but I'm 
getting this silly error:

cp ../../src/tests/stub_fd.cc tests/stub_fd.cc
cp: cannot create regular file 'tests/stub_fd.cc': No such file or directory
make[3]: *** [Makefile:1452: tests/stub_fd.cc] Error 1

I guess it may be because the script is not in the right subdir.

Is this a known issue?
Can I simply disable building the tests?


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
> To allow WebSocket tunnels, you need http_upgrade_request_protocols available 
> since v5.0.4

Thanks for the info.
My distro does not include v. 5 yet as it's still beta, although I could try 
compiling it.

Just a thought though. What would the easiest way be to allow websockets 
through in v. 4? That is, for trusted domains, allow a direct connection maybe?

eg. 
acl direct_dst_domains dstdomain "/opt/custom/proxy-settings/allowed.direct"
# or:
# acl direct_dst_domains ssl::server_name_regex 
"/opt/custom/proxy-settings/allowed.direct"
always_direct allow direct_dst_domains

Thanks

Vieri


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
Hi,

Using Google Chrome instead of Firefox gives me the same result:

Error during WebSocket handshake: Unexpected response code: 200

I'm not sure what to look for in cache.log.


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
I also tried:

on_unsupported_protocol tunnel all

on Squid v. 4.13.

I don't see any denials in the access log.
The only thing I see regarding the URL I mentioned earlier is:

TCP_MISS/200 673 GET https://ed1lncb62202.webex.com/direct? - ORIGINAL_DST/62.109.225.31 text/html

It is easy to reproduce by going to the webex test site:

https://www.webex.com/test-meeting.html


[squid-users] websockets through Squid

2020-10-07 Thread Vieri
Hi,

I'd like to allow websockets from specific domains through Squid in intercept 
sslbump mode.

One of the clients reports:

Firefox can’t establish a connection to the server at wss://ed1lncb62202.webex.com/direct?type=websocket=binary=1602057495268=C99EG7B6-G550-43CG-AD72-7EA5F2CA80B0=X.X.X.X.

This is what I have in my squid configuration:

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all

I am obviously not using on_unsupported_protocol properly.

Any suggestions?

Regards,

Vieri



[squid-users] ACL matches when it shouldn't

2020-10-02 Thread Vieri

Regarding the use of an external ACL I quickly implemented a perl script that 
"does the job", but it seems to be somewhat sluggish.

This is how it's configured in squid.conf:
external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 children-startup=10 children-idle=3 concurrency=8 %PROTO %DST %PORT %PATH /opt/custom/scripts/squid/ext_txt_blwl_acl.pl --categories=adv,aggressive,alcohol,anonvpn,automobile_bikes,automobile_boats,automobile_cars,automobile_planes,chat,costtraps,dating,drugs,dynamic,finance_insurance,finance_moneylending,finance_other,finance_realestate,finance_trading,fortunetelling,forum,gamble,hacking,hobby_cooking,hobby_games-misc,hobby_games-online,hobby_gardening,hobby_pets,homestyle,ibs,imagehosting,isp,jobsearch,military,models,movies,music,podcasts,politics,porn,radiotv,recreation_humor,recreation_martialarts,recreation_restaurants,recreation_sports,recreation_travel,recreation_wellness,redirector,religion,remotecontrol,ringtones,science_astronomy,science_chemistry,sex_education,sex_lingerie,shopping,socialnet,spyware,tracker,updatesites,urlshortener,violence,warez,weapons,webphone,webradio,webtv

I'd like to avoid the use of a DB if possible, but maybe someone here has an 
idea to share on flat file text searches.

Currently the dir structure of my blacklists is:

topdir/
  category1/ ... categoryN/
    domains  urls

So basically one example file to search in is topdir/category8/urls, etc.

The helper perl script contains this code to decide whether to block access or 
not:

foreach ( @categories )
{
    # look for the full URL (host + path) in this category's "urls" list
    chomp($s_urls = qx{grep -nwx '$uri_dst$uri_path' $cats_where/$_/urls | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"URL ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"URL ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }

    # fall back to looking for just the host in the "domains" list
    chomp($s_urls = qx{grep -nwx '$uri_dst' $cats_where/$_/domains | head -n 1 | cut -f1 -d:});

    if (length($s_urls) > 0) {
        if ($whitelist == 0) {
            $status = $cid." ERR message=\"Domain ".$uri_dst." in BL ".$_." (line ".$s_urls.")\"";
        } else {
            $status = $cid." ERR message=\"Domain ".$uri_dst." not in WL ".$_." (line ".$s_urls.")\"";
        }
        next;
    }
}

There are currently 66 "categories" with around 50MB of text data in all.
So that's a lot to go through each time there's an HTTP request.
Apart from placing these blacklists on a ramdisk (currently on an M.2 SSD disk 
so I'm not sure I'll notice anything) what else can I try?
Should I reindex the lists and group them all alphabetically?
For instance should I process the lists in order to generate a dir structure as 
follows?

topdir/
  a/ b/ c/ d/ e/ f/ ... x/ y/ z/ 0/ 1/ 2/ 3/ ... 7/ 8/ 9/
    domains  urls

An example for a client requesting https://www.google.com/ would lead to 
searching only 2 files:
topdir/w/domains
topdir/w/urls

An example for a client requesting https://01.whatever.com/x would also lead to 
searching only 2 files:
topdir/0/domains
topdir/0/urls

An example for a client requesting https://8.8.8.8/xyz would also lead to 
searching only 2 files:
topdir/8/domains
topdir/8/urls

Any ideas or links to scripts that already prepare lists for this?
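
In case it helps the discussion, here is a rough one-off sketch of the
re-indexing I have in mind (paths are placeholders; note it collapses the
per-category information, so a category prefix would have to be kept on each
line if the helper still needs to report which list matched):

#!/bin/sh
# Bucket all category lists by first character into indexed/<char>/{domains,urls}.
src=topdir; dst=indexed
for f in domains urls; do
    for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z; do
        mkdir -p "$dst/$c"
        # anchored, case-insensitive first-character match; sorted output
        cat "$src"/*/"$f" | grep -i "^$c" | sort -u > "$dst/$c/$f"
    done
done

Once the files are sorted, look(1) could replace grep for a binary search
instead of a full scan.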

Thanks,

Vieri


[squid-users] ACL matches when it shouldn't

2020-10-01 Thread Vieri
Thank you very much.
I will try to set up an external ACL so I don't have to worry about regular 
expressions.

Vieri


[squid-users] ACL matches when it shouldn't

2020-09-29 Thread Vieri
> None of the file entries are anchored regex. So any one of them could match.

>> Can anyone please let me know if there's a match, or how to enable debugging 
>>  to see which record in this ACL is actually triggering the denial?
>
> To do that we will need to see the complete and exact URL which is being 
> blocked incorrectly.

One of them is https://www.google.com/.

> NP: a large number of that files entries can be far more efficiently blocked 
> using the dstdomain ACL type. For example:
>
>  acl blacklist dstdomain .appspot.com

Agreed. However, this file is generated by an external process I don't control 
(SOC). It's like a "threat feed" I need to load in Squid.
The easiest way for me would be to tell Squid that it's just a list of exact 
URLs, not a list of regexps. I understand that's not possible.

This list comes with entries such as:

https://domain.org/?something={whatever}=(this)

So, to keep Squid from complaining, I process the list a little before feeding
it to Squid, and the above line becomes:

https://domain.org/\?something=\{whatever}=\(this)

You mention anchoring them... So now I adjusted the processing and the above 
becomes:

^https://domain.org/\?something=\{whatever}=\(this)$

I'm still getting the same denial when a client tries to access 
https://www.google.com/.

This is what I can see in cache.log:

client_side_request.cc(751) clientAccessCheckDone: The request GET https://www.google.com/ is DENIED; last ACL checked: bad_dst_urls

I'm also seeing other denials such as:

client_side_request.cc(751) clientAccessCheckDone: The request GET http://www.microsoft.com/pki/certs/MicRooCerAut2011_2011_03_22.crt is DENIED; last ACL checked: bad_dst_urls

If I grep http://www.microsoft.com/pki/certs in the ACL file I get no results 
at all.
That's why I'm puzzled.

So here's the new anchored regex file in case you have the chance to test it 
and reproduce the issue:

https://drive.google.com/file/d/1ZUP9eRAqLzMG162xHfYRV9vx_47kWuXs/view?usp=sharing

Squid doesn't complain about syntax errors so I'm assuming the ACL is as 
expected.

Thanks,

Vieri


[squid-users] ACL matches when it shouldn't

2020-09-29 Thread Vieri
Hi,

I have a url_regex ACL loaded with this file:

https://drive.google.com/file/d/1C5aZqPfMD3qlVP8zvm67c9ZnXUfz-cEW/view?usp=sharing

Then I have an access denial like so:

http_access deny bad_dst_urls

The problem is that I am not expecting to block, e.g., https://www.google.com,
but I am.
I know it's this ACL because if I remove the http_access deny line above, the
browser can access the site just fine.

I've been looking through this file for possible matches for google.com, but
there shouldn't be any.

Can anyone please let me know if there's a match, or how to enable debugging
to see which record in this ACL is actually triggering the denial?

I'm trying with:
debug_options rotate=1 ALL,1 85,2 88,2

Then I grep the log for bad_dst_urls and DENIED, but I can't seem to find a 
clear match.

Regards,

Vieri


[squid-users] acl for urls without regex

2020-09-29 Thread Vieri
Hi,

Is it possible to create an ACL from a text file containing URLs without 
treating them as regular expressions?
Otherwise, I get errors of this kind:

ERROR: invalid regular expression: 'https://whatever.net/auth_hotmail/?user={email}email={email}': Invalid content of \{\}
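
Since the list is really a set of literal URLs, one option might be to escape
every regex metacharacter and anchor each line before loading the file
(filenames here are placeholders):

# turn a list of literal URLs into safe, anchored regexes for url_regex
sed -e 's/[][.*+?(){}|^$\\]/\\&/g' -e 's/.*/^&$/' feed.txt > bad_dst_urls.acl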

Regards,

Vieri


Re: [squid-users] Cannot access web servers with a specific browser

2020-09-15 Thread Vieri

On Monday, September 14, 2020, 9:22:52 PM GMT+2, Alex Rousskov wrote:


>> I have squid-4.12.
>
> .. which means that the answer to my second question is "no". You need
> to upgrade to Squid v4.13 (for several reasons).

As simple as that.
Thank you very much. I can confirm that fixed the issue.

Vieri


Re: [squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri

On Monday, September 14, 2020, 6:01:43 PM GMT+2, Alex Rousskov wrote:


>> I get this when trying to access a web page with a specific browser (Google 
>> Chrome).
>
> What is your Squid version? Does it have a fix for GREASE support as
> detailed in https://github.com/squid-cache/squid/pull/663 ?

I have squid-4.12.



Re: [squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri

On Monday, September 14, 2020, 4:00:30 PM GMT+2, Walter H. wrote:


>> So what does NONE_ABORTED mean and what should I search for to fix this so 
>> the client can use Chrome?
>>
> What about Microsoft Edge?

The client is Windows 7, so no Edge.
So I got hold of a Windows 10 client and tried Edge there. I got the same 
NONE_ABORTED issue while every other non-chromium browser works fine.

> as I see you don't do SSL-bump,

I am. I could send the whole config here. I also set up an explicit proxy, but 
it seems I'm having issues with kerberos. As a side question, how can one test 
negotiate_kerberos_auth on the command line? I run:
# /usr/libexec/squid/negotiate_kerberos_auth -s HTTP/fqdn@DOMAIN
WRITE_SOMETHING
BH Invalid request

What is the format/syntax of WRITE_SOMETHING?
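
As far as I can tell, the helper expects Squid's negotiate protocol: "YR
<base64-encoded SPNEGO token>" to start an exchange and "KK <token>" to
continue one, with a real token having to come from a client (e.g. from a
captured Proxy-Authorization: Negotiate header). A garbage token at least
exercises the input format:

printf 'YR %s\n' "$(printf 'not-a-real-token' | base64)" | \
  /usr/libexec/squid/negotiate_kerberos_auth -d -s HTTP/fqdn@DOMAIN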

I'd like to try the explicit proxy instead of ssl-bump to see if there's a
difference.
Still, the Firefox and Chrome clients are in the same conditions and only one 
is failing.

> could it be that the clients (Chrome) capability of useable ciphersuites 
> may not confirm to the ones offered by the server; the reason for 
> 'NONE_ABORTED'?

If I let the clients by-pass the Squid proxy and connect directly to the 
servers the web pages are properly accessed -- no issues.

Thanks,

Vieri


[squid-users] Cannot access web servers with a specific browser

2020-09-14 Thread Vieri
Hi,

Before digging into the whole squid configuration, I'd like to know what the 
following line means:

NONE_ABORTED/200 0 CONNECT 216.58.211.36:443 - HIER_NONE/- -

I get this when trying to access a web page with a specific browser (Google 
Chrome).

However, from the exact same client host, any other browser works fine (IE, 
Firefox) and I get this in the cache log:

NONE/200 0 CONNECT 216.58.211.36:443 - ORIGINAL_DST/216.58.211.36 -

along with many other log messages that follow.

So what does NONE_ABORTED mean and what should I search for to fix this so the 
client can use Chrome?

Thanks,

Vieri



Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-30 Thread Vieri
On Tuesday, June 30, 2020, 1:41:57 PM GMT+2, Eliezer Croitoru wrote:

> ^(w[0-9]+|[a-z]+\.)?web\.whatsapp\.com$

Yes, it does. I should have seen that... Thanks for your help!

Vieri


Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-30 Thread Vieri
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_mimetypes restricted_requested_mimetypes_1
http_access deny limited_requested_mimetypes_1
http_reply_access deny limited_replied_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_mimetypes limited_requested_mimetypes_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_mimetypes limited_replied_mimetypes_1
http_access deny !privileged_src_ips bad_dst_domains
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_dst_domains bad_dst_domains
http_access deny bad_dst_ccn_domains
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_dst_ccn bad_dst_ccn_domains
http_access deny bad_dst_ccn_ips
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_dst_ccn bad_dst_ccn_ips
http_access allow privileged_extra1_src_ips limited_dst_domains_1
http_access deny limited_dst_domains_1
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=limited_dst_domains_1 limited_dst_domains_1
http_access deny bad_filetypes !good_dst_domains_with_any_filetype
http_reply_access deny bad_filetypes !good_dst_domains_with_any_filetype
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_filetypes bad_filetypes
http_access deny bad_requested_mimetypes !good_dst_domains_with_any_mimetype
http_reply_access deny bad_replied_mimetypes !good_dst_domains_with_any_mimetype
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_mimetypes bad_requested_mimetypes
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_mimetypes bad_replied_mimetypes
http_access allow localnet bl_lookup
deny_info http://fwprox.domain.org/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=bad_dst_domains_bl all
debug_options rotate=1 ALL,1
append_domain .domain.org
reply_header_access Alternate-Protocol deny all
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name_regex "/SAMBA/proxy-settings/allowed.direct"
ssl_bump peek DiscoverSNIHost
ssl_bump splice NoSSLIntercept
ssl_bump bump all
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service antivirus respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access antivirus allow all
include /etc/squid/squid.include.common
include /etc/squid/squid.include.hide
cache_mem 32 MB
max_filedescriptors 65536
icap_service_failure_limit -1
icap_persistent_connections off


Regards,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4 and on_unsupported_protocol

2020-06-29 Thread Vieri


On Monday, June 29, 2020, 6:41:41 PM GMT+2, Eliezer Croitoru wrote:
>
>
> I believe what you are looking for is at:
> https://wiki.squid-cache.org/ConfigExamples/Chat/Whatsapp
 
Thanks, but the article doesn't work for me.
I still see Firefox complaining (console) about not being able to connect to 
wss://web.whatsapp.com/ws.

Vieri


[squid-users] Squid 4 and on_unsupported_protocol

2020-06-29 Thread Vieri
Hi,

I'd like to allow whatsapp web through a transparent tproxy sslbump Squid setup.

The target site is not loading:

wss://web.whatsapp.com/ws

I get TCP_MISS/400 305 GET https://web.whatsapp.com/ws in the Squid access log.

I'm not sure I know how to use the on_unsupported_protocol directive.

I have this in my config file:

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all

How can I change this to allow websockets through Squid, but preferably only 
for a specific SRC IP addr. acl?
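
If I understand the directive's ACL list correctly (multiple ACLs on one line
are ANDed), something like this might do it, though I haven't verified it:

acl ws_clients src 10.215.144.0/24
on_unsupported_protocol tunnel foreignProtocol ws_clients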

Regards,

Vieri


Re: [squid-users] reverse proxy Squid 4

2020-06-25 Thread Vieri


On Thursday, June 25, 2020, 10:32:46 AM GMT+2, Amos Jeffries wrote:

>
>  tls-options=NO_SSLv3,NO_TLSv1_3 tls-min-version=1.0
>
>  tls_options=NO_SSLv3,NO_TLSv1_1,NO_TLSv1_2,NO_TLSv1_3
>
> removing the "sslflags=DONT_VERIFY_PEER"
>
> Then reduce the ssloptions= as much as you can. Remove if possible. 

Tried all of that, but still just getting this in the log:

kid1| 83,5| NegotiationHistory.cc(81) retrieveNegotiatedInfo: SSL connection info on FD 13 SSL version NONE/0.0 negotiated cipher
kid1| ERROR: negotiating TLS on FD 13: error::lib(0):func(0):reason(0) (5/-1/0)

> A packet trace of what is being attempted will be useful then.

Will try to save one.

Thanks,

Vieri


[squid-users] reverse proxy Squid 4

2020-06-24 Thread Vieri
This is what the squid cache log reports:

2020/06/25 00:29:05.467 kid1| 83,5| NegotiationHistory.cc(81) retrieveNegotiatedInfo: SSL connection info on FD 15 SSL version NONE/0.0 negotiated cipher
2020/06/25 00:29:05.467 kid1| ERROR: negotiating TLS on FD 15: error::lib(0):func(0):reason(0) (5/-1/0)
2020/06/25 00:29:05.467 kid1| 83,5| BlindPeerConnector.cc(68) noteNegotiationDone: error=0x55cf5c9bb5b8
2020/06/25 00:29:05.467 kid1| TCP connection to 10.215.144.16/443 failed

Same old issue where openssl does not say why the handshake failed.
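
One way to narrow it down might be to handshake with the backend directly from
the proxy host, forcing TLS 1.0 since that is presumably all the Windows 2003
box speaks:

openssl s_client -connect 10.215.144.16:443 -tls1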

I'm having the same problem with an Apache reverse proxy, so for now I'm
falling back to plain HTTP on my backend.

Thanks


[squid-users] reverse proxy Squid 4

2020-06-24 Thread Vieri
Hi,

Today I just migrated from Squid 3 to Squid 4, and I found that a reverse proxy 
that was working fine before is now failing. The client browser sees this 
message:

[No Error] (TLS code: SQUID_ERR_SSL_HANDSHAKE)
Handshake with SSL server failed: [No Error]

This is how I configured the backend:

cache_peer 10.215.144.16 parent 443 0 no-query originserver login=PASS ssl sslcert=/etc/ssl/MY-CA/certs/W1_cert.cer sslkey=/etc/ssl/MY-CA/certs/W1_key_nopassphrase.pem sslcafile=/etc/ssl/MY-CA/cacert.pem ssloptions=NO_SSLv3,NO_SSLv2,NO_TLSv1_2,NO_TLSv1_1 sslflags=DONT_VERIFY_PEER front-end-https=on name=MyServer

The NO_TLSv* options are because the backend server is an old Windows 2003 
(which hasn't changed either).

How can I debug this?

Vieri


[squid-users] explicit proxy and iptables

2020-04-27 Thread Vieri
Hi,

I've been using Squid + TPROXY in transparent sslbump mode for quite a while 
now, but I'd like to use an explicit proxy with user authentication instead.

I have Squid on my first firewall/gateway node, and then I have another gateway 
(node 2) where all the HTTP requests go through, with multiple ISPs.

In transparent tproxy mode, I can obviously mark packets according to the 
"real" client src IP addresses and then use, eg., different ISPs based on 
client src addr.

In the explicit setup, the gateway (node 2) only sees one IP address as the
HTTP source -- the one on the "first node" with the explicit Squid proxy. I
presume that in this case there is NO WAY I can somehow inform the gateway on
node 2 of the "real" client IP addresses?

I can imagine the answer to this silly question, but nonetheless I prefer to 
ask just to make sure. ;-)
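
The only partial workaround I can think of is DSCP marking: unlike the source
address, the TOS/DSCP byte survives the hop to node 2, which could then route
on it per client group. A sketch (the ACL and value are placeholders, untested):

acl branch_a src 10.215.144.0/24
tcp_outgoing_tos 0x20 branch_a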

Thanks,

Vieri


Re: [squid-users] tproxy sslbump and user authentication

2020-04-24 Thread Vieri

On Tuesday, April 21, 2020, 2:41:02 PM GMT+2, Matus UHLAR - fantomas wrote:

>>On Tuesday, April 21, 2020, 8:29:28 AM GMT+2, Amos Jeffries wrote:
>>>
>>> Please see the FAQ:
>>> <https://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Why_can.27t_I_use_authentication_together_with_interception_proxying.3F>
>>>
>>> Why bother with the second proxy at all? The explicit proxy has access
>>> to all the details the interception one does (and more - such as
>>> credentials). It should be able to do all filtering necessary.
>
> On 21.04.20 12:33, Vieri wrote:
>>Can the explicit proxy ssl-bump HTTPS traffic and thus analyze traffic with 
>>ICAP + squidclamav, for instance?
>
> yes.
>
>>Simply put, will I be able to block, eg. 
>> https://secure.eicar.org/eicarcom2.zip not by mimetype, file extension,
>> url matching, etc., but by analyzing its content with clamav via ICAP?
>
> without bumping, you won't be able to block by anything, only by 
> secure.eicar.org hostname.

Hi,

I'm not sure I understand how that should be configured.

I whipped up a test instance with the configuration I'm showing below.

My browser can authenticate via kerberos and access several web sites (http & 
https) if I explicitly set it to proxy everything to squid10.mydomain.org on 
port 3228.
However, icap/clamav filtering is "not working" for either http or https.
My cache log shows a lot of messages regarding "icap" when I try to download an 
eicar test file. So something is triggered, but before sending a huge log to 
the mailing list, what should I be looking for exactly, or is there a specific 
loglevel I should set?
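(For what it's worth, on another setup I narrowed ICAP debugging down with
"debug_options rotate=1 ALL,0 93,6 11,6", where section 93 is the ICAP client
code; that was far more readable than ALL,9.)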

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager

pid_filename /run/squid.testexplicit.pid
access_log daemon:/var/log/squid/access.test.log squid
cache_log /var/log/squid/cache.test.log

acl explicit myportname 3227
acl explicitbump myportname 3228
acl interceptedssl myportname 3229

http_port 3227
# http_port 3228 tproxy
http_port 3228 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem 
sslflags=NO_DEFAULT_CA
https_port 3229 tproxy ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem 
sslflags=NO_DEFAULT_CA
sslproxy_flags DONT_VERIFY_PEER

sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db_test -M 
16MB
sslcrtd_children 40 startup=20 idle=10

cache_dir diskd /var/cache/squid.test 32 16 256

external_acl_type nt_group ttl=0 children-max=50 %LOGIN 
/usr/libexec/squid/ext_wbinfo_group_acl -K

auth_param negotiate program /usr/libexec/squid/negotiate_kerberos_auth -s 
HTTP/squid10.mydomain.org@MYREALNAME
auth_param negotiate children 60
auth_param negotiate keep_alive on

acl localnet src 10.0.0.0/8
acl localnet src 192.168.0.0/16
acl localnet src 172.16.0.1
acl localnet src fc00::/7

acl ORG_all proxy_auth REQUIRED

http_access deny explicit !ORG_all
#http_access deny explicit SSL_ports
http_access deny explicitbump !localnet
http_access deny explicitbump !ORG_all
http_access deny interceptedssl !localnet
http_access deny interceptedssl !ORG_all

http_access allow CONNECT interceptedssl SSL_ports

http_access allow localnet
http_reply_access allow localnet

http_access allow ORG_all

debug_options rotate=1 ALL,9
# debug_options rotate=1 ALL,1

append_domain .mydomain.org

ssl_bump stare all
ssl_bump bump all

http_access allow localhost

http_access deny all

coredump_dir /var/cache/squid

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service antivirus respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access antivirus allow all
icap_service_failure_limit -1
icap_persistent_connections off


--
Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tproxy sslbump and user authentication

2020-04-21 Thread Vieri

On Tuesday, April 21, 2020, 8:29:28 AM GMT+2, Amos Jeffries 
 wrote: 
>
> Please see the FAQ:
> <https://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Why_can.27t_I_use_authentication_together_with_interception_proxying.3F>
>
> Why bother with the second proxy at all? The explicit proxy has access
> to all the details the interception one does (and more - such as
> credentials). It should be able to do all filtering necessary.

Can the explicit proxy ssl-bump HTTPS traffic and thus analyze traffic with 
ICAP + squidclamav, for instance?
Simply put, will I be able to block, eg. https://secure.eicar.org/eicarcom2.zip 
not by mimetype, file extension, url matching, etc., but by analyzing its 
content with clamav via ICAP?

> TPROXY and NAT are for proxying traffic of clients which do not support
> HTTP proxies. They are hugely limited in what they can do. If you have
> ability to use explicit-proxy, do so.

Unfortunately, some programs don't support proxies, or we simply don't care and 
want to force-filter traffic anyway.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] tproxy sslbump and user authentication

2020-04-20 Thread Vieri
Hi,

Is it possible to somehow combine the filtering capabilities of tproxy ssl-bump 
for access to https sites and the access control flexibility of proxy_auth (eg. 
kerberos)?

Is having two proxy servers in sequence an acceptable approach, or can it be 
done within the same instance with the CONNECT method?

My first approach would be to configure clients to send their user credentials 
to an explicit proxy (Squid #1) which would then proxy_auth via Kerberos to a 
PDC. ACL rules would be applied here based on users, domains, IP addr., etc.

The http/https traffic would then go forcibly through a tproxy ssl-bump host 
(Squid #2) which would basically analyze/filter traffic via ICAP.

Has anyone already dealt with this problem, and how?

Regards,

Vieri

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] dynamic ACLs

2020-04-16 Thread Vieri
Hi,

In sslbump tproxy "mode" one cannot authenticate user to limit/allow their 
access to web content.

However, I was thinking of making an authenticated web form within a custom 
Squid error page. This way a user would "automatically" whitelist a web site 
and gain access to it, while the IT dept. would still know which user accessed 
what despite the site being blacklisted.

From the error page I can tell which ACL is blocking the site, so I could 
create an "exception" ACL for it.
My question is: can this whitelist or graylist ACL be dynamic, without needing 
to reload Squid, a bit like ipsets let you update iptables/nftables rules 
without reloading them? One idea is sketched below.
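Since external ACL helpers are consulted at lookup time, a helper that
re-reads a plain-text graylist file on every request (with ttl=0 so results
are never cached) should pick up new entries without a reconfigure. This is a
rough, untested sketch; the file name, format and helper path are made up:

external_acl_type dynwhite ttl=0 negative_ttl=0 children-max=10 concurrency=8 %DST /usr/local/bin/dynwhite.pl
acl graylisted external dynwhite

#!/usr/bin/perl
# Hypothetical helper: re-reads the graylist file on every lookup so
# entries added by the web form take effect immediately.
use strict;
use warnings;
$| = 1;    # helpers must not buffer stdout

my $listfile = '/etc/squid/dynamic_graylist.txt';

while (<STDIN>) {
    chomp;
    my ($cid, $dst) = split ' ', $_, 2;    # channel-ID + %DST
    next unless defined $cid && defined $dst;
    my %allow;
    if (open my $fh, '<', $listfile) {
        while (my $line = <$fh>) {
            chomp $line;
            $allow{lc $line} = 1;
        }
        close $fh;
    }
    print $allow{lc $dst} ? "$cid OK\n" : "$cid ERR\n";
}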

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] debug a failure connection

2020-03-12 Thread Vieri
Hi,

I'm trying to understand what could cause Squid not to connect to the following 
site:

2020/03/12 11:48:24.115 kid1| 17,4| AsyncCallQueue.cc(55) fireNext: entering 
FwdState::ConnectedToPeer(0x561b8b5c7918, local=10.215.144.48:51303 
remote=1.2.3.4:443 FD 784 flags=25, 0x561b8a7ee5b8/0x561b8a7ee5b8)
2020/03/12 11:48:24.115 kid1| 17,4| AsyncCall.cc(37) make: make call 
FwdState::ConnectedToPeer [call219229]
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8b5c7918
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8b5c7918
2020/03/12 11:48:24.115 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x561b8a7ee5b8
2020/03/12 11:48:24.115 kid1| 17,3| FwdState.cc(447) fail: 
ERR_SECURE_CONNECT_FAIL "Service Unavailable"
    1.2.3.4:443


A direct connection by-passing Squid shows that the https site opens fine but 
with a 3DES cipher. In my Squid 4 test I set this temp values just in case:
tls_outgoing_options flags=DONT_VERIFY_PEER cipher=ALL options=ALL


I don't know how to interpret the messages previous to the 
ERR_SECURE_CONNECT_FAIL line. Do I need to send them all? Which debug options 
would be more useful?

Regards,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper

2020-03-05 Thread Vieri

On Thursday, March 5, 2020, 11:37:28 AM GMT+1, Amos Jeffries 
 wrote: 

>
> It means the 'acl' line in squid.conf did not contain any value to pass as 
> extra parameter(s) to that helper lookup.
>
> See
> 

Thanks!
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] external helper

2020-03-05 Thread Vieri
Hi,

I'm using a perl helper script in Squid, and I'm migrating to Squid 4 from 
Squid 3. It seems that there's an extra field in the string Squid passes to the 
helper program.

I'd like to know what the character "-" means at the end of the passed string 
as in this message:

external_acl.cc(1085) Start: externalAclLookup: will wait for the result of 
'http www.fltk.org 80 / -' in 'bllookup' (ch=0x5633eaab2118).

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] c-icap documentation getting stuck

2019-12-23 Thread Vieri Di Paola
On Sat, Dec 21, 2019 at 7:42 PM robert k Wild  wrote:
>
> WARNING Bad configuration keyword: enable_libarchive 0
> WARNING Bad configuration keyword: banmaxsize 2M

You're probably running an outdated squidclamav.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] deny_info redirect with URL placeholder

2019-12-09 Thread Vieri Di Paola
Is there a way to tell squid to treat %o as-is in deny_info?

In Apache2 with mod_proxy ProxyPass directives, I need to write a
config directive such as:

Header edit Location "(^http[s]?://)([^/]+)" ""

Using %note or %o in squid 4.x or 3.x would be fine, but both have
issues. The config parser in 4.x still complains that a complete URI
is required for deny_info 302.

Still in 4.x, even if I trick it into using this:

deny_info 302:https://%note{location-rewrite} bad_Location

and the helper script outputs something like:

OK location-rewrite="domain without leading protocol://"

I still get the wrong result in the client browser which is literally
trying to connect to https://%note{location-rewrite} (no variable
expansion).

Any thoughts?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] deny_info redirect with URL placeholder

2019-12-09 Thread Vieri Di Paola
On Mon, Dec 9, 2019 at 10:04 AM Amos Jeffries  wrote:
>
> > How could I refer to these values in the deny_info 302:%* line?
>
>  deny_info 302:https:%o bad_Location
>
> This should do it for Squid-3 (and avoids the config parser bug). You
> just have to have the helper produce the URL (without the "https:"
> scheme name) as its message= value.

Almost, but still not there yet.
All "/" chars are translated to %2f, as in:
https://%2f%2fserver%2fpath...
I guess I need to encode the string somehow.
The helper script is in perl and it looks something like this:

chomp;
my $string = $_;
# Input is "channel-ID URL" (concurrent helper protocol)
if ($string =~ m/^([0-9]+)\s(\S+)$/) {
    my ($cid, $uri_location) = ($1, $2);
    [...]
    # Return the target to Squid in the message= kv-pair
    $status = $cid." OK message=\"".$uri_location."\"";
    print $status."\n";
}

Any ideas?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] deny_info redirect with URL placeholder

2019-12-09 Thread Vieri Di Paola
On Mon, Dec 9, 2019 at 10:04 AM Amos Jeffries  wrote:
>
> > Is there a way to add a URL variable name to a deny_info 302
> > configuration directive?
> >
>
> <https://wiki.squid-cache.org/Features/CustomErrors> or as I showed
> earlier with logformat codes. Though sorry that does require a later
> Squid version than the one you have.

I set up a test server with the latest stable Squid release:

2019/12/09 10:17:43| FATAL: status 302 requires a URL on
'302:%note{location-rewrite}'
2019/12/09 10:17:43| FATAL: Bungled /etc/squid/squid.aida.include line
60: deny_info 302:%note{location-rewrite} bad_Location
2019/12/09 10:17:43| Squid Cache (Version 4.9): Terminated abnormally.

This is the offending configuration line:

deny_info 302:%note{location-rewrite} bad_Location

Is the syntax OK?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] deny_info redirect with URL placeholder

2019-12-08 Thread Vieri Di Paola
Hi,

Is there a way to add a URL variable name to a deny_info 302
configuration directive?

Suppose I have the following:

external_acl_type location_rewriter ttl=86400 negative_ttl=86400
children-max=80 children-startup=10 children-idle=3 concurrency=8 %
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-05 Thread Vieri Di Paola
On Thu, Dec 5, 2019 at 11:48 AM Amos Jeffries  wrote:
>
>   external_acl_type location_rewriter %
>   acl bad_Location external location_rewriter
>
>   deny_info 302:%note{location-rewrite} bad_Location
>   acl 302 http_status 302
>   http_reply_access deny 302 bad_Location

I just read something about %note here:
http://www.squid-cache.org/Doc/config/logformat/
However, Squid 3.x doesn't seem to accept %note{location-rewrite} as a
URL placeholder for deny_info.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-05 Thread Vieri Di Paola
On Thu, Dec 5, 2019 at 11:48 AM Amos Jeffries  wrote:
>
>   external_acl_type location_rewriter %
>   acl bad_Location external location_rewriter
>
>   deny_info 302:%note{location-rewrite} bad_Location
>   acl 302 http_status 302
>   http_reply_access deny 302 bad_Location

Sorry to bother you again with this, but what does
"%note{location-rewrite}" mean?
I'm getting this error message:
FATAL: status 302 requires a URL on '302:%note{location-rewrite}'

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-05 Thread Vieri Di Paola
By the way, if I were to upgrade to Squid 4, would the following do the trick?

reply_header_add Strict-Transport-Security "max-age=31536000;
includeSubDomains; preload" all
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-05 Thread Vieri Di Paola
On Thu, Dec 5, 2019 at 11:48 AM Amos Jeffries  wrote:
>
> Alternative to his would be an eCAP module that just re-writes the
> Location headers in place. That would be simpler, but requires some
> coding to create the module.

Simpler, I like how that sounds...
I presume a good starting point would be:
https://wiki.squid-cache.org/ConfigExamples/ContentAdaptation/eCAP
http://www.e-cap.org/downloads/

If you have any more hints/suggestions/quickstarts for this particular
problem with eCAP, please let me know.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-05 Thread Vieri Di Paola
I could try to use a redirector with location_rewrite_program, but
this directive is not available anymore.
I presume I need to use url_rewrite_program instead.
I wonder if it will rewrite the "Location" header the origin server is
sending to the client browser.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-03 Thread Vieri Di Paola
On Wed, Dec 4, 2019 at 6:15 AM Amos Jeffries  wrote:
>
> I'm trying to see for myself if this is actually normal/OK - since I
> don't know how familiar you are with HTTP accel mode syntax.
>
> The requests in particular are most interesting, though what responses
> are paired with each is also potentially important.

Hope it fits here. Otherwise, I'll pastebin it in another e-mail.

Here's the whole shebang:

2019/12/03 14:52:25.964 kid1| 11,2| client_side.cc(2372)
parseHttpRequest: HTTP Client local=10.215.145.81:50443
remote=10.215.144.48:54243 FD 12 flags=1
2019/12/03 14:52:25.964 kid1| 11,2| client_side.cc(2373)
parseHttpRequest: HTTP Client REQUEST:
-
POST /whatever/j_spring_security_check HTTP/1.1
Host: intranet.mydomain.org:50443
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0)
Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.8,es-ES;q=0.6,es;q=0.4,ca;q=0.2
Accept-Encoding: gzip, deflate, br
Referer: https://intranet.mydomain.org:50443/whatever/security/login
Content-Type: application/x-www-form-urlencoded
Content-Length: 48
Cookie: JSESSIONID=pveHPU4LMS7YcbpaFwAADdL3
Connection: keep-alive
Upgrade-Insecure-Requests: 1

redirect==myuser=mypassword
--
2019/12/03 14:52:25.964 kid1| 11,2| http.cc(2229) sendRequest: HTTP
Server local=10.215.248.91:49470 remote=10.215.248.40:8080 FD 17
flags=1
2019/12/03 14:52:25.964 kid1| 11,2| http.cc(2230) sendRequest: HTTP
Server REQUEST:
-
POST /whatever/j_spring_security_check HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0)
Gecko/20100101 Firefox/60.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.8,es-ES;q=0.6,es;q=0.4,ca;q=0.2
Accept-Encoding: gzip, deflate, br
Referer: https://intranet.mydomain.org:50443/whatever/security/login
Content-Type: application/x-www-form-urlencoded
Content-Length: 48
Cookie: JSESSIONID=pveHPU4LMS7YcbpaFwAADdL3
Upgrade-Insecure-Requests: 1
Host: intranet.mydomain.org:50443
Via: 1.1 rev_whatever (squid)
Surrogate-Capability: inf-fw2="Surrogate/1.0"
X-Forwarded-For: 10.215.144.48
Cache-Control: max-age=259200
Connection: keep-alive


--
2019/12/03 14:52:26.509 kid1| ctx: enter level  0:
'https://intranet.mydomain.org:50443/whatever/j_spring_security_check'
2019/12/03 14:52:26.509 kid1| 11,2| http.cc(719) processReplyHeader:
HTTP Server local=10.215.248.91:49470 remote=10.215.248.40:8080 FD 17
flags=1
2019/12/03 14:52:26.509 kid1| 11,2| http.cc(720) processReplyHeader:
HTTP Server REPLY:
-
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=DQS7FWuX-JxNHXMZE+BHeQ2H; Path=/whatever
Location: http://intranet.mydomain.org:50443/whatever/security/afterLogin
Content-Length: 0
Date: Tue, 03 Dec 2019 13:52:25 GMT


--
2019/12/03 14:52:26.509 kid1| ctx: exit level  0
2019/12/03 14:52:26.509 kid1| 11,2| client_side.cc(1409)
sendStartOfMessage: HTTP Client local=10.215.145.81:50443
remote=10.215.144.48:54243 FD 12 flags=1
2019/12/03 14:52:26.509 kid1| 11,2| client_side.cc(1410)
sendStartOfMessage: HTTP Client REPLY:
-
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=DQS7FWuX-JxNHXMZE+BHeQ2H; Path=/whatever
Location: http://intranet.mydomain.org:50443/whatever/security/afterLogin
Content-Length: 0
Date: Tue, 03 Dec 2019 13:52:25 GMT
X-Cache: MISS from inf-fw2
X-Cache-Lookup: MISS from inf-fw2:50443
Via: 1.1 rev_whatever (squid)
Connection: keep-alive


--

> >
> > 2019/12/03 14:52:26.509 kid1| 11,2| http.cc(720) processReplyHeader:
> > HTTP Server REPLY:
> > -
> > HTTP/1.1 302 Moved Temporarily
> ...
> > Location: http://whatever.org:50443/whatever/security/afterLogin
>
> That is a very good sign. The server is using the Squid listening port
> in its generated URLs.

Yes, the port is fine. It's the protocol that's http instead of https.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-03 Thread Vieri Di Paola
> Hmm, what version of Squid is this?

3.5.27 (yes, I'm aware of the security vulnerability, but I'm unable
to upgrade right now)

> Can you configure "debug_options 11,2" and see what the HTTP messages
> look like?

Everything looks OK until I get:

2019/12/03 14:52:26.509 kid1| 11,2| http.cc(720) processReplyHeader:
HTTP Server REPLY:
-
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=DQS7FWuX-JxNHXMZE+BHeQ2H; Path=/aida
Location: http://whatever.org:50443/whatever/security/afterLogin
Content-Length: 0
Date: Tue, 03 Dec 2019 13:52:25 GMT

Then the log ends with:

--
2019/12/03 14:52:26.509 kid1| ctx: exit level  0
2019/12/03 14:52:26.509 kid1| 11,2| client_side.cc(1409)
sendStartOfMessage: HTTP Client local=10.215.145.81:50443
remote=10.215.144.48:54243 FD 12 flags=1
2019/12/03 14:52:26.509 kid1| 11,2| client_side.cc(1410)
sendStartOfMessage: HTTP Client REPLY:
-
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID=DQS7FWuX-JxNHXMZE+BHeQ2H; Path=/whatever
Location: http://whatever.org:50443/whatever/security/afterLogin
Content-Length: 0
Date: Tue, 03 Dec 2019 13:52:25 GMT
X-Cache: MISS from inf-fw2
X-Cache-Lookup: MISS from inf-fw2:50443
Via: 1.1 rev_aida (squid)
Connection: keep-alive


------

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reverse proxy and HTTP redirects

2019-12-03 Thread Vieri Di Paola
Hi,

On Tue, Dec 3, 2019 at 6:33 AM Amos Jeffries  wrote:
>
> NP: you have not configured any Elliptic Curve to be used, so all those
> EC ciphers will not be usable. Also you configured some DES based
> ciphers and then disable DES.

I'll review that, thanks.

> The problem is that the client is talking to port 50443 and the service
> is expecting port 8080 in URLs.
>
> The best solution is to have the server and Squid using the same port
> number. Preferably 443 for HTTPS services.

I can't. Both 443 and 8080 are already in use.

> Alternatively you might be able to use the vport= option on https_port
> to set the URL port to 8080. However, this affects *all* inbound traffic
> at that port and any embedded URLs the service sends the client will
> remain broken (contain port 8080).

Whether I use vport=8080 or not, it still fails because the client
gets an HTTP redirection such as:

http://squidserver.local:50443/whatever (without vport=)

http://squidserver.local:8080/whatever (with vport=8080)

Note the http://.
So the client browser is instructed to connect to an HTTP port which
is closed/firewalled.
I would need to somehow rewrite the redirection to something like:

https://squidserver.local:50443/whatever (without vport=)

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] reverse proxy and HTTP redirects

2019-12-02 Thread Vieri Di Paola
Hi,

I configured a reverse proxy with something like this:

https_port 10.215.145.81:50443 accel cert=/etc/ssl/whatever.cer
key=/etc/ssl/whatever_key_nopassphrase.pem
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,CIPHER_SERVER_PREFERENCE,No_Compression
cipher=ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA25
6:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
tls-dh=/etc/ssl/whatever/dh2048.pem defaultsite=whatever.org

cache_peer 10.215.248.40 parent 8080 0 no-query originserver
login=PASS front-end-https=on name=httpsServer

[etc]

I can load the web portal just fine from a web client connecting to
10.215.145.81:50443. However, the web server then sends an HTTP
redirection to an HTTP URL which is something like
http://10.215.248.40:8080/whatever (in other words, the page is hosted
on the same server). That breaks the browsing experience (connection
reset).

If I can't modify the server code at 10.215.248.40, is there a
workaround for this?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (no subject)

2019-10-23 Thread Vieri Di Paola
On Wed, Oct 23, 2019 at 1:06 PM Amos Jeffries  wrote:
>
> First problem with these rules is they depend on an IP address. IP is
> the one detail guaranteed not to match properly when TPROXY spoofing is
> going on.

Thank you for giving me clues.
Actually, my whole setup was OK except for one detail.
Where I specify only "10.215.144.48" for TProxy, I needed to also add
the public IP addresses of my 3 ppp links to the Internet, ie. the "inet"
values that are shown with:
# ip a s ppp1
# ip a s ppp2
# ip a s ppp3

I don't know how to avoid that. However, it's not a big deal because
they are static addresses.
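In case they ever stop being static, something like this rough, untested
sketch (interface names from my setup) could regenerate the list for the
firewall config:

for i in ppp1 ppp2 ppp3; do
    ip -4 addr show "$i" | awk '/inet /{sub("/.*","",$2); print $2}'
done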

Thanks again,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (no subject)

2019-10-22 Thread Vieri Di Paola
On Tue, Oct 22, 2019 at 1:48 PM Amos Jeffries  wrote:
>
> I do not see any DIVERT rule at all in your firewall config dump. That
> is at least part of the problem.

I opened the previous dump and saw the divert rules here below:

Chain PREROUTING (policy ACCEPT 573K packets, 462M bytes)
 pkts bytes target prot opt in out source
destination
 573K  462M CONNMARK   all  --  *  *   0.0.0.0/0
0.0.0.0/0CONNMARK restore mask 0xff
 1213  181K routemark  all  --  ppp1   *   0.0.0.0/0
0.0.0.0/0mark match 0x0/0xff
 3195  308K routemark  all  --  ppp2   *   0.0.0.0/0
0.0.0.0/0mark match 0x0/0xff
 1320 79360 routemark  all  --  ppp3   *   0.0.0.0/0
0.0.0.0/0mark match 0x0/0xff
 311K  277M tcpre  all  --  *  *   0.0.0.0/0
0.0.0.0/0mark match 0x0/0xff
0 0 divert tcp  --  ppp1   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:80 flags:!0x17/0x02 socket
--transparent
0 0 divert tcp  --  ppp2   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:80 flags:!0x17/0x02 socket
--transparent
0 0 divert tcp  --  ppp3   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:80 flags:!0x17/0x02 socket
--transparent
   76  7484 TPROXY tcp  --  enp10s0 *   10.215.144.48
0.0.0.0/0tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark
0x200/0x200
0 0 divert tcp  --  ppp1   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:443 flags:!0x17/0x02 socket
--transparent
0 0 divert tcp  --  ppp2   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:443 flags:!0x17/0x02 socket
--transparent
0 0 divert tcp  --  ppp3   *   0.0.0.0/0
10.215.144.48   [goto]  tcp spt:443 flags:!0x17/0x02 socket
--transparent
   10  1060 TPROXY tcp  --  enp10s0 *   10.215.144.48
0.0.0.0/0tcp dpt:443 TPROXY redirect 0.0.0.0:3130 mark
0x200/0x200

Aren't these the DIVERT rules you are referring to?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (no subject)

2019-10-22 Thread Vieri Di Paola
On Tue, Oct 22, 2019 at 1:48 PM Amos Jeffries  wrote:
>
> On 22/10/19 11:22 pm, Vieri Di Paola wrote:
> >
> > I use Shorewall on this system. This program configures iptables and 
> > routing.
> > I dumped all the network information while trying to access port 80 on
> > host with IP addr. 104.113.250.104 from local host with IP addr.
> > 10.215.144.48:
> I do not see any DIVERT rule at all in your firewall config dump. That
> is at least part of the problem.

I don't know why.. I must have taken the wrong dump. Here's a new one
I just tested:

https://drive.google.com/file/d/1iqIU8SrvmOfSHs7wv2tjLLx1DXWNrP8h/view?usp=sharing

> Have you run through the notes and troubleshooting checks on the TPROXY
> feature page?
> <https://wiki.squid-cache.org/Features/Tproxy4>

Yes, but I'm obviously overlooking something.
I'll work on it.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (no subject)

2019-10-22 Thread Vieri Di Paola
Hi,

On Fri, Oct 18, 2019 at 10:13 PM Amos Jeffries  wrote:
>
> If you are able to share your config maybe we could help spot something,
> both for that and for the timeout issue.

I prepared and tested a trimmed-down squid conf:

# cat squid.conf
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
acl CONNECT method CONNECT

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager

acl explicit myportname 3128
acl intercepted myportname 3129
acl interceptedssl myportname 3130

http_port 3128
http_port 3129 tproxy
https_port 3130 tproxy ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=16MB cert=/etc/ssl/squid/proxyserver.pem
sslflags=NO_DEFAULT_CA
sslproxy_flags DONT_VERIFY_PEER

sslcrtd_program /usr/libexec/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 16MB
sslcrtd_children 40 startup=20 idle=10

cache_dir diskd /var/cache/squid 32 16 256

acl localnet src 10.0.0.0/8
acl localnet src 192.168.0.0/16

acl good_useragents req_header User-Agent Firefox/
acl good_useragents req_header User-Agent Edge/
acl good_useragents req_header User-Agent Microsoft-CryptoAPI/

http_access deny intercepted !localnet
http_access deny interceptedssl !localnet

http_access allow CONNECT interceptedssl SSL_ports
http_access deny !good_useragents

http_access allow localnet

debug_options rotate=1 ALL,9

reply_header_access Alternate-Protocol deny all
ssl_bump stare all
ssl_bump bump all

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service antivirus respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access antivirus allow all

email_err_data on
client_lifetime 480 minutes

httpd_suppress_version_string on
dns_v4_first on
via off
forwarded_for transparent

cache_mem 32 MB

max_filedescriptors 65536
icap_service_failure_limit -1
icap_persistent_connections off

http_access allow localhost

http_access deny all

coredump_dir /var/cache/squid

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

> You said Squid used TPROXY. The spoofing of packets causes a different
> set of routing tables and rules to be applied than normal server
> outgoing traffic.

I use Shorewall on this system. This program configures iptables and routing.
I dumped all the network information while trying to access port 80 on
host with IP addr. 104.113.250.104 form local host with IP addr.
10.215.144.48:
https://drive.google.com/file/d/13Pr2OCgCInY6E72krCci9BiHrB1lrMce/view?usp=sharing

> Looks like Squid is doing everything right and the issues is somewhere
> between the TCP SYN send and SYN ACK returning.

I suspect there must be something wrong with my routing or marking
(please see dump).

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] external_acl_type and ipv6

2019-10-22 Thread Vieri Di Paola
Hi,

What is the advantage of using ipv6 instead of ipv4 by default for
external_acl_type?

http://www.squid-cache.org/Doc/config/external_acl_type/
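I ask because the directive seems to accept an ipv4/ipv6 flag, so presumably
I could force the old behavior on my helper line with something like
(untested):

external_acl_type nt_group ipv4 ttl=0 children-max=50 %LOGIN /usr/libexec/squid/ext_wbinfo_group_acl -K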

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] (no subject)

2019-10-18 Thread Vieri Di Paola
On Fri, Oct 11, 2019 at 3:50 PM Amos Jeffries  wrote:
>
> Note that this last entry is about a connection to port 443, whereas the
> rest of the log is all about traffic to port 80.
> >
> > The Squid machine has no issues if I browse the web from command line,
> > eg. 'links http://www.linuxheadquarters.com' works fine.
> >
> > What should I be looking for?
>
> TCP/IP level packet routing. Squid is trying to open a TCP connection to
> that "remote=" server. TCP SYN is sent, and then ... ... ... nothing.

I noticed the ":80 to :443" flaw in the log, and I don't know why this
shows up if it's not a redirection.
So I did another test to another destination, and I tried to connect
to host with IP addr. 104.113.250.104 on port 80.
Now the log is consistent, but I'm still getting the same connection
timeout even though I can connect without any issues with an HTTP
client from the Squid machine itself. If it were a packet routing
issue, wouldn't the connection time out also with this HTTP client on
the server itself?

Do you see anything fishy in the squid log I've pasted below?

https://pastebin.com/yJZYw28A

Thanks again,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] (no subject)

2019-10-11 Thread Vieri Di Paola
Hi,

I'm trying to connect from a LAN client with IP addr. 10.215.144.48 to
a web server through Squid 3 + Tproxy.

As you can see from the logs here below, there seems to be a timeout:

https://pastebin.com/2Jka4es1

The Squid machine has no issues if I browse the web from command line,
eg. 'links http://www.linuxheadquarters.com' works fine.

What should I be looking for?

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] daily releases

2019-01-31 Thread Vieri
 
On Wednesday, January 30, 2019, 9:12:51 PM GMT+1, Amos Jeffries 
 wrote: 
>> Does anyone know of a convenient one-liner to get the latest daily
>> release tarball, eg.
>> http://www.squid-cache.org/Versions/v4/squid-4.5-20190128-r568e66b7c.tar.gz,
>> without having to search for it manually on the web?
>
> The contents of the tarball are provided by rsync to optimize update
> bandwidth:
> 
> <https://wiki.squid-cache.org/DeveloperResources#Bootstrapped_sources_via_rsync>

rsync lets me sync the latest source for a particular main version (eg. Squid 
4 or Squid 5).
However, it does not let me pull in Squid 4's source code as published on Jan 
28th 2019, exactly what I would get by downloading 
squid-4.5-20190128-r568e66b7c.tar.gz.
Furthermore, I'm guessing that the "daily" tarballs published on the web 
site's download page are hand-picked because they are known to solve bugs, 
and are considered somewhat "stable". For instance, if I were to rsync today, 
would I get the same code as that of the above-mentioned tarball?

Another simple solution would be to be able to list the files in the 
/Versions/v4/ directory, but it is not allowed by the server.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] daily releases

2019-01-30 Thread Vieri
Hi,

Does anyone know of a convenient one-liner to get the latest daily release 
tarball, eg. 
http://www.squid-cache.org/Versions/v4/squid-4.5-20190128-r568e66b7c.tar.gz, 
without having to search for it manually on the web?

Either that or a symlink that would always point to the "latest daily".
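The closest thing I've come up with is scraping the human-readable download
page, along these lines (untested, and it assumes the page keeps linking the
daily tarballs by name):

u=$(curl -s http://www.squid-cache.org/Versions/v4/ | \
    grep -o 'squid-4[^"]*\.tar\.gz' | sort -u | tail -n 1)
wget "http://www.squid-cache.org/Versions/v4/$u"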

Thanks,

Vieri

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri
I can add the following info to my previous e-mail. Here's the configure 
command (the pid file name is always the same -- other options may vary 
according to user preferences or system deps):

$ ./configure --prefix=/usr --build=x86_64-pc-linux-gnu 
--host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info 
--datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib 
--disable-dependency-tracking --disable-silent-rules 
--docdir=/usr/share/doc/squid-4.5 --htmldir=/usr/share/doc/squid-4.5/html 
--with-sysroot=/ --libdir=/usr/lib64 --sysconfdir=/etc/squid 
--libexecdir=/usr/libexec/squid --localstatedir=/var 
--with-pidfile=/run/squid.pid --datadir=/usr/share/squid 
--with-logdir=/var/log/squid --with-default-user=squid 
--enable-removal-policies=lru,heap --enable-storeio=aufs,diskd,rock,ufs 
--enable-disk-io 
--enable-auth-basic=NCSA,POP3,getpwnam,SMB,SMB_LM,LDAP,PAM,RADIUS 
--enable-auth-digest=file,LDAP,eDirectory --enable-auth-ntlm=SMB_LM 
--enable-auth-negotiate=kerberos,wrapper 
--enable-external-acl-helpers=file_userip,session,unix_group,delayer,time_quota,wbinfo_group,LDAP_group,eDirectory_userip,kerberos_ldap_group
 --enable-log-daemon-helpers --enable-url-rewrite-helpers 
--enable-cache-digests --enable-delay-pools --enable-eui --enable-icmp 
--enable-follow-x-forwarded-for --with-large-files 
--with-build-environment=default --disable-strict-error-checking 
--disable-arch-native --with-included-ltdl=/usr/include 
--with-ltdl-libdir=/usr/lib64 --with-libcap --enable-ipv6 --disable-snmp 
--with-openssl --with-nettle --with-gnutls --enable-ssl-crtd --disable-ecap 
--disable-esi --enable-htcp --enable-wccp --enable-wccpv2 
--enable-linux-netfilter --enable-zph-qos --with-netfilter-conntrack 
--with-mit-krb5 --without-heimdal-krb5

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri

On Tuesday, January 29, 2019, 1:06:22 PM GMT+1, Amos Jeffries 
 wrote: 
>>
>> Is it necessary to keep this in the Makefile?
>> 
>
> Yes. The path is configurable with --with-pidfile=PATH, so it can be
> absolutely anywhere.
>
> It would help to have a hint about what OS you are using and what
> /configure parameters you used.

I'm using Gentoo and the ebuild (package manager) hardcodes the PID file name 
when calling the configure script:

--with-pidfile=/run/squid.pid

So if this is the case then maybe it would make sense to remove that 
mkinstalldirs line from the Makefile, at least downstream, as a patch applied 
by the Gentoo devs before configuring/compiling. 
Makefiles might change in the future, but keeping such a patch current would 
be up to the Gentoo devs. 

I don't know for sure yet if this is why Gentoo "warns" me that the Squid 
installation is trying to write to /run, or if there are other parts of the 
installation code that might do so too.

I'll make a few tests first, but correct me if I'm wrong when I say that if one 
*always* passes the same PID file path to the configure script then that 
mkinstalldirs can be safely removed from the Makefile.
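Something along these lines in a downstream patch (or the ebuild) might do
it; untested, and the right place may be src/Makefile.am rather than the
generated file:

sed -i '/mkinstalldirs.*DEFAULT_PID_FILE/d' src/Makefile.in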

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] installing Squid: /run dir creation

2019-01-29 Thread Vieri
Hi,

My Linux distro warns me that when trying to install Squid an attempt is made 
to write to a "volatile" dir.

The Makefile in the src subdir contains:

    $(mkinstalldirs) $(DESTDIR)`dirname $(DEFAULT_PID_FILE)`

Since the default PID file is /run/squid.pid, the above tries to create the /run dir.

Is it necessary to keep this in the Makefile?

Shouldn't the /run/* files be created at runtime anyway?

The /run dir is also created by the OS.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP 500 is not bypassed

2018-01-30 Thread Vieri
Alex, thanks for your time.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-29 Thread Vieri
Hi,

I reproduced the problem, and saw that the c-icap server (or its squidclamav 
module) reports a 500 internal server error when clamd is down. I guess that's 
not bypassable?


The c-icap server log reports:

Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(1934) dconnect: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, entering.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(2015) connectINET: Mon 
Jan 29 08:30:35 2018, 5134/1290311424, ERROR Can't connect on 127.0.0.1:3310.
Mon Jan 29 08:30:35 2018, 5134/1290311424, squidclamav.c(744) 
squidclamav_end_of_data_handler: Mon Jan 29 08:30:35 2018, 5134/1290311424, 
ERROR Can't connect to Clamd daemon.
Mon Jan 29 08:30:35 2018, 5134/1290311424, An error occured in end-of-data 
handler !return code : -1, req->allow204=1, req->allow206=0


Here's Squid's log:

https://drive.google.com/file/d/18HmM8pOuDQmE4W_vwmSncXEeJSvgDjDo/view?usp=sharing

I was hoping I could relate this to the original topic, but I'm afraid they are 
two different issues.


Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-27 Thread Vieri
Hi,

I just wanted to add some information to this topic, although I'm not sure if 
it's related.


I noticed that if I set bypass=1 in squid.conf (regarding ICAP), and if I stop 
the local clamd service (not the c-icap service), then the clients see Squid's 
ERR_ICAP_FAILURE page.
Is this expected?

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-18 Thread Vieri

From: Amos Jeffries <squ...@treenet.co.nz>
>
> Sorry I have a bit of a distraction going on ATM so have not got to that
> detailed check yet. Good to hear you found a slightly better situation
> though.
[...]
> In normal network conditions it should rise and fall with your peak vs 
> off-peak traffic times. I expect with your particular trouble it will 
> mostly just go upwards.


No worries. I'd like to confirm that I'm still seeing the same issue with 
c-icap-modules, even though it's slightly better in that the FD numbers grow 
slower, at least at first.
I must say that it seems to be growing faster now. I had 4k two days ago, now I 
have:
Largest file desc currently in use:   6664
Number of file desc currently in use: 6270
So it seems that the more days go by, the faster the FD numbers rise.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-16 Thread Vieri
Hi,

Just a quick follow-up on this.

I dropped squidclamav so I could test c-icap-modules's clamd service instead.
The only difference between the two is that squidclamav was using unix sockets 
while c-icap-modules is using clamd.

At first, the results were good. The open fd numbers were fluctuating, but 
within the 1k-2k limit during the first days. However, today I'm getting 4k, 
and it's only day 5. I suspect I'll be getting 10k+ numbers within another week 
or two. That's when I'll have to restart squid if I don't want the system to 
come to a network crawl.

I'm posting info and filedescriptors here:

https://drive.google.com/file/d/1V7Horvvak62U-HjSh5pVEBvVnZhu-iQY/view?usp=sharing

https://drive.google.com/file/d/1P1DAX-dOfW0fzt1sAeyT35brQyoPVodX/view?usp=sharing

By the way, what does "Largest file desc currently in use" mean exactly? Should 
this value also drop (eventually) under sane conditions?

So I guess moving from squidclamav to c-icap-modules did improve things, but 
I'm still facing something wrong. I could try moving back to squidclamav in 
"clamd mode" instead of unix sockets just to see if I get the same partial 
improvement as the one I've witnessed this week.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-11 Thread Vieri
Hi,

I don't know how to cleanly separate the 93,* from the 11,* log lines. I posted 
the following:


https://drive.google.com/file/d/1PRJOc6czrA0QEDHkqn3MrmNh08K8JajR/view?usp=sharing

It contains a cache.log generated by:
debug_options rotate=1 ALL,0 93,6 11,6

I also ran :info and :filedescriptors when I applied the new debug_options 
(*1), and again when I reverted back the debug_options (*2).

I'm using c-icap with squidclamav. I'll try to use c-icap-modules instead asap 
so I can hopefully remove a few variables from the equation (if the problem 
persists then it must be a c-icap service flaw).

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-09 Thread Vieri
Nread and Nwrite seem to be well over 0.

> That implies they are possibly TCP connections which never complete their 
> opening 
> sequence, or at least the result of connection attempts does not make it 
> back to the ICAP code somehow.


ICAP and Squid are both on localhost. I'd like to find out why this is 
happening.


I believe I already posted a tcpdump trace of the ICAP traffic, but I don't 
know if you had a chance to take a look at it. I had a quick look, but I'm not 
familiar with the ICAP protocol. In any case, I probably would see a lot of 
OPTIONS, REQMOD, RESPMOD methods, but I don't know if I would clearly detect 
initial TCP issues.


Anyway, here's a dumb question. Can't Squid "tell" when a TCP connection to an 
ICAP server has never completed correctly after x timeout, and close it 
down/reset it?
I'm using default values in squid.conf for the following:
connect_timeout
icap_connect_timeout
peer_connect_timeout

The docs say:
#  TAG: icap_connect_timeout
#   This parameter specifies how long to wait for the TCP connect to
#   the requested ICAP server to complete before giving up and either
#   terminating the HTTP transaction or bypassing the failure.


BTW I guess that's just a typo in the docs, and that "HTTP transaction" should 
read "ICAP transaction", right? 

Anyway, I have "bypass=0" for the ICAP service so I guess it should honor 
connect_timeout.
The default is connect_timeout 1 minute. 


With a 1-minute timeout it may be hard to see the sockets close when there's 
plenty of traffic, but I think I should see a substantial drop in open sockets 
when traffic is low (eg. at night). However, I don't see it.


What could I try?
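One thing I might try myself is pinning the timeouts explicitly instead of
relying on the defaults, with arbitrary low test values (untested):

connect_timeout 30 seconds
icap_connect_timeout 10 seconds
icap_io_timeout 2 minutes

If I understand the docs, icap_io_timeout falls back to a generic read
timeout when unset, so maybe that one matters more here than the connect
timeouts.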

Thank you very much for your time.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-07 Thread Vieri


From: Amos Jeffries <squ...@treenet.co.nz>

>> The open sockets to 127.0.0.1:1344 keep increasing steadily even on high 
>> network usage, but they do not decrease when there's
>> little or no traffic. So, day after day the overall number keeps growing 
>> until I have to restart squid once or twice a week.
>> 
>> In other words, this value keeps growing:
>> Largest file desc currently in use:   
>> This other value can decrease at times, but in the long run it keeps growing 
>> too:
>> Number of file desc currently in use: 
>> 
> Ah. What does the cachemgr "filedescriptors" report show when there are 
> a lot starting to accumulate?
>
> And, are you able to get a cache.log trace with "debug_options 93,6" ?


Here's my cache.log:

https://drive.google.com/file/d/1I8R5sCsIGhYa69QmGrOoHVITuom4uW0k/view?usp=sharing

squidclient's filedescriptors:

https://drive.google.com/file/d/1o6zn-o0atqeqFGSMRhPA9r1AAFJpnpBZ/view?usp=sharing

The info page:

https://drive.google.com/file/d/11iWqjgdt2KK1yWPMsr5o-IyWGyKS7joc/view?usp=sharing

The open fds are at around 7k, but they can easily reach 12k or 13k. That's 
when I start running into trouble.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2018-01-05 Thread Vieri
 10.215.248.31 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056875.039  0 10.215.248.99 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056875.412  0 10.215.248.31 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056875.582  0 10.215.248.152 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056875.700  2 10.215.145.136 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056876.291  2 10.215.248.31 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.020 24 10.215.247.182 ICAP_MOD/200 65493 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.134  0 10.215.247.182 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.187  0 10.215.247.182 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.358  0 10.215.247.182 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.452  0 10.215.247.182 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.502  0 10.215.247.182 ICAP_ECHO/204 105 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.536  26334 10.215.246.136 ICAP_MOD/200 914 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -
1515056877.809  25511 10.215.247.120 ICAP_MOD/200 916 RESPMOD 
icap://127.0.0.1:1344/clamav - -/127.0.0.1 -

Yes, the title refers to:
kernel: TCP: out of memory -- consider tuning tcp_mem

I'm trying to find out what's wrong on this system even though restarting Squid 
twice a week at night isn't too bad in my case.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP out of memory

2018-01-04 Thread Vieri
Hi again,

I haven't taken a look at Squid's source code, but I guess that when Squid 
communicates with a c-icap service it acts as a typical socket client, right?
eg. connect(), write(), read(), close()

Does Squid consider forcing disconnection (close()) if the read() takes "too long"?
Is there such a timeout? Is it configurable in squid.conf (only for the c-icap 
connection)?


Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] browser acl

2017-12-26 Thread Vieri
Hi,

Which one of the two examples below is syntactically correct?

acl UA browser Firefox/

acl UA browser Firefox\/
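(For context, elsewhere in my config I already use the unescaped form against
the same header and Squid accepts it:

acl good_useragents req_header User-Agent Firefox/

so I suspect the backslash is unnecessary for a slash in the regex, but I'd
like to be sure.)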

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2017-12-21 Thread Vieri
  0.03868
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:257992.456 seconds
CPU Time:   6009.530 seconds
CPU Usage:  2.33%
CPU Usage, 5 minute avg:4.90%
CPU Usage, 60 minute avg:   2.98%
Maximum Resident Size: 5549728 KB
Page faults with physical i/o: 0
Memory accounted for:
Total accounted:   997578 KB
memPoolAlloc calls: 980907766
memPoolFree calls:  999669215
File descriptor usage for squid:
Maximum number of file descriptors:   65536
Largest file desc currently in use:   4399
Number of file desc currently in use: 3676
Files queued for open:   0
Available number of file descriptors: 61860
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
1895 StoreEntries
1732 StoreEntries with MemObjects
1687 Hot Object Cache Items
1617 on-disk objects


Clients are now browsing, and squid/c-icap are apparently communicating.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP out of memory

2017-12-18 Thread Vieri


From: Amos Jeffries 
>
> What is your ICAP configuration in squid.conf?


icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all

icap_service_failure_limit -1
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP out of memory

2017-12-18 Thread Vieri
Hi,

I need to restart Squid once a week because I see "TCP out of memory" messages 
in syslog.

I see lots of open file descriptors of type "127.0.0.1:1344".
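(I'm counting them with something like this on the Squid box, which may
overcount if other processes also talk to c-icap:

ss -tan | grep -c '127.0.0.1:1344'

and the number only ever seems to grow.)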

There could be an issue with the c-icap service.

As suggested previously, I dumped a packet trace here:

https://drive.google.com/file/d/1qCkH6YYa7fgeYzm-AoJEpXTDVpzILCQ9/view?usp=sharing

Can anyone please take a look at it? I'm trying to determine whether c-icap is 
closing connections properly.
Maybe the dump's time range is too short to see anything useful?
I also tried looking at the c-icap logs, but unfortunately I don't see anything 
(or I don't know how to interpret them correctly).


Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] url_rewrite_program and ACLs

2017-11-22 Thread Vieri


From: Amos Jeffries <squ...@treenet.co.nz>
>
> If we assume that each request opens a new connection and they are not 
> closed until TCP times out on the socket we do get numbers much more 
> like that 11K+ you are seeing.
> 
> That implies that ICAP transactions are probably not finishing 
> completely.

I'll have to look into this asap. Quick question: if I restart c-icap shouldn't 
I see a drop in open FD numbers if it were c-icap's "fault"?

I restarted c-icap (stop+start), but the open FDs are the same.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] block user agent

2017-11-22 Thread Vieri

From: Amos Jeffries <squ...@treenet.co.nz>
>
> If you place that after the default "deny CONNECT !SSL_ports", and 
> before your UA checks, AND if you are using ssl_bump on the allowed 
> tunnels then you can relatively safely use "allow CONNECT".
> 
> Just be careful that the CONNECT allowed by that are always handled 
> safely by the ssl_bump rules you have.
>   Meaning that you either bump or terminate traffic you are not sure is 
> okay, splice if you are reasonably sure, etc. it is a balancing effort 
> between "splice as much as possible" and "terminate if unsure of the 
> traffic" advice.


As you say, I placed "allow CONNECT" after the default "deny CONNECT 
!SSL_ports", and before my UA checks. I'm also using:
ssl_bump stare all
ssl_bump bump all


Considering the following (taken from previous e-mail):

http_access deny intercepted !localnet
http_access deny interceptedssl !localnet
http_access deny explicit !ORG_all
http_access deny explicit SSL_ports

Would it be "safer" or "indifferent" to use the following right before the UA 
checks?

http_access allow CONNECT interceptedssl SSL_ports


> Just FYI you would be a huge amount better off dropping the UA 
> fingerprinting. It's a _really_ simplistic idea about the HTTP world, 
> and it is partly because of that overly-simplistic nature and depending 
> on unreliable values that you are having so much more trouble than 
> normal admin face.


I'm aware that UA checks are not fully reliable, but in a big corporate 
environment they can reveal a lot of interesting information.

I also know that some HTTP clients mimic others' user-agent strings or 
substrings. They can even sometimes dynamically change them.

However, in my particular case I could define a custom UA for our corporate 
browser allowed to go through Squid. For instance, Firefox can easily do that. 
Other browsers such as Edge seem not to.
In any case, it is not my intention to do so long-term. In the short term I found 
out that:

1) Squid logic *can* be understood :-)

2) some hosts may have HTTP clients that should be blocked even though the rest 
of the Squid rules were not programmed for that (so I couldn't know about it). 
A simple example: we may allow traffic to all microsoft sites, but some 
software may not necessarily be well installed/configured. I found that 
Microsoft Office may connect to an MS site to download or update software with 
a utility/service called OfficeClickToRun. Of course, generic rules in 
squid.conf already blocked unauthorized downloads according to mimetypes or 
filetypes. However, some clients could be whitelisted and allowed to download 
(eg. from all MS sites). In this case, I would not necessarily want 
OfficeClickToRun to update. That could be done by identifying the dst domains, 
but those could change over time, and in any case would require more digging. 


Adobe has similar HTTP client behavior.


Anyway, it's informative to say the least, and can be used to improve the rest 
of the "standard" squid acl access rules.

I was also thinking of using custom HTTP headers such as X-MyCustomHeader: 
Whatever instead of UA strings. Custom headers can easily be added in Firefox, 
and other browsers such as Edge also seem to support that.
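Matching such a header in Squid should also be a one-liner (a sketch; the
header name and value are the made-up ones above):

# squid.conf -- require our private request header
acl corp_header req_header X-MyCustomHeader Whatever
http_access deny !corp_header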

Anyway, I had a great time fiddling with Squid.
Thank you for your assistance.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] block user agent

2017-11-21 Thread Vieri
20Error%5D=89.16.167.134=10.215.144.48=CONNECT==/=Tue,%2021%20Nov%202017%2009%3A07%3A01%20GMT=https%3A%2F%2F89.16.167.134%2F*=89.16.167.134%3A443=IT%40mydomain.org==bad_useragents
X-Squid-Error: 403 Access Denied
X-Cache: MISS from proxy-server1
X-Cache-Lookup: NONE from proxy-server1:3227
Connection: close


--
2017/11/21 10:07:01.090 kid1| 33,2| client_side.cc(832) swanSong: 
local=89.16.167.134:443 remote=10.215.144.48 flags=17
2017/11/21 10:07:01.090 kid1| 20,2| store.cc(996) checkCachable: 
StoreEntry::checkCachable: NO: not cachable
2017/11/21 10:07:01.090 kid1| 20,2| store.cc(996) checkCachable: 
StoreEntry::checkCachable: NO: not cachable

Isn't the message "The request CONNECT 89.16.167.134:443 is DENIED" what I 
should be concentrating on?
Isn't that the root cause?
In another message, you mentioned that I should notice that Squid reports 
another ACL name (in this case, after the name change, it's 
"bad_replied_mimetypes").
In any case, the message "The reply for GET https://www.gentoo.org/ is ALLOWED" 
means that Squid should ALLOW, right?
However, why do I get a 307 redirect to a deny_info page (where incidentally 
the URL refers to bad_useragents, not bad_replied_mimetypes)?

I can't seem to sort this out and make it work without adding "http_access
allow CONNECT SSL_ports" right before checking for the user agent.

Help greatly appreciated.

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] url_rewrite_program and ACLs

2017-11-20 Thread Vieri

From: Amos Jeffries <squ...@treenet.co.nz>
>
> I would compare your custom script to the ext_sql_session_acl.pl.in 
> script we bundle with current Squid.


I've rewritten my perl script, and have been running it for a full week now 
without any issues. Free RAM drops down to alarming values, but then rises back 
up again. In any case, "used swap" is always the same. The only thing that 
keeps me edgy is the fact that the open FDs keep growing (slowly but steadily).
After a few days the value was around 6000, but after a week (today) it's:

Squid Object Cache: Version 3.5.27-20171101-re69e56c
Build Info:
Service Name: squid
Start Time: Mon, 13 Nov 2017 11:06:36 GMT
Current Time:   Mon, 20 Nov 2017 08:48:00 GMT
Connection information for squid:
Number of clients accessing cache:  582
Number of HTTP requests received:   6435251
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   647.3
Average ICP messages per minute since start:0.0
Select loop called: 246503925 times, 2.420 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 4.4%, 60min: 4.3%
Hits as % of bytes sent:    5min: -0.7%, 60min: -6.0%
Memory hits as % of hit requests:   5min: 75.4%, 60min: 67.9%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.1%
Storage Swap size:  29848 KB
Storage Swap capacity:  91.1% used,  8.9% free
Storage Mem size:   29120 KB
Storage Mem capacity:   88.9% used, 11.1% free
Mean Object Size:   13.19 KB
Requests given to unlinkd:  97921
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.18699  0.19742
Cache Misses:          0.19742  0.20843
Cache Hits:            0.00000  0.00000
Near Hits:             0.00000  0.27332
Not-Modified Replies:  0.00000  0.00000
DNS Lookups:           0.08334  0.07618
ICP Queries:           0.00000  0.00000
Resource usage for squid:
UP Time:596484.490 seconds
CPU Time:   15823.550 seconds
CPU Usage:  2.65%
CPU Usage, 5 minute avg:4.38%
CPU Usage, 60 minute avg:   4.86%
Maximum Resident Size: 14493888 KB
Page faults with physical i/o: 0
Memory accounted for:
Total accounted:   -862888 KB
memPoolAlloc calls: 2199430697
memPoolFree calls:  2241183896
File descriptor usage for squid:
Maximum number of file descriptors:   65536
Largest file desc currently in use:   12714
Number of file desc currently in use: 11998
Files queued for open:   0
Available number of file descriptors: 53538
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
2520 StoreEntries
2519 StoreEntries with MemObjects
2314 Hot Object Cache Items
2263 on-disk objects


mgr:filedescriptors shows a great deal of these:

Remote Address        Description
--------------------- -----------
127.0.0.1:1344        127.0.0.1


# squidclient mgr:filedescriptors | grep -c "127.0.0.1:1344"
11578


Port 1344 is where the c-icap daemon listens on.
This is the relevant part in squid.conf:

icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service squidclamav respmod_precache bypass=0 icap://127.0.0.1:1344/clamav
adaptation_access squidclamav allow all
icap_service_failure_limit -1


The number of connections to this port fluctuates over time (it also 
decreases), but overall it clearly increases day by day.
I could have an issue with either c-icap itself or one of its modules.
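If it turns out to be Squid's half of idle ICAP sessions lingering, one blunt
test I may try is disabling ICAP connection reuse and tightening keep-alive on
the c-icap side (a sketch, not tested yet):

# squid.conf -- disable persistent ICAP connections as a test
icap_persistent_connections off

# c-icap.conf -- reap idle keep-alive connections sooner
KeepAliveTimeout 60
MaxKeepAliveRequests 100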
I'll keep an eye on it.
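To do that without staring at a terminal, I'm logging the count periodically
(a trivial cron sketch; the log path is made up):

# crontab -- record the ICAP-side connection count every 10 minutes
*/10 * * * * echo "$(date -Is) $(squidclient mgr:filedescriptors | grep -c '127.0.0.1:1344')" >> /var/log/squid-icap-fds.log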

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] block user agent

2017-11-20 Thread Vieri
d_domains_mimetypes
deny_info 
http://proxy-server1/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=denied_mimetypes
 denied_mimetypes_req
deny_info 
http://proxy-server1/proxy-error/?a=%a=%B=%e=%E=%H=%i=%M=%o=%R=%T=%U=%u=%w=%x=denied_mimetypes
 denied_mimetypes_rep

http_access allow localnet bl_lookup
http_access allow localhost

http_access deny all

I'd greatly appreciate your input on this.

Hoping to understand Squid logic someday.

Thanks,

Vieri
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

