Re: [squid-users] Error Question

2024-06-12 Thread Alex Rousskov

On 2024-06-12 17:51, Jonathan Lee wrote:
when killing squid I only get the following and no core dumps, even 
though core dumping does work in general


Glad you have a working "sanity check" test! I agree with FreeBSD forum 
folks that you have proven that your OS does have core dumps enabled (in 
general). Now we need to figure out what the difference is between that 
working test script and Squid.


Please start Squid from the command line, with the -N command-line option 
(among others that you might be using already), just like you start the 
"sanity check" test script. Then kill Squid the same way you kill the test script.


If the above does not produce a Squid core file, then I would suspect 
that Squid runs as the "squid" user while the test script runs as "root". 
Try starting the test script as the "squid" user (you may be able to use 
"sudo -u squid ..." for that).


If using the same user does not expose the difference, start the test 
script from the directory where you told Squid to dump core.



HTH,

Alex.


I have tested it with a sanity check with the help of FreeBSD 
forum users. However, it just does not show a core dump for me with 
anything: kill -11, kill -6, killall, or kill -SIGABRT. I have it set in 
the config to use the coredump directory also.
forums.freebsd.org 
<https://forums.freebsd.org/threads/core-dumps.93778/page-2>



Jun 12 14:49:09 kernel  pid 87824 (squid), jid 0, uid 100: exited on signal 6
Jun 12 14:47:52 kernel  pid 87551 (squid), jid 0, uid 0: exited on signal 11



On Jun 12, 2024, at 10:19, Jonathan Lee  wrote:

You know what it was: it needed to be bound to the loopback and not 
just the LAN. Again, I am still working on getting a core dump file 
manually; will update once I get one. Chmod might be needed.

Sent from my iPhone

On Jun 12, 2024, at 06:13, Alex Rousskov 
 wrote:


On 2024-06-11 23:32, Jonathan Lee wrote:


So I just run this on the command line: SIGABRT squid?


On Unix-like systems, the command to send a process a signal is 
called "kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html


For example, if you want to abort a Squid worker process that has OS 
process ID (PID) 12345, you may do something like this:


  sudo kill -SIGABRT 12345

You can use "ps" or "top" commands to learn PIDs of processes you 
want to signal.
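
For example, something like this lists candidate PIDs:

  # list Squid processes with their PIDs and owners
  ps ax -o pid,user,command | grep '[s]quid'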



I also added an item to the Netgate forum too, but not many users are 
Squid wizards


Beyond using a reasonable coredump_dir value in squid.conf, the 
system administration problems you need to solve to enable Squid core 
dumps are most likely not specific to Squid.
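
On FreeBSD-based systems such as pfSense, the usual knobs look roughly 
like this (a sketch with example values, not a tested pfSense recipe; 
the shell's core size limit may also matter):

  # squid.conf: directory Squid changes to before dumping core
  coredump_dir /var/squid/logs

  # FreeBSD sysctls that commonly gate core dumps
  sysctl kern.coredump=1
  sysctl kern.corefile=/var/coredumps/%N.%P.core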



HTH,

Alex.


It's funny: as soon as I enabled the sysctl command and set the 
directory, it won't crash anymore. I also changed it to reside on the 
loopback; before, it was only on my LAN interface. I run an external 
drive as my swap partition, or a swap drive; it works, and I get crash 
reports when playing around with stuff. It dumps to /dev/da0 or 
something, and when it reboots it shows in the var/crash folder and 
will display "report ready" in the GUI. Again, if anyone else knows 
pfSense, let me know. I also added an item to the Netgate forum too, 
but not many users are Squid wizards, so it might take a long time to 
get any community input over there.








Re: [squid-users] Error Question

2024-06-12 Thread Alex Rousskov

On 2024-06-11 23:32, Jonathan Lee wrote:


So I just run this on the command line: SIGABRT squid?


On Unix-like systems, the command to send a process a signal is called 
"kill": https://www.man7.org/linux/man-pages/man1/kill.1p.html


For example, if you want to abort a Squid worker process that has OS 
process ID (PID) 12345, you may do something like this:


sudo kill -SIGABRT 12345

You can use "ps" or "top" commands to learn PIDs of processes you want 
to signal.



I also added an item to the Netgate forum too, but not many users are 
Squid wizards


Beyond using a reasonable coredump_dir value in squid.conf, the system 
administration problems you need to solve to enable Squid core dumps are 
most likely not specific to Squid.



HTH,

Alex.



It's funny: as soon as I enabled the sysctl command and set the directory, it 
won't crash anymore. I also changed it to reside on the loopback; before, it was 
only on my LAN interface. I run an external drive as my swap partition, or a 
swap drive; it works, and I get crash reports when playing around with stuff. It 
dumps to /dev/da0 or something, and when it reboots it shows in the var/crash 
folder and will display "report ready" in the GUI. Again, if anyone else knows 
pfSense, let me know. I also added an item to the Netgate forum too, but not 
many users are Squid wizards, so it might take a long time to get any community 
input over there.




Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 18:09, Jonathan Lee wrote:
When I run sysctl debug.kdb.panic=1, I get a crash report for pfSense in 
/var/crash. Should my path for core dumps use my swap drive too?


It is a pfsense-specific question that I do not know the answer to. 
Perhaps others do. However, you may be able to get an answer faster if 
you set coredump_dir in squid.conf to /var/crash, start Squid with that 
configuration, and then kill a running Squid worker with SIGABRT.
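
In squid.conf terms, that test amounts to something like this (the PID 
is an example; use ps or top to find a real worker PID):

  coredump_dir /var/crash

  # restart Squid with the new setting, then abort a worker:
  sudo kill -SIGABRT 12345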



HTH,

Alex.



On Jun 11, 2024, at 14:42, Alex Rousskov wrote:

On 2024-06-11 17:06, Jonathan Lee wrote:

I can't locate the dump file for the segmentation fault; it never 
generates one.


I assume that you cannot locate the core dump file because your 
OS/environment is not configured to produce core dump files. Enabling 
core dumps is a sysadmin task that is mostly independent from Squid 
specifics. The FAQ I linked to earlier has some hints, but none are 
pfsense-specific. If others on the list do not tell you how to enable 
coredumps on pfsense, you may want to ask on pfsense or sysadmin forums.


Alternatively, you can try starting Squid from gdb or attaching gdb to 
a running Squid kid process, but neither is trivial, especially if you 
are using SMP Squid. The same FAQ has some hints.
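
For the attach route, a basic gdb session looks like this (a sketch; 
the binary path and PID are examples):

  sudo gdb /usr/local/sbin/squid 12345
  (gdb) continue
  ... reproduce the crash, then ...
  (gdb) backtrace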


BTW, to test whether core dumps are enabled in your environment, you 
do not need to wait for Squid to crash. Instead, you can send a 
SIGABRT signal (as "root" if needed) to any running test process and 
see whether it creates a core dump file when crashing.




I am running a cache; it shows a swap file, however it is not readable.


There are many kinds of swap files, but the core file we need is 
probably not one of them.




I fixed the other issues.


Glad to hear that!

Alex.


On Jun 11, 2024, at 14:00, Alex Rousskov 
 wrote:


On 2024-06-11 14:46, Jonathan Lee wrote:
2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db

2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...

2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log

2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about 
the segmentation fault bug you have reported earlier. The above log 
is _not_ the requested backtrace that we need to triage that bug. If 
there is another problem you need help with, please start a new 
mailing list thread and detail _that_ problem there.



Thank you,

Alex.


On Jun 11, 2024, at 11:24, Jonathan Lee  
wrote:


Thanks, I have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again

On Jun 11, 2024, at 11:17, Jonathan Lee 
 wrote:


I have attempted to upgrade; the program fails to recognize 
"DHParams Key Size", will no longer use my certificates, and 
shows many errors. I am kind of stuck on 5.8.


I do not know where the core dump would be located on pfSense; let 
me research this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov 
 wrote:


On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this?


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but 
without a usable stack trace, it is unlikely that somebody will 
guess what is going on with your Squid.
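
Once a core file exists, extracting the backtrace typically looks like 
this (paths are examples; FreeBSD names core files <name>.core by default):

  gdb /usr/local/sbin/squid /var/squid/logs/squid.core
  (gdb) backtrace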


Please note that you are running Squid v5 that is no longer 
supported by the Squid Project. You should upgrade to v6+. 
However, I do not know whether that upgrade is going to address 
the specific problem you are suffering from.



HTH,

Alex.



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 17:06, Jonathan Lee wrote:

I can't locate the dump file for the segmentation fault; it never 
generates one.


I assume that you cannot locate the core dump file because your 
OS/environment is not configured to produce core dump files. Enabling 
core dumps is a sysadmin task that is mostly independent from Squid 
specifics. The FAQ I linked to earlier has some hints, but none are 
pfsense-specific. If others on the list do not tell you how to enable 
coredumps on pfsense, you may want to ask on pfsense or sysadmin forums.


Alternatively, you can try starting Squid from gdb or attaching gdb to a 
running Squid kid process, but neither is trivial, especially if you are 
using SMP Squid. The same FAQ has some hints.


BTW, to test whether core dumps are enabled in your environment, you do 
not need to wait for Squid to crash. Instead, you can send a SIGABRT 
signal (as "root" if needed) to any running test process and see whether 
it creates a core dump file when crashing.




I am running a cache; it shows a swap file, however it is not readable.


There are many kinds of swap files, but the core file we need is 
probably not one of them.




I fixed the other issues.


Glad to hear that!

Alex.


On Jun 11, 2024, at 14:00, Alex Rousskov 
 wrote:


On 2024-06-11 14:46, Jonathan Lee wrote:
2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db

2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...

2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log

2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about 
the segmentation fault bug you have reported earlier. The above log is 
_not_ the requested backtrace that we need to triage that bug. If 
there is another problem you need help with, please start a new 
mailing list thread and detail _that_ problem there.



Thank you,

Alex.


On Jun 11, 2024, at 11:24, Jonathan Lee  
wrote:


Thanks, I have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again

On Jun 11, 2024, at 11:17, Jonathan Lee  
wrote:


I have attempted to upgrade; the program fails to recognize 
"DHParams Key Size", will no longer use my certificates, and 
shows many errors. I am kind of stuck on 5.8.


I do not know where the core dump would be located on pfSense; let 
me research this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov 
 wrote:


On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this?


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but 
without a usable stack trace, it is unlikely that somebody will 
guess what is going on with your Squid.


Please note that you are running Squid v5 that is no longer 
supported by the Squid Project. You should upgrade to v6+. 
However, I do not know whether that upgrade is going to address 
the specific problem you are suffering from.



HTH,

Alex.



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 14:46, Jonathan Lee wrote:

2024-05-16 14:10:23 [60780] loading dbfile 
/var/db/squidGuard/Nick_Blocks/urls.db
2024/06/11 10:23:05 kid1| FATAL: Received Segment Violation...dying.
2024/06/11 10:23:25 kid1| Starting Squid Cache version 5.8 for 
aarch64-portbld-freebsd14.0...
2024/06/11 10:23:25 kid1| Service Name: squid
2024-06-11 10:23:25 [9471] (squidGuard): can't write to logfile 
/var/log/squidGuard/squidGuard.log
2024-06-11 10:23:25 [9471] New setting: logdir: /var/squidGuard/log
2024-06-11 10:23:25 [9471] New setting: dbhome: /var/db/squidGuard
2024-06-11 10:23:25 [9471] init domainlist 
/var/db/squidGuard/blk_blacklists_adult/domains
2024-06-11 10:23:25 [9471] loading dbfile 
/var/db/squidGuard/blk_blacklists_adult/domains.db
2024-06-11 10:23:25 [9471] init expressionlist 
/var/db/squidGuard/blk_blacklists_adult/expressions

There is my log file being blocked for some reason


Just to avoid a misunderstanding: This mailing list thread is about the 
segmentation fault bug you have reported earlier. The above log is _not_ 
the requested backtrace that we need to triage that bug. If there is 
another problem you need help with, please start a new mailing list 
thread and detail _that_ problem there.



Thank you,

Alex.



On Jun 11, 2024, at 11:24, Jonathan Lee  wrote:

Thanks, I have enabled

coredump_dir /var/squid/logs

I will submit a dump as soon as it occurs again


On Jun 11, 2024, at 11:17, Jonathan Lee  wrote:

I have attempted to upgrade; the program fails to recognize "DHParams Key Size", 
will no longer use my certificates, and shows many errors. I am kind of 
stuck on 5.8.

I do not know where the core dump would be located on pfSense; let me research 
this and get back to you.


On Jun 11, 2024, at 11:04, Alex Rousskov  
wrote:

On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.
Does anyone know how to fix this?


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but without a usable 
stack trace, it is unlikely that somebody will guess what is going on with your 
Squid.

Please note that you are running Squid v5 that is no longer supported by the 
Squid Project. You should upgrade to v6+. However, I do not know whether that 
upgrade is going to address the specific problem you are suffering from.


HTH,

Alex.



Re: [squid-users] Error Question

2024-06-11 Thread Alex Rousskov

On 2024-06-11 13:24, Jonathan Lee wrote:

FATAL: Received Segment Violation...dying.

Does anyone know how to fix this?


Please post full backtrace from this failure:
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps

The other information you have already provided may help, but without a 
usable stack trace, it is unlikely that somebody will guess what is 
going on with your Squid.


Please note that you are running Squid v5 that is no longer supported by 
the Squid Project. You should upgrade to v6+. However, I do not know 
whether that upgrade is going to address the specific problem you are 
suffering from.



HTH,

Alex.



Re: [squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-11 Thread Alex Rousskov

On 2024-06-11 03:33, Dieter Bloms wrote:


I've added that option like:
tls_outgoing_options options=0x40000 ...
but no change.

I tried 0x4 (for SSL_OP_LEGACY_SERVER_CONNECT), but also without any change.


I have seen this behavior before. My current working theory is that 
Squid ignores tls_outgoing_options when SslBump peeks or stares at the 
Squid-to-server TLS connection. In case of staring, this smells like a 
Squid bug to me. The peeking case is more nuanced, but Squid code 
modifications are warranted in that case as well.


If your Squid is peeking and splicing the Squid-origin connection, then 
please try the following unofficial patch:

https://github.com/measurement-factory/squid/commit/4dad35eb.patch

The patch sets SSL_OP_LEGACY_SERVER_CONNECT unconditionally when 
peeking, for the reasons explained in the patch. This change has been 
proposed for official adoption at

https://github.com/squid-cache/squid/pull/1839


I do not have a patch for the staring use case.


HTH,

Alex.




I use a debian bookworm container, and when I use openssl s_client
without -legacy_server_connect I can't establish a TLS connection:

--snip--
root@tarski:/# openssl s_client -connect cisco.com:443
CONNECTED(00000003)
4097F217F17F:error:0A000152:SSL routines:final_renegotiate:unsafe legacy 
renegotiation disabled:../ssl/statem/extensions.c:893:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5177 bytes and written 322 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
 Protocol  : TLSv1.2
 Cipher: 
 Session-ID: 
869B4016868DFF23D1DAB3A33F99F9879274C1F62FD45BF9DF839B27735FC72C
 Session-ID-ctx:
 Master-Key:
 PSK identity: None
 PSK identity hint: None
 SRP username: None
 Start Time: 1718090662
 Timeout   : 7200 (sec)
 Verify return code: 0 (ok)
 Extended master secret: no
---
root@tarski:/#
--snip--

but when I add the -legacy_server_connect option I can, as shown here:

--snip--
---
root@cdxiaphttpproxy04:/# openssl s_client -legacy_server_connect -connect 
cisco.com:443
CONNECTED(00000003)
depth=2 C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
verify return:1
depth=1 C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
verify return:1
depth=0 C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
verify return:1
---
Certificate chain
  0 s:C = US, ST = California, L = San Jose, O = Cisco Systems Inc., CN = 
www.cisco.com
i:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Nov 14 05:48:20 2023 GMT; NotAfter: Nov 13 05:47:20 2024 GMT
  1 s:C = US, O = IdenTrust, OU = HydrantID Trusted Certificate Service, CN = 
HydrantID Server CA O1
i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Dec 12 16:56:15 2019 GMT; NotAfter: Dec 12 16:56:15 2029 GMT
  2 s:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
i:C = US, O = IdenTrust, CN = IdenTrust Commercial Root CA 1
a:PKEY: rsaEncryption, 4096 (bit); sigalg: RSA-SHA256
v:NotBefore: Jan 16 18:12:23 2014 GMT; NotAfter: Jan 16 18:12:23 2034 GMT
---
Server certificate
-BEGIN CERTIFICATE-
MIIHkDCCBnigAwIBAgIQQAGLzF+ffeG2bq2GaN2HuTANBgkqhkiG9w0BAQsFADBy
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MS4wLAYDVQQLEyVIeWRy
YW50SUQgVHJ1c3RlZCBDZXJ0aWZpY2F0ZSBTZXJ2aWNlMR8wHQYDVQQDExZIeWRy
YW50SUQgU2VydmVyIENBIE8xMB4XDTIzMTExNDA1NDgyMFoXDTI0MTExMzA1NDcy
MFowajELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExETAPBgNVBAcT
CFNhbiBKb3NlMRswGQYDVQQKExJDaXNjbyBTeXN0ZW1zIEluYy4xFjAUBgNVBAMT
DXd3dy5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5
CZi7tsogSJCAE5Zu78Z57FBC67OpK0OkIyVeixqKg57K/wqE4UF59GHHHVwOZhGv
VgsD3jjiQOhxZbUJnaen0+cMH6s1lSRZtiIi2K/Z1Oy+1Gytpw2bYZTbuWHWk1/e
VUgH8dS6PbwQp+/KAzV52Z98asWGzxWYqfJV5GUdC5V2MPDuDRfbrrl6uxVb05tN
69xfCIAR2KJtM64UJifesa7ItQBMzh1TYqPa4A15Ku6MgiuOkUddCrkZWRt1uevD
E6k47uR4wcuM/hF/eSX8wl/BaKrM3eiAc94Thom0wvKzlG0uziL4cux/O6O0na0w
o3WPfbSQltquqVPb9Z1JAgMBAAGjggQoMIIEJDAOBgNVHQ8BAf8EBAMCBaAwgYUG
CCsGAQUFBwEBBHkwdzAwBggrBgEFBQcwAYYkaHR0cDovL2NvbW1lcmNpYWwub2Nz
cC5pZGVudHJ1c3QuY29tMEMGCCsGAQUFBzAChjdodHRwOi8vdmFsaWRhdGlvbi5p
ZGVudHJ1c3QuY29tL2NlcnRzL2h5ZHJhbnRpZGNhTzEucDdjMB8GA1UdIwQYMBaA
FIm4m7ae7fuwxr0N7GdOPKOSnS35MCEGA1UdIAQaMBgwCAYGZ4EMAQICMAwGCmCG
SAGG+S8ABgMwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDovL3ZhbGlkYXRpb24uaWRl
bnRydXN0LmNvbS9jcmwvaHlkcmFudGlkY2FvMS5jcmwwggE9BgNVHREEggE0MIIB
MIIJY2lzY28uY29tgg13d3cuY2lzY28uY29tgg53d3cxLmNpc2NvLmNvbYIOd3d3
Mi5jaXNjby5jb22CDnd3dzMuY2lzY28uY29tghB3d3ctMDEuY2lzY28uY29tghB3
d3ctMDIuY2lzY28uY29tghF3d3ctcnRwLmNpc2NvLmNvbYISd3d3MS1zczIuY2lz

Re: [squid-users] Howto enable openssl option UNSAFE_LEGACY_RENEGOTIATION ?

2024-06-10 Thread Alex Rousskov

On 2024-06-10 08:10, Dieter Bloms wrote:


I have activated ssl_bump and must activate the UNSAFE_LEGACY_RENEGOTIATION 
option to enable access to https://cisco.com.
The web server does not support secure renegotiation.

I have tried to set the following options, but squid does not recognize any of 
them:

tls_outgoing_options options=UNSAFE_LEGACY_RENEGOTIATION

or

tls_outgoing_options options=ALLOW_UNSAFE_LEGACY_RENEGOTIATION

and

tls_outgoing_options options=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION

but no matter which syntax I use, I always get the message during squid -k parse:

“2024/06/10 14:08:17| ERROR: Unknown TLS option 
ALLOW_UNSAFE_LEGACY_RENEGOTIATION”

How can I activate secure renegotiation for squid?


To set an OpenSSL connection option that Squid does not know by name, 
use that option's hex value (based on your OpenSSL sources). For example:


# SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is defined to be
# SSL_OP_BIT(18) which is equal to (1 << 18) or 0x40000 in hex.
tls_outgoing_options options=0x40000

Disclaimer: I have not tested the above and do not know whether adding 
that option achieves what you want to achieve.
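
One way to double-check the value against the OpenSSL headers on your 
own system (the header path varies by platform):

  # OpenSSL 1.1 ssl.h defines:
  #   SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION 0x00040000U
  grep ALLOW_UNSAFE_LEGACY_RENEGOTIATION /usr/include/openssl/ssl.h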



HTH,

Alex.



Re: [squid-users] [External Sender] Re: Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Alex Rousskov

On 2024-06-05 12:05, Akash Karki (CONT) wrote:


Is there anything specific we need to check in the documents?


Yes, anything that mentions features or directives you are using (or 
would like to use).




If I can get any document to refer to, that would be great!!


Release notes for Squid vN should be in the doc/release-notes/release-N.html 
file inside the official Squid source code tarball for vN. For a given major 
Squid version, use the tarball for the latest available minor release. 
Official tarballs for various versions are currently available by 
following version-specific links at http://www.squid-cache.org/Versions/


The following wiki pages may also contain useful info:
https://wiki.squid-cache.org/Releases/Squid-5
https://wiki.squid-cache.org/Releases/Squid-6
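
For example, to read the v6 notes from a tarball (6.9 is just an example 
version; the URL follows the Versions page pattern above):

  curl -O http://www.squid-cache.org/Versions/v6/squid-6.9.tar.gz
  tar -xzf squid-6.9.tar.gz
  less squid-6.9/doc/release-notes/release-6.html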


HTH,

Alex.



On Wed, Jun 5, 2024 at 4:31 PM Alex Rousskov wrote:

On 2024-06-05 10:30, Akash Karki (CONT) wrote:

 > I want to understand if we can go straight from 4.15 to 6.x (n-1 of
 > latest version) without any intermediary steps or do we have to update
 > to intermediary first and then move to the n-1 version of 6.9?

Go straight to the latest v6. While doing that, study release notes for
all the Squid versions you are skipping to flag configuration directives
that you need to adjust (and other upgrade caveats).

Needless to say, test before you deploy the upgraded version -- a lot of
things have changed since v4, and not all of the changes may be covered
in release notes. When in doubt, ask (specific) questions.


HTH,

Alex.



 > On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:
 >
 >     Hi Team,
 >
 >     We are running on squid ver 4.15 and want to update to n-1 of the
 >     latest ver(I believe 6.9 is the latest ver).
 >
 >     I want to understand if we can go straight from 4.15 to 6.x
(n-1 of
 >     latest version) without any intermediary steps or do we have to
 >     update to intermediary first and then move to the n-1 version
of 6.9?
 >
 >     Kindly send us the detailed guidance!
 >
 >     --
 >     Thanks & Regards,
 >     Akash Karki
 >
 >
 >     Save Nature to Save yourself :)
 >
 >
 >
 > --
 > Thanks & Regards,
 > Akash Karki
 > UK Hawkeye Team
 > Slack: #uk-monitoring
 > Confluence: UK Hawkeye
 > <https://confluence.kdc.capitalone.com/display/UH/UK+Hawkeye>
 >
 > Save Nature to Save yourself :)
 >




--
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye 
<https://confluence.kdc.capitalone.com/display/UH/UK+Hawkeye>


Save Nature to Save yourself :)




Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-05 Thread Alex Rousskov

On 2024-06-05 10:30, Akash Karki (CONT) wrote:

I want to understand if we can go straight from 4.15 to 6.x (n-1 of 
latest version) without any intermediary steps or do we have to update 
to an intermediary version first and then move to the n-1 version of 6.9?


Go straight to the latest v6. While doing that, study release notes for 
all the Squid versions you are skipping to flag configuration directives 
that you need to adjust (and other upgrade caveats).


Needless to say, test before you deploy the upgraded version -- a lot of 
things have changed since v4, and not all of the changes may be covered 
in release notes. When in doubt, ask (specific) questions.



HTH,

Alex.




On Wed, Jun 5, 2024 at 3:20 PM Akash Karki (CONT) wrote:

Hi Team,

We are running on squid ver 4.15 and want to update to n-1 of the
latest ver(I believe 6.9 is the latest ver).

I want to understand if we can go straight from 4.15 to 6.x (n-1 of
latest version) without any intermediary steps or do we have to 
update to intermediary first and then move to the n-1 version of 6.9?


Kindly send us the detailed guidance!

-- 
Thanks & Regards,

Akash Karki


Save Nature to Save yourself :)



--
Thanks & Regards,
Akash Karki
UK Hawkeye Team
Slack: #uk-monitoring
Confluence: UK Hawkeye



Save Nature to Save yourself :)









Re: [squid-users] IPv6 happy eyeball on dualstack host

2024-06-05 Thread Alex Rousskov

On 2024-06-05 07:31, sachin gupta wrote:

We are shifting to IPv6 dual stack hosts. As per squid documentation, 
IPv6 is enabled by default.


That statement is a bit misleading: IPv6 detection or probing is enabled 
in default Squid builds (i.e. ./configure --enable-ipv6 is the default), 
but whether a Squid instance will actually "enable IPv6" also depends on 
the result of certain startup probes or checks. If those startup checks 
fail, Squid will not send DNS AAAA queries.



As per documentation, based on the DNS response squid will try both IPv4 
and IPv6 if DNS returns both addresses. 


FWIW, this summary does not quite match modern Squid behavior. The 
difference is _not_ important for your current triage because your Squid 
currently does not even request an IPv6 address from DNS. Once you fix 
that, you should _not_ expect Squid to use both IPv4 and IPv6 TCP/IP 
connections in every test case: Squid may or may not use both address 
families, depending on various runtime factors that affect Squid's Happy 
Eyeballs algorithm (e.g., see happy_eyeballs_connect_timeout directive).
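
For example (value in milliseconds; a minimal illustration of the knob, 
not a recommendation):

  # minimum delay between starting the preferred connection attempt
  # and starting a spare attempt from the other address family
  happy_eyeballs_connect_timeout 300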




But I see that squid is only getting an IPv4 address


To be more precise, your Squid does not send a DNS AAAA query after 
sending a DNS A query (no idnsSendSlaveQuery line after idnsALookup 
in your cache.log). That fact suggests that your Squid runs with 
disabled IPv6. I suggest the following triage steps:


1. Examine "/path/to/your/executable/squid -v" output to make sure your 
Squid executable is _not_ built with --disable-ipv6.


2. Examine level-1 cache.log for startup BCP 177 warnings like this one:
   WARNING: BCP 177 violation. Detected non-functional IPv6 loopback

3. Examine _early_ level-2 startup ProbeTransport messages. For example:
   $ your/squid -f your.squid.conf -N -X -d9 2>&1 | grep ProbeTransport
ProbeTransport: Detected IPv6 hybrid or v4-mapping stack...
ProbeTransport: Detected functional IPv6 loopback ...
ProbeTransport: IPv6 transport Enabled


Someday, somebody will (a) completely remove --disable-ipv6 and (b) 
improve startup probing code to make steps 1 and 3 completely 
unnecessary. We have recently done a couple of baby steps towards (a).



HTH,

Alex.


though with the dig command I can see the IPv6 address as well. 
Also, from the same host, I am able to make a curl request to google using IPv6.


DNS logs for squid

2024/06/05 10:41:54.953 kid1| 5,4| AsyncCallQueue.cc(59) fireNext: 
entering helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1, 
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)


2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(41) make: make call 
helperHandleRead [call4]


2024/06/05 10:41:54.953 kid1| 78,3| dns_internal.cc(1792) idnsALookup: 
idnsALookup: buf is 32 bytes for www.google.com, id = 0xe006


2024/06/05 10:41:54.953 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The 
AsyncCall helperHandleRead constructed, this=0x55c87a9301e0 [call89]


2024/06/05 10:41:54.953 kid1| 5,5| Read.cc(58) comm_read_base: 
comm_read, queueing read for conn4 local=[::] remote=[::] FD 13 flags=1; 
asynCall 0x55c87a9301e0*1


2024/06/05 10:41:54.954 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 13, 
type=1, handler=1, client_data=0x7f183475a700, timeout=0


2024/06/05 10:41:54.954 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: 
leaving helperHandleRead(conn4 local=[::] remote=[::] FD 13 flags=1, 
data=0x55c87a45bb38, size=5, buf=0x55c87a45bd60)


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1318) idnsRead: 
idnsRead: starting with FD 11


2024/06/05 10:41:54.955 kid1| 5,5| ModEpoll.cc(116) SetSelect: FD 11, 
type=1, handler=1, client_data=0, timeout=0


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1364) idnsRead: 
idnsRead: FD 11: received 48 bytes from 10.0.32.2:53 


2024/06/05 10:41:54.955 kid1| 78,3| dns_internal.cc(1171) idnsGrokReply: 
idnsGrokReply: QID 0xe006, 1 answers


2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile: 
0x55c87a944210 made conn56 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1


2024/06/05 10:41:54.955 kid1| 5,5| Connection.cc(99) cloneProfile: 
0x55c87a944830 made conn57 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1


2024/06/05 10:41:54.955 kid1| 5,3| ConnOpener.cc(43) ConnOpener: will 
connect to conn57 local=0.0.0.0 remote=142.251.215.228:80 
 HIER_DIRECT flags=1 with 15 timeout


2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(428) comm_init_opened: conn58 
local=0.0.0.0 remote=[::] FD 16 flags=1 is a new socket


2024/06/05 10:41:54.955 kid1| 5,4| AsyncCall.cc(29) AsyncCall: The 
AsyncCall Comm::ConnOpener::earlyAbort constructed, this=0x55c87a944cd0 
[call95]


2024/06/05 10:41:54.955 kid1| 5,5| comm.cc(1004) comm_add_close_handler: 
comm_add_close_handler: FD 16, AsyncCall=0x55c87a944cd0*1


2024/06/05 10:41:54.955 kid1| 5,4| 

Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Alex Rousskov

On 2024-05-30 02:30, Rik Theys wrote:

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:
squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.


As Amos has explained, SslBump at step2 is supposed to relay TLS Client 
Hello information via fake CONNECT request headers. SNI should go into 
CONNECT Host header and CONNECT target pseudo-header. That fake CONNECT 
request should then be checked for forgery.


Whether all of the above actually happens is an open question. I bet a 
short answer is "no". I am not just being extra cautious here based on 
overall poor SslBump code quality! I believe there are "real bugs" on 
that code path because we have fixed some of them (and I hope to find 
the time to post a polished version of those fixes for the official 
review in the foreseeable future). For an example that fuels my 
concerns, see the following unofficial commit message:

https://github.com/measurement-factory/squid/commit/462aedcc


I believe that for my use-case (only splice certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then.


I suspect that there is currently no solution that does not involve 
writing complex external ACL helpers or complex Squid code fixes.



I guess that explains why, if I add a "%ssl::" server certificate code to the 
logformat for the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just 
does not get a plain text certificate in peeking configurations.


I've updated the configuration to use stare/bump instead. The field is 
then indeed added to the log file. A curl request that forces the 
connection to a different IP address then also fails because the 
certificate isn't valid for the name. There's no mention of the Host 
header not matching the IP address, but I assume that check comes after 
the certificate check then.


In most cases, the forgery check should happen before the certificate 
check. I suspect that it does not happen at all in your test case.



HTH,

Alex.



Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above 
configuration allows CONNECT tunnels to prohibited domains (i.e. 
domains that do not match allowed_domains). Consider restricting your 
"allow...CONNECT" rule to step1. For example:


    http_access allow allowed_clients step1 CONNECT

Thanks, I've updated my configuration.



Please do test any suggested changes. There are too many variables here 
for me to guarantee that a particular set of http_access and ssl_bump 
rules works as expected.



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, 
so ignore for them". To validate that theory, use "debug_options 
ALL,3" and look for "SECURITY ALERT: Host header forgery detected" 
messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm currently 
using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" messages 
is present in v5.5, but perhaps it is not triggered in that version (or 
even in modern/supported Squids) when I expect it to be triggered. 
Unfortunately, there are too many variables for me to predict what 
exactly went wrong in your particular test case without doing a lot more 
work (and I cannot volunteer to do that work right now).



Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Yes, it is a known problem (even for developers). There are also bugs 
related to step boundaries.



Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel (supported 
today but problematic for other reasons) or be enhanced to use 
out-of-band validation tricks (that come with their own set of problems).


I guess that explains why if I add a "%ssl::" server certificate code to 
the logformat for the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just does 
not get a plain text certificate in peeking configurations.



Is there a way to configure squid to validate that the server 
certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



Do you happen to known which version of Squid introduced that check?


IIRC, Squid v5.5 has that code.


HTH,

Alex.



Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above configuration 
allows CONNECT tunnels to prohibited domains (i.e. domains that do not 
match allowed_domains). Consider restricting your "allow...CONNECT" rule 
to step1. For example:


http_access allow allowed_clients step1 CONNECT
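
For completeness, "step1" is not a built-in ACL name; it needs a 
definition such as:

acl step1 at_step SslBump1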


squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, so 
ignore for them". To validate that theory, use "debug_options ALL,3" and 
look for "SECURITY ALERT: Host header forgery detected" messages in 
cache.log.


Please note that in many environments forgery detection does not work 
well (for cases where it is performed) due to clients and Squid seeing 
different sets of IP addresses for the same host name. There are 
numerous complaints about that in the squid-users archives.



For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be used 
to bypass the restrictions.


Agreed.


Is there an option in squid to make it perform a forward DNS lookup for 
the domain from the SNI information from step1


FYI: SNI comes from step2. step1 looks at TCP/IP client info.


to validate that the IP 
address we're trying to connect to is actually valid for that host? In 
the example above, a DNS lookup for wordpress.org would return 
198.143.164.252 as the IP address. This is not the IP address we're 
trying to connect to, so squid should block the request.


AFAICT, there is no built-in support for that in current Squid code. One 
could enhance Squid or write an external ACL to perform that kind of 
validation. See above for details/caveats.
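
A rough sketch of such an external ACL follows (the helper name and 
logic are entirely hypothetical; %ssl::>sni requires a Squid version 
that accepts logformat codes in external_acl_type, and the DNS-mismatch 
caveats above apply in full):

  # squid.conf (sketch)
  external_acl_type sni_ip_check ttl=60 %DST %ssl::>sni /usr/local/bin/check_sni_ip.sh
  acl sni_matches_ip external sni_ip_check
  http_access deny CONNECT !sni_matches_ip

with /usr/local/bin/check_sni_ip.sh being something like:

  #!/bin/sh
  # reply OK when the destination IP is among the addresses the SNI
  # name resolves to, ERR otherwise (host(1) comes with BIND utilities)
  while read ip sni; do
    if host "$sni" | grep -qw "$ip"; then echo OK; else echo ERR; fi
  done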



Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the server 
certificate.


Peeking at the server certificates happens at step3. In many modern use 
cases, server certificates are encrypted, so a _peeking_ Squid cannot 
see them. To validate, Squid has to bump the tunnel (supported today but 
problematic for other reasons) or be enhanced to use out-of-band 
validation tricks (that come with their own set of problems).



Is there a way to configure squid to validate that the 
server certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



HTH,

Alex.



Re: [squid-users] Simulate connections for tuning squid?

2024-05-24 Thread Alex Rousskov

On 2024-05-24 01:43, Periko Support wrote:


I would like to know if there exists a tool that helps us simulate
connections to squid and helps us tune squid for different scenarios
like small, medium or large networks?


Yes, there are many tools, offering various tradeoffs, including:

* Apache "ab": Not designed for testing proxies but well-known and 
fairly simple.


* Web Polygraph: Designed for testing proxies but has a steep learning 
curve and lacks fresh releases.


* curl/wget/netcat: Not designed for testing performance but well-known 
and very simple.
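
For instance, Apache ab can push traffic through a proxy like this 
(host names and port are placeholders):

  # 1000 requests, 50 concurrent, via the proxy
  ab -X proxy.example.com:3128 -n 1000 -c 50 http://origin.example.com/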


Alex.



Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Alex Rousskov

On 2024-05-23 13:06, Robin Wood wrote:
I've tried searching for Squid and sslbump and not found anything useful 
that works with the current version; that is why I'm asking here. I was 
hoping someone could point me at an example that would definitely work 
with the current version of Squid.


FWIW, most of the basics are covered at
https://wiki.squid-cache.org/Features/SslPeekAndSplice

That page was written for a feature introduced in v3.5, but it is not 
specific to that Squid version.



HTH,

Alex.



 > On May 23, 2024, at 08:49, Alex Rousskov wrote:
 >
 > On 2024-05-22 03:49, Robin Wood wrote:
 >
 >> I'm trying to work out how to add an extra header to a TLS
connection.
 >
 > I assume that you want to add a header field to an HTTP request
or response that is being transmitted inside a TLS connection
between a TLS client (e.g., a user browser) and an HTTPS origin server.
 >
 > Do you control the client that originates that TLS connection (or
its OS/environment) or the origin server? If you do not, then what
you want is impossible -- TLS encryption exists, in part, to prevent
such traffic modifications.
 >
 > If you control the client that originates that TLS connection (or
its OS/environment), then you may be able to, in _some_ cases, add
that header by configuring the client (or its OS/environment) to
trust you as a Certificate Authority, minting your own X509
certificates, and configuring Squid to perform a "man in the middle"
attack on client-server traffic, using your minted certificates. You
can search for Squid SslBump to get more information about this
feature, but the area is full of insurmountable difficulties and
misleading advice. Avoid it if at all possible!
 >
 >
 > HTH,
 >
 > Alex.
 >
 >
 >> I've found information on how to do it on what I think is the
pre-3.5 release, but I can't find any useful information on doing it
on the current version.
 >> Could someone give me an example or point me at some
documentation on how to do it.
 >> Thanks
 >> Robin


Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Alex Rousskov

On 2024-05-22 03:49, Robin Wood wrote:


I'm trying to work out how to add an extra header to a TLS connection.


I assume that you want to add a header field to an HTTP request or 
response that is being transmitted inside a TLS connection between a TLS 
client (e.g., a user browser) and an HTTPS origin server.


Do you control the client that originates that TLS connection (or its 
OS/environment) or the origin server? If you do not, then what you want 
is impossible -- TLS encryption exists, in part, to prevent such traffic 
modifications.


If you control the client that originates that TLS connection (or its 
OS/environment), then you may be able to, in _some_ cases, add that 
header by configuring the client (or its OS/environment) to trust you as 
a Certificate Authority, minting your own X509 certificates, and 
configuring Squid to perform a "man in the middle" attack on 
client-server traffic, using your minted certificates. You can search 
for Squid SslBump to get more information about this feature, but the 
area is full of insurmountable difficulties and misleading advice. Avoid 
it if at all possible!



HTH,

Alex.


I've found information on how to do it on what I think is the pre-3.5 
release, but I can't find any useful information on doing it on the 
current version.


Could someone give me an example or point me at some documentation on 
how to do it.


Thanks

Robin



Re: [squid-users] log_referrer question

2024-05-21 Thread Alex Rousskov

On 2024-05-21 14:47, Bobby Matznick wrote:
To add and maybe clarify what my confusion is, the log entries below 
(hidden internal/external IPs, domain and username) don't seem to show 
what I expected, a line marked “referrer”. Am I misunderstanding how 
that should show up in the log?


Kind of: HTTP CONNECT requests normally do not have Referer headers. 
These requests establish a TCP tunnel to an origin server through Squid. 
The "real" requests to origin server are inside that tunnel.


In some cases, it is possible to configure the client and Squid in such 
a way that Squid can look inside that tunnel and find "real" requests, 
but doing so well requires a lot of effort, including becoming a 
Certificate Authority and configuring client to trust certificates 
produced by that Certificate Authority. You can search for SslBump to 
get more information, but the area is full of insurmountable 
difficulties and misleading advice. Avoid it if at all possible.



HTH,

Alex.



--

Message: 1
Date: Tue, 21 May 2024 17:50:49 +0000
From: Bobby Matznick
Subject: [squid-users] log_referrer question

I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


It is working, as far as logging the normal stuff I would see before 
having tried to implement referrer. I noticed somewhere that you need to 
build squid with --enable-referer-log, but that was an older version, 
looked like 3.1 and lower; I am using 4.13. So, I checked with squid -v 
and do not see "--enable-referer-log" as one of the configure options 
used during install. Would I need to reinstall, or is that no longer 
necessary in version 4.13? Thanks!!


Bobby


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] log_referrer question

2024-05-21 Thread Alex Rousskov

On 2024-05-21 13:50, Bobby Matznick wrote:
I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


Please do not redefine built-in logformat configurations like "squid" 
and "combined". Name and define your own instead.



It is working, as far as logging the normal stuff I would see before 
having tried to implement referrer. I noticed somewhere that you need to 
build squid with --enable-referer-log, but that was an older version, looked 
like 3.1 and lower; I am using 4.13.


Please upgrade to v6. Squid v4 is not supported by the Squid Project.


So, I checked with squid -v and do 
not see "--enable-referer-log" as one of the configure options used 
during install. Would I need to reinstall, or is that no longer 
necessary in version 4.13?


referer_log and the corresponding ./configure options were removed 
a long time ago, probably before v4.13 was released.


HTH,

Alex.




--

Message: 1
Date: Tue, 23 Apr 2024 19:41:37 +1200
From: Amos Jeffries
Subject: Re: [squid-users] Warm cold times

On 22/04/24 17:42, Jonathan Lee wrote:
 > Has anyone else taken up the fun challenge of doing windows update 
caching? It is amazing when it works right. It is a complex 
configuration, but it is worth it to see a warm download that 
originally took 30 mins come down instantly to a second client. I didn't 
know how much of the updates are the same across different vendor laptops.

 >

There have been several people over the years.
The collected information is being gathered on the Squid wiki.


If you would like to check and update the information for the current
Windows 11 and Squid 6, etc. that would be useful.

Wiki updates are now made using PRs against the wiki repository on GitHub.





 > Amazing stuff Squid team.
 > I wish I could get some of the Roblox Xbox stuff to cache but it's a 
nightmare to get running with squid in the first place; I had to splice a 
bunch of stuff and also wpad the Xbox system.


FWIW, what I have seen from a routing perspective is that Roblox likes to
use custom ports and P2P connections for a lot of things. So no high
expectations there, but anything cacheable is great news.



 >> On Apr 18, 2024, at 23:55, Jonathan Lee wrote:
 >>
 >> Does anyone know the current warm/cold download times for dynamic 
caching of windows updates?

 >>
 >> I can say my experience was a massive increase in the warm download 
it was delivered in under a couple mins versus 30 or so to download it 
cold. The warm download was almost instant on the second device. Very 
green energy efficient.

 >>
 >>
 >> Does squid 5.8 or 6 work better on warm delivery?

There are no significant differences AFAIK. They both come down to what
you have configured. That said, the ongoing improvements may make v6
some amount of "better" - even if only trivial.



 >> Is there a way to make 100 percent sure a docker container can't get 
inside the cache?


For Windows I would expect the only "100% sure" way is to completely
forbid access to the disk where the 

Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-20 Thread Alex Rousskov

On 2024-05-17 09:51, Andre Bolinhas wrote:


Alex, can you reply to this please?


Already did. Please see
https://lists.squid-cache.org/pipermail/squid-users/2024-May/026677.html

Alex.



 Hi
Well, the performance and NTLM issues that I had with persistent 
connections go back to Squid 3.5, so I never re-enabled them again 
on newer versions; I'm using Squid 5.9 and 6.8 now.


If you tell me that persistent connections are now more stable, 
and that it is even recommended to enable them by default to gain 
performance and also speed up NTLM/Kerberos authentication, I will 
re-enable them on my production servers.



On 17/05/2024 14:42, Alex Rousskov wrote:

On 2024-05-16 19:12, Jonathan Lee wrote:

What about using COSS file system?


Squid does not support COSS cache_dirs since v3.5. If Squid in 
question does disk caching, then rock cache_dirs may be the best bet.


Alex.



On May 16, 2024, at 15:10, Andre Bolinhas wrote:

 Hi
Well, the performance and NTLM issues that I had with persistent 
connections go back to Squid 3.5, so I never re-enabled them again 
on newer versions; I'm using Squid 5.9 and 6.8 now.


If you tell me that persistent connections are now more stable, 
and that it is even recommended to enable them by default to gain 
performance and also speed up NTLM/Kerberos authentication, I will 
re-enable them on my production servers.


Best Regards

On 16/05/2024 21:34, Alex Rousskov wrote:

On 17/05/24 02:23, Bolinhas André wrote:

As I explained, by default I set those directives to off to avoid 
high CPU consumption.


Just FYI: In this context, when you say "default", folks will tend 
to think that you are talking about default Squid configuration 
setting (i.e. something hard-coded in Squid code) rather than the 
actual thing you are talking about (i.e. your custom Squid 
configuration).


I do not know whether disabling persistent connections reduces CPU 
consumption in your environment. There are too many variables. In 
most cases, including NTLM authentication cases detailed by Amos, 
disabling persistent connections hurts performance, but there are 
always exceptions (and bugs).


It is not clear (to me) whether you disable persistent connections 
because they hurt performance in your environment OR you disable 
persistent connections because _you assume_ (without evidence) that 
they hurt performance in your environment.


If you do not know that disabling persistent connections reduces 
CPU consumption in your environment, then you should not disable 
them until you discover strong evidence that they hurt performance. 
At that point, you can share that evidence and ask for 
configuration advice based on that evidence.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Question: cache_mem share among multiple squid instances with the same service_name in SMP mode

2024-05-20 Thread Alex Rousskov

On 2024-05-20 03:35, Zhijian Li (Fujitsu) wrote:


In SMP mode, is it possible for cache_mem to be shared among
multiple squid instances with the same service_name?


Short answer: "Do not run multiple SMP Squid instances with the same 
service_name".


SMP Squid cache[1] is not supposed to be shared across Squid instances. 
Any configurations or actions that result in such sharing are 
unsupported and may lead to undefined behavior. Squid may not emit 
warnings or other diagnostics in such unsupported cases -- no Squid code 
has been written specifically to detect and warn about unsupported 
memory sharing.


For example, running multiple identically-built Squid instances on the 
same "box"[2] with the _same_ service name is unsupported and may lead 
to undefined behavior, especially if SMP Squid cache[1] is enabled.


Running multiple identically-built Squid instances on the same "box"[2] 
with _different_ service names is supported on some OSes because it does 
not lead to unsupported sharing. In other environments, it may lead to 
undefined behavior. This limitation is a Squid bug or a missing feature. 
Developers wishing to remove this limitation should look at 
shm_portable_segment_name_is_path() description and use case.
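
For illustration, a sketch of the supported setup, using distinct 
service names via the -n command line option (the instance names and 
configuration paths are hypothetical):

  squid -n instance1 -f /etc/squid/instance1.conf
  squid -n instance2 -f /etc/squid/instance2.conf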


[1]: Here, "SMP Squid cache" applies to both cache_dir storage (on disk 
and in shared memory) and cache_mem storage (in shared memory). Very 
similar reasoning applies to non-caching SMP Squid instances as well, 
but the question was about caching, so I will not detail these other cases.


[2]: Here, the term "box" is used to mean "isolation environment": "Same 
box" means the same OS instance, the same container instance (if 
containerized), and the same filesystem (i.e. no chroot, jails, or 
similar isolation tricks for each Squid instance). Various OSes isolate 
shared memory segments differently, but many use file systems for some 
shared memory artifacts. If artifacts from different Squid instances 
clash, Squid behavior is undefined.



HTH,

Alex.



Per SmpScale[1], "memory object cache (in most environments)" can be shared 
among workers.
Per smp-enabled-squid[2], "Each set of SMP-aware processes will interact only with 
other processes using the same service name".

So if I have multiple (SMP mode + same service_name) Squid instances, would 
they share the cache_mem objects?

[1] https://wiki.squid-cache.org/Features/SmpScale#what-can-workers-share
[2] 
https://wiki.squid-cache.org/KnowledgeBase/MultipleInstances#smp-enabled-squid


Thanks
Zhijian
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-17 Thread Alex Rousskov

On 2024-05-16 19:12, Jonathan Lee wrote:

What about using COSS file system?


Squid does not support COSS cache_dirs since v3.5. If Squid in question 
does disk caching, then rock cache_dirs may be the best bet.
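
For reference, a minimal rock cache_dir sketch (the path, size, and 
slot-size values are illustrative and need per-deployment tuning):

  cache_dir rock /var/cache/squid/rock 16384 slot-size=16384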


Alex.



On May 16, 2024, at 15:10, Andre Bolinhas wrote:

 Hi
Well, the performance and NTLM issues that I had with persistent 
connections go back to Squid 3.5, so I never re-enabled them again 
on newer versions; I'm using Squid 5.9 and 6.8 now.


If you tell me that persistent connections are now more stable, 
and that it is even recommended to enable them by default to gain 
performance and also speed up NTLM/Kerberos authentication, I will 
re-enable them on my production servers.


Best Regards

On 16/05/2024 21:34, Alex Rousskov wrote:

On 17/05/24 02:23, Bolinhas André wrote:

As I explained, by default I set those directives to off to avoid 
high CPU consumption.


Just FYI: In this context, when you say "default", folks will tend to 
think that you are talking about default Squid configuration setting 
(i.e. something hard-coded in Squid code) rather than the actual 
thing you are talking about (i.e. your custom Squid configuration).


I do not know whether disabling persistent connections reduces CPU 
consumption in your environment. There are too many variables. In 
most cases, including NTLM authentication cases detailed by Amos, 
disabling persistent connections hurts performance, but there are 
always exceptions (and bugs).


It is not clear (to me) whether you disable persistent connections 
because they hurt performance in your environment OR you disable 
persistent connections because _you assume_ (without evidence) that 
they hurt performance in your environment.


If you do not know that disabling persistent connections reduces CPU 
consumption in your environment, then you should not disable them 
until you discover strong evidence that they hurt performance. At 
that point, you can share that evidence and ask for configuration 
advice based on that evidence.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-17 Thread Alex Rousskov

On May 16, 2024, at 15:10, Andre Bolinhas wrote:
Well, the performance and NTLM issues that I had with persistent 
connections go back to Squid 3.5, so I never re-enabled them again 
on newer versions; I'm using Squid 5.9 and 6.8 now.


If you tell me that persistent connections are now more stable, 
and that it is even recommended to enable them by default to gain 
performance and also speed up NTLM/Kerberos authentication, I will 
re-enable them on my production servers.


FWIW, I am not going to tell you any of that. I am also not going to 
tell you the opposite of those statements. In my previous emails, I did 
my best to document that I cannot correctly predict performance impact 
in your specific deployment environments (and suggested alternatives to 
asking unsubstantiated "Should I enable or disable persistent 
connections?" question on a mailing list). I obviously failed to get 
that message across since essentially the same question is still being 
asked.


Alex.




On 16/05/2024 21:34, Alex Rousskov wrote:

On 17/05/24 02:23, Bolinhas André wrote:

As I explained, by default I set those directives to off to avoid 
high CPU consumption.


Just FYI: In this context, when you say "default", folks will tend to 
think that you are talking about default Squid configuration setting 
(i.e. something hard-coded in Squid code) rather than the actual 
thing you are talking about (i.e. your custom Squid configuration).


I do not know whether disabling persistent connections reduces CPU 
consumption in your environment. There are too many variables. In 
most cases, including NTLM authentication cases detailed by Amos, 
disabling persistent connections hurts performance, but there are 
always exceptions (and bugs).


It is not clear (to me) whether you disable persistent connections 
because they hurt performance in your environment OR you disable 
persistent connections because _you assume_ (without evidence) that 
they hurt performance in your environment.


If you do not know that disabling persistent connections reduces CPU 
consumption in your environment, then you should not disable them 
until you discover strong evidence that they hurt performance. At 
that point, you can share that evidence and ask for configuration 
advice based on that evidence.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-16 Thread Alex Rousskov

On 17/05/24 02:23, Bolinhas André wrote:

As I explained, by default I set those directives to off to avoid high 
CPU consumption.


Just FYI: In this context, when you say "default", folks will tend to 
think that you are talking about default Squid configuration setting 
(i.e. something hard-coded in Squid code) rather than the actual thing 
you are talking about (i.e. your custom Squid configuration).


I do not know whether disabling persistent connections reduces CPU 
consumption in your environment. There are too many variables. In most 
cases, including NTLM authentication cases detailed by Amos, disabling 
persistent connections hurts performance, but there are always 
exceptions (and bugs).


It is not clear (to me) whether you disable persistent connections 
because they hurt performance in your environment OR you disable 
persistent connections because _you assume_ (without evidence) that they 
hurt performance in your environment.


If you do not know that disabling persistent connections reduces CPU 
consumption in your environment, then you should not disable them until 
you discover strong evidence that they hurt performance. At that point, 
you can share that evidence and ask for configuration advice based on 
that evidence.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-16 Thread Alex Rousskov

On 2024-05-15 19:02, Andre Bolinhas wrote:

To handle this amount of traffic should I enable 
client_persistent_connections and server_persistent_connections or is it 
better to keep it disable?


As Jonathan has already mentioned, the question is misleading because 
these directives default to "on" -- persistent connections are enabled 
by default. Modern HTTP specs enable them by default as well.


Since you do not know whether persistent connections are harmful in your 
particular deployment environments, remove those two directives from 
your Squid configuration files (effectively enabling persistent 
connection use). There are always exceptions, but in the vast majority 
of cases, not specifying those directives is the best first step.
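
In other words, if squid.conf contains lines like the sketch below, 
delete them to restore the built-in "on" defaults:

  # delete these lines to restore the defaults
  client_persistent_connections off
  server_persistent_connections off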


If you want to research whether persistent connections are harmful in 
your environments, you will need to define performance metrics and 
experiment with all four different combinations across the two boolean 
directives (at least -- there are more directives that affect connection 
persistency). Doing this kind of research right is difficult!



HTH,

Alex.



Best regards

On 31/01/2022 14:52, Eliezer Croitoru wrote:


Hey Andre,

I *would not* recommend 5.x yet since there are a couple of bugs which 
block it from being used as stable.


I believe that your current setup is pretty good.

The only thing which might affect the system is the authentication and 
ACLs.


As long as these ACL rules are static they should not affect the 
operation too much; however, when adding external authentication and 
external helpers for other things it is possible to see some slowdown 
in specific scenarios.


As long as the credentials and the ACLs are fast enough it is 
expected to work fast, but only testing will show how real-world usage 
will affect the service.

I believe that 5 workers is enough, and also take into account that the 
external helpers would also require CPU, so don't rush into 
changing the worker count just yet.

All The Bests,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:* André Bolinhas 
*Sent:* Monday, January 31, 2022 15:47
*To:* 'NgTech LTD' 
*Cc:* 'Squid Users' 
*Subject:* RE: [squid-users] Tune Squid proxy to handle 90k connection

Hi

I will not use cache in this project.

Yes, I will need

  * ACL (based on Domain, AD user, Headers, User Agent…)
  * Authentication
  * SSL bump just for one domain.
  * DNS resolution (I will use Unbound DNS service for this)

Also, I will divide the traffic between two Squid boxes instead of just one.

So each box will handle around 50k requests.

Each box have:

  * CPU(s) 16
  * Threads per core 2
  * Cores per socket 8
  * Sockets 1
  * Intel Xeon Silver 4208 @ 2.10GHz
  * 96GB Ram
  * 1TB raid-0 SSD

At this time I have 5 workers on each Squid box and the Squid version 
is 4.17; do you recommend more workers, or upgrading Squid to version 5?


Best regards

*From:* NgTech LTD 
*Sent:* January 31, 2022 04:59
*To:* André Bolinhas 
*Cc:* Squid Users 
*Subject:* Re: [squid-users] Tune Squid proxy to handle 90k connection

I would recommend that you start with 0 caching.

However, for choosing the right solution you must give more details.

For example, there is IBM research that proved that for about 90k 
connections you can use VMs on top of such hardware with the Apache web 
server.


If you have other requirements for the proxy besides the 90k 
requests, it would be wise to mention them.


Do you need any specific acls?

Do you need authentication?

etc..

For a simple forward proxy I would suggest using a simpler solution 
and, if possible, not logging anything as a starting point.


Any local disk i/o will slow down the machine.

About the URL categorization, I do not have experience with ufdbguard 
at such a scale, but it would be pretty heavy for any software to handle 
90k rps...


 It's doable to implement such a setup but it will require testing.

Will you use ssl bump in this setup?

If I have all the technical specs/requirements details I 
might be able to make better suggestions than I can now.


Take into account that each squid worker can handle about 3k rps 
tops (in my experience), and it's a juggling act between two sides, so 
3k is really 3k+3k+external_acls+dns...


I believe that in this case an example configuration from the squid 
developers might be useful.


Eliezer

On Tue, Jan 25, 2022 at 18:42, André Bolinhas 
wrote:


Any tip about my last comment?

-Original Message-
From: André Bolinhas 
Sent: January 21, 2022 16:36
To: 'Amos Jeffries' ;
squid-users@lists.squid-cache.org
Subject: RE: [squid-users] Tune Squid proxy to handle 90k connection

Thanks Amos
Yes, you are right, I will put a second box with HaProxy in front
to balance the traffic.
About the sockets, I can't double them because it is a physical machine;
do you think disable 

Re: [squid-users] Squid returns a lot of ABORTED in access log and user navigation speed slows

2024-05-15 Thread Alex Rousskov

On 2024-05-15 14:08, Andre Bolinhas wrote:

I'm not using pipeline_prefetch, because pipeline_prefetch breaks the 
NTLM/Kerberos authentication.



Enabling pipeline_prefetch introduces other problems as well. There 
might be some very special use cases that benefit from pipeline_prefetch 
today, but, in general, that directive should not be used (and the whole 
feature should be removed from Squid until it is properly implemented).
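
For reference, the safe setting is the built-in default, shown here 
only for illustration (there is no need to add this line):

  pipeline_prefetch 0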


I cannot currently answer your primary questions on this thread. I hope 
somebody else will guide you through this triage.


Alex.



On 15/05/2024 18:15, Jonathan Lee wrote:

Have you researched enabling pipeline_prefetch?

On May 14, 2024, at 17:56, Andre Bolinhas 
 wrote:


Hi

Sometimes my users complain that internet navigation through 
Squid is very slow.


After checking the access.log, I can see a lot of ABORTED messages 
like this


1715537802.589  2 10.103.12.94 NONE_NONE_ABORTED/200 0 CONNECT 
api.telegram.org:443 - HIER_NONE/-:- - mac="00:00:00:00:00:00" 
accessrule:%20global_whitelist%0D%0A exterr="ERR_CLIENT_GONE|WITH_CLIENT"


1715537183.180  3 172.16.31.205 TCP_MISS_ABORTED/000 0 POST 
http://pjcpd-dlpend01.hlbank.my/GECommunicationWS.asmx - 
HIER_NONE/-:- - mac="00:00:00:00:00:00" 
accessrule:%20global_whitelist%0D%0A exterr="ERR_CLIENT_GONE|WITH_CLIENT"


I have imported the access.log into my ELK machine and I can see that 
during the time that the users complained about the slowness there is 
a huge spike of NONE_ABORTED messages.


https://i.postimg.cc/6QR79GWk/6e727e86-de3d-4f3b-bd9e-04c04052ca2e.jpg

Now my question is:
1. What can cause this kind of issue? Is it a Squid server issue, the 
network (firewall, switch, router, …), or the client?
2. Why is the number of NONE_ABORTED requests almost 4 times higher than 
normal requests?


Best regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users






Re: [squid-users] Error from icap during respmod

2024-05-08 Thread Alex Rousskov

On 2024-05-06 19:39, Arun Kumar wrote:
Are you aware of any compatible 
Python or Java based ICAP server implementation?


I am not aware of any Python- or Java-based ICAP service that I can 
recommend. AFAIK, most folks looking for a free ICAP service (that 
resist the temptation to reinvent a rather complex wheel) use c-icap, 
but c-icap is written in C: https://c-icap.sourceforge.net/


Please note that if my triage is correct, then the issue here is not 
"compatibility" with Squid. It is a serious ICAP service bug or 
misconfiguration.



Good luck,

Alex.


We want to implement 
custom virus scanning of the response.
I got the book /Squid: The Definitive Guide/ and am going over it for more 
understanding. I saw your name mentioned by the author. I am very proud to 
work with great people like you.



On Thursday, May 2, 2024 at 04:18:45 PM EDT, Alex Rousskov 
 wrote:



On 2024-04-29 13:06, Arun Kumar wrote:
 > Configured python based icap server (pyicap) and getting 500 Internal
 > Server error during respmod.

AFAICT, this ICAP RESPMOD service is buggy: It sends what looks like an
HTTP response body chunk after sending an ICAP 100 Continue control
message. Instead, it is supposed to send the final ICAP response headers
and HTTP response headers _before_ sending that HTTP response body chunk.


     00:50:54.989 ... ReadNow: conn33 ... size 65535, retval 25
     ICAP/1.0 100 Continue


     00:50:54.991 ReadNow: conn33 ... size 65535, retval 137
     83
     {"activity":...}


HTH,

Alex.


 > https://drive.google.com/file/d/19yirXfxKli7NXon4ewiy-v3GpLvECT1i/view?usp=sharing

 >
 > Squid configuration:
 > icap_enable on
 > icap_send_client_ip on
 > icap_send_client_username on
 > icap_client_username_encode off
 > icap_client_username_header X-Authenticated-User
 > icap_preview_enable on
 > icap_preview_size 1024
 >
 > icap_service service_req reqmod_precache bypass=0
 > icap://127.0.0.1:13440/example
 > icap_service service_resp respmod_precache bypass=0
 > icap://127.0.0.1:13441/example

 >
 >
 >
 > ___
 > squid-users mailing list
 > squid-users@lists.squid-cache.org
 > https://lists.squid-cache.org/listinfo/squid-users





___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error during ICAP RESPMOD

2024-05-02 Thread Alex Rousskov

On 2024-04-24 21:23, Arun Kumar wrote:
I managed to reproduce the problem in my personal setup. Please find the 
cache logs when the problem is reproduced. Squid version is 5.8


Just to close this old thread: My response[1] on a newer thread 
(analyzing the same log file you shared on this thread) supports and 
details the "HTTP body instead of an ICAP response header" theory I 
suggested further below (before you shared that log file).


[1]:
https://lists.squid-cache.org/pipermail/squid-users/2024-May/026634.html

Alex.



On Friday, March 22, 2024 at 11:02:51 PM EDT, Alex Rousskov wrote:


On 2024-03-22 13:11, Arun Kumar wrote:
 > The lines above are. The content-length is 138 (8a in hex), but the
 > bytes are 144. Could this be the reason?
 >
 > parseMore: have 144 bytes to parse [FD 14;RBG/Comm(14)wr job24]
 > parseMore:
 > 8a^M
 > {"activity":"Make a simple musical
 > 
instrument","type":"music","participants":1,"price:0.4,"link":"","key":"7091374","accessibility":0.25}^M

 > parseHeaders: parse ICAP headers
 > parsePart: have 144 head bytes to parse; state: 0
 > parsePart: head parsing result: 0 detail: 600


I cannot be sure based on the tiny snippets shared so far, but it
_looks_ like Squid expected an ICAP response header and got an ICAP
response body chunk instead. It is also possible that we are looking at
log lines from two unrelated ICAP transactions, or I am simply
misinterpreting the snippets.

If you want a more reliable diagnosis, then my earlier recommendation
regarding sharing (privately if needed) the following information still
stands:

* compressed ALL,9 cache.log and
* the problematic ICAP response in a raw packet capture format.


HTH,

Alex.



 > On Monday, March 18, 2024 at 11:21:02 PM EDT, Alex Rousskov
 > <rouss...@measurement-factory.com> wrote:

 >
 >
 > On 2024-03-18 18:46, Arun Kumar wrote:
 >
 >  > Any idea, the reason for error in ModXact.cc parsePart fuction.
 >  > Happening during parsing the response from ICAP
 >  >
 >  >
 >  > parsePart: have 144 head bytes to parse; state: 0
 >  > parsePart: head parsing result: 0 detail: 600
 >
 >
 > AFAICT, Squid considers received ICAP response header malformed. More
 > than five possible problems/cases may match the above lines. The answer
 > to your question (or an additional clue!) is in different debugging
 > output, possibly logged somewhere between the two lines quoted above.
 > The right debugging lines may be visible in "debug_options ALL,2 58,5 
 > 93,5" output, but it is usually best to share compressed ALL,9 logs 
 > (privately if needed).
 >
 > https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

 >
 >
 > Sharing the problematic ICAP response (header) in a raw packet capture
 > format (to preserve important details) may also be very useful.
 >
 >
 > HTH,
 >
 > Alex.
 >
 >



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error from icap during respmod

2024-05-02 Thread Alex Rousskov

On 2024-04-29 13:06, Arun Kumar wrote:
Configured a Python-based ICAP server (pyicap) and getting a 500 Internal 
Server Error during RESPMOD.


AFAICT, this ICAP RESPMOD service is buggy: It sends what looks like an 
HTTP response body chunk after sending an ICAP 100 Continue control 
message. Instead, it is supposed to send the final ICAP response headers 
and HTTP response headers _before_ sending that HTTP response body chunk.



00:50:54.989 ... ReadNow: conn33 ... size 65535, retval 25
ICAP/1.0 100 Continue


00:50:54.991 ReadNow: conn33 ... size 65535, retval 137
83
{"activity":...}


HTH,

Alex.


https://drive.google.com/file/d/19yirXfxKli7NXon4ewiy-v3GpLvECT1i/view?usp=sharing 


Squid configuration:
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024

icap_service service_req reqmod_precache bypass=0 
icap://127.0.0.1:13440/example
icap_service service_resp respmod_precache bypass=0 
icap://127.0.0.1:13441/example




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] ACL / http_access rules stop work using Squid 6+

2024-04-18 Thread Alex Rousskov
 will probably have to 
share them privately if you are using production configuration/instance.



HTH,

Alex.


On 2024-04-15 19:49, Andre Bolinhas wrote:

Hi Alex,
Thanks for your reply.

Logs uploaded again, you can find it here.

https://we.tl/t-QiSKMgclOb

Best regards

On 15/04/2024 14:12, Alex Rousskov wrote:

On 2024-04-14 17:23, Andre Bolinhas wrote:

Any tip on this matter? I want to upgrade to Squid 6.9 but due to 
this issue, I'm stuck.



Hi Andre,

    Please note that I did _not_ receive your email quoted below. It 
is in the email archive, so the problem is not on your end, but I just 
wanted to mention that I was not (knowingly) ignoring you.


> I have re-uploaded the cache.log files.

The files have expired again. I have reviewed the diff you shared, but 
cannot make further progress without those test logs. Hopefully, your 
next list post reaches me.


Alex.



On 01/04/2024 11:53, Andre Bolinhas wrote:


Hi Alex

Thanks for your help on the matter.


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story 
here. However, to make progress, let's assume that v5 configuration 
files are identical to v6 configuration files. 
If you want, I can run the same test with different debug 
parameters; just tell me which ones.


I have re-uploaded the cache.log files.
https://we.tl/t-AB4XuUwuf7

One way to answer all of the above questions is to look at the 
following output:


    squid -k parse ... |& grep Processing:.http_access 

There is no diff between both Squid versions; you can check it here:
DiffNow - Compare Files, URLs, and Clipboard Contents Online 
<https://www.diffnow.com/report/jsrva>


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story 
here. However, to make progress, let's assume that v5 configuration 
files are identical to v6 configuration files.
The configuration files / folder are the same, the server is the 
same, the only thing that changes is the Squid version


On 29/03/2024 17:40, Alex Rousskov wrote:

On 2024-03-25 15:13, Bolinhas André wrote:


Yes, the configuration is the same for both versions.


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story 
here. However, to make progress, let's assume that v5 configuration 
files are identical to v6 configuration files.


1. Is there an "http_access allow all AnnotateFinalAllow" rule?

2. Is there an "http_access deny HTTP Group38 AnnotateRule28" rule?

3. Assuming the answers are "yes" and "yes", which rule comes 
first? If you use include files, this question applies to the 
imaginary preprocessed squid.conf file with all the include files 
inlined (recursively if needed). That kind of preprocessed 
configuration is what Squid effectively sees when compiling 
http_access rules, one by one. Which of the two rules will Squid 
see first?


One way to answer all of the above questions is to look at the 
following output:


    squid -k parse ... |& grep Processing:.http_access

Replace "..." with your regular squid startup command line options 
and adjust standard error redirection (|&) as needed for your 
shell. Run the above command for both Squid v5 and v6 binaries. You 
should see output like this:




2024/03/29 13:31:05| Processing: http_access allow manager
2024/03/29 13:31:05| Processing: http_access deny all



HTH,

Alex.




*From:* Alex Rousskov 
*Sent:* Monday, March 25, 2024 19:12
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] ACL / http_access rules stop work 
using Squid 6+




On 2024-03-22 09:38, Andre Bolinhas wrote:

 > In previous versions of squid, from 3 to 5.9, I use this kind 
of deny

 > rules and they work like charm
 >
 > acl AnnotateRule28 annotate_transaction accessrule=Rule28
 > http_access deny HTTP Group38 AnnotateRule28
 >
 > This allows me to deny objects without bump / show the error page
 > (deny_info)
 >
 > But using squid 6+ this rules stop to work and everything is 
allowed.

 >
 > Example:
 > Squid 5.9 (OK)
 > https://ibb.co/YdKgL1Y
 >
 > Squid 6.8 (NOK)
 > https://ibb.co/tbyY2GV
 >
 > Sample of both cache.log in debug mode
 >
 > https://we.tl/t-T7Nz1rVbVu


In you v6 logs, most logged transactions are allowed because a rule
similar to the one reconstructed below is matching:

  http_access allow all AnnotateFinalAllow


There are similar cases in v5 logs as well, but most denied v5
transactions match the following rule instead (i.e. the o

Re: [squid-users] Squid 6.8 SSL_BUMP TLS Error

2024-04-18 Thread Alex Rousskov

On 2024-04-18 04:13, Rauch, Mario wrote:

We have created a DER version of the PEM certificate which Squid uses 
and imported this into client certificate store using script like this:


certmgr /add DN_SIGNATOR_CA.der /r localMachine /s root

DN_SIGNATOR_CA.der is the self signed certificate


There is no practical way for me to verify that the above steps have the 
desired result. However, _you_ can verify that by, for example, using 
OpenSSL s_server configured with a certificate signed by DN_SIGNATOR_CA. 
Does the client trust that test server?


Can you verify that your client is getting a certificate signed by 
DN_SIGNATOR_CA? Depending on TLS version, it may be possible to do that 
using Wireshark or a similar packet capture analysis tool. If you can 
run OpenSSL s_client or a similar test client, it can also tell you what 
certificate(s) it is getting from Squid.
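
For example, a couple of OpenSSL sketches (the host names, ports, and 
file names are hypothetical; s_client's -proxy option requires OpenSSL 
1.1.0+ and a client that reaches Squid as an explicit proxy):

  # test server presenting a certificate signed by DN_SIGNATOR_CA
  openssl s_server -accept 4433 -cert test-server.pem -key test-server.key -www

  # show the certificate chain a client receives through Squid
  openssl s_client -connect www.example.com:443 -proxy 192.0.2.1:3128 -showcerts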



Maybe some additional or changed setting is needed in the config 
between Squid 3.5 and 6.8?


Lots of things changed since Squid v3. Others may be able to guide you 
through those changes, but I cannot. That is why I am focusing on 
solving your problem in v6 (rather than trying to figure out what change 
triggered that problem).



As I wrote, on the old server with Squid 3.5 and the same certificate it 
worked. Should I attach both config files?


Personally, I am not interested in Squid v3 configuration. Seeing your 
ssl_bump rules for v6 may be useful (especially if you know for sure 
which rules have matched for the test transaction), but I would _start_ 
by checking that Squid is sending the certificate(s) you think it is 
sending.



HTH,

Alex.


*From:* squid-users *On Behalf Of* Alex Rousskov
*Sent:* Wednesday, April 17, 2024 19:53
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] Squid 6.8 SSL_BUMP TLS Error



On 2024-04-17 09:07, Rauch, Mario wrote:

We are receiving the following errors when clients want to connect to a 
specific website using the ssl-bump feature and a self-signed certificate:

2024/04/17 14:55:15 kid1| ERROR: failure while accepting a TLS 
connection on conn275 local=185.229.91.169:3128 
remote=81.217.86.125:63673 FD 16 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1

Does somebody know what the problem could be?


$ openssl errstr A000418

error:0A000418:SSL routines::tlsv1 alert unknown ca

Looks like the client does not trust Squid certificate and tells Squid

about that lack of trust via a TLS alert. Did you configure the client

to trust the certificate your Squid is using for bumping client connections?

HTH,

Alex.


With old Squid 3.5 it worked with almost same config and certificate.


___

squid-users mailing list

squid-users@lists.squid-cache.org

https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.8 SSL_BUMP TLS Error

2024-04-17 Thread Alex Rousskov

On 2024-04-17 09:07, Rauch, Mario wrote:

We are receiving the following errors when clients 
want to connect to a specific website using the ssl-bump feature and a 
self-signed certificate:


2024/04/17 14:55:15 kid1| ERROR: failure while accepting a TLS 
connection on conn275 local=185.229.91.169:3128 
remote=81.217.86.125:63673 FD 16 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000418+TLS_IO_ERR=1


Does somebody know what the problem could be?


$ openssl errstr A000418
error:0A000418:SSL routines::tlsv1 alert unknown ca

Looks like the client does not trust Squid certificate and tells Squid 
about that lack of trust via a TLS alert. Did you configure the client 
to trust the certificate your Squid is using for bumping client connections?



HTH,

Alex.



With old Squid 3.5 it worked with almost same config and certificate.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rock store limit

2024-04-16 Thread Alex Rousskov

On 2024-04-16 03:20, FredB wrote:

I'm trying to use the rock store with 6.9; is there a limitation on the 
size of the cache?


If my calculations are correct, all cache_dirs share the same byte-size 
limit: A single cache_dir cannot store more than ~2048 terabytes (i.e. 
2^51 bytes).


However, all cache_dir types are also limited by factors other than the 
total size. For example, each cache_dir cannot store more than 
16'777'215 entries (2^24-1).


IIRC, rock cache_dirs also cannot have more than 2'147'483'648 slots 
(2^31). See cache_dir rock ... slot-size=bytes documentation for more 
info regarding rock slots.


Rock cache_dirs are also limited by the maximum shared memory segment 
size. A cache_dir index maintains an index in shared memory. That index 
has three components. Each component must fit into a dedicated shared 
memory segment (your OS configuration limits the size of that segment):


* index component A:  4 bytes per cache_dir entry
* index component B: 96 bytes per cache_dir entry
* index component C:  8 bytes per cache_dir slot



I tried 15000 but there is no rock db created with squid -z


What error do you get from Squid? Please note that Squid may be limited 
by the maximum shared memory segment size or some other limit.




My goal is using a 200G SSD disk


With default store_avg_object_size of 13KB, the 16'777'215 entry limit 
implies the maximum cache_dir size of ~208G, but, again, there are other 
limits.
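
For illustration, the arithmetic behind that figure:

  16,777,215 entries x 13 KiB/entry = ~223 GB (~208 GiB)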



Please also note that large rock cache_dirs will take a long time to be 
indexed on Squid startup. Rock indexing is usually done in background, 
but it is still a significant performance expense. Optimizing indexing 
is an old item on our to-do list.



HTH,

Alex.



cache_dir rock /cache 1000 max-swap-rate=250 swap-timeout=350


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL / http_access rules stop work using Squid 6+

2024-04-15 Thread Alex Rousskov

On 2024-04-14 17:23, Andre Bolinhas wrote:

Any tip on this matter? I want to upgrade to Squid 6.9 but due to this 
issue, I'm stuck.



Hi Andre,

Please note that I did _not_ receive your email quoted below. It is 
in the email archive, so the problem is not on your end, but I just 
wanted to mention that I was not (knowingly) ignoring you.


> I have re-uploaded the cache.log files.

The files have expired again. I have reviewed the diff you shared, but 
cannot make further progress without those test logs. Hopefully, your 
next list post reaches me.


Alex.



On 01/04/2024 11:53, Andre Bolinhas wrote:


Hi Alex

Thanks for your help on the matter.


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story here. 
However, to make progress, let's assume that v5 configuration files 
are identical to v6 configuration files. 
If you want, I can run the same test with different debug 
parameters; just tell me which ones.


I have re-uploaded the cache.log files.
https://we.tl/t-AB4XuUwuf7

One way to answer all of the above questions is to look at the 
following output:


    squid -k parse ... |& grep Processing:.http_access 

There is no diff between both Squid versions; you can check it here:
DiffNow - Compare Files, URLs, and Clipboard Contents Online 
<https://www.diffnow.com/report/jsrva>


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story here. 
However, to make progress, let's assume that v5 configuration files 
are identical to v6 configuration files.
The configuration files / folder are the same, the server is the same, 
the only thing that changes is the Squid version


On 29/03/2024 17:40, Alex Rousskov wrote:

On 2024-03-25 15:13, Bolinhas André wrote:


Yes, the configuration is the same for both versions.


The logs archive you shared previously has expired, so I cannot 
double check, but from what I remember, the shared logs did not 
support the above assertion, so there may be more to the story here. 
However, to make progress, let's assume that v5 configuration files 
are identical to v6 configuration files.


1. Is there an "http_access allow all AnnotateFinalAllow" rule?

2. Is there an "http_access deny HTTP Group38 AnnotateRule28" rule?

3. Assuming the answers are "yes" and "yes", which rule comes first? 
If you use include files, this question applies to the imaginary 
preprocessed squid.conf file with all the include files inlined 
(recursively if needed). That kind of preprocessed configuration is 
what Squid effectively sees when compiling http_access rules, one by 
one. Which of the two rules will Squid see first?


One way to answer all of the above questions is to look at the 
following output:


    squid -k parse ... |& grep Processing:.http_access

Replace "..." with your regular squid startup command line options 
and adjust standard error redirection (|&) as needed for your shell. 
Run the above command for both Squid v5 and v6 binaries. You should 
see output like this:




2024/03/29 13:31:05| Processing: http_access allow manager
2024/03/29 13:31:05| Processing: http_access deny all



HTH,

Alex.



----
*From:* Alex Rousskov 
*Sent:* Monday, March 25, 2024 19:12
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] ACL / http_access rules stop work using 
Squid 6+




On 2024-03-22 09:38, Andre Bolinhas wrote:

 > In previous versions of squid, from 3 to 5.9, I use this kind of 
deny

 > rules and they work like charm
 >
 > acl AnnotateRule28 annotate_transaction accessrule=Rule28
 > http_access deny HTTP Group38 AnnotateRule28
 >
 > This allows me to deny objects without bump / show the error page
 > (deny_info)
 >
 > But using squid 6+ this rules stop to work and everything is 
allowed.

 >
 > Example:
 > Squid 5.9 (OK)
 > https://ibb.co/YdKgL1Y
 >
 > Squid 6.8 (NOK)
 > https://ibb.co/tbyY2GV
 >
 > Sample of both cache.log in debug mode
 >
 > https://we.tl/t-T7Nz1rVbVu


In you v6 logs, most logged transactions are allowed because a rule
similar to the one reconstructed below is matching:

  http_access allow all AnnotateFinalAllow


There are similar cases in v5 logs as well, but most denied v5
transactions match the following rule instead (i.e. the one you shared
above):

  http_access deny HTTP Group38 AnnotateRule28


In your Squid configuration, v6 allow rule is listed much higher 
than v5

deny rule (#43 vs #149). I do not see any signs of Group38 or
AnnotateRule28 ACL evaluation in v6 logs, as if the rul

Re: [squid-users] SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR

2024-04-11 Thread Alex Rousskov

On 2024-04-10 17:48, Jonathan Lee wrote:
It works in 5.8 with no errors; however, in 6.6 I can see indexing and 
other information that I have never seen before.


Unfortunately, I am unable to make progress with this email thread 
because there are just too many different problems being introduced and 
discussed. I am aware that the problems may be related, but it does not 
change the outcome for me. I hope somebody else can sort through this 
collection of concerns and data. If not, I recommend restarting from 
scratch while focusing on a single issue (of your choice).




I have some open pull requests in for Squid here
https://github.com/pfsense/FreeBSD-ports/pull/


I am not familiar with PFSense development practices, but PFSense PRs 
should be discussed with PFSense folks, using PFSense support channels 
(rather than this mailing list). If you want your code changes to be 
accepted into _official_ Squid releases, then please follow

https://wiki.squid-cache.org/MergeProcedure

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR

2024-04-10 Thread Alex Rousskov

On 2024-04-10 16:22, Jonathan Lee wrote:

Could it be related to this ??

"WARNING: Failed to decode EC parameters '/etc/dh-parameters.2048'. 
error:1E08010C:DECODER routines::unsupported”


I do not know the answer to your question. I speculate that it could be 
related: Depending on various factors, without those DH parameters, 
Squid may not be able to communicate with clients. See WARNING in tls-dh 
description in squid.conf.documented.


I know that others are reporting similar WARNINGs during v6 upgrades and 
dislike the letters "EC" those messages use. I am not going to debate 
the best choice of letters for this message, but I can tell you that, in 
the cases I investigated, the message was caused by a mismatch between 
squid.conf tls-dh=... option value and DH parameter file contents:


* To Squid, tls-dh=curve:filename format implies that the keytype is 
"EC". These two letters are then fed to an OpenSSL function that 
configures related TLS state. OpenSSL then fails if tls-dh filename 
contains DH parameters produced with "openssl dhparam" command. I have 
seen these failures in tests.


* To Squid, tls-dh=filename format (i.e. format without the curve name 
prefix) implies that the keytype is "DH". These two letters are then fed 
to an OpenSSL function that configures related TLS state. OpenSSL then 
probably fails if the tls-dh filename contains EC parameters produced with 
the "openssl ecparam" command (see the generation sketch after this 
list). I have not tested this use case.


* The failing checks and their messages are specific to Squids built 
with OpenSSL v3. It is possible that Squids built with OpenSSL v1 just 
silently fail (at runtime), but I have not checked that theory.
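
For those hitting this mismatch, a sketch of generating parameters that 
match each tls-dh form (the curve name, paths, and key size are examples):

  # EC parameters, for tls-dh=prime256v1:/etc/squid/ecparams.pem
  openssl ecparam -name prime256v1 -out /etc/squid/ecparams.pem

  # DH parameters, for tls-dh=/etc/squid/dhparams.pem
  openssl dhparam -out /etc/squid/dhparams.pem 2048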



FWIW, this poorly categorized message indicates a configuration _error_. 
AFAICT, Squid code should be adjusted to _quit_ (i.e. reject bad 
configuration) after discovering this error instead of continuing as if 
nothing bad happened.


I recommend addressing the underlying cause, even if this message is 
unrelated to SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417.



HTH,

Alex.


On Apr 10, 2024, at 08:38, Alex Rousskov 
 wrote:


On 2024-04-10 10:50, Jonathan Lee wrote:

I am getting the following error in 6.6 after an upgrade from 5.8; does 
anyone know what causes this?

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR


   $ openssl errstr A000417
   error:0A000417:SSL routines::sslv3 alert illegal parameter

I think I have seen that error code before, but I do not recall the 
exact circumstances. Sorry! The error happens when Squid tries to 
accept (or peek at) a TLS connection from the client. Might be 
prohibited TLS version/feature, TLS greasing, or non-TLS traffic? Try 
examining client TLS Hello packet(s) in Wireshark.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users






Re: [squid-users] SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR

2024-04-10 Thread Alex Rousskov

On 2024-04-10 10:50, Jonathan Lee wrote:


I am getting the following error in 6.6 after an upgrade from 5.8; does 
anyone know what causes this?

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR


$ openssl errstr A000417
error:0A000417:SSL routines::sslv3 alert illegal parameter

I think I have seen that error code before, but I do not recall the 
exact circumstances. Sorry! The error happens when Squid tries to accept 
(or peek at) a TLS connection from the client. Might be prohibited TLS 
version/feature, TLS greasing, or non-TLS traffic? Try examining client 
TLS Hello packet(s) in Wireshark.
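
If it helps, a capture sketch (assuming Squid listens on port 3128; 
adjust the filter to your http(s)_port):

  tcpdump -i any -s 0 -w client-hello.pcap 'tcp port 3128'

The resulting client-hello.pcap can then be opened in Wireshark to 
inspect the Client Hello records.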


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-04-06 Thread Alex Rousskov

On 2024-04-06 01:40, Jonathan Lee wrote:

Can you please help? I moved from 5.8 to 6.6 and I am getting access denied 
for mgr:info.



The HTTP cache manager is built in now, right?


Yes, it is and it was. No changes there.



I can access it from the loopback



Currently, you may need to figure out what hostname Squid considers to 
self-identify as and use that hostname in cache manager requests. The 
following bug report may help, but there are several overlapping 
problems here, and that makes it difficult to triage without more 
information: https://bugs.squid-cache.org/show_bug.cgi?id=5283


I second Francesco's suggestion for sharing more information (privately 
if needed). A pointer to compressed ALL,9 cache.log for the problematic 
transaction would be best IMO[1], but you can start with sharing the 
output of your squidclient command with "-v" option added, your 
http_port configuration for port(s) 3128, and your visible_hostname 
setting in squid.conf (if any).
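
For example (your original command with -v added):

  squidclient -v -h 127.0.0.1 -p 3128 mgr:info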



HTH,

Alex.

[1]: 
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-05 Thread Alex Rousskov

On 2024-04-04 03:01, David Komanek wrote:
I do not observe this problem accessing sites running only 
on port 80 (no 443 at all), but my configuration is simple:


squid 6.6 as FreeBSD binary package

not much about ssl in the config file though, just passing it through, 
no ssl juggling


Your use case is not applicable to this problem because your Squid is 
not using SslBump. It is SslBump actions that confuse Chrome (in some 
cases).


Alex.



acl SSL_ports port
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow 
http_access allow 
http_access allow 
http_access allow 
http_access allow 
http_access deny all

I don't think it was different with squid 5.9, which I used till 
November 2023.


Occasionally, I see another problem, which may or may not be related to 
squid ssl handling configuration: PR_END_OF_FILE_ERROR (Firefox) / 
ERR_CONNECTION_CLOSED (Chrome), typically when accessing samba.org. But 
they use a permanent redirect from http to https, so it is a different 
situation from an http-only site.


David



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-05 Thread Alex Rousskov

On 2024-04-05 08:16, Loučanský Lukáš wrote:


Build Info: GIT V6.8 commit 4bee0c8

Could you please somehow elaborate how this seems to be working?

acl SquidSecureConnectFail squid_error ERR_SECURE_CONNECT_FAIL
acl SquidTLSErrorConnect ssl_error SQUID_TLS_ERR_CONNECT

#tunnel all for connection errors
on_unsupported_protocol tunnel SquidTLSErrorConnect
on_unsupported_protocol tunnel SquidSecureConnectFail


Assuming the above rules have the desired effect, I speculate that, in 
your particular test cases (where these rules have the desired effect), 
the tested non-https origin servers result in those two Squid TLS 
errors, those errors happen where on_unsupported_protocol still applies, 
and the selected "tunnel" action tickles the right Chrome behavior. I 
also speculate that not all non-https origin servers exhibit similar 
behavior because other errors were alleged to (also) matter during PR 
#1668 work (e.g., ERR_ZERO_SIZE_OBJECT).


Sorry, I currently do not have enough free time to verify any of the 
above assumptions and speculations. Some of them do surprise me, but 
that does not mean they have to be wrong/false.



Is it a good or bad attempt? As I put redir.netcentrum.cz as an example 
in my first post - now it seems to just request TCP_MISS/200 815 GET 
http://redir.netcentrum.cz/? - ORIGINAL_DST/46.255.231.158 text/html -.


If there is no corresponding TLS connection attempt (through Squid) 
before that, then Chrome has changed its behavior in your tests (or your 
network has stopped delivering that attempt to Squid if your Squid is 
intercepting Chrome TLS connections rather than receiving plain CONNECT 
requests from Chrome). Without such an attempt, you are not really 
testing what this thread calls "Chrome auto-HTTPS-upgrade"...



I do not think my Chrome just decided this site is http-only and calls it 
like this forever. I just did not see more SSL errors till yesterday. I 
do not say I haven't seen any (during some fairly short period) - such 
as SSL version errors, TLS inappropriate fallbacks, broken certs, no 
common ciphers etc. - but now I could not find a site that does not work 
(for me) - I have to ask my users.


Same "If there is no..." comment applies.


Anyway - squid seemed to have slight 
problems downloading intermediate certificates - to work properly - so I 
had to create a collection of several ones for myself (and some root 
certificates too - for example from the MS WU site etc.) - but this could be 
just trouble with my underlying Debian distro. (BTW I've already 
implemented the transaction_initiator certificate-fetching acl and have an 
http_access line for it.)


This sounds like a completely separate issue. If you are suspecting that 
Squid should get certain intermediate certificates but does not, check 
Bugzilla, and, if there is no corresponding bug report, file a new one.



HTH,

Alex.



Dne 03.04.2024 v 17:05 Alex Rousskov napsal(a):

On 2024-04-03 02:14, Loučanský Lukáš wrote:


this has recently wound me up too much to let it go. For a while now 
Chrome has been upgrading in-page links to https.

Just to add two more pieces of related information to this thread:

Some Squid admins report that their v6-based code does not suffer from 
this issue while their v5-based code does. I have not verified those 
reports, but there may be more to the story here. What Squid version 
are _you_ using?


One way to track progress with this annoying and complex issue is to 
follow the following pull request. The current code cannot be 
officially merged as is, and I would not recommend using it in 
production (because of low-level bugs that will probably crash Squid 
in some cases), but testing it in the lab and providing feedback to 
authors may be useful:


https://github.com/squid-cache/squid/pull/1668

HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] BWS after chunk-size

2024-04-03 Thread Alex Rousskov

On 2024-04-01 23:03, r...@ohmuro.net wrote:

after an upgrade from Squid 5.4.1 to Squid 5.9, Squid is unable to parse an 
HTTP chunked response containing whitespace after the chunk size.


I could be wrong, but can you please advise me if there is a way or a 
patch to fix this issue?


The sender of these malformed chunks is at fault. If you can reach out 
to them, they may be able to upgrade or fix their software.


Senders with similar behavior were used for attacks on clients or 
network infrastructure. Squid cannot tell whether an attack is going on 
and, hence, rejects traffic with such serious message framing-related 
violations. This is the right default that will never change.
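
For illustration, a hedged sketch of the framing difference (per RFC 
9112, a chunk-size must be followed immediately by CRLF or a chunk 
extension; the bytes below are illustrative):

  4\r\nWiki\r\n      <- valid: chunk-size, CRLF, exactly 4 data bytes, CRLF
  4 \r\nWiki\r\n     <- malformed: whitespace (BWS) between chunk-size and CRLF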


It is, of course, possible to modify Squid code to resume accepting this 
dangerous whitespace again. However, such changes will not be officially 
accepted, and running your Squid with such changes does elevate security 
risks of your Squid deployment or those around it. FWIW, we work in the 
background to better address this issue, but we are currently too busy 
with more important Squid problems to make good progress with that work.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-03 Thread Alex Rousskov

On 2024-04-03 02:14, Loučanský Lukáš wrote:


this has recently been bothering me more than I could let go. For a while
now, Chrome has been upgrading in-page links to https.

Just to add two more pieces of related information to this thread:

Some Squid admins report that their v6-based code does not suffer from 
this issue while their v5-based code does. I have not verified those 
reports, but there may be more to the story here. What Squid version are 
_you_ using?


One way to track progress with this annoying and complex issue is to 
follow the following pull request. The current code cannot be officially 
merged as is, and I would not recommend using it in production (because 
of low-level bugs that will probably crash Squid in some cases), but 
testing it in the lab and providing feedback to authors may be useful:


https://github.com/squid-cache/squid/pull/1668

HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL / http_access rules stop work using Squid 6+

2024-03-29 Thread Alex Rousskov

On 2024-03-25 15:13, Bolinhas André wrote:


Yes, the configuration is the same for both versions.


The logs archive you shared previously has expired, so I cannot double 
check, but from what I remember, the shared logs did not support the 
above assertion, so there may be more to the story here. However, to 
make progress, let's assume that v5 configuration files are identical to 
v6 configuration files.


1. Is there an "http_access allow all AnnotateFinalAllow" rule?

2. Is there an "http_access deny HTTP Group38 AnnotateRule28" rule?

3. Assuming the answers are "yes" and "yes", which rule comes first? If 
you use include files, this question applies to the imaginary 
preprocessed squid.conf file with all the include files inlined 
(recursively if needed). That kind of preprocessed configuration is what 
Squid effectively sees when compiling http_access rules, one by one. 
Which of the two rules will Squid see first?


One way to answer all of the above questions is to look at the following 
output:


squid -k parse ... |& grep Processing:.http_access

Replace "..." with your regular squid startup command line options and 
adjust standard error redirection (|&) as needed for your shell. Run the 
above command for both Squid v5 and v6 binaries. You should see output 
like this:




2024/03/29 13:31:05| Processing: http_access allow manager
2024/03/29 13:31:05| Processing: http_access deny all



HTH,

Alex.



----
*From:* Alex Rousskov 
*Sent:* Monday, March 25, 2024 19:12
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] ACL / http_access rules stop work using Squid 6+



On 2024-03-22 09:38, Andre Bolinhas wrote:

 > In previous versions of Squid, from 3 to 5.9, I used this kind of deny
 > rule and it worked like a charm
 >
 > acl AnnotateRule28 annotate_transaction accessrule=Rule28
 > http_access deny HTTP Group38 AnnotateRule28
 >
 > This allows me to deny objects without bump / show the error page
 > (deny_info)
 >
 > But with Squid 6+ these rules stop working and everything is allowed.
 >
 > Example:
 > Squid 5.9 (OK)
 > https://ibb.co/YdKgL1Y
 >
 > Squid 6.8 (NOK)
 > https://ibb.co/tbyY2GV
 >
 > Sample of both cache.log in debug mode
 >
 > https://we.tl/t-T7Nz1rVbVu


In your v6 logs, most logged transactions are allowed because a rule
similar to the one reconstructed below is matching:

  http_access allow all AnnotateFinalAllow


There are similar cases in v5 logs as well, but most denied v5
transactions match the following rule instead (i.e. the one you shared
above):

  http_access deny HTTP Group38 AnnotateRule28


In your Squid configuration, v6 allow rule is listed much higher than v5
deny rule (#43 vs #149). I do not see any signs of Group38 or
AnnotateRule28 ACL evaluation in v6 logs, as if the rule sets are
different for two different Squid instances. Are you using the same set
of http_access rules for both Squid versions?

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL / http_access rules stop work using Squid 6+

2024-03-25 Thread Alex Rousskov

On 2024-03-22 09:38, Andre Bolinhas wrote:

In previous versions of Squid, from 3 to 5.9, I used this kind of deny 
rule and it worked like a charm


acl AnnotateRule28 annotate_transaction accessrule=Rule28
http_access deny HTTP Group38 AnnotateRule28

This allows me to deny objects without bump / show the error page 
(deny_info)


But with Squid 6+ these rules stop working and everything is allowed.

Example:
Squid 5.9 (OK)
https://ibb.co/YdKgL1Y

Squid 6.8 (NOK)
https://ibb.co/tbyY2GV

Sample of both cache.log in debug mode

https://we.tl/t-T7Nz1rVbVu



In your v6 logs, most logged transactions are allowed because a rule 
similar to the one reconstructed below is matching:


http_access allow all AnnotateFinalAllow


There are similar cases in v5 logs as well, but most denied v5 
transactions match the following rule instead (i.e. the one you shared 
above):


http_access deny HTTP Group38 AnnotateRule28


In your Squid configuration, v6 allow rule is listed much higher than v5 
deny rule (#43 vs #149). I do not see any signs of Group38 or 
AnnotateRule28 ACL evaluation in v6 logs, as if the rule sets are 
different for two different Squid instances. Are you using the same set 
of http_access rules for both Squid versions?


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error during ICAP RESPMOD

2024-03-22 Thread Alex Rousskov

On 2024-03-22 13:11, Arun Kumar wrote:
The lines above are as follows. The content-length is 138 (8a in hex), 
but the byte count is 144. Could this be the reason?


parseMore: have 144 bytes to parse [FD 14;RBG/Comm(14)wr job24]
parseMore:
8a^M
{"activity":"Make a simple musical 
instrument","type":"music","participants":1,"price:0.4,"link":"","key":"7091374","accessibility":0.25}^M

parseHeaders: parse ICAP headers
parsePart: have 144 head bytes to parse; state: 0
parsePart: head parsing result: 0 detail: 600



I cannot be sure based on the tiny snippets shared so far, but it 
_looks_ like Squid expected an ICAP response header and got an ICAP 
response body chunk instead. It is also possible that we are looking at 
log lines from two unrelated ICAP transactions, or I am simply 
misinterpreting the snippets.
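
For reference, a minimal sketch of what Squid expects at that point: an 
ICAP response header per RFC 3507 followed by the encapsulated HTTP 
message (the ISTag value and byte offsets here are illustrative):

  ICAP/1.0 200 OK
  ISTag: "example-istag-1"
  Encapsulated: res-hdr=0, res-body=137

  HTTP/1.1 200 OK
  ...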


If you want a more reliable diagnosis, then my earlier recommendation 
regarding sharing (privately if needed) the following information still 
stands:


* compressed ALL,9 cache.log and
* the problematic ICAP response in a raw packet capture format.
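
For the packet capture, something like the following tcpdump invocation 
should work, assuming the ICAP service listens on the default ICAP port 
1344 (adjust the port and interface for your setup):

  tcpdump -i any -s 0 -w icap-respmod.pcap 'tcp port 1344'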


HTH,

Alex.


On Monday, March 18, 2024 at 11:21:02 PM EDT, Alex Rousskov 
 wrote:



On 2024-03-18 18:46, Arun Kumar wrote:

 > Any idea, the reason for error in ModXact.cc parsePart function.
 > Happening during parsing the response from ICAP
 >
 >
 > parsePart: have 144 head bytes to parse; state: 0
 > parsePart: head parsing result: 0 detail: 600


AFAICT, Squid considers received ICAP response header malformed. More
than five possible problems/cases may match the above lines. The answer
to your question (or an additional clue!) is in different debugging
output, possibly logged somewhere between the two lines quoted above.
The right debugging lines may be visible in "debug_options ALL,2 58,5,
93,5" output, but it is usually best to share compressed ALL,9 logs
(privately if needed).

https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


Sharing the problematic ICAP response (header) in a raw packet capture
format (to preserve important details) may also be very useful.


HTH,

Alex.




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error during ICAP RESPMOD

2024-03-18 Thread Alex Rousskov

On 2024-03-18 18:46, Arun Kumar wrote:

Any idea, the reason for error in ModXact.cc parsePart function.
Happening during parsing the response from ICAP


parsePart: have 144 head bytes to parse; state: 0
parsePart: head parsing result: 0 detail: 600


AFAICT, Squid considers received ICAP response header malformed. More 
than five possible problems/cases may match the above lines. The answer 
to your question (or an additional clue!) is in different debugging 
output, possibly logged somewhere between the two lines quoted above. 
The right debugging lines may be visible in "debug_options ALL,2 58,5, 
93,5" output, but it is usually best to share compressed ALL,9 logs 
(privately if needed).


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


Sharing the problematic ICAP response (header) in a raw packet capture 
format (to preserve important details) may also be very useful.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] After upgrade from squid6.6 to 6.8 we have a lot of ICAP_ERR_OTHER and ICAP_ERR_GONE messages in icap logfiles

2024-03-14 Thread Alex Rousskov

On 2024-03-11 11:31, Dieter Bloms wrote:

Hello,

after an upgrade from squid6.6 to squid6.8 on a Debian bookworm we have a lot
of messages of type:

ICAP_ERR_GONE/000
ICAP_ERR_OTHER/200
ICAP_ERR_OTHER/408
ICAP_ERR_OTHER/204

and some of our users complain about bad performance and some get "empty
pages".


Please see Squid Bug 5352 for a work-in-progress fix that needs testing:
https://bugs.squid-cache.org/show_bug.cgi?id=5352

Thank you,

Alex.



Unfortunately it is not deterministic, the page will appear the next
time it is called up. I can't see anything conspicuous in the cache.log.

There was no change to the virus scanner nor any change to the squid
config during the upgrade.

Here are the ICAP-specific config lines from squid:

--snip--
acl CONNECT method CONNECT
acl withoutvirusscanner.dstnames dstdomain 
"/etc/squid/withoutvirusscanner.dstnames"
acl audio rep_mime_type ^audio/
acl audio rep_mime_type ^video/

icap_enable on
icap_preview_enable on
icap_preview_size 128
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_service_failure_limit -1
icap_service_revival_delay 30
logformat icap_debug %ts.%03tu %6icap::tr %>a %icap::to/%03icap::Hs %icap::

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Compilation error for v6.8

2024-03-14 Thread Alex Rousskov

On 2024-03-14 08:21, Miha Miha wrote:

Hello Squid team,

I get following error while compiling v6.8

...

In file included from basic_nis_auth.cc:15:
../../../../src/auth/basic/NIS/nis_support.h:8: error: unterminated #ifndef
#ifndef SQUID_SRC_AUTH_BASIC_NIS_NIS_SUPPORT_H
basic_nis_auth.cc: In function 'int main(int, char**)':
basic_nis_auth.cc:71:21: error: 'get_nis_password' was not declared in
this scope
  nispasswd = get_nis_password(user, nisdomain, nismap);
  ^~~~
...
Build environment: CentOS7.9; gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)
Note: I'm able to compile successfully v6.7 in same build environment.


Please see Squid Bug 5349 for a fix:
https://bugs.squid-cache.org/show_bug.cgi?id=5349

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Recommended squid settings when using IPS-based domain blocking

2024-03-06 Thread Alex Rousskov

On 2024-03-06 09:48, Jason Marshall wrote:

We have been using squid (version squid-5.5-6.el9_3.5) under RHEL9 as a 
simple pass-through proxy without issue for the past month or so. 
Recently our security team implemented an IPS product that intercepts 
domain names known to be associated with malware and ransomware command 
and control. Once this was in place, we started having issues with the 
behavior of squid.


Through some troubleshooting, it appears that when a user's machine 
makes a request through squid for one of these bad domains, the request 
is dropped by the IPS, squid waits for the DNS timeout, and then all 
requests made to squid after that result in NONE_NONE/500 errors; it 
never seems to recover until we do a restart or reload of the service.



DNS errors, including DNS query timeouts, are common, and Squid is 
supposed to handle them well. Assuming the DNS server is operational, 
what you describe sounds like a Squid bug. Lots of bugs were fixed since 
Squid v5.5, but I do not recall any single bug that would have such a 
drastic outcome.


Squid v5 is not supported by the Squid Project. I recommend upgrading to 
the latest Squid v6 and retesting.



HTH,

Alex.


Initially the dns_timeout was set for 30 seconds. I reduced this, 
thinking that perhaps requests were building up or something along those 
lines. I set it to 5 seconds, but that just got us to a failure state 
faster.


I also found the negative_dns_ttl setting and thought it might be having 
an effect, but setting this to 0 seconds resulted in no change to the 
behavior.


Are there any configuration tips that anyone can provide that might work 
better with dropped/intercepted DNS requests? My current configuration 
is included here:


acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)

acl localnet src fc00::/7               # RFC 4193 local private network range
acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 443         # https
acl Safe_ports port 9191        # papercut
http_access deny !Safe_ports
http_access allow localhost manager
http_access deny manager

http_access allow localnet
http_access allow localhost
http_access deny all
http_port 0.0.0.0:3128
http_port 0.0.0.0:3129
cache deny all
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
debug_options rotate=1 ALL,2
negative_dns_ttl 0 seconds
dns_timeout 5 seconds

Thank you for any help that you can provide.

Jason Marshall

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing IPv6 sockets in Squid 6.7 in some servers

2024-03-04 Thread Alex Rousskov

On 2024-03-04 14:03, Dragos Pacher wrote:


POC running well on 3 servers but on the 4th I get no IPv6
sockets:
ubuntu@A2-3:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 10.10.0.16:3128         0.0.0.0:*               LISTEN      2891391/(squid-1)


Are there any other processes listening on IPv6 addresses on this 
problematic host?


Does something like "nc -6 -l 3128" listen on an IPv6 address on this 
problematic host?


If possible, please also check cache.log for messages mentioning IPv6 
and "BCP 177"; I know you shared syslog output, but I am a bit worried 
that syslog might be missing some relevant early debugging messages.
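
For example, something like this should surface those messages, assuming 
the usual log location on your system:

  grep -E 'IPv6|BCP 177' /var/log/squid/cache.log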



If nothing helps, consider sharing a pointer to compressed Squid startup 
cache.log after adding "debug_options ALL,2 50,3" to your squid.conf. We 
do not need to see any transactions, just Squid startup steps. Still, 
this log may contain some sensitive details, so share privately if needed.



Thank you,

Alex.




and on the other 3 I have IPv6:
ubuntu@A2-2:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 x.x.x.x:52386           x.x.x.x:443             ESTABLISHED 997651/(squid-1)
tcp6       0      0 :::3128                 :::*                    LISTEN      997651/(squid-1)
tcp6       0      0 10.10.0.12:3128         10.20.0.1:39428         ESTABLISHED 997651/(squid-1)






This creates a problem for us because the apps I monitor do not start: 
their start routine is IPv6-only (they switch to IPv4/IPv6 afterwards, 
but the start is IPv6 alone).


Therefore my questions are as follows:

 1. How can I make it listen on both IPV6/IPV4 like on the other servers?
 2. Any configuration improvement suggestions?


Please find all details here:
So far I did a POC on 4 servers, here is the full config, nothing 
sophisticated since this is where my Squid knowledge took me so far. 
Running Squid 6.7 with some basic options

on Ubuntu 22.04 kernel 5.15.0-89-generic x86_64
squid -v
Squid Cache: Version 6.7
Service Name: squid
This binary uses OpenSSL 3.0.2 15 Mar 2022. configure options: 
  '--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid' 
'--datadir=/share/squid' '--sysconfdir=/etc/squid' 
'--with-default-user=proxy' '--with-logdir=/var/log/squid' 
'--enable-ssl-crtd' '--with-openssl'


and here is the syslog of Squid start:
Mar  4 16:09:28 A2-3 systemd[1]: Starting Squid Web Proxy Server...
Mar  4 16:09:28 A2-3 squid[3094662]: 2024/03/04 16:09:28| Processing 
Configuration File: /etc/squid/squid.conf (depth 0)
Mar  4 16:09:28 A2-3 squid[3094662]: 2024/03/04 16:09:28| WARNING: empty 
ACL: acl broken_sites ssl::server_name "/etc/squid/ssl_broken_sites.txt"
Mar  4 16:09:28 A2-3 squid[3094662]: 2024/03/04 16:09:28| WARNING: The 
"Hs" formatting code is deprecated. Use the ">Hs" instead.
Mar  4 16:09:28 A2-3 squid[3094662]: 2024/03/04 16:09:28| Created PID 
file (/var/run/squid.pid)

Mar  4 16:09:28 A2-3 squid[3094662]: Squid Parent: will start 1 kids
Mar  4 16:09:28 A2-3 squid[3094662]: Squid Parent: (squid-1) process 
3094665 started
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| 
Processing Configuration File: /etc/squid/squid.conf (depth 0)
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| WARNING: 
empty ACL: acl broken_sites ssl::server_name 
"/etc/squid/ssl_broken_sites.txt"
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| WARNING: 
The "Hs" formatting code is deprecated. Use the ">Hs" instead.
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| Set 
Current Directory to /var/cache/squid
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| Creating 
missing swap directories
Mar  4 16:09:28 A2-3 squid[3094665]: 2024/03/04 16:09:28 kid1| No 
cache_dir stores are configured.
Mar  4 16:09:28 A2-3 squid[3094662]: Squid Parent: squid-1 process 
3094665 exited with status 0
Mar  4 16:09:28 A2-3 squid[3094662]: 2024/03/04 16:09:28| Removing PID 
file (/var/run/squid.pid)
Mar  4 16:09:28 A2-3 squid[3094666]: Processing Configuration File: 
/etc/squid/squid.conf (depth 0)
Mar  4 16:09:28 A2-3 squid[3094666]: WARNING: empty ACL: acl 
broken_sites ssl::server_name "/etc/squid/ssl_broken_sites.txt"
Mar  4 16:09:28 A2-3 squid[3094666]: WARNING: The "Hs" formatting code 
is deprecated. Use the ">Hs" instead.

Mar  4 16:09:28 A2-3 squid[3094666]: Created PID file (/var/run/squid.pid)
Mar  4 16:09:28 A2-3 squid[3094666]: Squid Parent: will start 1 kids
Mar  4 16:09:28 A2-3 squid[3094666]: Squid Parent: (squid-1) process 
3094668 started
Mar  4 16:09:28 A2-3 squid[3094668]: Processing Configuration File: 
/etc/squid/squid.conf (depth 0)
Mar  4 16:09:28 A2-3 squid[3094668]: WARNING: empty ACL: acl 
broken_sites ssl::server_name "/etc/squid/ssl_broken_sites.txt"
Mar  4 16:09:28 A2-3 squid[3094668]: WARNING: The "Hs" formatting code 
is deprecated. Use the ">Hs" instead.
Mar  4 16:09:28 A2-3 squid[3094668]: Set Current Directory to 
/var/cache/squid
Mar  4 16:09:28 A2-3 squid[3094668]: 

Re: [squid-users] Squid delay_access with external acl

2024-03-04 Thread Alex Rousskov

On 2024-03-04 06:31, Szilárd Horváth wrote:


Thank you so much for your answer, but this solution doesn't work.


Please note that I did not (try to) offer a solution. I only tried to 
correct a specific problem in a specific configuration statement.


I hope that Francesco will continue to guide you towards the solution 
that works in your environment. It may be useful to know what exactly 
does not work at this point (e.g., the transaction never gets a 
limited=yes annotation, which you can check by logging %note to 
access.log, OR the transaction is annotated as expected but is not 
delayed as expected).
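
For example, a minimal sketch of logging annotations to access.log; the 
%note formatting code logs transaction annotations, while the logformat 
name and log path here are illustrative:

  logformat withnotes %ts.%03tu %>a %Ss/%03>Hs %rm %ru %note
  access_log /var/log/squid/access.log withnotes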



Good luck,

Alex.



Please check my config - maybe I made a mistake. Or do you have any 
other solution? I can use proxy users from QUOTA_EXCEEDED_USERS.acl, 
which contains e-mail addresses, or get them from LDAP with:

external_acl_type overkvota children-max=10 children-startup=10 ttl=600 negative_ttl=600 %LOGIN /usr/lib/squid/ext_ldap_group_acl -Z -v 3 -P -p 389 -h ldapm1.x.hu -s sub -D cn=squid_proxy,o=services -W /etc/squid/secret -b o= -f "(&(mail=%u)(objectclass=InetorgPerson)(InternetUser=true)(QuotaExceeded=true))"

acl QUOTA_EXCEEDED_USERS ext_user "/etc/squid/QUOTA_EXCEEDED_USERS.acl"
acl markAsLimited annotate_transaction limited=yes
acl markedAsLimited note limited yes
http_access allow QUOTA_EXCEEDED_USERS markAsLimited !all

delay_pools 1
delay_class 1 1
delay_parameters 1 32000/32000
delay_access 1 allow markedAsLimited
delay_access 1 deny all

br,
Szilard



Alex Rousskov  02/20/2024, 04:52 PM >>>

On 2024-02-20 03:14, Francesco Chemolli wrote:

 > acl users ext_user foo bar gazonk
 > http_access allow users all # always allow

The above does not always allow. What you meant is probably this:

# This rule never matches. It is used for its side effect:
# The rule evaluates users ACL, caching evaluation result.
http_access allow users !all


 > delay_access 3 allow users
 >
 > should do the trick

... but sometimes will not. Wiki recommendation to "exploit caching" is
an ugly outdated hack that should be avoided. The correct solution these
days is to use annotate_transaction ACL to mark the transaction
accordingly. Here is an untested sketch:

acl fromUserThatShouldBeLimited ext_user ...
acl markAsLimited annotate_transaction limited=yes
acl markedAsLimited note limited yes

# This rule never matches; used for its annotation side effect.
http_access allow fromUserThatShouldBeLimited markAsLimited !all

delay_access 3 allow markedAsLimited

HTH,

Alex.



 > On Tue, Feb 20, 2024 at 2:15 PM Szilárd Horváth wrote:
 >
 > Good Day!
 >
 > I try to make limitation bandwidth for some user group. I have an
 > external acl which get the users from ldap database server. In the
 > old version of config we blocked the internet with http_access deny
 > GROUP, but now i try to allow the internet which has limited
 > bandwidth. I know that the delay_access work with only fast ACL and
 > external acl or proxy_auth acl are slow. I already tried some
 > opportunity but i couldn't solve.
 >
 > Maybe have you any solution for this? Or any idea how can limitation
 > the bandwidth for some user? I need use the username (e-mail address
 > format) because that use to login to the proxy.
 >
 > Version: Squid Cache: Version 5.6
 >
 > Thank you so much and i am waiting for your answer!
 >
 > Have a good day!
 >
 > Br,
 > Szilard Horvath
 >
 > ___
 > squid-users mailing list
 > squid-users@lists.squid-cache.org
 > https://lists.squid-cache.org/listinfo/squid-users
 >
 >
 >
 > --
 > Francesco
 >
 > ___
 > squid-users mailing list
 > squid-users@lists.squid-cache.org
 > https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient ERR_ACCESS_DENIED

2024-02-28 Thread Alex Rousskov

On 2024-02-28 08:52, Francesco Chemolli wrote:


just replace

squidclient mgr:objects

with

curl --silent --user squid_cachemgr_user:squid_cachemgr_password 
http://squid.host.name:3128/squid-internal-mgr/objects


Neither is required for basic cases, but it is better, IMHO, to use 
--no-progress-meter instead of error-hiding --silent.


One only needs --user when accessing password-protected reports.

The biggest difficulty in this conversion is guessing what hostname 
a modern Squid will recognize as its own. And the correct guess is 
likely to change when we fix the remaining bugs.



Cheers,

Alex.



(and of course replace port 3128 with whatever port you're using for Squid)
Everything else is the same as previously.

Also, the same applies to all other cachemgr reports:

curl --silent --user squid_cachemgr_user:squid_cachemgr_password 
http://squid.host.name:3128/squid-internal-mgr/menu



will give you the list of available subpages; replace "menu" with the 
subpage name to access any of them.




What could be an equivalent using curl/wget?

   bye & Thanks
         av.


___
squid-users mailing list
squid-users@lists.squid-cache.org

https://lists.squid-cache.org/listinfo/squid-users




--
     Francesco

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient ERR_ACCESS_DENIED

2024-02-27 Thread Alex Rousskov

On 2024-02-27 10:36, Andrea Venturoli wrote:


I'm having trouble accessing cachemgr with squidclient.


You are suffering from one or several known problems[1,2] related to 
cache manager changes in v6+ code. Without going into complicated 
details, I recommend that you replace deprecated squidclient with curl, 
wget, or another popular client of your choice _and_ then use the URL 
host name (or IP address) and other client configuration parameters that 
"work" in your specific Squid environment. You may need to adjust them 
later, but at least you will have a temporary workaround.
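
For example, given the test described below, something like this (using 
your own proxy host name or IP address and port) would be the curl 
equivalent of "mgr:info":

  curl http://10.1.2.39:8080/squid-internal-mgr/info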


AFAIK[1], a Squid developer is working on improving this ugly situation, 
but that work takes time (and will not resurrect squidclient support in 
future Squid versions).



HTH,

Alex.

[1] https://bugs.squid-cache.org/show_bug.cgi?id=5283
[2] 
https://lists.squid-cache.org/pipermail/squid-users/2023-August/026023.html


As a test, I've added the following to my squid.conf as the first 
http_access line:

http_access allow manager


(I know this is dangerous and I've removed it after the test).


Opening "http://10.1.2.39:8080/squid-internal-mgr/info; from a client, I 
see all the stats.

However, squidclient still gets an access denied error:

# squidclient -vv -p 8080 -h 10.1.2.39 mgr:info
verbosity level set to 2
Request:
GET http://10.1.2.39:8080/squid-internal-mgr/info HTTP/1.0
Host: 10.1.2.39:8080
User-Agent: squidclient/6.6
Accept: */*
Connection: close


.
Transport detected: IPv4-only
Resolving 10.1.2.39 ...
Connecting... 10.1.2.39 (10.1.2.39:8080)
Connected to: 10.1.2.39 (10.1.2.39:8080)
Sending HTTP request ... done.
HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Tue, 27 Feb 2024 15:33:55 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3691
X-Squid-Error: ERR_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Cache-Status: proxy2.ventu;fwd=miss;detail=mismatch
Via: 1.1 proxy2.ventu (squid), 1.1 proxy2.ventu (squid)
Cache-Status: proxy2.ventu;fwd=miss;detail=no-cache
Connection: close


This happens indifferently if I run it on the cache host itself or from 
the same client where the browser works.


In cache.log I see:

2024/02/27 16:34:48 kid1| WARNING: Forwarding loop detected for:
GET /squid-internal-mgr/info HTTP/1.1
Host: proxy2.ventu:8080
User-Agent: squidclient/6.6
Accept: */*
Via: 1.0 proxy2.ventu (squid)
X-Forwarded-For: 10.1.2.18
Cache-Control: max-age=259200
Connection: keep-alive


    current master transaction: master2562


Does this mean Squid is connecting to itself as a proxy in order to 
connect to itself?
I removed all "*proxy*" env vars and tried running squidclient again, 
but there was no difference.


Any hint?
Is there a way to get more debugging info from Squid on this?

  bye & Thanks
 av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-02-20 Thread Alex Rousskov

On 2024-02-12 06:46, Stephen Borrill wrote:

On 16/01/2024 14:37, Alex Rousskov wrote:

On 2024-01-16 06:01, Stephen Borrill wrote:
The problem is no different with 6.6. Is there any more debugging I 
can provide, Alex?


Yes, but I need to give you a patch that adds that (temporary) 
debugging first (assuming I fail to reproduce the problem in the lab). 
The ball is on my side (unless somebody else steps in). Unfortunately, 
I do not have any free time for any of that right now. If you do not 
hear from me sooner, please ping me again on or after February 8, 2024.



PING!


I reproduced this bug and posted a minimal master/v7 fix for the 
official review: https://github.com/squid-cache/squid/pull/1691


Please test the corresponding patch; it applies to Squid v5 and v6:

https://github.com/squid-cache/squid/commit/7d255a72131217d30af3653cec10452fa53289c3.patch
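
For example, from the top of a Squid source tree, something like this 
should fetch and apply it (assuming curl and GNU patch are available):

  curl -L -o ipcache-fix.patch https://github.com/squid-cache/squid/commit/7d255a72131217d30af3653cec10452fa53289c3.patch
  patch -p1 < ipcache-fix.patch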


Thank you,

Alex.


I will get 6.7 compiled up so we can add debugging to it quickly. It 
would be good if we could get something in place this week as it is 
school holidays next week in the UK and so there will be little 
opportunity to test until afterwards.



On 10/01/2024 12:40, Stephen Borrill wrote:

On 09/01/2024 15:42, Alex Rousskov wrote:

On 2024-01-09 05:56, Stephen Borrill wrote:

On 09/01/2024 09:51, Stephen Borrill wrote:

On 09/01/2024 03:41, Alex Rousskov wrote:

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why squid 6.x (seen with 6.5) connected 
via IPv4-only periodically fails to connect to the destination 
and then requires a restart to fix it (reload is not sufficient).


The problem appears to be that a host that has one address each 
of IPv4 and IPv6 occasionally has its IPv4 address go missing 
as a destination. On closer inspection, this appears to happen 
when the IPv6 address (not the IPv4) address is marked as bad.


ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in 
216.239.38.120 #1/2-0



Thank you for sharing more debugging info!


The following seemed odd too. It finds an IPv4 address (this host 
does not have IPv6), puts it in the cache and then says "No DNS 
records":


2024/01/09 12:31:24.020 kid1| 14,4| ipcache.cc(617) nbgethostbyname: 
schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(313) ipcacheRelease: 
ipcacheRelease: Releasing entry for 'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(670) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: MISS for 
'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(480) ipcacheParse: 1 
answers for schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(549) updateTtl: use 
first 69 from RR TTL 69
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(535) addGood: 
schoolbase.online #1 20.54.32.34
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(253) forwardIp: 
20.54.32.34
2024/01/09 12:31:24.020 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector72389 found conn564274 local=0.0.0.0 
remote=20.54.32.34:443 HIER_DIRECT flags=1, destination #1 for 
schoolbase.online:443
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(459) latestError: 
ERROR: DNS failure while resolving schoolbase.online: No DNS records
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(586) 
ipcacheHandleReply: done with schoolbase.online: 20.54.32.34 #1/1-0
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(236) finalCallback: 
0x1b7381f38  lookup_err=No DNS records


It seemed to happen about the same time as the other failure, so 
perhaps another symptom of the same.


The above log line is self-contradictory AFAICT: It says that the 
cache has both IPv6-looking and IPv4-looking address at the same 
cache position (0) and, judging by the corresponding code, those 
two IP addresses are equal. This is not possible (for those 
specific IP address values). The subsequent Squid behavior can be 
explained by this (unexplained) conflict.


I assume you are running official Squid v6.5 code.


Yes, compiled from source on NetBSD. I have the patch I refer to 
here applied too:

https://lists.squid-cache.org/pipermail/squid-users/2023-November/026279.html


I can suggest the following two steps for going forward:

1. Upgrade to the latest Squid v6 in hope that the problem goes away.


I have just upgraded to 6.6.

2. If the problem is still there, patch the latest Squid v6 to add 
more debugging in hope to explain what is going on. This may take a 
few iterations, and it will take me some time to produce the 
necessary debugging patch.


Unfortunately, I don't have a test case that will cause the problem 
so I need to run this at a customer's production site that is 
particularly affected by it. Luckily, the problem recurs pretty 
quickly.


Here's a run with 6.6 where the number of destinations drops from 2 
to 1 before reverting

Re: [squid-users] Unable to filter javascript exchanges

2024-02-20 Thread Alex Rousskov

On 2024-02-12 17:40, speed...@chez.com wrote:

I'm using Squid 3.5.24 (included in Synology DSM 6) and I have an issue 
with a time ACL. All works fine except for some websites like myhordes.de. 
Once the user is connected to this kind of website, the time ACL has no 
effect while the web page is not reloaded. All data sent and received 
by the JavaScript code keeps going through the proxy server without 
any filtering.


Squid does not normally evaluate ACLs while tunneling traffic: Various 
directives are checked at the tunnel establishment time and after the 
tunnel is closed, but not when bytes are shoveled back and forth between 
a TCP client and a TCP server.


The same can be said about processing (large) HTTP message bodies.

If your use case involves CONNECT tunnels, intercepted (but not bumped) 
TLS connections, or very large/slow HTTP messages, then you need to 
enhance Squid to apply some [time-related] checks "in the middle of a 
[long] transaction".


https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something

N.B. Squid v3 is very buggy and has not been supported by the Squid 
Project for many years. Please upgrade to Squid v6 or later. The upgrade 
itself will not add a "check directive X when tunneling for a long time" 
feature though.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid delay_access with external acl

2024-02-20 Thread Alex Rousskov

On 2024-02-20 03:14, Francesco Chemolli wrote:


acl users ext_user foo bar gazonk
http_access allow users all  # always allow


The above does not always allow. What you meant is probably this:

# This rule never matches. It is used for its side effect:
# The rule evaluates users ACL, caching evaluation result.
http_access allow users !all



delay_access 3 allow users

should do the trick


... but sometimes will not. Wiki recommendation to "exploit caching" is 
an ugly outdated hack that should be avoided. The correct solution these 
days is to use annotate_transaction ACL to mark the transaction 
accordingly. Here is an untested sketch:


acl fromUserThatShouldBeLimited ext_user ...
acl markAsLimited annotate_transaction limited=yes
acl markedAsLimited note limited yes

# This rule never matches; used for its annotation side effect.
http_access allow fromUserThatShouldBeLimited markAsLimited !all

delay_access 3 allow markedAsLimited

HTH,

Alex.




On Tue, Feb 20, 2024 at 2:15 PM Szilárd Horváth wrote:

Good Day!

I try to make limitation bandwidth for some user group. I have an
external acl which get the users from ldap database server. In the
old version of config we blocked the internet with http_access deny
GROUP, but now i try to allow the internet which has limited
bandwidth. I know that the delay_access work with only fast ACL and
external acl or proxy_auth acl are slow. I already tried some
opportunity but i couldn't solve.

Maybe have you any solution for this? Or any idea how can limitation
the bandwidth for some user? I need use the username (e-mail address
format) because that use to login to the proxy.

Version: Squid Cache: Version 5.6

Thank you so much and i am waiting for your answer!

Have a good day!

Br,
Szilard Horvath

___
squid-users mailing list
squid-users@lists.squid-cache.org

https://lists.squid-cache.org/listinfo/squid-users




--
     Francesco

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] stale-if-error returning a 502

2024-02-12 Thread Alex Rousskov

On 2024-02-12 10:13, Robin Carlisle wrote:

I have been having success so far with the config workaround. Config 
snippet:


max_stale 31536000 seconds
refresh_pattern . 0  20% 4320 max-stale=31536000

When an object has expired due to max-age and the PC is offline 
(ethernet unplugged), squid attempts an origin refresh and gives me :


0 ::1 TCP_REFRESH_FAIL_OLD/200 35965 GET 
https://widgets.api.labs.dev.framestoresignage.com/api/v1/instagram/labs/posts.json - HIER_NONE/- application/json


Previously it had been passing the 502 through to the client application.


Glad this workaround helps. Just keep in mind that the configuration 
snippet above changes max-stale for _all_ responses.




I am continuing to test this - but it looks like I have a working solution.


Meanwhile, the fix for the underlying Squid bug was officially accepted 
and should become a part of v6.8 release (at least).



Thank you,

Alex.



On Fri, 9 Feb 2024 at 14:31, Alex Rousskov wrote:

On 2024-02-09 08:53, Robin Carlisle wrote:

 > I am trying the config workaround approach.

Please keep us posted on your progress.

 >  Below is the config snippet I have added. I made the
 > assumption that for the refresh_pattern max-stale=NN config, the NN
 >  is in minutes as per the rest of that config directive.

That assumption is natural but incorrect: Unlike the anonymous
positional min and max parameters (that use minutes), refresh_pattern
max-stale=NN uses seconds. Documentation improvements are welcome.

Said that, the workaround should still prevent the application of the
broken default refresh_pattern max-stale=0 rule, so you should still
see
positive results for the first NN seconds of the response age.

Instead of specifying max-stale=NN, consider adding refresh_pattern
rules recommended by squid.conf.documented (and included in
squid.cond.default). Those rules do not have max-stale options at all,
and, hence, Squid will use (explicit or default) max_stale directive
instead.

HTH,

Alex.


 > I am testing this right now
 >
 > # this should allow stale objects up to 1 year if allowed by
 > Cache-Control repsonseheaders ...
 >
 > # ... setting both options just in case
 >
 > max_stale 525600 minutes
 >
 > refresh_pattern . 0  20% 4320 max-stale=525600
 >
 >
 > Thanks again for your help
 >
 >
 > Robin
     >
 >
 >
 >
 > On Thu, 8 Feb 2024 at 17:42, Alex Rousskov
 > mailto:rouss...@measurement-factory.com>
 > <mailto:rouss...@measurement-factory.com
<mailto:rouss...@measurement-factory.com>>> wrote:
 >
 >     Hi Robin,
 >
 >           AFAICT from the logs you have privately shared and your
 >     squid.conf
 >     that you have posted earlier, your Squid overwrites
 >     stale-if-error=31536000 in the response with "refresh_pattern
 >     max-stale=0" default. That 0 value is wrong. The correct value
 >     should be
 >     taken from max_stale directive that defaults to 1 week, not zero:
 >
 >           refresh_pattern
 >           ...
 >           max-stale=NN provide a maximum staleness factor. Squid
won't
 >           serve objects more stale than this even if it failed to
 >           validate the object. Default: use the max_stale global
limit.
 >
 >     This wrong default is a Squid bug AFAICT. I posted an
_untested_ fix as
 >     Squid PR 1664: https://github.com/squid-cache/squid/pull/1664
<https://github.com/squid-cache/squid/pull/1664>
 >     <https://github.com/squid-cache/squid/pull/1664
<https://github.com/squid-cache/squid/pull/1664>>
 >
 >     If possible, please test the corresponding patch:
 >
https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch 
<https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch>
 <https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch 
<https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch>>
 >
 >     AFAICT, you can also work around that bug by configuring an
explicit
 >     refresh_pattern rule with an explicit max-stale option (see
 >     squid.conf.documented for examples). I have not tested that
theory
 >     either.
 >
 >
 >     HTH,
 >
 >     Alex.
 >
 >
 >     On 2024-02-07 13:45, Robin Carlisle wrote:
 >      > 

Re: [squid-users] stale-if-error returning a 502

2024-02-09 Thread Alex Rousskov

On 2024-02-09 08:53, Robin Carlisle wrote:


I am trying the config workaround approach.


Please keep us posted on your progress.

 Below is the config snippet I have added. I made the 
assumption that for the refresh_pattern max-stale=NN config, the NN 
is in minutes as per the rest of that config directive.


That assumption is natural but incorrect: Unlike the anonymous 
positional min and max parameters (that use minutes), refresh_pattern 
max-stale=NN uses seconds. Documentation improvements are welcome.


Said that, the workaround should still prevent the application of the 
broken default refresh_pattern max-stale=0 rule, so you should still see 
positive results for the first NN seconds of the response age.


Instead of specifying max-stale=NN, consider adding refresh_pattern 
rules recommended by squid.conf.documented (and included in 
squid.conf.default). Those rules do not have max-stale options at all, 
and, hence, Squid will use (explicit or default) max_stale directive 
instead.
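
For example, a sketch based on the rules shipped in squid.conf.default 
(note that the positional min/max values are in minutes, while a 
max-stale=NN option - deliberately absent here - would be in seconds):

  refresh_pattern ^ftp:           1440    20%     10080
  refresh_pattern ^gopher:        1440    0%      1440
  refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
  refresh_pattern .               0       20%     4320
  max_stale 525600 minutes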


HTH,

Alex.



I am testing this right now

# this should allow stale objects up to 1 year if allowed by 
Cache-Control response headers ...


# ... setting both options just in case

max_stale 525600 minutes

refresh_pattern . 0  20% 4320 max-stale=525600


Thanks again for your help


Robin




On Thu, 8 Feb 2024 at 17:42, Alex Rousskov 
<rouss...@measurement-factory.com> wrote:


Hi Robin,

      AFAICT from the logs you have privately shared and your
squid.conf
that you have posted earlier, your Squid overwrites
stale-if-error=31536000 in the response with "refresh_pattern
max-stale=0" default. That 0 value is wrong. The correct value
should be
taken from max_stale directive that defaults to 1 week, not zero:

      refresh_pattern
      ...
      max-stale=NN provide a maximum staleness factor. Squid won't
      serve objects more stale than this even if it failed to
      validate the object. Default: use the max_stale global limit.

This wrong default is a Squid bug AFAICT. I posted an _untested_ fix as
Squid PR 1664: https://github.com/squid-cache/squid/pull/1664

If possible, please test the corresponding patch:

https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch

AFAICT, you can also work around that bug by configuring an explicit
refresh_pattern rule with an explicit max-stale option (see
squid.conf.documented for examples). I have not tested that theory
either.


HTH,

Alex.


On 2024-02-07 13:45, Robin Carlisle wrote:
 > Hi,
 >
 > I have just started my enhanced logging journey and have a small
snippet
 > below that might illuminate the issue ...
 >
 > /2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507)
 > handleIMSReply: origin replied with error 502, forwarding to
client due
 > to fail_on_validation_err/
 >
 > A few lines below in the log it looks like squid sent :-
 >
 > /2024/02/07 17:06:39.212 kid1| 11,2| Stream.cc(280)
sendStartOfMessage:
 > HTTP Client REPLY:
 > -
 > HTTP/1.1 502 Bad Gateway
 > Server: squid/5.7
 > Mime-Version: 1.0
 > Date: Wed, 07 Feb 2024 17:06:39 GMT
 > Content-Type: text/html;charset=utf-8
 > Content-Length: 3853
 > X-Squid-Error: ERR_READ_ERROR 0
 > Vary: Accept-Language
 > Content-Language: en
 > X-Cache: MISS from labs-maul-st-15
 > X-Cache-Lookup: HIT from labs-maul-st-15:3129
 > Via: 1.1 labs-maul-st-15 (squid/5.7)
 > Connection: close/
 >
 >
 > The rest of the logs are quite large and contain URLs I cannot put
 > here.   The logs were generated with debug_options to ALL,3.
 >
 > Any ideas?   Or should I generate more detailed logs and send them
 > privately?
 >
 > Thanks again,
 >
 > Robin
 >
 >
 >
 >
 > On Fri, 2 Feb 2024 at 11:20, Robin Carlisle
 > mailto:robin.carli...@framestore.com>
<mailto:robin.carli...@framestore.com
<mailto:robin.carli...@framestore.com>>>
 > wrote:
 >
 >     Hi, thanks for your reply.
 >
 >     I have been looking at :
 >
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
 >
 >     /The stale-if-error response directive indicates that the

Re: [squid-users] stale-if-error returning a 502

2024-02-08 Thread Alex Rousskov

Hi Robin,

AFAICT from the logs you have privately shared and your squid.conf 
that you have posted earlier, your Squid overwrites 
stale-if-error=31536000 in the response with "refresh_pattern 
max-stale=0" default. That 0 value is wrong. The correct value should be 
taken from max_stale directive that defaults to 1 week, not zero:


refresh_pattern
...
max-stale=NN provide a maximum staleness factor. Squid won't
serve objects more stale than this even if it failed to
validate the object. Default: use the max_stale global limit.

This wrong default is a Squid bug AFAICT. I posted an _untested_ fix as 
Squid PR 1664: https://github.com/squid-cache/squid/pull/1664


If possible, please test the corresponding patch:
https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch

AFAICT, you can also work around that bug by configuring an explicit 
refresh_pattern rule with an explicit max-stale option (see 
squid.conf.documented for examples). I have not tested that theory either.
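
For example, an untested sketch of such a workaround rule (the max-stale 
value is in seconds; one week here):

  refresh_pattern . 0 20% 4320 max-stale=604800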



HTH,

Alex.


On 2024-02-07 13:45, Robin Carlisle wrote:

Hi,

I have just started my enhanced logging journey and have a small snippet 
below that might illuminate the issue ...


2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507) 
handleIMSReply: origin replied with error 502, forwarding to client due 
to fail_on_validation_err


A few lines below in the log it looks like squid sent :-

2024/02/07 17:06:39.212 kid1| 11,2| Stream.cc(280) sendStartOfMessage: 
HTTP Client REPLY:

-
HTTP/1.1 502 Bad Gateway
Server: squid/5.7
Mime-Version: 1.0
Date: Wed, 07 Feb 2024 17:06:39 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3853
X-Squid-Error: ERR_READ_ERROR 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from labs-maul-st-15
X-Cache-Lookup: HIT from labs-maul-st-15:3129
Via: 1.1 labs-maul-st-15 (squid/5.7)
Connection: close


The rest of the logs are quite large and contain URLs I cannot put 
here.   The logs were generated with debug_options to ALL,3.


Any ideas?   Or should I generate more detailed logs and send them 
privately?


Thanks again,

Robin




On Fri, 2 Feb 2024 at 11:20, Robin Carlisle 
<robin.carli...@framestore.com> wrote:


Hi, thanks for your reply.

I have been looking at :
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control

The stale-if-error response directive indicates that the cache can
reuse a stale response when an upstream server generates an error,
or when the error is generated locally. Here, an error is considered
any response with a status code of 500, 502, 503, or 504.

Cache-Control: max-age=604800, stale-if-error=86400
In the example above, the response is fresh for 7 days (604800s).
Afterwards, it becomes stale, but can be used for an extra 1 day
(86400s) when an error is encountered.

After the stale-if-error period passes, the client will receive any
error generated

Given what you have said and what the above docs say - I am still
confused as it looks like (in my test cases) the cached response can
be used for 3600 secs (this works), after which the cached response
can still be used for an additional 31536000 seconds on an error
(this doesn't work).

I am going to dig into the error logging you suggested to see if I
can make sense of that - and will send on if I can't.

Thanks v much for your help again,

Robin





On Thu, 1 Feb 2024 at 18:27, Alex Rousskov
<rouss...@measurement-factory.com> wrote:

On 2024-02-01 12:03, Robin Carlisle wrote:
 > Hi, I am having trouble with stale-if-error response.

If I am interpreting Squid code correctly, in primary use cases:

* without a Cache-Control:stale-if-error=X in the original
response,
Squid sends a stale object if revalidation results in a 5xx error;

* with a Cache-Control:stale-if-error=X and object age at most
X, Squid
sends a stale object if revalidation results in a 5xx error;

* with a Cache-Control:stale-if-error=X and object age exceeding X,
Squid forwards the 5xx error response if revalidation results in
a 5xx
error;

In other words, stale-if-error=X turns on a "fail on validation
errors"
behavior for stale objects older than X. It has no other effects.

In your test case, the stale objects are much younger than
stale-if-error value (e.g., Age~=3601 vs. stale-if-error=31536000).
Thus, stale-if-error should have no relevant effect.

Something else is probably preventing your Squid from serving
the stale
response when facing a 5xx error. I do not know what that
something is.

I recommend sharing (privately if y

Re: [squid-users] stale-if-error returning a 502

2024-02-07 Thread Alex Rousskov

On 2024-02-07 13:45, Robin Carlisle wrote:
I have just started my enhanced logging journey and have a small snippet 
below that might illuminate the issue ...


2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507) 
handleIMSReply: origin replied with error 502, forwarding to client due 
to fail_on_validation_err


This confirms that Squid considers the cached response stale and, hence, 
applies the last bullet logic from my earlier summary ("object age 
exceeding X"). We still do not know why.




should I generate more detailed logs and send them privately?


Privately sharing a pointer to the current (or, better, ALL,9) 
compressed logs while reproducing the problem is (still) the best way 
forward IMO.



Cheers,

Alex.



On Fri, 2 Feb 2024 at 11:20, Robin Carlisle wrote:

Hi, thanks for your reply.

I have been looking at :
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control

The stale-if-error response directive indicates that the cache can
reuse a stale response when an upstream server generates an error,
or when the error is generated locally. Here, an error is considered
any response with a status code of 500, 502, 503, or 504.

Cache-Control: max-age=604800, stale-if-error=86400
In the example above, the response is fresh for 7 days (604800s).
Afterwards, it becomes stale, but can be used for an extra 1 day
(86400s) when an error is encountered.

After the stale-if-error period passes, the client will receive any
error generated

Given what you have said and what the above docs say - I am still
confused as it looks like (in my test cases) the cached response can
be used for 3600 secs (this works), after which the cached response
can still be used for an additional 31536000 seconds on an error
(this doesn't work).

I am going to dig into the error logging you suggested to see if I
can make sense of that - and will send on if I can't.

Thanks v much for your help again,

Robin





On Thu, 1 Feb 2024 at 18:27, Alex Rousskov wrote:

On 2024-02-01 12:03, Robin Carlisle wrote:
 > Hi, I am having trouble with stale-if-error response.

If I am interpreting Squid code correctly, in primary use cases:

* without a Cache-Control:stale-if-error=X in the original
response,
Squid sends a stale object if revalidation results in a 5xx error;

* with a Cache-Control:stale-if-error=X and object age at most
X, Squid
sends a stale object if revalidation results in a 5xx error;

* with a Cache-Control:stale-if-error=X and object age exceeding X,
Squid forwards the 5xx error response if revalidation results in
a 5xx
error;

In other words, stale-if-error=X turns on a "fail on validation
errors"
behavior for stale objects older than X. It has no other effects.

In your test case, the stale objects are much younger than
stale-if-error value (e.g., Age~=3601 vs. stale-if-error=31536000).
Thus, stale-if-error should have no relevant effect.

Something else is probably preventing your Squid from serving
the stale
response when facing a 5xx error. I do not know what that
something is.

I recommend sharing (privately if you need to protect sensitive
info) a
pointer to a compressed ALL,9 cache.log collected while
reproducing the
problem (using two transactions similar to the ones you have shared
below -- a successful stale hit and a problematic one):

https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

Alternatively, you can try to study cache.log yourself after
setting
debug_options to ALL,3. Searching for "refresh" and
"handleIMSReply" may
yield enough clues.


HTH,

Alex.




 > # /etc/squid/squid.conf :
 >
 > acl to_aws dstdomain .amazonaws.com
 >
 > acl from_local src localhost
 >
 > http_access allow to_aws
 >
 > http_access allow from_local
 >
 > cache allow all
 >
 > cache_dir ufs /var/cache/squid 1024 16 256
 >
 > http_port 3129 ssl-bump cert=/etc/squid/maul.pem
 > generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
 >
 > sslcrtd_program /usr/lib/squid/security_file_certgen -s
 > /var/lib/squid/ssl_db -M 4MB
 >
 > acl step1 at_s

Re: [squid-users] New Squid prefers IPv4

2024-02-06 Thread Alex Rousskov

On 2024-02-06 10:16, Rob van der Putten wrote:

On 05/02/2024 18:32, Antony Stone wrote:


On Monday 05 February 2024 at 17:32:51, Rob van der Putten wrote:



On 05/02/2024 17:16, Dieter Bloms wrote:

On Mon, Feb 05, Rob van der Putten wrote:

After upgrading Squid from 3 to 5 the percentage of IPv6 reduced from
61% to less than 1%.
Any ideas?


yes, since Squid 5 the Happy Eyeballs algorithm as described in RFC 8305
is used.
If your IPv4 connectivity is better than your IPv6 connectivity, then
IPv4 is used.


I'm not quite sure how this is established. It prefers IPv4 even when
the IPv6 ping is slightly smaller.


I believe ping (ICMP) timings are irrelevant.  The client (squid in 
this case) does a DNS lookup for the hostname's A and AAAA records,


A before AAAA. Bind responds within the same millisecond.


If Squid sends two DNS queries, then the first DNS answer seen/processed 
by Squid will normally trigger the first (called "primary") TCP 
connection establishment attempt. A "spare" connection attempt may or 
may not happen a bit later. DNS cache and persistent connections may 
play their natural role.




then makes two
simultaneous HTTP connections to the server (one IPv4, one IPv6) and 
whichever
one responds first *by HTTP* is then regarded as being the best way to 
route traffic thereafter.


I do not see Squid opening two connections simultaneously and then 
closing one. It's just one connection.


What you see matches Squid code (and the Happy Eyeballs RFC/intent). As 
I said in my earlier response, it is easy to misinterpret Antony's 
high-level summary. Please do not use it for low-level triage. See my 
response for details.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New Squid prefers IPv4

2024-02-05 Thread Alex Rousskov

On 2024-02-05 11:32, Rob van der Putten wrote:

On 05/02/2024 17:16, Dieter Bloms wrote:

On Mon, Feb 05, Rob van der Putten wrote:
After upgrading Squid from 3 to 5 the percentage of IPv6 reduced from 
61% to less than 1%. Any ideas?


yes, since Squid 5 the Happy Eyeballs algorithm as described in RFC 8305
is used. If your IPv4 connectivity is better than your IPv6 connectivity, then IPv4 is used.


I'm not quite sure how this is established.


See RFC 8305 for the general approach, search squid.conf.documented for 
"Happy Eyeballs" to find relevant configuration directives, and see the 
following Squid commit message for a subset of Squid implementation caveats:


https://github.com/squid-cache/squid/commit/5562295321debdf33b59f772bce846bf6dd33c26
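
For reference, the relevant directives look roughly like this (a minimal
sketch; the values below are illustrative rather than recommendations --
check squid.conf.documented in your version for exact semantics and
defaults):

# minimum delay (milliseconds) before Squid may start a "spare"
# connection attempt using the other address family
happy_eyeballs_connect_timeout 250
# minimum gap (milliseconds) between spare connection attempts
happy_eyeballs_connect_gap 10
# maximum number of concurrent spare connection attempts (-1 = no limit)
happy_eyeballs_connect_limit -1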


Antony is correct that ICMP is pretty much irrelevant here. A brief 
algorithm description in Antony's response is easy to misinterpret, but 
it can be used as a rough approximation of what is actually going on.


AFAICT, we do not have a good understanding of how the implemented 
algorithm actually behaves in various deployment environments. If you 
believe your IPv6 connectivity is better than your IPv4 connectivity, 
you may want to investigate why your Squid favors IPv4.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external icap issue with squid 5 and higher

2024-02-02 Thread Alex Rousskov

On 2024-02-02 17:36, Yvain PAYEN wrote:


I just sent you a Onedrive link to 2 pcap files, one for http request
and one for https request.


Thank you. The ICAP service you are using is sending a malformed ICAP 
response to Squid. That ICAP response promises that there will be no 
HTTP body in the encapsulated HTTP message:


Encapsulated: res-hdr=0, null-body=524

... but the service does send a body after HTTP headers. That HTTP body 
contains an HTML resource explaining that the CONNECT message was 
blocked and redirecting the user to blockpage.cgi, but that content does 
not really matter here. What matters is that there are some bytes after 
the encapsulated HTTP header. There should be no such bytes (or the ICAP 
Encapsulated header should have res-body=184 instead of null-body=524).


The null-body offset in the ICAP Encapsulated header is wrong. It should 
be 184 bytes (i.e. the size of the encapsulated HTTP response header), 
not 524 bytes. FWIW, 524 is the sum of the encapsulated HTTP response 
header (184 bytes) and the encapsulated HTTP response body (340 bytes). 
It sounds like the ICAP service thinks that it is encapsulating an HTTP 
response header, but it is actually encapsulating the whole HTTP response.
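
For illustration, a correctly framed ICAP response that encapsulates both
an HTTP header and an HTTP body would look roughly like this (a sketch
with sizes borrowed from the capture; note that RFC 3507 requires the
encapsulated body to be chunk-encoded, with hexadecimal chunk sizes):

ICAP/1.0 200 OK
Encapsulated: res-hdr=0, res-body=184

HTTP/1.1 403 Forbidden      (start of the 184-byte HTTP header, offset 0)
...

154                         (chunk size line: 0x154 = 340 body bytes)
...340 bytes of HTML...
0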


Since this is an ICAP framing error, Squid rejects the whole 
transaction and bypasses the ICAP service (as configured).


To fix this, fix the ICAP service configuration (or code).

It is also possible to modify Squid code to ignore these errors, but I 
do not recommend that, and hard-coded or rigid tolerance code like that 
would not be accepted by the Squid Project for official inclusion.



The ICAP response in the "http request" capture does not have this 
problem. It contains an encapsulated HTTP 302 Moved header without any 
encapsulated HTTP body. That encapsulation matches the ICAP Encapsulated 
header.



HTH,

Alex.




-Original Message-
From: Alex Rousskov
Sent: Friday, February 2, 2024 18:45
To: Yvain PAYEN; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external icap issue with squid 5 and higher

⚠ This message originates from outside the organization. Do not open 
links or attachments unless you know the content can be trusted. ⚠



On 2024-02-02 12:04, Yvain PAYEN wrote:


We don't use ssl_bump, icap service only analyze HTTP CONNECT requests


Great, that simplifies things a lot.



Adaptation::Icap::Xaction::noteCommRead threw exception: > check
failed: readBuf.isEmpty()> exception location: ModXact.cc(1219)

stopParsing

It looks like Squid found some leftovers after the ICAP response that Squid 
(thought it) had fully parsed. I do not yet know whether that ICAP response was 
malformed or Squid is buggy.

Can you share raw ICAP response bytes (preferably in libpcap or similar raw 
packet capture format) collected by tcpdump, wireshark, or a similar tool? You 
can obfuscate/convert that ICAP response to text as needed, but if those extra 
bytes get lost in those conversions, then we would not be able to tell what 
those bytes are (e.g., they may contain whitespace characters that get easily 
lost).


Thank you,

Alex.



   2024/02/02 17:40:41.943 kid1| 93,3| ModXact.cc(679) callException: 
bypassing 0x558f358fdae0*2 exception: check failed: readBuf.isEmpty()
   exception location: ModXact.cc(1219) stopParsing  [FD 
17;rp(1)S(2)YG/Rw job17]
   2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(720) disableBypass: will 
never start bypass because already started to bypass
   2024/02/02 17:40:41.943 kid1| 93,5| Xaction.cc(127) disableRepeats: 
Adaptation::Icap::ModXact still cannot be repeated because preparing to echo 
content [FD17;rp(1)S(2)G/Rw job17]
   2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(724) disableBypass: not 
protecting group bypass because preparing to echo content
   2024/02/02 17:40:41.943 kid1| 93,3| Xaction.cc(564) setOutcome: WARNING: 
resetting outcome: from ICAP_SAT to ICAP_ECHO
   2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(962) prepEchoing: cloning 
virgin message 0x558f358ff040
   2024/02/02 17:40:41.943 kid1| 93,3| Xaction.cc(564) setOutcome: WARNING: 
resetting outcome: from ICAP_ECHO to ICAP_ERR_OTHER
   2024/02/02 17:40:41.943 kid1| 93,4| ServiceRep.cc(97) noteFailure:  
failure 1 out of 10 allowed in 0sec [up,fail1]
   2024/02/02 17:40:41.943 kid1| 93,2| AsyncJob.cc(130) callException: 
check failed: !adapted.header
exception location: ModXact.cc(971) prepEchoing
   2024/02/02 17:40:41.943 kid1| 93,5| AsyncJob.cc(85) mustStop: 
Adaptation::Icap::ModXact will stop, reason: exception
   2024/02/02 17:40:41.943 kid1| 93,5| AsyncJob.cc(140) callEnd: 
Adaptation::Icap::Xaction::noteCommRead(conn8 local=X.X.X.X:46704 
remote=X.X.X.X:1344 FD  17 flags=1, data=0x558f358fe888) ends job [FD 
17;rp(1)S(2)/StoppedRw job17]
   2024/02/02 17:40:41.943 kid1| 93,5| ModXact.cc(1295) swanS

Re: [squid-users] external icap issue with squid 5 and higher

2024-02-02 Thread Alex Rousskov

On 2024-02-02 12:04, Yvain PAYEN wrote:


We don't use ssl_bump, icap service only analyze HTTP CONNECT requests


Great, that simplifies things a lot.


Adaptation::Icap::Xaction::noteCommRead threw exception: > check failed: readBuf.isEmpty()> exception location: ModXact.cc(1219) 

stopParsing

It looks like Squid found some leftovers after the ICAP response that 
Squid (thought it) had fully parsed. I do not yet know whether that ICAP 
response was malformed or Squid is buggy.


Can you share raw ICAP response bytes (preferably in libpcap or similar 
raw packet capture format) collected by tcpdump, wireshark, or a similar 
tool? You can obfuscate/convert that ICAP response to text as needed, 
but if those extra bytes get lost in those conversions, then we would 
not be able to tell what those bytes are (e.g., they may contain 
whitespace characters that get easily lost).



Thank you,

Alex.



2024/02/02 17:40:41.943 kid1| 93,3| ModXact.cc(679) callException: 
bypassing 0x558f358fdae0*2 exception: check failed: readBuf.isEmpty()
exception location: ModXact.cc(1219) stopParsing  [FD 
17;rp(1)S(2)YG/Rw job17]
2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(720) disableBypass: will 
never start bypass because already started to bypass
2024/02/02 17:40:41.943 kid1| 93,5| Xaction.cc(127) disableRepeats: 
Adaptation::Icap::ModXact still cannot be repeated because preparing to echo 
content [FD17;rp(1)S(2)G/Rw job17]
2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(724) disableBypass: not 
protecting group bypass because preparing to echo content
2024/02/02 17:40:41.943 kid1| 93,3| Xaction.cc(564) setOutcome: 
WARNING: resetting outcome: from ICAP_SAT to ICAP_ECHO
2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(962) prepEchoing: 
cloning virgin message 0x558f358ff040
2024/02/02 17:40:41.943 kid1| 93,3| Xaction.cc(564) setOutcome: 
WARNING: resetting outcome: from ICAP_ECHO to ICAP_ERR_OTHER
2024/02/02 17:40:41.943 kid1| 93,4| ServiceRep.cc(97) noteFailure:  
failure 1 out of 10 allowed in 0sec [up,fail1]
2024/02/02 17:40:41.943 kid1| 93,2| AsyncJob.cc(130) callException: 
check failed: !adapted.header
 exception location: ModXact.cc(971) prepEchoing
2024/02/02 17:40:41.943 kid1| 93,5| AsyncJob.cc(85) mustStop: 
Adaptation::Icap::ModXact will stop, reason: exception
2024/02/02 17:40:41.943 kid1| 93,5| AsyncJob.cc(140) callEnd: 
Adaptation::Icap::Xaction::noteCommRead(conn8 local=X.X.X.X:46704 
remote=X.X.X.X:1344 FD  17 flags=1, data=0x558f358fe888) ends job [FD 
17;rp(1)S(2)/StoppedRw job17]
2024/02/02 17:40:41.943 kid1| 93,5| ModXact.cc(1295) swanSong: swan 
sings [FD 17;rp(1)S(2)/StoppedRw job17]
2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(616) stopSending: Enter 
stop sending
2024/02/02 17:40:41.943 kid1| 93,7| ModXact.cc(619) stopSending: 
Proceed with stop sending

It seems to bypass because something went wrong.

Yvain PAYEN

Operations & Technologies Division
Systems Infrastructure Team
T. +33 (0)5 57 57 01 85 (Ext. 1185)
M. +33 (0)7 87 30 34 01

Out of office every Wednesday

Tessi France
Immeuble Cassiopée
1-3 avenue des Satellites
33185 Le Haillan

Please consider the environment before printing this e-mail.

-Original Message-
From: squid-users On Behalf Of Alex Rousskov
Sent: Friday, February 2, 2024 17:19
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external icap issue with squid 5 and higher

⚠ This message originates from outside the organization. Do not open 
links or attachments unless you know the content can be trusted. ⚠



On 2024-02-02 11:00, Yvain PAYEN wrote:

Hi Squid users,

I have an issue with an external icap service I have to use (from
Forcepoint).

This service is working great with squid v3 and v4.

Starting with v5 (v6 also tested), the service only works with plain-text
HTTP requests; all requests for HTTPS content are allowed even if the
website should be denied.


Do you use ssl_bump rules to decode affected HTTPS traffic? Or is your service 
supposed to analyze plain HTTP CONNECT requests?

With Squid v6, does your ICAP service actually receive expected "requests for https 
content" for analysis from Squid? Or does Squid allow them without contacting the 
ICAP service with those requests? You can check service logs and/or enable icap.log in 
Squid to answer these high-level questions (see icap_log).



My first question is: do you know if a big change in the ICAP code
happened between v4 and v5?


I do not recall, unfortunately; it was too long ago. Please keep in mind that 
your problems may not be triggered by ICAP code changes (if any).



My second question: how can I trace only ICAP debug logs


ICAP code uses debug section 93. See debug_options directive and 
docs/debug-sections.txt.


HTH,

Alex.




Service is setup like this :

icap_service service_req 

Re: [squid-users] external icap issue with squid 5 and higher

2024-02-02 Thread Alex Rousskov

On 2024-02-02 11:00, Yvain PAYEN wrote:

Hi Squid users,

I have an issue with an external icap service I have to use (from 
Forcepoint).


This service is working great with squid v3 and v4.

Starting with v5 (v6 also tested), the service only works with plain-text 
HTTP requests; all requests for HTTPS content are allowed even if the website 
should be denied.


Do you use ssl_bump rules to decode affected HTTPS traffic? Or is your 
service supposed to analyze plain HTTP CONNECT requests?


With Squid v6, does your ICAP service actually receive expected 
"requests for https content" for analysis from Squid? Or does Squid 
allow them without contacting the ICAP service with those requests? You 
can check service logs and/or enable icap.log in Squid to answer these 
high-level questions (see icap_log).
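
For example, a minimal icap_log setup (a sketch using the built-in
default ICAP log format; the path is illustrative):

icap_log /var/log/squid/icap.log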



My first question is: do you know if a big change in the ICAP code 
happened between v4 and v5?


I do not recall, unfortunately; it was too long ago. Please keep in mind 
that your problems may not be triggered by ICAP code changes (if any).




My second question: how can I trace only ICAP debug logs


ICAP code uses debug section 93. See debug_options directive and 
docs/debug-sections.txt.
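
For example, to keep the usual level-1 logging overall while maximizing
ICAP debugging detail:

debug_options ALL,1 93,9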



HTH,

Alex.




Service is setup like this :

icap_service service_req reqmod_precache icap://10.1.1.1:1344/icap bypass=1

Regards,

Yvain PAYEN

Operations & Technologies Division
Systems Infrastructure Team
T. +33 (0)5 57 57 01 85 (Ext. 1185)

M. +33 (0)7 87 30 34 01

Out of office every Wednesday


Tessi France
Immeuble Cassiopée

1-3 avenue des Satellites
33185 Le Haillan


yvain.pa...@tessi.fr
www.tessi.eu
Please consider the environment before printing this e-mail.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] chunked transfer over sslbump

2024-02-02 Thread Alex Rousskov

On 2024-01-19 09:08, Arun Kumar wrote:

Sorry, due to organization policy it is not possible to upload the debug logs.


I really doubt your organization prohibits sharing information with 
trusted parties. It is up to you whether to make me (or any other Squid 
developer who is willing to help you) such a party.




Anything to look for specifically in the debug logs?


Yes, of course, but I, personally, do not have enough free time to help 
you navigate debugging logs via email. That is why I am suggesting 
sharing those logs with me (while making that sharing comply with any 
organizational policies you need to comply with, of course). This is the 
best I can offer. If that is not good enough, I hope that others can 
offer more/different help.



Good luck,

Alex.


Also, please suggest if we can tweak the sslbump configuration below to 
make the chunked transfer work seamlessly.


http_port tcpkeepalive=60,30,3 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB tls-cert= tls-key= 
cipher=... options=NO_TLSv1,... tls_dh=prime256v1:

ssl_bump stare all

PS: Is any documentation/video available to understand 
bump/stare/peek/splice better? I am not understanding much from the 
squid-cache.org contents.


On Friday, January 12, 2024 at 02:10:40 PM EST, Alex Rousskov 
 wrote:


On 2024-01-12 09:21, Arun Kumar wrote:
 > On Wednesday, January 10, 2024 at 11:09:48 AM EST, Alex Rousskov wrote:
 >
 >
 > On 2024-01-10 09:21, Arun Kumar wrote:
 >  >> i) Retry seems to fetch one chunk of the response and not the complete response.

 >  >> ii) Enabling sslbump and turning ICAP off, not helping.
 >  >> iii)  gcc version is 7.3.1 (Red Hat 7.3.1-17)
 >
 >  >GCC v7 has insufficient C++17 support. I recommend installing GCC v9 or
 > better and then trying with Squid v6.6 or newer.
 >
 > Arun: Compiled Squid 6.6 with gcc 11.4 and still seeing the same issue.

Glad you were able to upgrade to Squid v6.6!


 >  > FWIW, if the problem persists in Squid v6, sharing debugging logs 
would

 > be the next recommended step.
 >
 >  > Arun: debug_options ALL,6 gives too much log. Any particular option
 >  > we can use to debug this issue?


Please share[^1] a pointer to compressed ALL,9 cache.log collected while
reproducing the problem with Squid v6.6:

https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

Debugging logs are for developers. Developers can deal with large
volumes of debugging information. You can use services like DropBox to
share large compressed logs. That said, the better you can isolate the
problem/traffic, the higher the chances that a developer will (have
the time to) find the answer to your question in the noisy log.

[^1]: Please feel free to share privately if needed, especially if you
are using sensitive configuration or transactions.

Alex.


 >  > Also want to point out that, squid connects to another non-squid proxy
 >  > to reach internet.
 >  > cache_peer  parent  0 no-query default
 >  >
 >  > On Tuesday, January 9, 2024 at 02:18:14 PM EST, Alex Rousskov wrote:
 >  >
 >  >
 >  > On 2024-01-09 11:51, Zhang, Jinshu wrote:
 >  >
 >  >  > Client got below response headers and body. Masked few details.
 >  >
 >  > Thank you.
 >  >
 >  >
 >  >  > Retry seems to fetch data remaining.
 >  >
 >  > I would expect a successful retry to fetch the entire response, 
not just

 >  > the remaining bytes, but perhaps that is what you meant. Thank you for
 >  > sharing this info.
 >  >
 >  >
 >  >  > Want to point out that removing sslbump everything is working fine,
 >  >  > but we wanted to keep it for ICAP scanning.
 >  >
 >  > What if you keep SslBump enabled but disable any ICAP analysis
 >  > ("icap_enable off")? This test may tell us if the problem is between
 >  > Squid and the origin server or Squid and the ICAP service...
 >  >
 >  >
 >  >  > We tried compiling 6.x in Amazon linux, using latest gcc, but 
facing

 >  > similar error -
 >  >
 > 
https://lists.squid-cache.org/pipermail/squid-users/2023-July/026016.html ([squid-users] compile error in squid v6.1)

 >  >
 >  > What is the "latest gcc" version in your environment? I suspect it is
 >  > n

Re: [squid-users] stale-if-error returning a 502

2024-02-01 Thread Alex Rousskov

On 2024-02-01 12:03, Robin Carlisle wrote:

Hi, I am having trouble with stale-if-error response.


If I am interpreting Squid code correctly, in primary use cases:

* without a Cache-Control:stale-if-error=X in the original response, 
Squid sends a stale object if revalidation results in a 5xx error;


* with a Cache-Control:stale-if-error=X and object age at most X, Squid 
sends a stale object if revalidation results in a 5xx error;


* with a Cache-Control:stale-if-error=X and object age exceeding X, 
Squid forwards the 5xx error response if revalidation results in a 5xx 
error;


In other words, stale-if-error=X turns on a "fail on validation errors" 
behavior for stale objects older than X. It has no other effects.


In your test case, the stale objects are much younger than 
stale-if-error value (e.g., Age~=3601 vs. stale-if-error=31536000). 
Thus, stale-if-error should have no relevant effect.


Something else is probably preventing your Squid from serving the stale 
response when facing a 5xx error. I do not know what that something is.


I recommend sharing (privately if you need to protect sensitive info) a 
pointer to a compressed ALL,9 cache.log collected while reproducing the 
problem (using two transactions similar to the ones you have shared 
below -- a successful stale hit and a problematic one): 
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


Alternatively, you can try to study cache.log yourself after setting 
debug_options to ALL,3. Searching for "refresh" and "handleIMSReply" may 
yield enough clues.
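
For example (a sketch, assuming the default cache.log location):

grep -nE 'refresh|handleIMSReply' /var/log/squid/cache.log | less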



HTH,

Alex.





# /etc/squid/squid.conf :

acl to_aws dstdomain .amazonaws.com 

acl from_local src localhost

http_access allow to_aws

http_access allow from_local

cache allow all

cache_dir ufs /var/cache/squid 1024 16 256

http_port 3129 ssl-bump cert=/etc/squid/maul.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB


sslcrtd_program /usr/lib/squid/security_file_certgen -s 
/var/lib/squid/ssl_db -M 4MB


acl step1 at_step SslBump1

ssl_bump bump step1

ssl_bump bump all

sslproxy_cert_error deny all

cache_store_log stdio:/var/log/squid/store.log

logfile_rotate 0

shutdown_lifetime 3 seconds


# /usr/bin/proxy-test :

#!/bin/bash

curl --proxy http://localhost:3129  \

   --cacert /etc/squid/stuff.pem \

   -v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json 
" \


   -H "Authorization: token MYTOKEN" \

   -H "Content-Type: application/json" \

   --output "/tmp/stuff.json"



Tests  ..


At this point in time the network cable is unattached. Squid returns 
the cached object it got when the network was online earlier. The Age of 
this object is still just under the max-age of 3600. Previously I was 
using offline_mode, but I found that it did not try to revalidate from 
the origin after the object expired (defined via the max-age response 
directive). My understanding is that stale-if-error should work under my 
circumstances.



# /var/log/squid/access.log

1706799404.440      6 127.0.0.1 NONE_NONE/200 0 CONNECT stuff.amazonaws.com:443 - HIER_NONE/- -

1706799404.440      0 127.0.0.1 TCP_MEM_HIT/200 20726 GET https://stuff.amazonaws.com/stuff.json - HIER_NONE/- application/json



# extract from /usr/bin/proxy-test

< HTTP/1.1 200 OK
< Date: Thu, 01 Feb 2024 13:57:11 GMT
< Content-Type: application/json
< Content-Length: 20134
< x-amzn-RequestId: 3a2d3b26-df73-4b30-88cb-1a9268fa0df2
< Last-Modified: 2024-02-01T13:00:45.000Z
< Access-Control-Allow-Origin: *
< x-amz-apigw-id: SdZwpG7qiYcERUQ=
< Cache-Control: public, max-age=3600, stale-if-error=31536000
< ETag: "cec102b43372840737ab773c2e77858b"
< X-Amzn-Trace-Id: Root=1-65bba337-292be751134161b03555cdd6
< Age: 3573
< X-Cache: HIT from labs-maul-st-31
< X-Cache-Lookup: HIT from labs-maul-st-31:3129
< Via: 1.1 labs-maul-st-31 (squid/5.7)
< Connection: keep-alive




Below, the curl script executes again. The Age has gone over the 
max-age, so Squid attempted to refresh from the origin. The machine is 
still offline, so the refresh failed. I expected that the stale-if-error 
response directive would instruct Squid to return the cached object as a 
200.



# /var/log/squid/access.log

1706799434.464      5 127.0.0.1 NONE_NONE/200 0 CONNECT stuff.amazonaws.com:443 - HIER_NONE/- -

1706799434.464      0 127.0.0.1 TCP_REFRESH_FAIL_ERR/502 4235 GET https://stuff.amazonaws.com/stuff.json - HIER_NONE/- text/html



# extract from /usr/bin/proxy-test

< HTTP/1.1 502 Bad Gateway
< Server: squid/5.7
< Mime-Version: 1.0
< Date: Thu, 01 Feb 2024 14:57:14 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3853
< X-Squid-Error: ERR_READ_ERROR 0
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS 

Re: [squid-users] does the logging of cache.log support the log modules like daemon, syslog, udp ...

2024-02-01 Thread Alex Rousskov

On 2024-02-01 07:15, Dieter Bloms wrote:


Is it possible to send the cache.logs to the syslog socket /dev/log ?


cache_log does not have access_log's concept of logging modules.

* To send level-0/1 cache.log messages to syslog, use "squid -s ..." or 
"squid -l... ...". By default, syslog is only used for a few special 
messages that are not printed to cache.log (e.g., "Exiting due to 
repeated, frequent failures") and these level-0 cache.log messages:


FATAL: Received Segment Violation...dying.
FATAL: Received Bus Error...dying.
FATAL: Received signal ... ...dying.
ERROR: Squid BUG: ...

and

FATAL: 
Squid Cache (Version ...): Terminated abnormally


* To send level-X (and more important) cache.log messages to standard 
error stream, use "squid -dX ...". That stderr output can be redirected 
as needed using shell output redirection mechanisms, of course. By 
default, modern Squids do not log to stderr in most cases.



The above two options do not tell Squid what cache.log messages to emit. 
They only affect which emitted cache.log messages to copy to syslog 
and/or stderr. To tell Squid what cache.log messages to emit, see "squid 
-X ...", debug_options, and cache_log_message. By default, Squid emits 
level-0/1 messages in most cases.
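
For example (a sketch; see the squid(8) manual page for the exact option
syntax in your build):

# copy level-0/1 cache.log messages to syslog
squid -s ...
# emit level-2-and-more-important messages on stderr, redirected to a file
squid -d2 ... 2>>/var/log/squid/cache.stderr.log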



If the above information is not in Squid wiki, please consider 
submitting a pull request that adds (a polished version of) it:

https://github.com/squid-cache/squid-cache.github.io/pulls


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid - Queue overflow

2024-01-31 Thread Alex Rousskov

On 2024-01-29 07:09, Andre Bolinhas wrote:


I'm getting this error in cache.log

2024/01/29 14:33:03 kid5| ERROR: Collapsed forwarding queue overflow for 
kid1 at 1024 items

     current master transaction: master2163155


This leads to Squid no longer filtering or checking any of the ACL rules, 
allowing users to navigate to all websites without any kind of filtering or control.


FWIW, I am surprised by such a side effect. Are you sure that it is this 
particular ERROR that leads to access controls bypass? Are there any 
other alarming messages in cache.log?




Can you help to understand and correct this issue please.


My recommendation is to upgrade to Squid v6 and address "Your cache is 
running out of filedescriptors" WARNINGs that you have reported in 
another squid-users thread. Once your Squid descriptor resources match 
the traffic Squid receives, this ERROR may disappear. If the ERROR is 
still there after those two changes, it may be easier to triage it in a 
cleaner environment.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread Alex Rousskov

On 2024-01-31 09:23, David Touzeau wrote:


Hi, %note is used by our external_acls and to log other tokens, and we 
also use Group as a token. It can be disabled by directly removing the 
Kerberos source code before compiling, but I would like to know if there 
is another way.


In most cases, one does not have to (and does not really want to) log 
_all_ transaction annotations. It is possible to specify annotations 
that should be logged by using the annotation name as a %note parameter.


For example, to just log annotation named foo, use %note{foo} instead of 
%note.


In many cases, folks that log multiple annotations prepend the 
annotation name so that it is easier (especially for humans) to extract 
the right annotation from the access log record:


... foo=%note{foo} bar=%note{bar} ...
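
Put together, a sketch (the logformat name "withnotes" and the annotation
name "group" are hypothetical examples; the other %codes are standard):

logformat withnotes %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru group=%note{group}
access_log /var/log/squid/access.log withnotes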


HTH,

Alex.



On 31/01/2024 at 14:36, Andrey K wrote:

Hello, David,

> Any way to remove these entries from the log?
I think you should correct the logformat directive in your squid 
configuration to disable annotation logging (%note): 
http://www.squid-cache.org/Doc/config/logformat/


Kind regards,
      Ankor.





On Wed, 31 Jan 2024 at 15:51, David Touzeau wrote:

Any way to remove these entries from the log?

On 31/01/2024 at 10:01, Andrey K wrote:

Hello, David,

The group values in your logs are Base64-encoded binary AD group SIDs.
You can try to decode them by a simple perl script sid-reader.pl
 (see below):

echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA== | base64 -d | perl sid-reader.pl

And finally convert SID to a group name:
wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

Kind regards,
      Ankor


*sid-reader.pl :*
#!/usr/bin/perl
# https://lists.samba.org/archive/linux/2005-September/014301.html
# Reads a binary SID on stdin and prints its textual S-... form.

my $binary_sid;
my @parts;
while (<>) {
  push @parts, $_;
}
$binary_sid = join('', @parts);

# SID layout: revision (1 byte), sub-authority count (1 byte),
# 48-bit big-endian identifier authority (unpacked as 16+32 bits),
# then little-endian 32-bit sub-authorities.
my ($sid_rev, $num_auths, $id1, $id2, @ids) =
    unpack("H2 H2 n N V*", $binary_sid);
my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);
print "$sid_string\n";


On Tue, 30 Jan 2024 at 18:49, David Touzeau wrote:


Hi, when using Kerberos with Squid, the access log contains long
Group tags:

I would like to know how to stop Squid from grabbing groups
during authentication verification and, alternatively, how to
decode the Group value.

example of an access.log

1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443 CONNECT eu-mobile.events.data.microsoft.com:443 leblud HIER_DIRECT/13.69.239.72:443 - mac="00:00:00:00:00:00"


Re: [squid-users] CONNECT Response Headers

2024-01-29 Thread Alex Rousskov

On 2024-01-22 16:28, Alex Coomans wrote:

I'd like to be able to set headers on the response sent to a CONNECT 
request, but the documentation notes reply_header_add does not work for 
that - is there another option or a way to achieve this without needing 
to MITM the TLS?


AFAICT, Squid does not have code that can customize headers of a regular 
200 (Connection established) response to a CONNECT request (with or 
without MitM). This functionality is rarely needed (because most clients 
tend to ignore most CONNECT response headers), but quality pull requests 
adding that functionality are welcomed.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-18 Thread Alex Rousskov

On 2024-01-18 09:53, Robin Carlisle wrote:


My expectation/hope is that squid would return the cached object on
any network failure in between ubuntu-pc and the AWS endpoint - and
continue to return this cached object forever.   Is this something
squid can do? It would seem that offline_mode should do this?


Yes and yes. The errors you are getting are not related to cache 
hits or misses. Those errors happen _before_ Squid gets the requested 
resource URL and looks up that resource in Squid's cache.



ssl_bump peek step1
ssl_bump bump all 


To get that URL (in your configuration), Squid must bump the connection. 
To bump the connection at step2, Squid must contact the origin server. 
When the cable is unplugged, Squid obviously cannot do that: The attempt 
to open a Squid-AWS connection fails.


> .../200 0 CONNECT stuff.amazonaws.com:443 - HIER_DIRECT
> .../503 4087 GET https://stuff.amazonaws.com/api/... - HIER_NONE

Squid reports bumping errors to the client using HTTP responses. To do 
that, Squid remembers the error response, bumps the client connection, 
receives GET from the client on that bumped connection, and sends that 
error response to the client. This is why you see both CONNECT/200 and 
GET/503 access.log records. Note that Squid does not check whether the 
received GET request would have been a cache hit in this case -- the 
response to that request has been preordained by the earlier bumping 
failure.



Solution candidates to consider include:

* Stop bumping: https_port 443 cert=/etc/squid/stuff.pem

Configure Squid as (a reverse HTTPS proxy for) the AWS service. Use 
https_port. No SslBump rules/options! The client would think that it is 
sending HTTPS requests directly to the service. Squid will forward 
client requests to the service. If this works (and I do not have enough 
information to know that this will work in your specific environment), 
then you will get a much simpler setup.
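
An untested squid.conf sketch of this candidate (option names are from
squid.conf.documented; the hostname and peer name are illustrative):

https_port 443 accel cert=/etc/squid/stuff.pem defaultsite=stuff.amazonaws.com
cache_peer stuff.amazonaws.com parent 443 0 no-query originserver tls name=aws
cache_peer_access aws allow all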



* Bump at step1, before Squid contacts AWS: ssl_bump bump all

Bugs notwithstanding, there will be no Squid-AWS connection for cache 
hits. The resulting certificate will not be based on AWS service info, 
but it looks like your client is ignorant enough to ignore related 
certificate problems.



HTH,

Alex.


Hi, Hoping someone can help me with this issue that I have been 
struggling with for days now.   I am setting up squid on an ubuntu PC to 
forward HTTPS requests to an API and an s3 bucket under my control on 
amazon AWS.  The reason I am setting up the proxy is two-fold...


1) To reduce costs from AWS.
2) To provide content to the client on the ubuntu PC if there is a 
networking issue somewhere in between the ubuntu PC and AWS.


Item 1 is going well so far.   Item 2 is not going well.   Setup details ...

# squid - setup cache folder
mkdir -p /var/cache/squid
chown -R proxy:proxy  /var/cache/squid

# ssl - generate key
apt --yes install squid-openssl libnss3-tools
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
   -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com 
" \

   -keyout /etc/squid/stuff.pem -out /etc/squid/stuff.pem
chown root:proxy /etc/squid/stuff.pem
chmod 644  /etc/squid/stuff.pem

# ssl - ssl DB
mkdir -p /var/lib/squid
rm -rf /var/lib/squid/ssl_db
/usr/lib/squid/security_file_certgen -c -s /var/lib/squid/ssl_db -M 4MB
chown -R proxy:proxy /var/lib/squid/ssl_db

# /etc/squid/squid.conf :
acl to_aws dstdomain .amazonaws.com 
acl from_local src localhost
http_access allow to_aws
http_access allow from_local
cache allow all
cache_dir ufs /var/cache/squid 1024 16 256
offline_mode on
http_port 3129 ssl-bump cert=/etc/squid/stuff.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s 
/var/lib/squid/ssl_db -M 4MB

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
sslproxy_cert_error deny all
cache_store_log stdio:/var/log/squid/store.log
logfile_rotate 0

# /usr/bin/proxy-test :
#!/bin/bash
curl --proxy http://localhost:3129  \
   --cacert /etc/squid/stuff.pem \
   -v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json 
" \

   -H "Authorization: token MYTOKEN" \
   -H "Content-Type: application/json" \
   --output "/tmp/stuff.json"



When network connectivity is GOOD, everything works well and I get cache 
HITS ...


# /var/log/squid/access.log
1705587538.837    238 127.0.0.1 NONE_NONE/200 0 CONNECT stuff.amazonaws.com:443 - HIER_DIRECT/3.136.246.238 -
1705587538.838      0 127.0.0.1 TCP_MEM_HIT/200 32818 GET https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_NONE/- application/json


# extract from /usr/bin/proxy-test output
< HTTP/1.1 200 OK
< Date: Thu, 18 Jan 2024 13:38:01 GMT
< Content-Type: 

Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-01-16 Thread Alex Rousskov

On 2024-01-16 10:44, Stephen Borrill wrote:

On 16/01/2024 14:43, Stephen Borrill wrote:
I have created a local DNS entry for 
forcesafesearch.google.com that only returns the A record. I think 
that should work around it (for that site, but not others).


Huh, it appears not to work around it properly. See error of "no DNS 
records" when it has literally just found the address in the cache.


These level-7 debugging records are meant for developers. The snippet 
below is not as self-contradictory as it may appear to a casual 
observer. It implies that the transaction hit a cached _set_ of DNS 
lookups. That set was previously formed from a usable DNS A response 
record (216.239.38.120) and an empty DNS AAAA response ("No DNS records").


Alex.


2024/01/16 15:40:06.409 kid1| 14,4| ipcache.cc(617) nbgethostbyname: 
forcesafesearch.google.com
2024/01/16 15:40:06.409 kid1| 14,4| ipcache.cc(657) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: HIT for 
'forcesafesearch.google.com'
2024/01/16 15:40:06.409 kid1| 14,7| ipcache.cc(253) forwardIp: 
216.239.38.120
2024/01/16 15:40:06.409 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector260781 found conn1888968 local=0.0.0.0 
remote=216.239.38.120:443 HIER_DIRECT flags=1, destination #1 for 
forcesafesearch.google.com:443
2024/01/16 15:40:06.409 kid1| 44,2| peer_select.cc(1180) handlePath: 
always_direct = ALLOWED
2024/01/16 15:40:06.409 kid1| 44,2| peer_select.cc(1181) handlePath: 
never_direct = DENIED
2024/01/16 15:40:06.409 kid1| 44,2| peer_select.cc(1182) handlePath:
timedout = 0
2024/01/16 15:40:06.409 kid1| 14,7| ipcache.cc(236) finalCallback: 
0x189fb5e38  lookup_err=No DNS records



On 10/01/2024 12:40, Stephen Borrill wrote:

On 09/01/2024 15:42, Alex Rousskov wrote:

On 2024-01-09 05:56, Stephen Borrill wrote:

On 09/01/2024 09:51, Stephen Borrill wrote:

On 09/01/2024 03:41, Alex Rousskov wrote:

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why squid 6.x (seen with 6.5) 
connected via IPv4-only periodically fails to connect to the 
destination and then requires a restart to fix it (reload is 
not sufficient).


The problem appears to be that a host that has one address 
each of IPv4 and IPv6 occasionally has its IPv4 address go 
missing as a destination. On closer inspection, this appears 
to happen when the IPv6 address (not the IPv4) address is 
marked as bad.


ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in 
216.239.38.120 #1/2-0



Thank you for sharing more debugging info!


The following seemed odd too. It finds an IPv4 address (this host 
does not have IPv6), puts it in the cache, and then says "No DNS 
records":


2024/01/09 12:31:24.020 kid1| 14,4| ipcache.cc(617) 
nbgethostbyname: schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(313) ipcacheRelease: 
ipcacheRelease: Releasing entry for 'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(670) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: MISS for 
'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(480) ipcacheParse: 1 
answers for schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(549) updateTtl: use 
first 69 from RR TTL 69
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(535) addGood: 
schoolbase.online #1 20.54.32.34
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(253) forwardIp: 
20.54.32.34
2024/01/09 12:31:24.020 kid1| 44,2| peer_select.cc(1174) 
handlePath: PeerSelector72389 found conn564274 local=0.0.0.0 
remote=20.54.32.34:443 HIER_DIRECT flags=1, destination #1 for 
schoolbase.online:443
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(459) latestError: 
ERROR: DNS failure while resolving schoolbase.online: No DNS records
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(586) 
ipcacheHandleReply: done with schoolbase.online: 20.54.32.34 #1/1-0
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(236) finalCallback: 
0x1b7381f38  lookup_err=No DNS records


It seemed to happen about the same time as the other failure, so 
perhaps another symptom of the same.


The above log line is self-contradictory AFAICT: It says that the 
cache has both an IPv6-looking and an IPv4-looking address at the same 
cache position (0) and, judging by the corresponding code, those 
two IP addresses are equal. This is not possible (for those 
specific IP address values). The subsequent Squid behavior can be 
explained by this (unexplained) conflict.


I assume you are running official Squid v6.5 code.


Yes, compiled from source on NetBSD. I have the patch I refer to 
here applied too:

https://lists.squid-cache.org/pipermail/squid-users/2023-November/026279.html


I can suggest the following two steps for going forward:

1. Upgrade to the latest Squid v6 in hope that the problem goes away.


I

Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-01-16 Thread Alex Rousskov

On 2024-01-16 06:01, Stephen Borrill wrote:
The problem is no different with 6.6. Is there any more debugging I can 
provide, Alex?


Yes, but I need to give you a patch that adds that (temporary) debugging 
first (assuming I fail to reproduce the problem in the lab). The ball is 
on my side (unless somebody else steps in). Unfortunately, I do not have 
any free time for any of that right now. If you do not hear from me 
sooner, please ping me again on or after February 8, 2024.



Thank you,

Alex.


On 10/01/2024 12:40, Stephen Borrill wrote:

On 09/01/2024 15:42, Alex Rousskov wrote:

On 2024-01-09 05:56, Stephen Borrill wrote:

On 09/01/2024 09:51, Stephen Borrill wrote:

On 09/01/2024 03:41, Alex Rousskov wrote:

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why squid 6.x (seen with 6.5) connected 
via IPv4-only periodically fails to connect to the destination 
and then requires a restart to fix it (reload is not sufficient).


The problem appears to be that a host that has one address each 
of IPv4 and IPv6 occasionally has its IPv4 address go missing as 
a destination. On closer inspection, this appears to happen when 
the IPv6 address (not the IPv4) address is marked as bad.


ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in 
216.239.38.120 #1/2-0



Thank you for sharing more debugging info!


The following seemed odd too. It finds an IPv4 address (this host does 
not have IPv6), puts it in the cache, and then says "No DNS records":


2024/01/09 12:31:24.020 kid1| 14,4| ipcache.cc(617) nbgethostbyname: 
schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(313) ipcacheRelease: 
ipcacheRelease: Releasing entry for 'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(670) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: MISS for 
'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(480) ipcacheParse: 1 
answers for schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(549) updateTtl: use 
first 69 from RR TTL 69
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(535) addGood: 
schoolbase.online #1 20.54.32.34
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(253) forwardIp: 
20.54.32.34
2024/01/09 12:31:24.020 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector72389 found conn564274 local=0.0.0.0 
remote=20.54.32.34:443 HIER_DIRECT flags=1, destination #1 for 
schoolbase.online:443
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(459) latestError: 
ERROR: DNS failure while resolving schoolbase.online: No DNS records
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(586) 
ipcacheHandleReply: done with schoolbase.online: 20.54.32.34 #1/1-0
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(236) finalCallback: 
0x1b7381f38  lookup_err=No DNS records


It seemed to happen about the same time as the other failure, so 
perhaps another symptom of the same.


The above log line is self-contradictory AFAICT: It says that the 
cache has both an IPv6-looking and an IPv4-looking address at the same 
cache position (0) and, judging by the corresponding code, those two 
IP addresses are equal. This is not possible (for those specific IP 
address values). The subsequent Squid behavior can be explained by 
this (unexplained) conflict.


I assume you are running official Squid v6.5 code.


Yes, compiled from source on NetBSD. I have the patch I refer to here 
applied too:

https://lists.squid-cache.org/pipermail/squid-users/2023-November/026279.html


I can suggest the following two steps for going forward:

1. Upgrade to the latest Squid v6 in hope that the problem goes away.


I have just upgraded to 6.6.

2. If the problem is still there, patch the latest Squid v6 to add 
more debugging in hope to explain what is going on. This may take a 
few iterations, and it will take me some time to produce the 
necessary debugging patch.


Unfortunately, I don't have a test case that will cause the problem so 
I need to run this at a customer's production site that is 
particularly affected by it. Luckily, the problem recurs pretty quickly.


Here's a run with 6.6 where the number of destinations drops from 2 to 
1 before reverting. Not seen this before - usually once it has dropped 
to 1 (the IPv6 address), it stays there until a restart (and this did 
happen about a minute after this log fragment). Happy to test out any 
debugging patch.


2024/01/10 11:55:49.849 kid1| 14,4| ipcache.cc(617) nbgethostbyname: 
forcesafesearch.google.com
2024/01/10 11:55:49.849 kid1| 14,3| Address.cc(389) lookupHostIP: 
Given Non-IP 'forcesafesearch.google.com': hostname or servname not 
provided or not known
2024/01/10 11:55:49.849 kid1| 14,4| ipcache.cc(657) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: HIT for 
'forcesafesearch.google.com'
2024/01/10 11:55:49.849 

Re: [squid-users] chunked transfer over sslbump

2024-01-12 Thread Alex Rousskov

On 2024-01-12 09:21, Arun Kumar wrote:

On Wednesday, January 10, 2024 at 11:09:48 AM EST, Alex Rousskov wrote:


On 2024-01-10 09:21, Arun Kumar wrote:
 >> i) Retry seems to fetch one chunk of the response and not the complete response.
 >> ii) Enabling sslbump and turning ICAP off, not helping.
 >> iii)  gcc version is 7.3.1 (Red Hat 7.3.1-17)

 >GCC v7 has insufficient C++17 support. I recommend installing GCC v9 or
better and then trying with Squid v6.6 or newer.

Arun: Compiled Squid 6.6 with gcc 11.4 and still seeing the same issue.


Glad you were able to upgrade to Squid v6.6!



 > FWIW, if the problem persists in Squid v6, sharing debugging logs would
be the next recommended step.

Arun: debug_options ALL,6 gives too much log. Any particular option 
we can use to debug this issue?



Please share[^1] a pointer to compressed ALL,9 cache.log collected while 
reproducing the problem with Squid v6.6:


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

Debugging logs are for developers. Developers can deal with large 
volumes of debugging information. You can use services like DropBox to 
share large compressed logs. That said, the better you can isolate the 
problem/traffic, the higher the chances that a developer will (have 
the time to) find the answer to your question in the noisy log.
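
One way to collect such a log without editing squid.conf is the debug
signal (a sketch; if memory serves, "squid -k debug" toggles full
debugging on a running instance):

squid -k debug    # switch full debugging on
# ...reproduce the problem with a single transaction...
squid -k debug    # toggle it back off
xz -9 /var/log/squid/cache.log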


[^1]: Please feel free to share privately if needed, especially if you 
are using sensitive configuration or transactions.


Alex.



 > Also want to point out that, squid connects to another non-squid proxy
 > to reach internet.
 > cache_peer  parent  0 no-query default
 >
 > On Tuesday, January 9, 2024 at 02:18:14 PM EST, Alex Rousskov wrote:
 >
 >
 > On 2024-01-09 11:51, Zhang, Jinshu wrote:
 >
 >  > Client got below response headers and body. Masked few details.
 >
 > Thank you.
 >
 >
 >  > Retry seems to fetch data remaining.
 >
 > I would expect a successful retry to fetch the entire response, not just
 > the remaining bytes, but perhaps that is what you meant. Thank you for
 > sharing this info.
 >
 >
 >  > Want to point out that removing sslbump everything is working fine,
 >  > but we wanted to keep it for ICAP scanning.
 >
 > What if you keep SslBump enabled but disable any ICAP analysis
 > ("icap_enable off")? This test may tell us if the problem is between
 > Squid and the origin server or Squid and the ICAP service...
 >
 >
 >  > We tried compiling 6.x in Amazon linux, using latest gcc, but facing
 > similar error -
 > 
https://lists.squid-cache.org/pipermail/squid-users/2023-July/026016.html ([squid-users] compile error in squid v6.1)

 >
 > What is the "latest gcc" version in your environment? I suspect it is
 > not the latest GCC version available to folks running Amazon Linux, but
 > you may need to install some packages to get a more recent GCC version.
 > Unfortunately, I cannot give specific instructions for Amazon Linux
 > right now.
 >
 >
 > HTH,
 >
 > Alex.
 >
 >
 >  > HTTP/1.1 200 OK
 >  > Date: Tue, 09 Jan 2024 15:41:33 GMT
 >  > Server: Apache/mod_perl/2.0.10 Perl
 >  > Content-Type: application/download
 >  > X-Cache: MISS from ip-x-y-z
 >  > Transfer-Encoding: chunked
 >  > Via: xxx (ICAP)
 >  > Connection: keep-alive
 >  >
 >  > 1000
 >  > File-Id: xyz.zip
 >  > Local-Path: x/y/z.txt
 >  > Content-Size: 2967
 >  > < binary content >
 >  >
 >  >
 >  > Access log(1st attempt):
 >  > 1704814893.695    138 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 -
 > FIRSTUP_PARENT/10.x.y.z -
 >  > 1704814900.491  6779 172.17.0.2 TCP_MISS/200 138996535 POST
 > https://a.b.com/xyz - FIRSTUP_PARENT/10.x.y.z

 > application/download
 >  >
 >  > Retry after 5 mins:
 >  > 1704815201.530    189 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 -
 > FIRSTUP_PARENT/10.x.y.z -
 >  > 1704815208.438  6896 x.y.0.2 TCP_MISS/200 138967930 POST
 > https://a.b.com/xyz - FIRSTUP_PARENT/10.x.y.z

 > application/download
 >  >
 >  > Jinshu Zhang
 >  >
 >  >
 >  > Fannie Mae Confidential
 >  > -Original Message-
 >  > From: squid-users <mailto:squid-users-boun...@lists.squid-cache.org>

 > <mailto:squid-users-boun...@lists.squid-cache.org>> On Behalf Of Alex
 > Rousskov
 >  > Sent: Tuesday, January 9, 2024 9:53 AM
 >  > To: squ

Re: [squid-users] Is a workaround for SQUID-2023:9 to disable TRACE requests?

2024-01-10 Thread Alex Rousskov

On 2024-01-10 16:48, Dave Dykstra wrote:

https://github.com/squid-cache/squid/security/advisories/GHSA-rj5h-46j6-q2g5.  



... is another workaround to disable TRACE requests ...?


AFAICT, denying TRACE requests will not allow TRACE transactions to 
reach the problematic code related to that Advisory (under the typical 
conditions you probably care about). However, please note that the same 
or similar bugs can probably be triggered using other requests, under 
other conditions.


In other words, if you just want protection against a script kiddie 
blindly following "Use-After-Free in TRACE Requests" instructions on how 
to kill Squid, then denying TRACE requests should be sufficient. If you 
want protection from somebody who understands the underlying problem and 
spends the time on finding other ways to exploit it, then denying TRACE 
requests (or even disabling collapsed forwarding) may not be enough IMO.
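
For reference, denying TRACE requests can be done with a method ACL (a
minimal sketch; place the deny rule before your http_access allow rules):

acl traceMethod method TRACE
http_access deny traceMethod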



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] chunked transfer over sslbump

2024-01-10 Thread Alex Rousskov

On 2024-01-10 09:21, Arun Kumar wrote:

i) Retry seems to fetch one chunk of the response and not the complete response.
ii) Enabling sslbump and turning ICAP off, not helping.
iii)  gcc version is 7.3.1 (Red Hat 7.3.1-17)


GCC v7 has insufficient C++17 support. I recommend installing GCC v9 or 
better and then trying with Squid v6.6 or newer.
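
For example (a sketch; compiler package and binary names vary by distro,
so the gcc-10/g++-10 names below are placeholders for whatever your
distro provides):

g++ --version
./configure CC=gcc-10 CXX=g++-10 ...your usual options...
make -j$(nproc)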


FWIW, if the problem persists in Squid v6, sharing debugging logs would 
be the next recommended step.



HTH,

Alex.


Also want to point out that, squid connects to another non-squid proxy 
to reach internet.

/cache_peer  parent  0 no-query default/

On Tuesday, January 9, 2024 at 02:18:14 PM EST, Alex Rousskov wrote:


On 2024-01-09 11:51, Zhang, Jinshu wrote:

 > Client got below response headers and body. Masked few details.

Thank you.


 > Retry seems to fetch data remaining.

I would expect a successful retry to fetch the entire response, not just
the remaining bytes, but perhaps that is what you meant. Thank you for
sharing this info.


 > Want to point out that removing sslbump everything is working fine,
 > but we wanted to keep it for ICAP scanning.

What if you keep SslBump enabled but disable any ICAP analysis
("icap_enable off")? This test may tell us if the problem is between
Squid and the origin server or Squid and the ICAP service...


 > We tried compiling 6.x in Amazon linux, using latest gcc, but facing 
similar error - 
https://lists.squid-cache.org/pipermail/squid-users/2023-July/026016.html


What is the "latest gcc" version in your environment? I suspect it is
not the latest GCC version available to folks running Amazon Linux, but
you may need to install some packages to get a more recent GCC version.
Unfortunately, I cannot give specific instructions for Amazon Linux
right now.


HTH,

Alex.


 > HTTP/1.1 200 OK
 > Date: Tue, 09 Jan 2024 15:41:33 GMT
 > Server: Apache/mod_perl/2.0.10 Perl
 > Content-Type: application/download
 > X-Cache: MISS from ip-x-y-z
 > Transfer-Encoding: chunked
 > Via: xxx (ICAP)
 > Connection: keep-alive
 >
 > 1000
 > File-Id: xyz.zip
 > Local-Path: x/y/z.txt
 > Content-Size: 2967
 > < binary content >
 >
 >
 > Access log(1st attempt):
 > 1704814893.695    138 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 - 
FIRSTUP_PARENT/10.x.y.z -
 > 1704814900.491  6779 172.17.0.2 TCP_MISS/200 138996535 POST 
https://a.b.com/xyz <https://a.b.com/xyz> - FIRSTUP_PARENT/10.x.y.z 
application/download

 >
 > Retry after 5 mins:
 > 1704815201.530    189 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 - 
FIRSTUP_PARENT/10.x.y.z -
 > 1704815208.438  6896 x.y.0.2 TCP_MISS/200 138967930 POST 
https://a.b.com/xyz <https://a.b.com/xyz> - FIRSTUP_PARENT/10.x.y.z 
application/download

 >
 > Jinshu Zhang
 >
 >
 > Fannie Mae Confidential
 > -Original Message-
 > From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Alex Rousskov

 > Sent: Tuesday, January 9, 2024 9:53 AM
 > To: squid-users@lists.squid-cache.org

 > Subject: [EXTERNAL] Re: [squid-users] chunked transfer over sslbump
 >
 >
 > On 2024-01-09 09:13, Arun Kumar wrote:
 >
 >> I have compiled/installed squid v5.8 in Amazon Linux and configured it
 >> with sslbump option. Squid is used as proxy to get response from https
 >> site. When the https site sends chunked response, it appears that the
 >> first response comes but it get stuck and doesn't receive the full
 >> response. Appreciate any help.
 >    There were some recent chunking-related changes in Squid, but none 
of them is likely to be responsible for the problems you are describing 
unless the origin server response is very special/unusual.

 >
 > Does the client in this test get the HTTP response header? Some HTTP 
response body bytes?

 >
 > To triage the problem, I recommend sharing the corresponding 
access.log records (at least). Seeing debugging of the problematic 
transaction may be very useful (but avoid using production security keys 
and other sensitive information in such tests):
 > 
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

 >
 > Please note that Squid v5 is not officially supported and has more 
known security vulnerabilities than Squid v6. You should be using Squid v6.

 >
 >
 > HTH,
 >
 > Alex.
 >
 > ___
 > squid-users mailing list
 > squid-users@lists.squid-cache.org
 > https://lists.squid-cache.org/listinfo/squid-users

Re: [squid-users] ICAP too many errors and suspensions

2024-01-10 Thread Alex Rousskov

On 2024-01-09 19:32, John Zhu wrote:


We have the same “suspension” issue when there are “too many failures”.


To clarify, you have a "failure" issue. Suspension after 
icap_service_failure_limit is normal/expected.




https://www.mail-archive.com/squid-users@lists.squid-cache.org/msg22187.html


FWIW, AFAICT, the original problem was attributed to ClamAV service 
timing out Squid ICAP connection attempts while reloading its virus 
definitions:


https://lists.squid-cache.org/pipermail/squid-users/2021-February/023293.html


14:24:24 kid1| suspending ICAP service for too many failures 
14:24:24 kid1| essential ICAP service is suspended:



We tried enabling debug_options ALL,1 93,7

But have not reproduced suspensions and did not find the root cause.


Please note that it is probably enough to reproduce a single failure; 
reproducing suspensions is better, but can be more difficult, and is 
probably less important. If you cannot see individual failures until the 
service is suspended, then use "icap_service_failure_limit 1" during 
your test, so that the service is suspended after the _first_ failure.
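
For example, a test configuration along these lines (values here are for
the experiment only, not a production recommendation) makes the very first
failure visible:

  icap_service_failure_limit 1
  debug_options ALL,1 93,7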


If "ALL,1 93,7" debugging prevents you from reproducing the problem, try 
debug_options set to "ALL,1 93,5", then "ALL,1 93,4", etc.




Checking the source code, can we simply comment out the lines:

scheduleUpdate(squid_curtime + TheConfig.service_revival_delay);
announceStatusChange("suspended", true);


I have to decline this opportunity to discuss Squid source code 
modifications on the squid-users mailing list. If you want to disable 
service suspensions without understanding why ICAP transactions fail, 
then use a very large icap_service_failure_limit in squid.conf.
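
A sketch of that workaround (the exact value is arbitrary; it only needs to
be far larger than any realistic failure count):

  icap_service_failure_limit 1000000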



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] chunked transfer over sslbump

2024-01-09 Thread Alex Rousskov

On 2024-01-09 11:51, Zhang, Jinshu wrote:


Client got below response headers and body. Masked few details.


Thank you.



Retry seems to fetch data remaining.


I would expect a successful retry to fetch the entire response, not just 
the remaining bytes, but perhaps that is what you meant. Thank you for 
sharing this info.




Want to point out that with sslbump removed everything works fine,
but we wanted to keep it for ICAP scanning.


What if you keep SslBump enabled but disable any ICAP analysis 
("icap_enable off")? This test may tell us if the problem is between 
Squid and the origin server or Squid and the ICAP service...




We tried compiling 6.x in Amazon Linux, using the latest gcc, but are facing a similar 
error - 
https://lists.squid-cache.org/pipermail/squid-users/2023-July/026016.html


What is the "latest gcc" version in your environment? I suspect it is 
not the latest GCC version available to folks running Amazon Linux, but 
you may need to install some packages to get a more recent GCC version. 
Unfortunately, I cannot give specific instructions for Amazon Linux 
right now.



HTH,

Alex.



HTTP/1.1 200 OK
Date: Tue, 09 Jan 2024 15:41:33 GMT
Server: Apache/mod_perl/2.0.10 Perl
Content-Type: application/download
X-Cache: MISS from ip-x-y-z
Transfer-Encoding: chunked
Via: xxx (ICAP)
Connection: keep-alive

1000
File-Id: xyz.zip
Local-Path: x/y/z.txt
Content-Size: 2967
< binary content >


Access log(1st attempt):
1704814893.695138 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 - 
FIRSTUP_PARENT/10.x.y.z -
1704814900.491   6779 172.17.0.2 TCP_MISS/200 138996535 POST 
https://a.b.com/xyz - FIRSTUP_PARENT/10.x.y.z application/download

Retry after 5 mins:
1704815201.530189 x.y.0.2 NONE_NONE/200 0 CONNECT a.b.com:443 - 
FIRSTUP_PARENT/10.x.y.z -
1704815208.438   6896 x.y.0.2 TCP_MISS/200 138967930 POST https://a.b.com/xyz - 
FIRSTUP_PARENT/10.x.y.z application/download

Jinshu Zhang


Fannie Mae Confidential
-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Tuesday, January 9, 2024 9:53 AM
To: squid-users@lists.squid-cache.org
Subject: [EXTERNAL] Re: [squid-users] chunked transfer over sslbump


On 2024-01-09 09:13, Arun Kumar wrote:


I have compiled/installed squid v5.8 in Amazon Linux and configured it
with sslbump option. Squid is used as proxy to get response from https
site. When the https site sends chunked response, it appears that the
first response comes but it get stuck and doesn't receive the full
response. Appreciate any help.

   There were some recent chunking-related changes in Squid, but none of them 
is likely to be responsible for the problems you are describing unless the 
origin server response is very special/unusual.

Does the client in this test get the HTTP response header? Some HTTP response 
body bytes?

To triage the problem, I recommend sharing the corresponding access.log records 
(at least). Seeing debugging of the problematic transaction may be very useful 
(but avoid using production security keys and other sensitive information in 
such tests):
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

Please note that Squid v5 is not officially supported and has more known 
security vulnerabilities than Squid v6. You should be using Squid v6.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-01-09 Thread Alex Rousskov

On 2024-01-09 05:56, Stephen Borrill wrote:

On 09/01/2024 09:51, Stephen Borrill wrote:

On 09/01/2024 03:41, Alex Rousskov wrote:

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why squid 6.x (seen with 6.5) connected via 
IPv4-only periodically fails to connect to the destination and then 
requires a restart to fix it (reload is not sufficient).


The problem appears to be that a host that has one address each of 
IPv4 and IPv6 occasionally has its IPv4 address go missing as a 
destination. On closer inspection, this appears to happen when the 
IPv6 address (not the IPv4) address is marked as bad.



ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in 216.239.38.120 #1/2-0



Thank you for sharing more debugging info!

The above log line is self-contradictory AFAICT: It says that the cache 
has both IPv6-looking and IPv4-looking address at the same cache 
position (0) and, judging by the corresponding code, those two IP 
addresses are equal. This is not possible (for those specific IP address 
values). The subsequent Squid behavior can be explained by this 
(unexplained) conflict.


I assume you are running official Squid v6.5 code.

I can suggest the following two steps for going forward:

1. Upgrade to the latest Squid v6 in hope that the problem goes away.

2. If the problem is still there, patch the latest Squid v6 to add more 
debugging in hope to explain what is going on. This may take a few 
iterations, and it will take me some time to produce the necessary 
debugging patch.



HTH,

Alex.


Note that there have been many connections to 
clientservices.googleapis.com prior to this where markAsBad was not 
called, even though IPv6 connectivity was never available.


No markAsBad() is probably normal if Squid did not try to establish 
an IPv6 connection or did not wait long enough to know the result of 
that attempt. However, that does not explain why Squid selected an 
IPv6 address as the next "good" address right after marking that IPv6 
address as bad (at "restoreGoodness" line) when there was another 
good IP address available. It is as if Squid stored two identical 
IPv6 addresses (and not IPv4 ones), but that should not happen either.


This is tangentially related to this thread too:
https://lists.squid-cache.org/pipermail/squid-users/2023-November/026266.html

Once only the IPv6 address is being used, then it returns 503 for that 
host and thus can quickly get marked as dead by a downstream squid 
meaning it does not get used at all (and if it's the only peer all 
access stops).




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] chunked transfer over sslbump

2024-01-09 Thread Alex Rousskov

On 2024-01-09 09:13, Arun Kumar wrote:

I have compiled/installed squid v5.8 in Amazon Linux and configured it 
with sslbump option. Squid is used as proxy to get response from https 
site. When the https site sends chunked response, it appears that the 
first response comes but it get stuck and doesn't receive the full 
response. Appreciate any help.
 There were some recent chunking-related changes in Squid, but none of 
them is likely to be responsible for the problems you are describing 
unless the origin server response is very special/unusual.


Does the client in this test get the HTTP response header? Some HTTP 
response body bytes?


To triage the problem, I recommend sharing the corresponding access.log 
records (at least). Seeing debugging of the problematic transaction may 
be very useful (but avoid using production security keys and other 
sensitive information in such tests):

https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

Please note that Squid v5 is not officially supported and has more known 
security vulnerabilities than Squid v6. You should be using Squid v6.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-01-08 Thread Alex Rousskov

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why squid 6.x (seen with 6.5) connected via 
IPv4-only periodically fails to connect to the destination and then 
requires a restart to fix it (reload is not sufficient).


The problem appears to be that a host that has one address each of IPv4 
and IPv6 occasionally has its IPv4 address go missing as a destination. 
On closer inspection, this appears to happen when the IPv6 address (not 
the IPv4) address is marked as bad. A log fragment is as follows:


2024/01/08 13:18:39.974 kid1| 44,2| peer_select.cc(460) resolveSelected: 
Find IP destination for: clientservices.googleapis.com:443' via 
clientservices.googleapis.com
2024/01/08 13:18:39.974 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector82284 found conn696198 local=0.0.0.0 
remote=142.250.187.227:443 HIER_DIRECT flags=1, destination #1 for 
clientservices.googleapis.com:443
2024/01/08 13:18:39.974 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector82284 found conn696199 local=[::] 
remote=[2a00:1450:4009:820::2003]:443 HIER_DIRECT flags=1, destination 
#2 for clientservices.googleapis.com:443
2024/01/08 13:18:39.974 kid1| 44,2| peer_select.cc(479) resolveSelected: 
PeerSelector82284 found all 2 destinations for 
clientservices.googleapis.com:443
2024/01/08 13:18:40.245 kid1| 14,2| ipcache.cc(1031) markAsBad: 
[2a00:1450:4009:820::2003]:443 of clientservices.googleapis.com
2024/01/08 13:18:40.245 kid1| 14,3| ipcache.cc(946) seekNewGood: 
succeeded for clientservices.googleapis.com: [2a00:1450:4009:820::2003] 
#2/2-1
2024/01/08 13:18:40.245 kid1| 14,3| ipcache.cc(978) restoreGoodness: 
cleared all IPs for clientservices.googleapis.com; now back to 
[2a00:1450:4009:820::2003] #2/2-1
2024/01/08 13:18:42.065 kid1| 14,3| Address.cc(389) lookupHostIP: Given 
Non-IP 'clientservices.googleapis.com': hostname or servname not 
provided or not known
2024/01/08 13:18:42.065 kid1| 44,2| peer_select.cc(460) resolveSelected: 
Find IP destination for: clientservices.googleapis.com:443' via 
clientservices.googleapis.com
2024/01/08 13:18:42.065 kid1| 14,3| Address.cc(389) lookupHostIP: Given 
Non-IP 'clientservices.googleapis.com': hostname or servname not 
provided or not known
2024/01/08 13:18:42.065 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector82372 found conn697148 local=[::] 
remote=[2a00:1450:4009:820::2003]:443 HIER_DIRECT flags=1, destination 
#1 for clientservices.googleapis.com:443
2024/01/08 13:18:42.065 kid1| 44,2| peer_select.cc(479) resolveSelected: 
PeerSelector82372 found all 1 destinations for 
clientservices.googleapis.com:443



This shows two subsequent connection attempts to 
clientservices.googleapis.com. The first one has both IPv4 and IPv6 
destinations. The IPv6 address is passed to markAsBad. 


Yes.



After that the IPv4 address is not listed as a destination.


I do not see that. I see IPv6 address being selected as the first 
destination (instead of the IPv4 address).


I cannot explain why that happens though. Moreover, a combination of 
certain lines in your debug output near "seekNewGood" do not make sense 
to me -- I do not see how it is possible for Squid to display those 
exact debugging details, but I am probably missing something. Can you 
retest and repost similar lines with 14,9 (or at least 14,7) added to 
your debug_options (or share those lines privately; the more lines you 
can share, the better)?
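
For example, assuming the ALL,2 baseline visible in your log excerpts, the
retest could use:

  debug_options ALL,2 14,9 44,2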



Note that there have been many connections to 
clientservices.googleapis.com prior to this where markAsBad was not 
called, even though IPv6 connectivity was never available.


No markAsBad() is probably normal if Squid did not try to establish an 
IPv6 connection or did not wait long enough to know the result of that 
attempt. However, that does not explain why Squid selected an IPv6 
address as the next "good" address right after marking that IPv6 address 
as bad (at "restoreGoodness" line) when there was another good IP 
address available. It is as if Squid stored two identical IPv6 addresses 
(and not IPv4 ones), but that should not happen either.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid hangs and dies and can not be killed - needs system reboot

2023-12-19 Thread Alex Rousskov

On 2023-12-18 22:29, Amish wrote:

On 19/12/23 01:14, Alex Rousskov wrote:

On 2023-12-18 09:35, Amish wrote:


I use Arch Linux and today I updated squid from squid 5.7 to squid 6.6.


> Dec 18 13:01:24 mumbai squid[604]: kick abandoning conn199

I do not know whether the above problem is the primary problem in your 
setup, but it is a red flag. Transactions on the same connection may 
get stuck after that message; it is essentially a Squid bug.


I am not sure at all, but this bug might be related to Bug 5187 
workaround that went into Squid v6.2 (commit c44cfe7): 
https://bugs.squid-cache.org/show_bug.cgi?id=5187


Does Squid accept new TCP connections after it enters what you 
describe as a dead state? For example, does "telnet 127.0.0.1 8080" 
establish a connection if executed on the same machine as Squid?


Yes, it establishes a connection. But I do not know what to do next. 


This tells us that your Squid is still listening for incoming 
connections. Most likely, it is not "dead" but running and just unable 
to make progress with those connections (for yet unknown reasons). That 
information is helpful but not sufficient (for me) to solve the problem 
you are describing.


The next step that I would recommend is to collect debugging information 
from the running process and share a pointer to the corresponding 
compressed cache.log file:


* Ideally, start collection when Squid starts and reproduce the problem 
while collecting full debugging information:

http://wiki.squid-cache.org/SquidFaq/BugReporting#full-debug-output

* If you have to, start collection after Squid is already in bad state 
and just before you use telnet or browser to tickle Squid:

http://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
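
A minimal sketch of both approaches (the squid.conf line must be in place
before startup; the -k form assumes squid can locate its PID file):

  # full debugging from startup, via squid.conf:
  debug_options ALL,9

  # or toggle full debugging on an already-running Squid;
  # running the same command again toggles it back off:
  squid -k debug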

Do not use any secret information (e.g., production certificate keys) 
for these tests (unless you are going to share the logs privately with 
those you trust).


Do not downgrade to v5 for these tests.


HTH,

Alex.


Browser showed "Connection timed out" message. But I believe browser's 
also connected but nothing happened afterwards.




> kill -9 does nothing

Is it possible that you are trying to kill the wrong process? You 
should be killing this process AFAICT:


> root 601  0.0  0.2  73816 22528 ?    Ss   12:59 0:02
> /usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC


I did not clarify but all processes needed SIGKILL and vanished except 
the Dead squid process which still remained.


# systemctl stop squid

Dec 19 08:46:38 mumbai systemd[1]: squid.service: State 'stop-sigterm' 
timed out. Killing.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 601 
(squid) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 604 
(squid) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 607 
(security_file_c) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 608 
(security_file_c) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 609 
(security_file_c) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 610 
(security_file_c) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 611 
(security_file_c) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 622 
(log_file_daemon) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Main process exited, 
code=killed, status=9/KILL
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 604 
(squid) with signal SIGKILL.


I waited 2 minutes for squid to stop, then pressed ctrl-c to abort the 
"systemctl stop squid" command.


As you can see, the last line shows that an attempt was made to kill the 
dead process with PID 604.


# ps aux |grep squid
proxy    604  0.0  0.0  0 0 ?    D    Dec18   0:03 [squid]

Now only DEAD squid process remains.

What next? Should I downgrade to 5.9 and check?

Regards

Amish


Alex.


After the update from 5.7 to 6.6, squid starts but then reaches Dead 
state in a minute or two.


# ps aux | grep squid
root 601  0.0  0.2  73816 22528 ?    Ss   12:59 0:02 
/usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC

proxy    604  0.0  0.0  0 0 ?    D    12:59 0:03 [squid]
proxy    607  0.0  0.0  11976  7424 ?    S    12:59 0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    608  0.0  0.0  11976  7168 ?    S    12:59 0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    609  0.0  0.0  11712  5632 ?    S    12:59 0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    610  0.0  0.0  11712  5376 ?    S    12:59 0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    611  0.0  0.0  11712  5504 ? 

Re: [squid-users] squid hangs and dies and can not be killed - needs system reboot

2023-12-18 Thread Alex Rousskov

On 2023-12-18 09:35, Amish wrote:


I use Arch Linux and today I updated squid from squid 5.7 to squid 6.6.


> Dec 18 13:01:24 mumbai squid[604]: kick abandoning conn199

I do not know whether the above problem is the primary problem in your 
setup, but it is a red flag. Transactions on the same connection may get 
stuck after that message; it is essentially a Squid bug.


I am not sure at all, but this bug might be related to Bug 5187 
workaround that went into Squid v6.2 (commit c44cfe7): 
https://bugs.squid-cache.org/show_bug.cgi?id=5187


Does Squid accept new TCP connections after it enters what you describe 
as a dead state? For example, does "telnet 127.0.0.1 8080" establish a 
connection if executed on the same machine as Squid?
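
If it does connect, typing a minimal proxy-style request into that telnet
session (example.com is just a placeholder) can show whether Squid still
makes progress on new connections or merely accepts them:

  GET http://example.com/ HTTP/1.1
  Host: example.com
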



> kill -9 does nothing

Is it possible that you are trying to kill the wrong process? You should 
be killing this process AFAICT:


> root 601  0.0  0.2  73816 22528 ?Ss   12:59   0:02
> /usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC

Alex.


After the update from 5.7 to 6.6, squid starts but then reaches Dead 
state in a minute or two.


# ps aux | grep squid
root 601  0.0  0.2  73816 22528 ?    Ss   12:59   0:02 
/usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC

proxy    604  0.0  0.0  0 0 ?    D    12:59   0:03 [squid]
proxy    607  0.0  0.0  11976  7424 ?    S    12:59   0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    608  0.0  0.0  11976  7168 ?    S    12:59   0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    609  0.0  0.0  11712  5632 ?    S    12:59   0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    610  0.0  0.0  11712  5376 ?    S    12:59   0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    611  0.0  0.0  11712  5504 ?    S    12:59   0:00 
(security_file_certgen) -s /var/cache/squid/ssl_db -M 4MB
proxy    622  0.0  0.0   6116  3200 ?    S    12:59   0:00 
(logfile-daemon) /var/log/squid/access.log


And then all requests get stuck. Notice the D (dead) state of squid.

I use multiple ports for multiple purposes. (It all worked fine in squid 
5.7)


Dec 18 12:59:10 mumbai squid[601]: Starting Authentication on port 
[::]:3128
Dec 18 12:59:10 mumbai squid[601]: Disabling Authentication on port 
[::]:3128 (interception enabled)
Dec 18 12:59:10 mumbai squid[601]: Starting Authentication on port 
[::]:8081
Dec 18 12:59:10 mumbai squid[601]: Disabling Authentication on port 
[::]:8081 (interception enabled)
Dec 18 12:59:12 mumbai squid[601]: Starting Authentication on port 
[::]:8082
Dec 18 12:59:12 mumbai squid[601]: Disabling Authentication on port 
[::]:8082 (interception enabled)
Dec 18 12:59:12 mumbai squid[601]: Starting Authentication on port 
[::]:8083
Dec 18 12:59:12 mumbai squid[601]: Disabling Authentication on port 
[::]:8083 (interception enabled)
Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port 
[::]:8084
Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port 
[::]:8084 (interception enabled)
Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port 
[::]:3136
Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port 
[::]:3136 (interception enabled)
Dec 18 12:59:13 mumbai squid[601]: Starting Authentication on port 
[::]:3137
Dec 18 12:59:13 mumbai squid[601]: Disabling Authentication on port 
[::]:3137 (interception enabled)

...
Dec 18 12:59:29 mumbai squid[604]: Adaptation support is on
Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted HTTP Socket 
connections at conn19 local=[::]:3128 remote=[::] FD 27 flags=41

    listening port: 3128
Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket 
connections at conn21 local=[::]:8080 remote=[::] FD 28 flags=9

    listening port: 8080
Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped 
HTTPS Socket connections at conn23 local=[::]:8081 remote=[::] FD 29 
flags=41

    listening port: 8081
Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket 
connections at conn25 local=[::]:8092 remote=[::] FD 30 flags=9

    listening port: 8092
Dec 18 12:59:29 mumbai systemd[1]: Started Squid Web Proxy Server.
Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket 
connections at conn27 local=[::]:8093 remote=[::] FD 31 flags=9

    listening port: 8093
Dec 18 12:59:29 mumbai squid[604]: Accepting SSL bumped HTTP Socket 
connections at conn29 local=[::]:8094 remote=[::] FD 32 flags=9

    listening port: 8094
Dec 18 12:59:29 mumbai squid[604]: Accepting NAT intercepted SSL bumped 
HTTPS Socket connections at conn31 local=[::]:8082 remote=[::] FD 33 

Re: [squid-users] [External] Re: Cache_peer breaks Squid 5.5

2023-12-14 Thread Alex Rousskov

On 2023-12-13 20:30, HENDERSON, GAVEN L RTX wrote:

> Old Good: squid-5.5-6.el9_3.1.0.1.src.rpm
> New Bad:  squid-5.5-6.el9_3.2.src.rpm

It looks like you are using some v5.5-based packages that contain some 
(unknown to me) additional changes on top of official Squid v5.5 
releases. The culprit is likely among those additional changes.


Unfortunately, I currently do not have the time to investigate what 
additional changes are located inside the packages you are using. If you 
do not receive further help from others on this mailing list, consider 
contacting whoever produced those packages (they were not produced by 
the Squid Project).


Alex.



Old Good:
Name: squid
Epoch   : 7
Version : 5.5
Release : 6.el9_3.1.0.1
Architecture: x86_64
Source RPM  : squid-5.5-6.el9_3.1.0.1.src.rpm
Build Date  : Fri 10 Nov 2023 11:47:19 PM EST

New Bad:
Name: squid
Epoch   : 7
Version : 5.5
Release : 6.el9_3.2
Architecture: x86_64
Source RPM  : squid-5.5-6.el9_3.2.src.rpm
Build Date  : Wed 22 Nov 2023 05:13:20 PM EST


-Original Message-
From: HENDERSON, GAVEN L RTX
Sent: Wednesday, December 13, 2023 11:04 AM
To: squid-users@lists.squid-cache.org
Subject: RE: [External] Re: [squid-users] Cache_peer breaks Squid 5.5

Thanks for all the information.  I primarily deal with Windows so I'm dealing 
with a bit of a learning curve here.  Any suggestions on how to install Squid 6 
on Rocky 9?  Can I do that with dnf/yum from a repo or an RPM?  I'm really 
hoping there's a way to do that without compiling.

-Original Message-
From: Alex Rousskov 
Sent: Wednesday, December 13, 2023 8:31 AM
To: HENDERSON, GAVEN L RTX ; 
squid-users@lists.squid-cache.org
Subject: Re: [External] Re: [squid-users] Cache_peer breaks Squid 5.5

On 2023-12-12 21:45, HENDERSON, GAVEN L RTX wrote:


assertion failed: peer_digest.cc:419: "EX"


... which is one of the following assertions (quoted with their official Squid 
v5.5 line numbers, none of which is 419):

  412 assert(fetch->pd && receivedData.data);
  416 assert(fetch->buf + fetch->bufofs == receivedData.data);
  421 assert(fetch->bufofs <= SM_PAGE_SIZE);

FWIW, the latest Squid v5 branch code (and future v5.10 if it is released), fixes the bug 
that resulted in assertions logged as "EX"
instead of the actual assertion text. The same bug is fixed in all v6 releases.

This new bug report might be related:
https://bugs.squid-cache.org/show_bug.cgi?id=5330



Any thoughts on how to fix this on Rocky 9?


Unfortunately, I failed to find good suspects for this v5 bug -- all obvious 
suspects came after v5.5 was released. However, I do not even know which 
version you were running before updating to v5.5. Please note that v5 is not 
officially supported by the Squid Project.


My recommendation is to update to v6.6 or later.


HTH,

Alex.



-Original Message-----
From: squid-users  On
Behalf Of Alex Rousskov
Sent: Tuesday, December 12, 2023 10:22 AM
To: squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Cache_peer breaks Squid 5.5

On 2023-12-12 11:25, HENDERSON, GAVEN L RTX wrote:

Sorry if this has already been answered.  I couldn't find anything online 
regarding the problem I am experiencing.  I have a Squid server acting as a 
proxy relay.  It listens on two ports and, depending on which port a request 
comes in, the request is forwarded to a specific upstream proxy.  Everything 
was working prior to the 5.5 update.  Now, Squid starts but crashes a few 
seconds later.  I have traced it down to the cache_peer directives.  If I 
comment them out, the service is stable.  I have also tried the stock 
configuration with only the cache_peer directive added and it still crashes so 
I don't think the problem is being triggered by some other part of my 
configuration.  I'm not seeing anything particularly helpful in the messages 
log:


What do you see in Squid's cache.log? I would expect a message about Squid 
hitting an assertion or detecting another catastrophic problem before crashing. 
If there is nothing in cache.log, consider (enabling core dumps in your OS 
configuration and) getting a backtrace from the crashed Squid process.


HTH,

Alex.




Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 exited due to signal 6 with status 0
Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 will not be restarted for 3600 seconds due to repeated, frequent failures
Nov 24 15:17:30 svr-squid squid[2259]: Exiting due to repeated, frequent failures
Nov 24 15:17:30 svr-squid systemd[1]: systemd-coredump@21-2295-0.service: Deactivated successfully.
Nov 24 15:17:30 svr-squid systemd[1]: squid.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:1

Re: [squid-users] [External] Re: Cache_peer breaks Squid 5.5

2023-12-13 Thread Alex Rousskov

On 2023-12-12 21:45, HENDERSON, GAVEN L RTX wrote:


assertion failed: peer_digest.cc:419: "EX"


... which is one of the following assertions (quoted with their official 
Squid v5.5 line numbers, none of which is 419):


412 assert(fetch->pd && receivedData.data);
416 assert(fetch->buf + fetch->bufofs == receivedData.data);
421 assert(fetch->bufofs <= SM_PAGE_SIZE);

FWIW, the latest Squid v5 branch code (and future v5.10 if it is 
released), fixes the bug that resulted in assertions logged as "EX" 
instead of the actual assertion text. The same bug is fixed in all v6 
releases.


This new bug report might be related:
https://bugs.squid-cache.org/show_bug.cgi?id=5330



Any thoughts on how to fix this on Rocky 9?


Unfortunately, I failed to find good suspects for this v5 bug -- all 
obvious suspects came after v5.5 was released. However, I do not even 
know which version you were running before updating to v5.5. Please note 
that v5 is not officially supported by the Squid Project.



My recommendation is to update to v6.6 or later.
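
Where no suitable v6 package exists, building from source is an option; a
rough sketch only (check squid-cache.org for the current tarball name, and
adapt the configure options to match your old package's build):

  curl -O https://www.squid-cache.org/Versions/v6/squid-6.6.tar.xz
  tar xf squid-6.6.tar.xz && cd squid-6.6
  ./configure --prefix=/usr/local/squid
  make -j$(nproc) && sudo make install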


HTH,

Alex.



-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Tuesday, December 12, 2023 10:22 AM
To: squid-users@lists.squid-cache.org
Subject: [External] Re: [squid-users] Cache_peer breaks Squid 5.5

On 2023-12-12 11:25, HENDERSON, GAVEN L RTX wrote:

Sorry if this has already been answered.  I couldn't find anything online 
regarding the problem I am experiencing.  I have a Squid server acting as a 
proxy relay.  It listens on two ports and, depending on which port a request 
comes in, the request is forwarded to a specific upstream proxy.  Everything 
was working prior to the 5.5 update.  Now, Squid starts but crashes a few 
seconds later.  I have traced it down to the cache_peer directives.  If I 
comment them out, the service is stable.  I have also tried the stock 
configuration with only the cache_peer directive added and it still crashes so 
I don't think the problem is being triggered by some other part of my 
configuration.  I'm not seeing anything particularly helpful in the messages 
log:


What do you see in Squid's cache.log? I would expect a message about Squid 
hitting an assertion or detecting another catastrophic problem before crashing. 
If there is nothing in cache.log, consider (enabling core dumps in your OS 
configuration and) getting a backtrace from the crashed Squid process.


HTH,

Alex.




Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 exited due to signal 6 with status 0
Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 will not be restarted for 3600 seconds due to repeated, frequent failures
Nov 24 15:17:30 svr-squid squid[2259]: Exiting due to repeated, frequent failures
Nov 24 15:17:30 svr-squid systemd[1]: systemd-coredump@21-2295-0.service: Deactivated successfully.
Nov 24 15:17:30 svr-squid systemd[1]: squid.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 15:17:30 svr-squid systemd[1]: squid.service: Failed with result 'exit-code'.

Here is my configuration:

http_port 192.168.0.1:80 name=port80
http_port 192.168.0.1:81 name=port81

acl port80_acl myportname port80
acl port81_acl myportname port81
acl bypass_parent dstdom_regex "/etc/squid/bypass_parent.txt"
acl domain_blacklist dstdomain "/etc/squid/domain_blacklist.txt"

http_access deny all domain_blacklist
always_direct allow bypass_parent
never_direct allow port80_acl
never_direct allow port81_acl

cache_peer proxy1.domain.com parent 80 0 default name=proxy80
cache_peer_access proxy80 allow port80_acl cache_peer_access proxy80
deny all

cache_peer proxy2.domain.com parent 80 0 default name=proxy81
cache_peer_access proxy81 allow port81_acl cache_peer_access proxy81
deny all

http_access allow all

logformat squid [%tl] %>a %>eui %Ss/%03>Hs %
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Cache_peer breaks Squid 5.5

2023-12-12 Thread Alex Rousskov

On 2023-12-12 11:25, HENDERSON, GAVEN L RTX wrote:

Sorry if this has already been answered.  I couldn't find anything online 
regarding the problem I am experiencing.  I have a Squid server acting as a 
proxy relay.  It listens on two ports and, depending on which port a request 
comes in, the request is forwarded to a specific upstream proxy.  Everything 
was working prior to the 5.5 update.  Now, Squid starts but crashes a few 
seconds later.  I have traced it down to the cache_peer directives.  If I 
comment them out, the service is stable.  I have also tried the stock 
configuration with only the cache_peer directive added and it still crashes so 
I don't think the problem is being triggered by some other part of my 
configuration.  I'm not seeing anything particularly helpful in the messages 
log:


What do you see in Squid's cache.log? I would expect a message about 
Squid hitting an assertion or detecting another catastrophic problem 
before crashing. If there is nothing in cache.log, consider (enabling 
core dumps in your OS configuration and) getting a backtrace from the 
crashed Squid process.
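
A common Linux recipe for that (paths and the PID are illustrative, and
squid.conf should point coredump_dir at a directory writable by the Squid
user):

  ulimit -c unlimited
  sysctl kernel.core_pattern=/var/cache/squid/core.%p
  # after the next crash:
  gdb /usr/sbin/squid /var/cache/squid/core.12345
  (gdb) bt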



HTH,

Alex.




Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 
exited due to signal 6 with status 0
Nov 24 15:17:30 svr-squid squid[2259]: Squid Parent: squid-1 process 2290 will 
not be restarted for 3600 seconds due to repeated, frequent failures
Nov 24 15:17:30 svr-squid squid[2259]: Exiting due to repeated, frequent 
failures
Nov 24 15:17:30 svr-squid systemd[1]: systemd-coredump@21-2295-0.service: Deactivated successfully.
Nov 24 15:17:30 svr-squid systemd[1]: squid.service: Main process exited, 
code=exited, status=1/FAILURE
Nov 24 15:17:30 svr-squid systemd[1]: squid.service: Failed with result 
'exit-code'.

Here is my configuration:

http_port 192.168.0.1:80 name=port80
http_port 192.168.0.1:81 name=port81

acl port80_acl myportname port80
acl port81_acl myportname port81
acl bypass_parent dstdom_regex "/etc/squid/bypass_parent.txt"
acl domain_blacklist dstdomain "/etc/squid/domain_blacklist.txt"

http_access deny all domain_blacklist
always_direct allow bypass_parent
never_direct allow port80_acl
never_direct allow port81_acl

cache_peer proxy1.domain.com parent 80 0 default name=proxy80
cache_peer_access proxy80 allow port80_acl
cache_peer_access proxy80 deny all

cache_peer proxy2.domain.com parent 80 0 default name=proxy81
cache_peer_access proxy81 allow port81_acl
cache_peer_access proxy81 deny all

http_access allow all

logformat squid [%tl] %>a %>eui %Ss/%03>Hs %
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FATAL: assertion failed: peer_digest.cc:399: "fetch->pd && receivedData.data"

2023-12-06 Thread Alex Rousskov

On 2023-12-06 08:08, Brendan Kearney wrote:


I am running squid 6.5


You are suffering from Bug 5318:
https://bugs.squid-cache.org/show_bug.cgi?id=5318

That bug has been fixed in v6. Recent daily snapshots contain that fix, 
and it will be a part of the upcoming v6.6 release.


Alex.


on Fedora 38, and have found this issue when 
running "cache sharing" (or cache_peer siblings) between my 3 squid 
instances.  A couple weeks ago, this was happening and an update seemed 
to have fixed the majority of issues.  When I ran into the issue, I 
could disable cache_peer siblings and restart the instances that 
failed.  The recent update seemed to have addressed the problem, but I 
turned on ssl bump for a subset of traffic and the issue returned.


When I have all 3 proxies started, all of them will work for a period of 
time, then slowly all but one of the proxies will die. The last standing 
proxy does not go down because all other cache_peer siblings are 
offline, and the logic causing the failure does not execute.


This was a larger issue before the recent patch/update, but is still an 
issue when performing ssl bumping on traffic.  Is there something I can 
provide in the way of logs or diagnostics to help identify the issue?


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] reconfigure drops in memory caches for external_acl_type

2023-11-29 Thread Alex Rousskov

On 2023-11-29 09:38, Ziert, Norman wrote:

In the very recent past I stumbled over the fact that a "squid -k reconfigure" 
drops the in-memory caches for external_acl_type helpers, which in my case 
leads to a massive query burst against local winbind 
(ext_wbinfo_group_acl) and in fact the Active Directory 
domain controllers. This also has a massive impact on service time 
for end users: after a reconfigure, group-membership authorization within 
squid stays slow until the external_acl cache is "rewarmed".



I have verified that behavior back to v5.2.


I want to ask whether this is intended behavior in squid or whether I should 
file a bug report on this.



This behavior is usually unwanted, but there is no particular need for a 
bug report IMHO: We know that this and similar reconfiguration side 
effects are usually unwanted. We are actively working on avoiding 
various unwanted reconfiguration side effects -- the "smooth 
reconfiguration" project has merged about 25 pull requests in recent 
months (some of them were very complex/substantial -- we need to pay off 
a lot of very old technical debt first!), and more are coming. We will 
solve this problem in the foreseeable future.
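
Until then, one partial mitigation is to run enough helper processes to
absorb the post-reconfigure burst; a sketch only, with an illustrative
helper path and numbers:

  external_acl_type ad_group ttl=3600 negative_ttl=60 children-max=32 \
      children-startup=8 %LOGIN /usr/lib/squid/ext_wbinfo_group_acl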



Maybe I discovered a topic which is connected to this discussion: 
https://wiki.squid-cache.org/Features/HotConf 


Yes, that old wiki page has some relevant pieces, but we have moved far 
ahead of those old discussions since then and have solved complex 
architectural problems with functional, deployment-tested code that 
supports smooth reconfiguration in some typical cases. We are actively 
working on polishing, merging, and enhancing those improvements.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Virtual Hosting Problem

2023-11-28 Thread Alex Rousskov

On 2023-11-28 05:29, Mario Theodoridis wrote:

Hello everyone,

I'm trying to use squid as a TLS virtual hosting proxy on a system with 
a public IP in front of several internal systems running TLS web servers.


I would like to proxy the incoming connections to the appropriate 
backend servers based on the hostname using SNI.


I'm trying this with just one backend using the following config, and it 
already fails.


Here the config:

http_port 3128
debug_options ALL,2
pinger_enable off
shutdown_lifetime 1 second
https_port 0.0.0.0:443 tproxy ssl-bump tls-cert=/root/dummy.pem
acl tlspls ssl::server_name_regex -i test\.regify\.com
cache_peer test.de.regify.com parent 443 0 proxy-only originserver 
no-digest no-netdb-exchange name=test

ssl_bump peek all
ssl_bump splice all
http_access allow all
cache_peer_access test allow all



It sounds like you want all traffic to go to the configured cache_peer, 
but the above configuration has no rules specifying that request routing 
requirement. Try adding something like


never_direct allow all
always_direct deny all

FWIW, cache_peer_access gives permission to access a peer if that peer 
is being considered by request routing rules; it is not a requirement to 
consider a peer.
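
Putting those pieces together for this setup, a sketch (reusing the acl
already defined above; the SNI-based acl can only match once the peek step
has seen the ClientHello):

  never_direct allow tlspls
  cache_peer_access test allow tlspls
  cache_peer_access test deny all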



> Also appreciated would be advise on where to find this documented.

While all squid.conf directives are documented, I am not aware of any 
high-quality web page dedicated to explaining overall request routing to 
Squid admins.



HTH,

Alex.




Starting squid gives me the following:

2023/11/28 11:13:21.919| 1,2| main.cc(1619) SquidMain: Doing post-config 
initialization
2023/11/28 11:13:21.919| 1,2| main.cc(1621) SquidMain: running 
RegisteredRunner::finalizeConfig

2023/11/28 11:13:21.919| Created PID file (/run/squid.pid)
2023/11/28 11:13:21.921| 1,2| main.cc(1453) StartUsingConfig: running 
RegisteredRunner::claimMemoryNeeds
2023/11/28 11:13:21.921| 1,2| main.cc(1454) StartUsingConfig: running 
RegisteredRunner::useConfig
2023/11/28 11:13:21.988 kid1| 1,2| main.cc(1619) SquidMain: Doing 
post-config initialization
2023/11/28 11:13:21.988 kid1| 1,2| main.cc(1621) SquidMain: running 
RegisteredRunner::finalizeConfig
2023/11/28 11:13:21.988 kid1| 1,2| main.cc(1453) StartUsingConfig: 
running RegisteredRunner::claimMemoryNeeds
2023/11/28 11:13:21.988 kid1| 1,2| main.cc(1454) StartUsingConfig: 
running RegisteredRunner::useConfig

2023/11/28 11:13:21.988 kid1| Current Directory is /
2023/11/28 11:13:21.988 kid1| Creating missing swap directories
2023/11/28 11:13:21.988 kid1| No cache_dir stores are configured.
2023/11/28 11:13:21.992| 1,2| main.cc(2051) watch_child: running 
RegisteredRunner::finishShutdown

2023/11/28 11:13:21.992| Removing PID file (/run/squid.pid)
2023/11/28 11:13:22.063| 1,2| main.cc(1619) SquidMain: Doing post-config 
initialization
2023/11/28 11:13:22.063| 1,2| main.cc(1621) SquidMain: running 
RegisteredRunner::finalizeConfig

2023/11/28 11:13:22.063| Created PID file (/run/squid.pid)
2023/11/28 11:13:22.066| 1,2| main.cc(1453) StartUsingConfig: running 
RegisteredRunner::claimMemoryNeeds
2023/11/28 11:13:22.066| 1,2| main.cc(1454) StartUsingConfig: running 
RegisteredRunner::useConfig
2023/11/28 11:13:22.131 kid1| 1,2| main.cc(1619) SquidMain: Doing 
post-config initialization
2023/11/28 11:13:22.132 kid1| 1,2| main.cc(1621) SquidMain: running 
RegisteredRunner::finalizeConfig
2023/11/28 11:13:22.132 kid1| 1,2| main.cc(1453) StartUsingConfig: 
running RegisteredRunner::claimMemoryNeeds
2023/11/28 11:13:22.132 kid1| 1,2| main.cc(1454) StartUsingConfig: 
running RegisteredRunner::useConfig

2023/11/28 11:13:22.132 kid1| Current Directory is /
2023/11/28 11:13:22.132 kid1| Starting Squid Cache version 4.13 for 
x86_64-pc-linux-gnu...

2023/11/28 11:13:22.132 kid1| Service Name: squid
2023/11/28 11:13:22.132 kid1| Process ID 2863502
2023/11/28 11:13:22.132 kid1| Process Roles: worker
2023/11/28 11:13:22.132 kid1| With 1024 file descriptors available
2023/11/28 11:13:22.132 kid1| Initializing IP Cache...
2023/11/28 11:13:22.135 kid1| 78,2| dns_internal.cc(1570) Init: 
idnsInit: attempt open DNS socket to: 0.0.0.0

2023/11/28 11:13:22.135 kid1| DNS Socket created at 0.0.0.0, FD 5
2023/11/28 11:13:22.135 kid1| Adding domain de.regify.com from 
/etc/resolv.conf
2023/11/28 11:13:22.135 kid1| Adding nameserver 192.168.1.1 from 
/etc/resolv.conf
2023/11/28 11:13:22.135 kid1| helperOpenServers: Starting 5/32 
'security_file_certgen' processes
2023/11/28 11:13:22.164 kid1| 46,2| Format.cc(71) parse: got definition 
'%>a/%>A %un %>rm myip=%la myport=%lp'
2023/11/28 11:13:22.165 kid1| 46,2| Format.cc(71) parse: got definition 
'%>a/%>A %un %>rm myip=%la myport=%lp'
2023/11/28 11:13:22.165 kid1| Logfile: opening log 
daemon:/var/log/squid/access.log
2023/11/28 11:13:22.165 kid1| Logfile Daemon: opening log 
/var/log/squid/access.log
2023/11/28 11:13:22.194 kid1| 71,2| store_digest.cc(96) 
storeDigestCalcCap: have: 0, want 0 entries; limits: [1, 0]
2023/11/28 

Re: [squid-users] Kerberos pac ResourceGroups parsing

2023-11-22 Thread Alex Rousskov

On 2023-11-21 23:05, Andrey K wrote:

I have posted a PR: https://github.com/squid-cache/squid/pull/1597 

This is my first contribution to open source. Could you please verify if 
everything is OK.


Thank you for posting that pull request! Let's continue this 
conversation on GitHub since squid-users mailing list is not meant for 
code reviews.


Alex.



Thu, 16 Nov 2023 at 17:01, Alex Rousskov:

On 2023-11-16 07:48, Andrey K wrote:

 > I have slightly patched the negotiate_kerberos_pac.cc to
 > implement ResourceGroupIds-block parsing.

Please consider posting tested changes as a GitHub Pull Request:
https://wiki.squid-cache.org/MergeProcedure#pull-request


Thank you,

Alex.


 > Maybe it will be useful for the community.
 > This patch can be included in future Squid-releases.
 >
 > Kind regards,
 >     Ankor.
 >
 > The patch for the
 > file src/auth/negotiate/kerberos/negotiate_kerberos_pac.cc below:
 >
 > @@ -362,6 +362,123 @@
 >       return ad_groups;
 >   }
 >
 > +
 > +char *
 > +get_resource_group_domain_sid(uint32_t ResourceGroupDomainSid){
 > +
 > +    if (ResourceGroupDomainSid != 0) {
 > +        uint8_t rev;
 > +        uint64_t idauth;
 > +        char dli[256];
 > +        char *ag;
 > +        int l;
 > +
 > +        align(4);
 > +
 > +        uint32_t nauth = get4byt();
 > +
 > +        size_t length = 1+1+6+nauth*4;
 > +
 > +        ag=(char *)xcalloc((length+1)*sizeof(char),1);
 > +        // the first byte is a length of the SID
 > +        ag[0] = (char) length;
 > +        memcpy((void *)&ag[1],(const void*)&p[bpos],1);
 > +        memcpy((void *)&ag[2],(const void*)&p[bpos+1],1);
 > +        ag[2] = ag[2]+1;
 > +        memcpy((void *)&ag[3],(const void*)&p[bpos+2],6+nauth*4);
 > +
 > +        /* mainly for debug only */
 > +        rev = get1byt();
 > +        bpos = bpos + 1; /*nsub*/
 > +        idauth = get6byt_be();
 > +
 > +        snprintf(dli,sizeof(dli),"S-%d-%lu",rev,(long unsigned int)idauth);
 > +        for ( l=0; l<(int)nauth; l++ ) {
 > +            uint32_t sauth;
 > +            sauth = get4byt();
 > +            snprintf((char *)&dli[strlen(dli)],sizeof(dli)-strlen(dli),"-%u",sauth);
 > +        }
 > +        debug((char *) "%s| %s: INFO: Got ResourceGroupDomainSid %s\n", LogTime(), PROGRAM, dli);
 > +        return ag;
 > +    }
 > +
 > +    return NULL;
 > +}
 > +
 > +char *
 > +get_resource_groups(char *ad_groups, char *resource_group_domain_sid, uint32_t ResourceGroupIds, uint32_t ResourceGroupCount){
 > +    size_t group_domain_sid_len = resource_group_domain_sid[0];
 > +    char *ag;
 > +    size_t length;
 > +
 > +    resource_group_domain_sid++; //now it points to the actual data
 > +
 > +    if (ResourceGroupIds != 0) {
 > +        uint32_t ngroup;
 > +        int l;
 > +
 > +        align(4);
 > +        ngroup = get4byt();
 > +        if ( ngroup != ResourceGroupCount) {
 > +            debug((char *) "%s| %s: ERROR: Group encoding error => ResourceGroupCount: %d Array size: %d\n",
 > +                  LogTime(), PROGRAM, ResourceGroupCount, ngroup);
 > +            return NULL;
 > +        }
 > +        debug((char *) "%s| %s: INFO: Found %d Resource Group rids\n", LogTime(), PROGRAM, ResourceGroupCount);
 > +
 > +        //make a group template which begins with the ResourceGroupDomainID
 > +        length = group_domain_sid_len+4;  //+4 for a rid
 > +        ag=(char *)xcalloc(length*sizeof(char),1);
 > +        memcpy((void *)ag,(const void*)resource_group_domain_sid, group_domain_sid_len);
 > +
 > +        for ( l=0; l<(int)ResourceGroupCount; l++) {
 > +            uint32_t sauth;
 > +            memcpy((void *)&ag[group_domain_sid_len],(const void*)&p[bpos],4);
 > +
 > +            if (!pstrcat(ad_groups," group=")) {
 > +                debug((char *) "%s| %s: WARN: Too many groups ! size > %d : %s\n",
 > +                      LogTime(), PROGRAM, MAX_PAC_GROUP_SIZE, ad_groups);
 > +               xfree(ag);
 > +               return NULL;
 > 

Re: [squid-users] 6.x gives frequent connection to peer failed - spurious?

2023-11-21 Thread Alex Rousskov

On 2023-11-21 08:38, Stephen Borrill wrote:

On 15/11/2023 21:55, Alex Rousskov wrote:

On 2023-11-10 05:46, Stephen Borrill wrote:

With 6.x (currently 6.5) there are very frequent (every 10 seconds or 
so) messages like:

2023/11/10 10:25:43 kid1| ERROR: Connection to 127.0.0.1:8123 failed




why is this logged as a connection failure


The current error wording assumes too much and is, in your case, evidently 
misleading. The phrase "Connection to X failed" should be changed to 
something more general like "Cannot contact cache_peer X" or "Cannot 
communicate with cache_peer X".


CachePeer::countFailure() patches welcome.


But the point is that it _can_ communicate with the peer, but the peer 
itself can't service the request. The peer returning 503 shouldn't be 
logged as a connection failure



My bad. I missed the fact that the described DNS error happens at a 
_peer_ Squid. Sorry.



Currently, Squid v6 treats most CONNECT-to-peer errors as a sign of a 
broken peer. In 2022, 4xx errors were excluded from that set[1]. At that 
time, we also proposed to make that decision configurable using a new 
cache_peer_fault directive[2], but the new directive was blocked as an 
"overkill"[3], so we hard-coded 4xx exclusion instead.


Going forward, you have several options, including these two:

1. Convince others that Squid should treat all 503 CONNECT errors from 
peers as it already treats all 4xx errors. Hard-code that new logic.


2. Convince others that cache_peer_fault or a similar directive is a 
good idea rather than an overkill. Resurrect its implementation[2].



[1] 
https://github.com/squid-cache/squid/commit/022dbabd89249f839d1861aa87c1ab9e1a008a47


[2] 
https://github.com/squid-cache/squid/commit/25431f18f2f5e796b8704c85fc51f93b6cc2a73d


[3] https://github.com/squid-cache/squid/pull/1166#issuecomment-1295806530


HTH,

Alex.



 > do I need to worry about it beyond it filling up the logs needlessly?

In short, "yes".

I cannot accurately assess your specific needs, but, in most 
environments, one should indeed worry that their cache_peer server 
names cannot be reliably resolved because failed resolution attempts 
waste Squid resources and increase transaction response time. 
Moreover, if these failures are frequent enough (relative to peer 
usage attempts), the affected cache_peer will be marked as DEAD (as 
you have mentioned):


 > 2023/11/09 08:55:22 kid1| Detected DEAD Parent: 127.0.0.1:8123


Problem seems to be easily reproducible:

1# env https_proxy=http://127.0.0.1:8084 curl https://www.invalid.domain/
curl: (56) CONNECT tunnel failed, response 503
2# grep invalid /usr/local/squid/logs/access.log|tail -1
1700573429.015  4 127.0.0.1:8084 TCP_TUNNEL/503 0 CONNECT 
www.invalid.domain:443 - FIRSTUP_PARENT/127.0.0.1:8123 -

3# date -r 1700573429 '+%Y/%m/%d %H:%M:%S'
2023/11/21 13:30:29
4# grep '2023/11/21 13:30:29' /usr/local/squid/logs/cache.log
2023/11/21 13:30:29 kid1| ERROR: Connection to 127.0.0.1:8123 failed


With 4.x there were no such messages.

By comparing to the peer squid logs, these seem to tally with DNS 
failures:
peer_select.cc(479) resolveSelected: PeerSelector1688 found all 0 
destinations for bugzilla.tucasi.com:443


Full ALL,2 log at the time of the reported connection failure:

2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New 
connection on FD 17
2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: 
connection on conn3 local=127.0.0.1:8123 remote=[::] FD 17 flags=9
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1332) 
parseHttpRequest: HTTP Client conn13206 local=127.0.0.1:8123 
remote=127.0.0.1:57843 FD 147 flags=1
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1336) 
parseHttpRequest: HTTP Client REQUEST:
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) 
clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is 
ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(683) 
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) 
clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is 
ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 44,2| peer_select.cc(460) 
resolveSelected: Find IP destination for: bugzilla.tucasi.com:443' 
via bugzilla.tucasi.com
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(479) 
resolveSelected: PeerSelector1526 found all 0 destinations for 
bugzilla.tucasi.com:443
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(480) 
resolveSelected:    always_direct = ALLOWED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(481) 
resolveSelected:     never_direct = DENIED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(482) 
resolveSelected:     timedout = 0
2023/11/10 10:25:43.163 kid1| 4,2| errorpage.cc(1397) buildBody: No 
existing error page language negotia

Re: [squid-users] Kerberos pac ResourceGroups parsing

2023-11-16 Thread Alex Rousskov

On 2023-11-16 07:48, Andrey K wrote:

I have slightly patched the negotiate_kerberos_pac.cc to 
implement ResourceGroupIds-block parsing.


Please consider posting tested changes as a GitHub Pull Request:
https://wiki.squid-cache.org/MergeProcedure#pull-request
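
The usual shape of that workflow, sketched with an illustrative branch name:

  git clone https://github.com/squid-cache/squid.git
  cd squid && git checkout -b kerberos-resource-groups
  # apply and test your changes, then commit, push to your fork,
  # and open the pull request against squid-cache/squid on GitHub:
  git commit -a -m "negotiate_kerberos_auth: parse PAC ResourceGroupIds"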


Thank you,

Alex.



Maybe it will be useful for the community.
This patch can be included in future Squid-releases.

Kind regards,
    Ankor.

The patch for the 
file src/auth/negotiate/kerberos/negotiate_kerberos_pac.cc below:


@@ -362,6 +362,123 @@
      return ad_groups;
  }

+
+char *
+get_resource_group_domain_sid(uint32_t ResourceGroupDomainSid){
+
+    if (ResourceGroupDomainSid != 0) {
+        uint8_t rev;
+        uint64_t idauth;
+        char dli[256];
+        char *ag;
+        int l;
+
+        align(4);
+
+        uint32_t nauth = get4byt();
+
+        size_t length = 1+1+6+nauth*4;
+
+        ag=(char *)xcalloc((length+1)*sizeof(char),1);
+        // the first byte is a length of the SID
+        ag[0] = (char) length;
+        memcpy((void *)&ag[1],(const void*)&p[bpos],1);
+        memcpy((void *)&ag[2],(const void*)&p[bpos+1],1);
+        ag[2] = ag[2]+1;
+        memcpy((void *)&ag[3],(const void*)&p[bpos+2],6+nauth*4);
+
+        /* mainly for debug only */
+        rev = get1byt();
+        bpos = bpos + 1; /*nsub*/
+        idauth = get6byt_be();
+
+        snprintf(dli,sizeof(dli),"S-%d-%lu",rev,(long unsigned int)idauth);
+        for ( l=0; l<(int)nauth; l++ ) {
+            uint32_t sauth;
+            sauth = get4byt();
+            snprintf((char *)&dli[strlen(dli)],sizeof(dli)-strlen(dli),"-%u",sauth);
+        }
+        debug((char *) "%s| %s: INFO: Got ResourceGroupDomainSid %s\n", LogTime(), PROGRAM, dli);
+        return ag;
+    }
+
+    return NULL;
+}
+
+char *
+get_resource_groups(char *ad_groups, char *resource_group_domain_sid, uint32_t ResourceGroupIds, uint32_t ResourceGroupCount){
+    size_t group_domain_sid_len = resource_group_domain_sid[0];
+    char *ag;
+    size_t length;
+
+    resource_group_domain_sid++; //now it points to the actual data
+
+    if (ResourceGroupIds != 0) {
+        uint32_t ngroup;
+        int l;
+
+        align(4);
+        ngroup = get4byt();
+        if ( ngroup != ResourceGroupCount) {
+            debug((char *) "%s| %s: ERROR: Group encoding error => ResourceGroupCount: %d Array size: %d\n",
+                  LogTime(), PROGRAM, ResourceGroupCount, ngroup);
+            return NULL;
+        }
+        debug((char *) "%s| %s: INFO: Found %d Resource Group rids\n", LogTime(), PROGRAM, ResourceGroupCount);
+
+        //make a group template which begins with the ResourceGroupDomainID
+        length = group_domain_sid_len+4;  //+4 for a rid
+        ag=(char *)xcalloc(length*sizeof(char),1);
+        memcpy((void *)ag,(const void*)resource_group_domain_sid, group_domain_sid_len);
+
+        for ( l=0; l<(int)ResourceGroupCount; l++) {
+            uint32_t sauth;
+            memcpy((void *)&ag[group_domain_sid_len],(const void*)&p[bpos],4);
+
+            if (!pstrcat(ad_groups," group=")) {
+                debug((char *) "%s| %s: WARN: Too many groups ! size > %d : %s\n",
+                      LogTime(), PROGRAM, MAX_PAC_GROUP_SIZE, ad_groups);
+               xfree(ag);
+               return NULL;
+            }
+
+            struct base64_encode_ctx ctx;
+            base64_encode_init(&ctx);
+            const uint32_t expectedSz = base64_encode_len(length) +1 /* terminator */;
+            char *b64buf = static_cast<char *>(xcalloc(expectedSz, 1));
+            size_t blen = base64_encode_update(&ctx, b64buf, length, reinterpret_cast<const uint8_t *>(ag));
+            blen += base64_encode_final(&ctx, b64buf+blen);
+            b64buf[expectedSz-1] = '\0';
+            if (!pstrcat(ad_groups, reinterpret_cast<char *>(b64buf))) {
+                debug((char *) "%s| %s: WARN: Too many groups ! size > %d : %s\n",
+                      LogTime(), PROGRAM, MAX_PAC_GROUP_SIZE, ad_groups);
+               xfree(ag);
+               xfree(b64buf);
+               return NULL;
+            }
+            xfree(b64buf);
+
+            sauth = get4byt();
+            debug((char *) "%s| %s: Info: Got rid: %u\n", LogTime(), PROGRAM, sauth);
+            /* attribute */
+            bpos = bpos+4;
+        }
+
+        xfree(ag);
+       return ad_groups;
+    }
+
+    return NULL;
+}
+
+
  char *
  get_ad_groups(char *ad_groups, krb5_context context, krb5_pac pac)
  {
@@ -379,14 +496,14 @@
      uint32_t LogonDomainId=0;
      uint32_t SidCount=0;
      uint32_t ExtraSids=0;
-    /*
      uint32_t ResourceGroupDomainSid=0;
      uint32_t ResourceGroupCount=0;
      uint32_t ResourceGroupIds=0;
-    */
      char **Rids=NULL;
      int l=0;

+    char * resource_group_domain_sid=NULL;
+
      if (!ad_groups) {
          debug((char *) "%s| %s: ERR: No space to store groups\n",
                LogTime(), PROGRAM);
@@ -454,11 +571,11 @@
      bpos = bpos+40;
      SidCount 

Re: [squid-users] 6.x gives frequent connection to peer failed - spurious?

2023-11-15 Thread Alex Rousskov

On 2023-11-10 05:46, Stephen Borrill wrote:

With 6.x (currently 6.5) there are very frequent (every 10 seconds or 
so) messages like:

2023/11/10 10:25:43 kid1| ERROR: Connection to 127.0.0.1:8123 failed



> why is this logged as a connection failure

The current error wording is too assuming and, in your case, evidently 
misleading. The phrase "Connection to X failed" should be changed to 
something more general like "Cannot contact cache_peer X" or "Cannot 
communicate with cache_peer X".


CachePeer::countFailure() patches welcome.
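
For anyone tempted to write that patch, here is a minimal, untested 
sketch of the intended wording change (the location near 
CachePeer::countFailure(), the debugs() section/level, and the streaming 
of *this are assumptions based on current Squid sources, not a reviewed 
diff):

    // hypothetical sketch, src/CachePeer.cc, near CachePeer::countFailure():
    // old, too-assuming wording:
    //   debugs(15, DBG_IMPORTANT, "ERROR: Connection to " << *this << " failed");
    // more general replacement:
    debugs(15, DBG_IMPORTANT, "ERROR: Cannot contact cache_peer " << *this);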


> do I need to worry about it beyond it filling up the logs needlessly?

In short, "yes".

I cannot accurately assess your specific needs, but, in most 
environments, one should indeed worry that their cache_peer server names 
cannot be reliably resolved because failed resolution attempts waste 
Squid resources and increase transaction response time. Moreover, if 
these failures are frequent enough (relative to peer usage attempts), 
the affected cache_peer will be marked as DEAD (as you have mentioned):


> 2023/11/09 08:55:22 kid1| Detected DEAD Parent: 127.0.0.1:8123


HTH,

Alex.





With 4.x there were no such messages.

By comparing to the peer squid logs, these seem to tally with DNS 
failures:
peer_select.cc(479) resolveSelected: PeerSelector1688 found all 0 
destinations for bugzilla.tucasi.com:443


Full ALL,2 log at the time of the reported connection failure:

2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New 
connection on FD 17
2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: 
connection on conn3 local=127.0.0.1:8123 remote=[::] FD 17 flags=9
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1332) 
parseHttpRequest: HTTP Client conn13206 local=127.0.0.1:8123 
remote=127.0.0.1:57843 FD 147 flags=1
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1336) 
parseHttpRequest: HTTP Client REQUEST:
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) 
clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is 
ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(683) 
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) 
clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is 
ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 44,2| peer_select.cc(460) resolveSelected: 
Find IP destination for: bugzilla.tucasi.com:443' via bugzilla.tucasi.com
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(479) resolveSelected: 
PeerSelector1526 found all 0 destinations for bugzilla.tucasi.com:443
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(480) resolveSelected: 
   always_direct = ALLOWED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(481) resolveSelected: 
    never_direct = DENIED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(482) resolveSelected: 
    timedout = 0
2023/11/10 10:25:43.163 kid1| 4,2| errorpage.cc(1397) buildBody: No 
existing error page language negotiated for ERR_DNS_FAIL. Using default 
error file.
2023/11/10 10:25:43.163 kid1| 33,2| client_side.cc(617) swanSong: 
conn13206 local=127.0.0.1:8123 remote=127.0.0.1:57843 flags=1


If my analysis is correct, why is this logged as a connection failure, and 
do I need to worry about it beyond it filling up the logs needlessly?


My concern is that this could lead to the parent being incorrectly 
declared DEAD thus impacting other traffic:


2023/11/09 08:55:22 kid1| Detected DEAD Parent: 127.0.0.1:8123
     current master transaction: master4581234



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Get IP of denied request

2023-11-01 Thread Alex Rousskov

On 2023-10-30 13:08, ma...@web.de wrote:

Am 27.10.23 um 16:22 schrieb Alex Rousskov:

1. Enhance Squid to resolve transaction destination address once (on
first use/need). Remember/reuse resolved IP addresses. Log them via some
new %resolved_dst and %dst_resolution_detail codes.

This improvement will help address a few use cases unrelated to this
discussion, but it will _not_ tell you which of the resolved addresses
actually matched which ACL. You will be able to guess in many cases, but
there will be exceptions.


2. Add a Squid feature where an evaluated ACL can be configured to
annotate the transaction with the information about that evaluation.

To start with, we can support annotations for _matched_ dst ACLs. For
example:

 # When matches, sets the following master transaction annotations:
 # * badDestination_input: used destination address (IP or name)
 # * badDestination_match: the matched destination IP
 # * badDestination_value: the matched ACL parameter value
 # * badDestination_ips: resolved destination IP(s)
 # * badDestination_errors: DNS resolution and other error(s)
 # * badDestination_matches: number of matches (this transaction)
 # * badDestination_evals: number of evaluations (this transaction)
 acl badDestination dst --on-match=annotate 127.0.0.0/24 10.0.0.1

If the same ACL matches more than once, the last(?) match wins, but the
aclfoo_matches annotation can be used to detect these cases. The
aclfoo_evals annotation can be used to detect whether this ACL was
"reached" at all.

If really needed, we can support turning individual annotations on and
off, but I doubt that complexity is worth associated performance
savings. After all, most ACLs will only match once per transaction
lifetime (for correctly written configurations). Access.log will be
configured to only log annotations of interest to the admin, of course.


The above approach can be extended to provide ACL debugging aid:

 # Dumps every mismatch information to cache.log at level 1
 acl goodDestination dst --on-mismatch=log 127.0.0.0/24

 # Dumps every evaluation information to cache.log at level 1
 acl redDestination dst --on-eval=log 127.0.0.0/24


3. Add a Squid feature where Squid (optionally) maintains an internal
database of recent ACL evaluation history and makes that information
accessible via cache manager queries like "which ACLs matched
transaction X?" (where X is logged %master_xaction ID).


The three sketched options are not mutually exclusive, of course. All
require non-trivial code changes.





Let me first ask some questions for clarification:
- Does squid cache all ips from dns responses with multiple ips or only
the one it uses for the request?


All.



- If it caches more than one ip - does squid use more than one of these
ips (e.g. as fallback or round robin) inside a single transaction or for
multiple transactions?


A single transaction will go through several single-name IPs after 
certain transaction failures, but not all failures are treated the same.


Multiple transactions may use multiple single-name IPs due to 
round-robin rotation of cached IPs and other factors.




Supposing squid uses only a single dst ip inside a single transaction,
your first option would be sufficient for our purpose!


In my first option, logged %resolved_dst (and the corresponding master 
transaction metadata) may contain multiple IPs (i.e. all resolved ones). 
AFAICT, if any one of those IPs match a "dst" ACL address, then that 
"dst" ACL should match, even if the transaction does not actually use 
the matched address: The "dst" ACL matches based on request-target info, 
not the _use_ of that info.


One of the difficulties here is to decide whether a transaction should 
wait for both IPv4 and IPv6 addresses to be resolved:


If a transaction proceeds with one family of addresses, then there will 
be no delays associated with waiting for the second one. However, in 
that case, Squid may declare a "dst" ACL mismatch even though there 
would have been a match if we waited for the second DNS answer! It feels 
like for correctness sake, we must wait if a "dst" ACL needs checking or 
a similar decision has to be made...




Resolving the ip only once and storing it inside the transaction would
also avoid ambiguous cases where different ips could theoretically be
used in different acls or, worse, in acls and the real connection.


Yes, that is one of the positive side effects of that option.



It could also improve the performance of acl evaluation because after
the resolution of the dst ip all following dst acls would be evaluated
fast(?).


A speedup is possible in some corner cases, but, in most cases, there 
will be no difference in this area because "all following dst ACLs" 
should "immediately" use _cached_ IP(s) in the current code AFAICT.




Your second option sounds reasonable

Re: [squid-users] Get IP of denied request

2023-10-27 Thread Alex Rousskov

On 2023-10-27 07:14, ma...@web.de wrote:

Am 26.10.23 um 21:11 schrieb Alex Rousskov:

On 2023-10-26 08:37, ma...@web.de wrote:

TL;DR: is there a way to get/log the resolved ip of a denied request?


TLDR: Bugs notwithstanding, use %<a


Sorry, my first response was wrong: As you have correctly explained, %<a 
is the destination address of the used next hop connection while a dst ACL 
uses an address derived from the request-target. The two addresses do 
not have to match and, more importantly, it is wrong to expect a dst ACL 
to update/store that "used next hop address" information when the ACL 
itself does not open or reuse a connection to that address.
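
To make the difference observable, one can log both addresses side by 
side. A minimal sketch using only existing logformat codes (the format 
name and log path below are made up): %ru carries the request-target 
that dst ACLs derive their address from, while %<a is the next hop 
connection address discussed above:

    # made-up format name; all codes below exist in current Squid
    logformat dstDebug %ts.%03tu %>a %ru next_hop=%<a
    access_log stdio:/var/log/squid/dst-debug.log dstDebug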




 acl matchDst1 dst 127.0.0.1
 acl markDst1 note matched=127.0.0.1
 http_access deny matchDst1 markDst1

 acl matchDst2 dst 127.0.0.2
 acl markDst2 note matched=127.0.0.2
 http_access deny matchDst2 markDst2

 logformat myFormat ... matched_dst=%note{matched}
 access_log ...

For long dst lists, the above approach will require scripting the
generation of the corresponding squid.conf portions or include files, of
course.


I don't think this scales to blacklists with 6-digit count sizes 


I think that depends on the definition of "to scale".



and it also doesn't work for blacklisted networks :-(


Agreed. This approach can only log which dst ACL data matched, and that 
data may be different from the IP.




I hoped there would be a way to get the ip as some kind of variable like
the header fields in logformat.


Yes, I know. I was just documenting an existing workaround for cases 
where it is usable.




Any ideas?


I have three:

1. Enhance Squid to resolve transaction destination address once (on 
first use/need). Remember/reuse resolved IP addresses. Log them via some 
new %resolved_dst and %dst_resolution_detail codes.


This improvement will help address a few use cases unrelated to this 
discussion, but it will _not_ tell you which of the resolved addresses 
actually matched which ACL. You will be able to guess in many cases, but 
there will be exceptions.



2. Add a Squid feature where an evaluated ACL can be configured to 
annotate the transaction with the information about that evaluation.


To start with, we can support annotations for _matched_ dst ACLs. For 
example:


# When matches, sets the following master transaction annotations:
# * badDestination_input: used destination address (IP or name)
# * badDestination_match: the matched destination IP
# * badDestination_value: the matched ACL parameter value
# * badDestination_ips: resolved destination IP(s)
# * badDestination_errors: DNS resolution and other error(s)
# * badDestination_matches: number of matches (this transaction)
# * badDestination_evals: number of evaluations (this transaction)
acl badDestination dst --on-match=annotate 127.0.0.0/24 10.0.0.1

If the same ACL matches more than once, the last(?) match wins, but the 
aclfoo_matches annotation can be used to detect these cases. The 
aclfoo_evals annotation can be used to detect whether this ACL was 
"reached" at all.


If really needed, we can support turning individual annotations on and 
off, but I doubt that complexity is worth associated performance 
savings. After all, most ACLs will only match once per transaction 
lifetime (for correctly written configurations). Access.log will be 
configured to only log annotations of interest to the admin, of course.
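
If this annotate feature materializes, the logging side could look like 
the sketch below (hypothetical: the --on-match=annotate option proposed 
above does not exist yet; %note itself is an existing logformat code, 
and the format name and log path are made up):

    # hypothetical example; requires the --on-match=annotate feature proposed above
    logformat aclDebug %ts.%03tu %>a %ru badDst=%note{badDestination_match}
    access_log stdio:/var/log/squid/acl-debug.log aclDebug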



The above approach can be extended to provide ACL debugging aid:

# Dumps every mismatch information to cache.log at level 1
acl goodDestination dst --on-mismatch=log 127.0.0.0/24

# Dumps every evaluation information to cache.log at level 1
acl redDestination dst --on-eval=log 127.0.0.0/24


3. Add a Squid feature where Squid (optionally) maintains an internal 
database of recent ACL evaluation history and makes that information 
accessible via cache manager queries like "which ACLs matched 
transaction X?" (where X is logged %master_xaction ID).



The three sketched options are not mutually exclusive, of course. All 
require non-trivial code changes.


Would any of the above options address your needs? Any preferences or 
spec adjustments?




Thank you,

Alex.



Is there any way to get the ip logged that was used in the dst-acl aside
from debug logging? Maybe through some annotation mechanism?

Squid version is 6.2, as 6.4 crashes with assertion errors here, too.

thanks,
Martin


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Get IP of denied request

2023-10-26 Thread Alex Rousskov

On 2023-10-26 08:37, ma...@web.de wrote:


TL;DR: is there a way to get/log the resolved ip of a denied request?


TLDR: Bugs notwithstanding, use %<a

We have a rather large ip based malware blacklist (dst acl) and
sometimes a destination is blocked inadvertantly because of a false
positive entry in this list.
This happens most often with CDNs where the ips of a destination change
often and even move between different sites.

Because of this rapid change it's difficult to determine the blocked ip
in hindsight when analyzing access problems and makes it impossible to
correct the blacklist.

For normal requests the resolved and accessed ip is logged with %<a.


If a request was denied by a dst ACL based on its successfully resolved 
destination IP address but %<a did not log that address, then that 
should be fixed IMO. Meanwhile, you can annotate every dst match 
and log that annotation. Here is an untested sketch:


acl matchDst1 dst 127.0.0.1
acl markDst1 note matched=127.0.0.1
acl dst1 all-of matchDst1 markDst1
http_access deny dst1

acl matchDst2 dst 127.0.0.2
acl markDst2 note matched=127.0.0.2
acl dst2 all-of matchDst2 markDst2
http_access deny dst2

logformat myFormat ... matched_dst=%note{matched}
access_log ...


The same thing with fewer lines (but with fewer ways to group dst1 and 
dst2 with other ACLs):


acl matchDst1 dst 127.0.0.1
acl markDst1 note matched=127.0.0.1
http_access deny matchDst1 markDst1

acl matchDst2 dst 127.0.0.2
acl markDst2 note matched=127.0.0.2
http_access deny matchDst2 markDst2

logformat myFormat ... matched_dst=%note{matched}
access_log ...

For long dst lists, the above approach will require scripting the 
generation of the corresponding squid.conf portions or include files, of 
course.
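
One way to script that generation is sketched below as a tiny standalone 
helper (a hypothetical tool, not part of Squid; it reads one IP or subnet 
per line from a blacklist file and emits the matchDst/markDst pairs shown 
above). The generated file can then be pulled in with squid.conf's 
include directive:

    // blacklist2conf.cc -- hypothetical helper; build with:
    //   c++ -o blacklist2conf blacklist2conf.cc
    #include <fstream>
    #include <iostream>
    #include <string>

    int main(int argc, char **argv) {
        std::ifstream in(argc > 1 ? argv[1] : "blacklist.txt");
        std::string entry; // one IP or subnet per line
        for (int n = 1; std::getline(in, entry); ++n) {
            if (entry.empty())
                continue; // skip blank lines
            std::cout << "acl matchDst" << n << " dst " << entry << "\n"
                      << "acl markDst" << n << " note matched=" << entry << "\n"
                      << "http_access deny matchDst" << n << " markDst" << n << "\n\n";
        }
    }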



If a request was denied by a dst ACL because its destination IP address 
could not be resolved, then %<a will log a dash, and I do not know of a 
way to distinguish this case from other cases where %<a logs a dash. It 
feels like address resolution failures should be available via 
%err_detail, but I doubt Squid code populates that information in these 
cases. Another problem to fix!



HTH,

Alex.




Is there any way to get the ip logged that was used in the dst-acl aside
from debug logging? Maybe through some annotation mechanism?

Squid version is 6.2, as 6.4 crashes with assertion errors here, too.

thanks,
Martin

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] very poor performance of rock cache ipc

2023-10-16 Thread Alex Rousskov

On 2023-10-16 16:24, Julian Taylor wrote:

On 15.10.23 05:42, Alex Rousskov wrote:

On 2023-10-14 12:04, Julian Taylor wrote:

On 14.10.23 17:40, Alex Rousskov wrote:

On 2023-10-13 16:01, Julian Taylor wrote:



The reproducer uses a single request; the very same thing can be 
observed on a very busy squid


If a busy Squid sends lots of IPC messages between worker and disker, 
then either there is a Squid bug we do not know about OR that disker 
is just not as busy as one might expect it to be.


In Squid v6+, you can observe disker queues using mgr:store_queues 
cache manager report. In your environment, do those queues always have 
lots of requests when Squid is busy? Feel free to share (a pointer to) 
a representative sample of those reports from your busy Squid.
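
For example, assuming the cache manager is reachable on the default 
proxy port, something like

    squidclient -h 127.0.0.1 -p 3128 mgr:store_queues

should fetch that report.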


N.B. Besides worker-disker IPC messages, there are also worker-worker 
cache synchronization IPC messages. They also have the same "do not 
send IPC messages if the queue has some pending items already" 
optimization.


I checked the queues running with the configuration from my initial mail 
with the worker count increased, and the queues are generally low, around 
1-10 items in the queue when sending around 100 parallel requests reading 
about 100mb data files. Here is a sample: https://dpaste.com/8SLNRW5F8
Also, with this higher request rate than the single curl, throughput was 
again more than doubled by increasing the blocksize.


What are the queues supposed to look like on a busy squid that is not 
spending a large portion of its time doing notify IPC?


The queues are supposed to look "not empty" -- a non-empty queue does 
not result in IPC notifications. Needless to say, the further away from 
"empty" the queues are, the lesser the chance they will become empty 
when cache manager report is _not_ "looking" at them.



Increasing the parallel requests does decrease the amount of overhead, 
but it's still pretty large: I measured about 10%-30% CPU overhead with 
100 parallel requests served from cache in the worker and disker.

Here is a snippet of a profile:
--22.34%--JobDialer::dial(AsyncCall&)
    |
    |--21.19%--Ipc::UdsSender::start()
    |   |
    |    --21.13%--Ipc::UdsSender::write()
    |   |
    |   |--16.12%--Ipc::UdsOp::conn()
    |   |  |
    |   |   --15.84%--comm_open_uds(int, int, 
sockaddr_un*, int)

    |   |    |--1.70%--commSetCloseOnExec(int)
    |   | --1.56%--commSetNonBlocking(int)
   ...
--12.98%--comm_close_complete(int)

Clearing and constructing the large Ipc::TypedMsgHdr is also very 
noticeable.


That the overhead is so high and the maximum throughput so low for not so 
busy squids (say 1-10 requests per second but requests on average > 1MiB) 
is imo also a reason for concern and could be improved.


I agree.


If I understand the way it works correctly: the worker, when it gets a 
request, splits it into 4k blocks and enqueues read requests into the 
IPC queue, and if the queue is empty it emits a notify IPC so the disker 
starts popping from the queue.


Yes, at some level of abstraction, the above summary is not wrong. 
However, please keep in mind that, for a single HTTP transaction, most 
of the disk read requests are queued by the worker, read by the disker, 
and received by the worker one read request at a time. There is no disk read 
"prefetching" (yet?).



On large requests that are answered immediately from the disker the 
problem seems to be that the queue is mostly empty and it sends an ipc 
ping pong for each 4k block.


Due to lack of prefetching, the total size of the HTTP response does not 
really affect the queue length. Only the transaction concurrency level 
does; on average, that is determined by mean response time multiplied by 
the I/O request rate from a particular worker to a particular disker.



So my thought was: when the request is larger than 4k, enqueue multiple 
pending reads in the worker and only notify after a certain amount has 
been added to the queue, and vice versa in the disker.


So I messed around a bit trying to reduce the notifications by delaying 
the Notify call in src/DiskIO/IpcIo/IpcIoFile.cc for larger requests, but 
it ended up blocking after the first queue push with no notify. If I 
understand the queue correctly, this is because the reader requires a 
notify to initially start, and simply pushing multiple read requests 
onto the queue without notifying will not work


You are correct.



Is this approach feasible or am I misunderstanding how it works?


Prefetching is feasible in principle, but is not easy to implement well 
and will probably require configuration options (because it will slow 
down busy Squids that do not have the time to prefetch but may not know 
that).


I would consider increasing I/O size (and shared memory page size) 
instead, at least as the first step. Doing so well is not trivial 
either, but may be

Re: [squid-users] very poor performance of rock cache ipc

2023-10-14 Thread Alex Rousskov

On 2023-10-14 12:04, Julian Taylor wrote:

On 14.10.23 17:40, Alex Rousskov wrote:

On 2023-10-13 16:01, Julian Taylor wrote:

When using squid for caching using the rock cache_dir setting the 
performance is pretty poor with multiple workers.
The reason for this is due to the very high number of systemcalls 
involved in the IPC between the disker and workers.


Please allow me to rephrase your conclusion to better match (expected) 
reality and avoid misunderstanding:


By design, a mostly idle SMP Squid should use a lot more system calls 
per disk cache hit than a busy SMP Squid would:


* Mostly idle Squid: Every disk I/O may require a few IPC messages.
* Busy Squid: Bugs notwithstanding, disk I/Os require no IPC messages.


In your single-request test, you are observing the expected effects 
described in the first bullet. That does not imply those effects are 
"good" or "desirable" in your use case, of course. It only means that 
SMP Squid was not optimized for that use case; SMP rock design was 
explicitly targeting the opposite use case (i.e. a busy Squid).


The reproducer uses a single request; the very same thing can be 
observed on a very busy squid


If a busy Squid sends lots of IPC messages between worker and disker, 
then either there is a Squid bug we do not know about OR that disker is 
just not as busy as one might expect it to be.


In Squid v6+, you can observe disker queues using mgr:store_queues cache 
manager report. In your environment, do those queues always have lots of 
requests when Squid is busy? Feel free to share (a pointer to) a 
representative sample of those reports from your busy Squid.


N.B. Besides worker-disker IPC messages, there are also worker-worker 
cache synchronization IPC messages. They also have the same "do not send 
IPC messages if the queue has some pending items already" optimization.



and the workaround improves both the single-request case and the actual 
heavily loaded production squid in the same way.


FWIW, I do not think that observation contradicts anything I have said.


The hardware involved has a 10G card, no SSDs but lots of RAM, so it has 
a very high page cache hit rate, and the squid is very busy, so much so 
that it is overloaded by system CPU usage in the default configuration 
with the rock cache. The network or disk bandwidth is barely ever 
utilized more than 10%, with all 8 CPUs busy on system load.


The above facts suggest that the disk is just not used much OR there is 
a bug somewhere. Slower (for any reason, including CPU overload) IPC 
messages should lead to longer queues and the disappearance of "your 
queue is no longer empty!" IPC messages.



The only way to get the squid to utilize the machine is to increase the 
IO size via the request buffer change or not use the rock cache. UFS 
cache works ok in comparison, but requires multiple independent squid 
instances as it does not support SMP.


Increasing the IO size to 32KiB as I mentioned does allow the squid 
workers to utilize a good 60% of the hardware network and disk 
capabilities.


Please note that I am not disputing this observation. Unfortunately, it 
does not help me guess where the actual/core problem or bottleneck is. 
Hopefully, cache manager mgr:store_queues report will shed some light.



Roughly speaking, here, "busy" means "there are always some messages 
in the disk I/O queue [maintained by Squid in shared memory]".


You may wonder how it is possible that an increase in I/O work results 
in decrease (and, hopefully, elimination) of related IPC messages. 
Roughly speaking, a worker must send an IPC "you have a new I/O 
request" message only when its worker->disker queue is empty. If the 
queue is not empty, then there is no reason to send an IPC message to 
wake up disker because disker will see the new message when dequeuing 
the previous one. Same for the opposite direction: disker->worker...


This is probably true if you have slow disks and are actually IO bound, 
but with fast disks or a high page cache hit rate you essentially see 
this IPC ping-pong and very little actual work being done.


AFAICT, "too slow" IPC messages should result in non-empty queues and, 
hence, no IPC messages at all. For this logic to work, it does not 
matter whether the system is I/O bound or not, whether disks are "slow" 
or not.




 > Is it necessary to have these read chunks so small

It is not. Disk I/O size should be at least the system I/O page size, 
but it can be larger. The optimal I/O size is probably very dependent 
on traffic patterns. IIRC, Squid I/O size is at most one Squid page 
(SM_PAGE_SIZE or 4KB).


FWIW, I suspect there are significant inefficiencies in disk I/O 
related request alignment: The code does not attempt to read from and 
write to disk page boundaries, probably resulting in multiple 
low-level disk I/Os per one Squid 4KB I/O in some (many?) cases. With 
modern non-rotational storage these effects are probably less pronounced, 
but they probably still exist.

Re: [squid-users] very poor performance of rock cache ipc

2023-10-14 Thread Alex Rousskov

On 2023-10-13 16:01, Julian Taylor wrote:

When using squid for caching using the rock cache_dir setting the 
performance is pretty poor with multiple workers.
The reason for this is due to the very high number of systemcalls 
involved in the IPC between the disker and workers.


Please allow me to rephrase your conclusion to better match (expected) 
reality and avoid misunderstanding:


By design, a mostly idle SMP Squid should use a lot more system calls 
per disk cache hit than a busy SMP Squid would:


* Mostly idle Squid: Every disk I/O may require a few IPC messages.
* Busy Squid: Bugs notwithstanding, disk I/Os require no IPC messages.


In your single-request test, you are observing the expected effects 
described in the first bullet. That does not imply those effects are 
"good" or "desirable" in your use case, of course. It only means that 
SMP Squid was not optimized for that use case; SMP rock design was 
explicitly targeting the opposite use case (i.e. a busy Squid).


Roughly speaking, here, "busy" means "there are always some messages in 
the disk I/O queue [maintained by Squid in shared memory]".



You may wonder how it is possible that an increase in I/O work results 
in decrease (and, hopefully, elimination) of related IPC messages. 
Roughly speaking, a worker must send an IPC "you have a new I/O request" 
message only when its worker->disker queue is empty. If the queue is not 
empty, then there is no reason to send an IPC message to wake up disker 
because disker will see the new message when dequeuing the previous one. 
Same for the opposite direction: disker->worker...
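
The wakeup rule above reads naturally in code. A minimal self-contained 
sketch of the pattern (illustration only; the real implementation lives 
in Squid's src/ipc/Queue.{cc,h} and operates on shared memory queues, 
not std::queue):

    #include <cstdio>
    #include <queue>

    static std::queue<int> ioQueue; // stands in for a shared worker->disker queue

    // Returns true when the pusher must send an IPC wakeup: only the push
    // that turns an empty queue into a non-empty one needs to notify the
    // other side; the consumer discovers later items while draining the
    // earlier ones.
    static bool pushRequest(int ioRequest) {
        const bool wasEmpty = ioQueue.empty();
        ioQueue.push(ioRequest);
        return wasEmpty;
    }

    int main() {
        for (int i = 0; i < 3; ++i)
            if (pushRequest(i))
                std::puts("send IPC notification"); // printed once, for i == 0
    }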



> Is it necessary to have these read chunks so small

It is not. Disk I/O size should be at least the system I/O page size, 
but it can be larger. The optimal I/O size is probably very dependent on 
traffic patterns. IIRC, Squid I/O size is at most one Squid page 
(SM_PAGE_SIZE or 4KB).
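
For scale, a back-of-the-envelope example using the numbers from this 
thread: a 100 MiB cache hit served in 4 KiB reads takes 
100 MiB / 4 KiB = 25,600 read requests, each of which may cost an IPC 
round trip on a mostly idle Squid; at the 32 KiB I/O size mentioned 
earlier in the thread, the same hit needs only 3,200.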


FWIW, I suspect there are significant inefficiencies in disk I/O related 
request alignment: The code does not attempt to read from and write to 
disk page boundaries, probably resulting in multiple low-level disk I/Os 
per one Squid 4KB I/O in some (many?) cases. With modern non-rotational 
storage these effects are probably less pronounced, but they probably 
still exist.


BTW, please note that, IIRC, workers and diskers do not send HTTP bytes 
using IPC messages. Those IPC messages only carry small metainformation 
about I/O. HTTP bytes are stored in shared memory pages. I do not recall 
why the corresponding disk I/O IPC messages are so big, but it is 
probably just a code simplification (because larger IPC messages are 
needed for cache manager queries).



HTH,

Alex.


You can reproduce this very easily with a simple setup with the following 
configuration in the current git HEAD and older versions:


maximum_object_size 8 GB
cache_dir rock /cachedir/cache 1024
cache_peer some.host parent 80 3130 default no-query no-digest
http_port 3128

Now download a larger file from some.host through the cache so it gets 
cached, and repeat.


curl --proxy localhost:3128  http://some.host/file >  /dev/null

The download of the cached file from the local machine will be performed 
at a very low rate: on my not ancient machine, 35mb/s with everything 
being cached in memory.


If you check what is happening in the disker, you see that it reads a 
4112 byte IPC message from the worker, performs a read of 4KiB size, 
then opens a new socket to notify the worker, does 4 fcntl calls on the 
socket, then sends a 4112 byte (2 x86 pages) IPC message and closes the 
socket; this repeats for every 4KiB read, and you have the same thing 
on the receiving worker side.


Here is an strace of one chunk of the request in the disker:

21:49:28 epoll_wait(7, [{events=EPOLLIN, data={u32=26, u64=26}}], 65536, 
827) = 1 <0.13>
21:49:28 recvmsg(26, {msg_name=0x557d7c4f06b8, msg_namelen=110 => 0, 
msg_iov=[{iov_base="\7\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=4112}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_DONTWAIT) = 4112 <0.27>
21:49:28 pread64(19, 
"\266E\337\37\374\201b\215\240\310`\216\366\242\350\210\215\22\377zu\302\244Tb\317\255K\10\"p\327"..., 4096, 10747944) = 4096 <0.15>

21:49:28 socket(AF_UNIX, SOCK_DGRAM, 0) = 11 <0.21>
21:49:28 fcntl(11, F_GETFD) = 0 <0.11>
21:49:28 fcntl(11, F_SETFD, FD_CLOEXEC) = 0 <0.11>
21:49:28 fcntl(11, F_GETFL) = 0x2 (flags O_RDWR) <0.11>
21:49:28 fcntl(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0 <0.12>
21:49:28 epoll_ctl(7, EPOLL_CTL_ADD, 11, 
{events=EPOLLOUT|EPOLLERR|EPOLLHUP, data={u32=11, u64=11}}) = 0 <0.23>
21:49:28 epoll_wait(7, [{events=EPOLLOUT, data={u32=11, u64=11}}], 
65536, 826) = 1 <0.15>
21:49:28 sendmsg(11, {msg_name={sa_family=AF_UNIX, 
sun_path="/tmp/local/var/run/squid/squid-kid-2.ipc"}, msg_namelen=42, 
msg_iov=[{iov_base="\7\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
