Re: [squid-users] log_referrer question

2024-05-22 Thread Amos Jeffries

On 22/05/24 07:51, Alex Rousskov wrote:

On 2024-05-21 13:50, Bobby Matznick wrote:
I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl "%rm %ru HTTP/%rv" %>Hs %"%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


Please do not redefine built-in logformat configurations like "squid" 
and "combined". Name and define your own instead.




For built-in formats do not use logformat directive at all. Just 
configure the log output:


 access_log daemon:/var/log/squid/access.log combined


As Alex said, please do not try to re-define the built-in formats. If 
you must define *a* format with the same/similar details, use a custom 
name for yours.
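For example, a custom format under its own name (the field list below mirrors the built-in "combined" definition as I recall it; double-check against your Squid's documentation):

```
logformat mycombined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log daemon:/var/log/squid/access.log mycombined
```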




So, I checked with "squid -v" and do not see "--enable-referrer_log" as one 
of the configure options used during install. Would I need to 
reinstall, or is that no longer necessary in version 4.13?


referer_log and the corresponding ./configure options were removed a 
long time ago, probably before v4.13 was released.




Since Squid v3.2 that log has been a built-in logformat. Just configure 
a log like this:


 access_log daemon:/var/log/squid/access.log referrer


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-16 Thread Amos Jeffries

On 17/05/24 02:23, Bolinhas André wrote:

Hi Alex
As I explained, by default I set those directives to off to avoid high 
CPU consumption.



Ah, actually with NTLM auth you are using *more* CPU per transaction 
with those turned off.


The thing is that auth takes a relatively long time to happen, so those 
transactions are slower. That hides the fact that, in total, they use 
more CPU and TCP networking resources.




My doubt is whether enabling persistent connections will help Squid 
process requests more efficiently and gain more performance.




With persistent connections disabled, every client request must:

 1) wait for a TCP socket to become free for use
 2) perform a full SYN / SYN+ACK exchange to open it for use
 3) perform a NTLM challenge-response over HTTP
 4) wait for a second TCP socket to become free for use
 5) perform a full SYN / SYN+ACK exchange to open it for use
 6) perform the actual HTTP NTLM authenticated transaction.

Then
 7) locate a server that can be used
 8) wait for a TCP socket to become free for use
 9) perform a full SYN / SYN+ACK exchange to open it for use
 10) send the request on to the found server


That is a LOT of time, CPU, and networking.


With persistent connections enabled, only the first request looks like 
above. The second, third etc look like below:



 11) perform the HTTP NTLM authenticated transaction.

Then
 12) locate a server that can be used
 13) send the request on to the found server


 14) perform the HTTP NTLM authenticated transaction.

Then
 15) locate a server that can be used
 16) send the request on to the found server


That is MUCH better for performance.
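In squid.conf terms, that means leaving the persistence directives at their defaults, or setting them back on explicitly:

```
client_persistent_connections on
server_persistent_connections on
```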


HTH
Amos


Re: [squid-users] deny_info URL not working

2024-05-12 Thread Amos Jeffries

On 12/05/24 17:48, Dieter Bloms wrote:

Hello,

On Sat, May 11, Vilmondes Queiroz wrote:


deny_info http://example.com !authorized_ips


Does it work if you add the HTTP status code, like:

deny_info 307:http://example.com !authorized_ips



Also, the "!" is not valid here. The ACL named on a deny_info line is 
the one whose custom response is used when it triggers a "deny" action 
in, for example, "http_access deny".



  acl authorized_ips src ...
  deny_info 307:http://example.com authorized_ips
  http_access deny !authorized_ips


HTH
Amos


Re: [squid-users] Dynamic ACL with local auth

2024-05-08 Thread Amos Jeffries

On 8/05/24 19:55, Albert Shih wrote:

On 06/05/2024 at 12:21:10+0300, ngtech1ltda wrote:
Hi,



The right way to do it is to use an external acl helper that will use some kind 
of database for the settings.


Ok. I will check that.


The other option is to use a reloadable ACLs file.


But does this reload need a restart of the service?
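For reference, a file-based ACL looks something like this (the path is an example); the quoted file is re-read on "squid -k reconfigure", so no full service restart is needed:

```
acl allowed_users src "/etc/squid/allowed_ips.txt"
http_access allow allowed_users
```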


But you need to clarify the exact goal if you want more than basic advice.


Well... pretty simple task.


Ah, this is about equivalent to "just create life" level of simplicity.


I expect that what you need is doable, but not in the way you are 
describing so far.



(PS. If you can mention how much experience you have working with 
Squid configuration, it will help us know how much detail we can skip 
over when offering options.)





I need to build a squid server to allow/deny
people access to some data (website) because those website don't support
authentication.



So Squid needs to authenticate. Is that every request or on a 
per-resource (URL) basis?


 A) needs only simple auth setup
or
 B) needs auth setup, with ACL(s) defining when to authenticate
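A minimal sketch of option B, assuming Basic auth with the NCSA helper; the "protected_sites" domain is an example value, and "localnet" is the stock ACL from the default squid.conf:

```
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl authed proxy_auth REQUIRED
acl protected_sites dstdomain .example.com
# challenge only for the protected sites
http_access deny protected_sites !authed
http_access allow localnet
http_access deny all
```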



But the allow/deny access rules are managed elsewhere, through 
another application.



What criteria/details is this other application checking?

Can any of its decision logic be codified as a sequence of Squid ACL 
types checked in some specific order?


How are you expecting Squid to communicate with it?



So the goal is to have some «thing» that will retrieve the user's 
«permissions» and apply the ACLs on Squid.



Please explain/clarify what **exactly** a "permission" is in your design?


Cheers
Amos


Re: [squid-users] Linux Noob - Squid Config

2024-05-07 Thread Amos Jeffries

On 7/05/24 07:59, Piana, Josh wrote:

Amos,

You raise a good point about Kerberos! I was not aware that Squid supported 
this method. Yes - I think we would preferably use this method, especially 
because this looks like it's much easier to setup and still checks all the 
boxes we need for security purposes.

With that being said, without using NTLM, can we bypass using Samba? We would 
rather not rely on that resource if possible.



I'm not sure how much of Samba needs to be set up to use the NTLM 
helper. It has been a while since I used it.




In regards to your responses to all of the lines of code, I'll be going through 
that separately and will get back to you if I have any more questions. 
After installing Squid, moving over and updating the old config, and adjusting 
the parameters you mentioned below, what else is there to do to finish setting 
up this server? I'm not entirely sure if Apache is needed anymore either. It 
would simplify and modernize our processes a great deal if this can be removed 
as well.



There is no sign in the squid.conf of what Apache was being used for.
So that, and any other services the old machine had running, will still 
need your attention, but they are not related to Squid.



Cheers
Amos



- Josh

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Monday, May 6, 2024 12:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Linux Noob - Squid Config

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


[ please keep responses on-list to assist any others who encounter the same 
issues in future ]

On 4/05/24 08:51, Piana, Josh wrote:

Hey Amos,

Thank you so much for getting back to me so quickly!

To answer your question about NTLM, I meant to say NTLMv2. We're trying to 
become compliant with newer security standards, and this old box is in desperate 
need of some love and updating.




Hmm. My question was aiming more at a yes/no answer.

Squid can certainly still support NTLM. But if possible, moving to just 
Negotiate/Kerberos auth would be a simpler config.

The /usr/bin/ntlm_auth authenticator you have been using is provided by Samba. 
So you will need Samba installed (yum install samba) and configured the same as 
before (or the equivalent for its upgraded version) before Squid authentication is usable.

FYI: modern Squid starts helpers only as needed. Meaning Squid will start up and 
run fine without a working auth helper ... until the point where a helper 
lookup is needed. So you can test Squid with some trivial requests before 
needing Samba fully working.



--
Current squid.conf file Output:

max_filedesc 4096



I advise changing this to at least:

max_filedescriptors 65536

Why? Modern web pages can cause clients to open up to a hundred connections to 
various servers to display a single web page. Each of those client connections 
consumes 3-4 file descriptors.

You will also need to check the OS limits (e.g. "ulimit -n") to ensure they allow that many descriptors.
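A quick way to check is sketched below; the systemd drop-in path is the usual RHEL location and is an assumption for your host:

```shell
# Show the current per-process open-file limit for this shell
ulimit -n
# For a systemd-managed Squid, raise the limit with a drop-in, e.g.
# /etc/systemd/system/squid.service.d/override.conf containing:
#   [Service]
#   LimitNOFILE=65536
# then: systemctl daemon-reload && systemctl restart squid
```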



cache_mgr itadmin@...
cache_effective_user squid
cache_effective_group squid
coredump_dir /opt/squid/var
pid_filename /var/run/squid.pid
shutdown_lifetime 5 seconds
error_directory /usr/local/share/squid/errors/English_CUSTOM



Check what customizations have been done to the files inside that directory.

If it is just the new templates for the deny_info lines later in your config, 
then you can copy those templates to the new machine.

I suggest placing the custom error templates in a directory such as 
/etc/squid/errors/ and creating symlinks to them from the 
/usr/local/share/squid/errors/templates/ directory (or wherever the templates 
are put by yum install).
   [ This way upgrades that change the default templates will not erase yours. 
At worst you should only have to re-create the symlinks manually. ]

(If you need it: to learn how to create symlinks, type "man ln".)
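A demonstration in a scratch directory; on a real host the custom templates would live in /etc/squid/errors/ and the symlinks in the distro's errors/templates/ directory (those paths are assumptions):

```shell
# Create a scratch area standing in for the real directories
demo=$(mktemp -d)
mkdir -p "$demo/custom" "$demo/templates"
printf 'custom error page\n' > "$demo/custom/ERR_ACCESS_DENIED"
# Symlink the custom template into the "templates" directory
ln -s "$demo/custom/ERR_ACCESS_DENIED" "$demo/templates/ERR_ACCESS_DENIED"
# Reading through the symlink shows the custom content
cat "$demo/templates/ERR_ACCESS_DENIED"
rm -r "$demo"
```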



logfile_rotate 0
debug_options ALL,1


You can remove the debug_options line. "ALL,1" is the default setting.



buffered_logs on
cache_log /var/log/squid/general
cache_access_log /var/log/squid/access



The two log lines should be more like:

cache_log /var/log/squid/cache.log
access_log daemon:/var/log/squid/access.log



cache_store_log none
log_mime_hdrs off


The above two lines can be removed. They are default settings.



log_fqdn off


Remove this line. It is not supported in modern Squid.



strip_query_terms off
http_port 10.46.11.20:8080
http_port 127.0.0.1:3128
icp_port 0


The above line can be removed. It is a default setting.



forwarded_for off


Change that "off" to:
   * "delete" for complete removal of the header, or
   * "transparent" for Squid to not add the header.



ftp_user anonftpuser@...
ftp_list_width 32
ftp_passive on
connect_timeout 30 seconds
peer_con

Re: [squid-users] Linux Noob - Squid Config

2024-05-06 Thread Amos Jeffries
ORTIFY_SOURCE=2 -fPIE -Os -g -pipe -fsigned-char' 'LDFLAGS=-pie'

--

New Box squid -v Output:

Squid Cache: Version 5.5
Service Name: squid

This binary uses OpenSSL 3.0.7 1 Nov 2022. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--libexecdir=/usr/lib64/squid' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,PAM,POP3,RADIUS,SASL,SMB,SMB_LM'
 '--enable-auth-ntlm=SMB_LM,fake' '--enable-auth-digest=file,LDAP' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group'
 '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests' 
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' 
'--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' 
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-diskio' 
'--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' 
'--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads' 
'--disable-arch-native' '--disable-security-cert-validators' 
'--disable-strict-error-checking' '--with-swapdir=/var/spool/squid' 
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 
'CC=gcc' 'CFLAGS=-O2 -flto=auto -ffat-lto-objects -fexceptions -g 
-grecord-gcc-switches -pipe -Wall -Werror=format-security 
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -march=x86-64-v2 
-mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection 
-fcf-protection' 'LDFLAGS=-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' 'CXX=g++' 'CXXFLAGS=-O2 
-flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -march=x86-64-v2 
-mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection 
-fcf-protection' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
'LT_SYS_LIBRARY_PATH=/usr/lib64:'

--

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Friday, May 3, 2024 4:21 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Linux Noob - Squid Config




On 4/05/24 07:59, Piana, Josh wrote:

Hey Everyone.

I apologize in advance for any lack of formality normally shared on
mailing lists such as these, it’s my first time seeking product
support in this manner.



No need to apologize. Help and questions are most of what we do here :-)



I want to start by saying that I’m new to Linux, been using Windows
environments my entire life. Such is the reason for me reaching out to
you all.

I have been tasked with modernizing a Squid box and feel very
overwhelmed, to say the least.

Current Setup:

 - CentOS 5.0
 - Squid 2.3
 - Apache 2.0.46
 - Samba 3.0.9

Desired Setup:

 - RHEL 9.2 OS
 - Needs to qualify for NTLM authentication



Hmm, does it *have* to be NTLM? That auth protocol was deprecated in 2006.



 - Would like to remove legacy apps/services
 - Continue to authenticate outgoing communication via AD

My question is, how do I get all of these services/apps to work
together? Do I just install the newest versions of each and migrate
the existing config files?

I was hoping for a better understanding on how all of these work
together and exactly how to configure or edit these as needed. I’ve
gotten as far as installing RHEL 9.2 on a fresh VM Server and trying
as best as I can to learn the basics on Linux and just the general
operation of a Linux ran environment. It feels like trying to ride a
bike with box w

Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-05 Thread Amos Jeffries

On 4/05/24 11:17, Emre Oksum wrote:

 >In this case, all your tcp_outgoing_addr lines being tested. Most of
 >them will not match.
Sorry, I'm not really a Squid guy; I was working on it for a job I 
took, but I cannot figure this out. What do you mean most of them do not 
match? Does it mean Squid checks every ACL defined in the config, one by 
one, to find the correct IPv6 address?


Yes, exactly so.

Each tcp_outgoing_address line of squid.conf is checked top-to-bottom, 
with the ACLs on each line tested left-to-right against the Squid local 
IP the client connected to.

 Most will not match (as seen in the trace snippet you showed).
 One should match, at which point Squid uses the IP address on that 
tcp_outgoing_address line.
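For example, with lines like these only the first line whose ACL matches is used (addresses are placeholders):

```
acl from_addr1 myip fe80:abcd::1
acl from_addr2 myip fe80:abcd::2
tcp_outgoing_address fe80:abcd::1 from_addr1
tcp_outgoing_address fe80:abcd::2 from_addr2
```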



As mentioned earlier, this is all on *outgoing* Squid-to-server 
connections. tcp_outgoing_* directives have no effect on the client 
connection.



If that's the case, I still don't understand why Squid randomly sends a 
connection reset to the client.


That is what we are trying to figure out, yes.

I asked for the cache.log trace so I could look through and see when one 
of the problematic connections was identified by Squid as closed, and 
whether that was caused by something else Squid was doing - or whether 
the signal came to Squid from the OS.
 Which would tell us whether Squid had sent it, or if the OS had sent 
it to both Squid and client.


I/we will need a full cache.log trace from before a problematic 
connection was opened, to after it fails. At least several seconds 
before and after.


Cheers
Amos


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 09:48, Emre Oksum wrote:

Hi Amos,
 >FTR, "debug_options ALL" alone is invalid syntax and will not change
 >from the default cache.log output

Yes, you were right! I was surely missing on that one. I changed 
debug_options ALL to debug_options ALL 5 and now, I found these warnings 
in cache.log file:




FYI, these are not warnings. They are debug traces saying what is going on.

In this case, all your tcp_outgoing_addr lines being tested. Most of 
them will not match.




Cheers
Amos


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 08:33, Emre Oksum wrote:

Hi Jonathan,

 >> Have you attempted to enable debugging ??
Yes, debugging was enabled but as I have pointed out, unfortunately it 
didn't give any information about the issue.
Maybe I was missing something? I don't know. debug_options was ALL in my 
squid.conf.


Sure, "ALL" sections.

But at what display level:

 0 (critical only)?
 1 (important)?
 2 (protocol trace)?
 3-6 (debugs)?
 9 (raw I/O data traces)?
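The directive takes section,level pairs, for example (section numbers come from Squid's debug-sections list; 28 is given here only as an illustration):

```
# everything at level 2, one specific section at level 6
debug_options ALL,2 28,6
```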


FTR, "debug_options ALL" alone is invalid syntax and will not change 
from the default cache.log output.



Cheers
Amos


Re: [squid-users] Linux Noob - Squid Config

2024-05-03 Thread Amos Jeffries

On 4/05/24 07:59, Piana, Josh wrote:

Hey Everyone.

I apologize in advance for any lack of formality normally shared on 
mailing lists such as these, it’s my first time seeking product support 
in this manner.




No need to apologize. Help and questions are most of what we do here :-)


I want to start by saying that I’m new to Linux, been using Windows 
environments my entire life. Such is the reason for me reaching out to 
you all.


I have been tasked with modernizing a Squid box and feel very 
overwhelmed, to say the least.


Current Setup:

 - CentOS 5.0
 - Squid 2.3
 - Apache 2.0.46
 - Samba 3.0.9

Desired Setup:

 - RHEL 9.2 OS
 - Needs to qualify for NTLM authentication



Hmm, does it *have* to be NTLM? That auth protocol was deprecated in 2006.



 - Would like to remove legacy apps/services
 - Continue to authenticate outgoing communication via AD

My question is, how do I get all of these services/apps to work 
together? Do I just install the newest versions of each and migrate the 
existing config files?


I was hoping for a better understanding on how all of these work 
together and exactly how to configure or edit these as needed. I’ve 
gotten as far as installing RHEL 9.2 on a fresh VM Server and trying as 
best as I can to learn the basics on Linux and just the general 
operation of a Linux ran environment. It feels like trying to ride a 
bike with box wheels.





The installation of a basic Squid service for RHEL is easy.
Just open a terminal and enter this command:

   yum install squid


The next part is going over your old Squid configuration to see how much 
of it remains necessary or can be updated. It would be useful for the 
next steps to copy it to the RHEL machine as /etc/squid/squid.conf.old .


You can likely find it on the CentOS machine at /etc/squid/squid.conf or 
/usr/share/squid/etc/squid.conf depending on how that Squid was built.



If you are able to paste the contents of that file (without the '#' 
comment or empty lines) here, we can assist with getting the new Squid 
doing the same or equivalent actions.



Also please paste the output of "squid -v" run on both the old CentOS 
machine and on the new RHEL.



Cheers
Amos


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 02:29, Emre Oksum wrote:

Hi everyone,

I'm having an issue with Squid Cache 4.10 which I have not been able to 
fix for weeks, and I am kind of lost at the moment. I would appreciate it 
if someone could guide me through the issue I'm having.
I need to create an IPv6 HTTP proxy which should match the entry address 
to the outgoing TCP address. For example, if a user connects from 
fe80:abcd::1 it should exit the HTTP proxy from the same address. We have 
about 50k addresses like this at the moment.


What your "for example,..." describes is Transparent Proxy (TPROXY).


However, what you have in the config below is very different. The IP the 
client is connected **to** (not "from") is being pinned on outgoing 
connections.



The issue is, client connecting to the proxy is receiving "EOF" or 
"FLOW_CONTROL_ERROR" on their side.


The FLOW_CONTROL_ERROR is not something produced by Squid. Likely it 
comes from the TCP stack and/or OS routing system.


The EOF may be coming from either Squid or the OS. It also may be 
perfectly normal for the circumstances, or a side effect of an error 
elsewhere.



To solve will require identifying exactly what is sending those signals, 
and why. Since they are signals going to the client, focus on the 
client->Squid connections (not the Squid->server ones you talk about 
testing below).




When I test the connection by connecting to whatismyip.com, everything 
works fine and the entry IP always matches the outgoing IP for each of 
the 50k addresses. The client tells me this problem occurs on both GET 
and POST requests with around 10 MB of data.


Well, you are trying to manually force certain flow patterns that 
prohibit or break some major HTTP performance features. Some problems 
are to be expected.


The issues which I expect to occur in your proxy would not show up in a 
trivial outgoing-IP or connectivity test.



I initially thought this could be related to server resources being 
drained, but upon inspecting resource usage, Squid never tops 100% CPU 
or RAM, so it is not that.




IMO, "FLOW_CONTROL_ERROR" is likely related to the quantity of traffic 
flooding through the proxy to specific origin servers.


The concept you are implementing of the outgoing TCP connection having 
the same IP as the incoming connection reduces the available TCP sockets 
by 25%. Prohibiting the OS from allocating ports on otherwise unused 
outgoing addresses when





My Squid.conf is like this at the moment:


Some improvements highlighted inline below.
Nothing stands out to me as being related to your issues.



auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny !auth_users


Above two lines are backwards. Deny first, then allow.
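In other words, using the ACL name from your config:

```
http_access deny !auth_users
http_access allow auth_users
```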



cache deny all
dns_nameservers 
dns_v4_first off
via off
forwarded_for delete
follow_x_forwarded_for deny all
server_persistent_connections off


*If* the issue turns out to be congestion on Squid->server connections, 
enabling this might be worthwhile. Otherwise it should be fine.



max_filedesc 1048576


You can remove that line. "max_filedesc" was a RedHat hack from 20+ 
years ago when the feature was experimental.


Any value you set on the line above will be erased and replaced by the 
line below:




max_filedescriptors 1048576
workers 8
http_port [::0]:1182


Above is just a complicated way to write:

 http_port 1182


Any particular reason not to use the registered port 3128 ?
(Not important, just wondering.)



acl binding1 myip fe80:abcd::1
tcp_outgoing_address fe80:abcd::1 binding1
acl binding2 myip fe80:abcd::2
tcp_outgoing_address fe80:abcd::2 binding2
acl binding3 myip fe80:abcd::3
tcp_outgoing_address fe80:abcd::3 binding3
...
...
...
access_log /var/log/squid/access.log squid



cache_store_log none


You can erase this line.
This is default setting. No need to manually set it.



cache deny all


You can erase this line.
This "cache deny all" exists earlier in the config.




I've tried to get a PCAP file and realized that when a client tries to 
connect with a new IPv6 address, Squid does not try to open a new 
connection; instead it tries to resume a previously opened one on a 
different outgoing IPv6 address.


Can you provide the trace demonstrating that issue?

Although, as noted earlier, your problems are apparently on the client 
connections. This is about server connection behaviour.



I set server_persistent_connections off, which should have 
disabled this behavior, but it's still the same.


Nod. Yes that should forbid re-use of connections.

I/we will need to see the PCAP trace along with a cache.log generated 
using "debug_options ALL,6" to confirm a bug or identify other breakage 
though.




I tried using a newer version of Squid, but it behaved differently: it 
did not follow my outgoing address specifications and kept connecting 
over IPv4.


That would seem to indicate that your IPv4 connectivity is better than 

Re: [squid-users] Best way to utilize time constraints with squid?

2024-05-01 Thread Amos Jeffries

Hi Jonathan,

There may be some misunderstanding of what I wrote earlier...

 "time" is just a check of the machine clock. When ACLs are checked it 
is always expected to work.



The problem I was referring to was that ssl_bump and https_access ACLs 
are *not* checked for already active connections. Only for new 
connections as they are setup.


For example; CONNECT tunnel and/or HTTPS connections might start on 
Monday and stay open and used until Friday.



HTH
Amos



On 30/04/24 04:54, Jonathan Lee wrote:

"squid -k parse" also does not fail with use of the time ACL.


On Apr 27, 2024, at 07:49, Jonathan Lee  wrote:

The time constraints for termination do appear to lock out all new connections 
until that timeframe has elapsed. My devices have connection errors during this 
duration.

Just to confirm: ssl_bump cannot be used with time? Because my connections 
don't work during the timeframe, so that is a plus.




On Apr 27, 2024, at 00:41, Amos Jeffries  wrote:

On 26/04/24 17:15, Jonathan Lee wrote:
acl block_hours time 01:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours

Is this a good way to lock down Squid during those times?


That depends on your criteria/definition of "good".

Be aware that http_access only checks *new* transactions. Large downloads, and 
long-running transactions such as CONNECT tunnel which start during an allowed 
time will continue running across the disallowed time(s).



To essentially terminate all connections and block http access.


The "terminate all connections" is not enforced by the 'time' ACL. Once a 
transaction is allowed to start, it can continue until completion, be that 
milliseconds or days later.


HTH
Amos



Re: [squid-users] Container Based Issues Lock Down Password and Terminate SSL

2024-04-27 Thread Amos Jeffries

On 24/04/24 17:27, Jonathan Lee wrote:

Hello fellow Squid users, I wanted to ask a quick question about 
termination: would http_access for the cache still work with this type of 
setup and custom refresh patterns?

I think it would terminate all but the clients, and if they use the cache 
it would be OK.



These things are sequential, but otherwise not directly related.

SSL-Bump is about TLS handshake opening a connection from a client.

The "ssl_bump splice" action allows the client connection to go through 
Squid in the form of a blind tunnel. Caching (and thus refresh of cached 
objects) is not applicable to tunneled traffic.



The "ssl_bump terminate" action closes the client connection 
immediately. It should be obvious that nothing can be done in that 
connection once it is closed. HTTP(S) and/or caching are irrelevant - 
they can never happen on a terminated connection.
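A minimal sketch of how the two actions differ (the ACL names here are placeholders, not from the config below):

```
ssl_bump peek step1              # look at the TLS ClientHello first
ssl_bump splice trusted_clients  # pass through as a blind tunnel; no caching possible
ssl_bump terminate all           # everyone else: connection closed immediately
```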





But I think an invasive container would be blocked, which is my goal here.

acl markBumped annotate_client bumped=true
acl active_use annotate_client active=true
acl bump_only src 192.168.1.3 #webtv
acl bump_only src 192.168.1.4 #toshiba
acl bump_only src 192.168.1.5 #imac
acl bump_only src 192.168.1.9 #macbook
acl bump_only src 192.168.1.13 #dell

acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere

ssl_bump peek step1
miss_access deny no_miss active_use
ssl_bump splice https_login active_use
ssl_bump splice splice_only_mac splice_only active_use
ssl_bump splice NoBumpDNS active_use
ssl_bump splice NoSSLIntercept active_use
ssl_bump bump bump_only_mac bump_only active_use
acl activated note active_use true
ssl_bump terminate !activated





Re: [squid-users] enctype aes256-cts found in keytab but cannot decrypt ticket

2024-04-27 Thread Amos Jeffries

On 24/04/24 17:31, ivc chgaki wrote:
hello. i hve Samba DC and squid. i created user, then SPN, and then 
exported keytab and imported him to squid. im using kerberos negotiate 
helper but when i try go to internet i have popup window with 
login/password and in cace.log log error



2024/04/21 21:41:58 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: gss_accept_sec_context() 
failed: Unspecified GSS failure.  Minor code may provide more 
information. Request ticket server 
HTTP/srv-proxy.mydomain@myadomain.com kvno 2 enctype aes256-cts 
found in keytab but cannot decrypt ticket; }}








HTH
Amos


Re: [squid-users] tls_key_log

2024-04-27 Thread Amos Jeffries

On 25/04/24 19:57, Andrey K wrote:

Hello,

Does squid 6.9 allow you to log TLS 1.3 keys so that you can then 
decrypt traffic using Wireshark?
I found that there was an issue earlier with using tls_key_log to 
decrypt TLS 1.3: 
https://lists.squid-cache.org/pipermail/squid-users/2022-January/024424.html 


I tried using tls_key_log on Squid 6.9 to decrypt TLS 1.3, but 
without success.


You answer your own question here.



Is work on TLS 1.3 logging support still ongoing?



Not specifically. As I understand it, logging is not the issue - Squid 
cannot log something it cannot see. TLS support work has quieted down in 
recent times, but not stopped.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Best way to utilize time constraints with squid?

2024-04-27 Thread Amos Jeffries

On 26/04/24 17:15, Jonathan Lee wrote:


acl block_hours time 01:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours

Is this a good way to lock down Squid by time?



That depends on your criteria/definition of "good".

Be aware that http_access only checks *new* transactions. Large 
downloads, and long-running transactions such as CONNECT tunnels, that 
start during an allowed time will continue running across the disallowed 
time(s).





To essentially terminate all connections and block http access.



The "terminate all connections" part is not enforced by the 'time' ACL. 
Once a transaction is allowed to start, it can continue until completion 
- be that milliseconds or days later.
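A minimal sketch combining the two points above (the ACL name and window 
come from the question; the client_lifetime cap is my assumption - Squid 
has no time-based "terminate" for already-running transactions):

```
# Deny *new* requests during the blocked window
acl block_hours time 01:30-05:00
http_access deny block_hours

# (assumption) additionally cap how long any one client connection may
# live, so tunnels that start before the window cannot run for days
client_lifetime 8 hours
```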



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Container Based Issues Lock Down Password and Terminate SSL

2024-04-23 Thread Amos Jeffries

On 23/04/24 11:52, Jonathan Lee wrote:

Hello fellow Squid Accelerator/Dynamic Cache/Web Cache Users/PfSense users

I think this might resolve any container-based issues/fears if one 
happened to get into the cache, e.g. a Docker proxy got installed and 
tried to data-marshal the network card inside a FreeBSD jail or 
something like that. The biggest fear with my cache is that it is a big 
cache now.

Please let me know what you think or if it is wrong.

Here is my configuration. I wanted to share it as it might help to 
secure some of this.


FTR, this config was auto-generated by pfsense. A number of things which 
that tool forces into the config could be done much better in the latest 
Squid, but the tool does not do due to needing to support older Squid 
version.





Keep in mind I use cachemgr.cgi within SquidLight, so I had to set the 
password, and I also had to adapt the PHP status file to include the 
password, and the SquidLight PHP file as well.


After that, the status and GUI pages still work with the new password. 
The only issue is that it shows up in clear text when it goes over the 
proxy - I can see my password clear as day. That was an issue listed in 
the Squid O'Reilly book as well.



Please ensure you are using the latest Squid v6 release. That release 
has both a number of security fixes, and working https:// URL access to 
the manager reports.


The cachemgr.cgi tool is deprecated for a number of issues, including 
that style of embedding passwords in the URLs.


Francesco and I have created a tool that can be found at 
 for basic 
access to the reports directly from Browser.
That tool uses HTTP authentication configured via the well-documented 
proxy_auth ACLs and http_access for more secure access than the old URL 
based mechanism (which still exists, just deprecated).
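A hedged squid.conf sketch of that style of manager protection (the 
helper path, password file, and realm are assumptions for illustration, 
not from this thread):

```
auth_param basic program /usr/local/libexec/squid/basic_ncsa_auth /usr/local/etc/squid/passwd
auth_param basic realm Squid manager
acl mgr_users proxy_auth REQUIRED

# Require a valid login for manager reports, from localhost only
http_access allow localhost manager mgr_users
http_access deny manager
```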




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Warm cold times

2024-04-23 Thread Amos Jeffries

On 22/04/24 17:42, Jonathan Lee wrote:

Has anyone else taken up the fun challenge of caching Windows updates? 
It is amazing when it works right. It is a complex configuration, but it 
is worth it to see a warm download that originally took 30 minutes come 
down almost instantly to a second client. I didn't realize how much of 
the updates are the same across different vendors' laptops.



There have been several people over the years.
The collected information is being gathered at 



If you would like to check and update the information for the current 
Windows 11 and Squid 6, etc. that would be useful.


Wiki updates are now made using github PRs against the repository at 
.






Amazing stuff Squid team.
I wish I could get some of the Roblox Xbox stuff to cache, but it's a 
nightmare to get running with Squid in the first place; I had to splice 
a bunch of stuff and also WPAD the Xbox system.


FWIW, what I have seen from routing perspective is that Roblox likes to 
use custom ports and P2P connections for a lot of things. So no high 
expectations there, but anything cacheable is great news.





On Apr 18, 2024, at 23:55, Jonathan Lee wrote:

Does anyone know the current warm cold download times for dynamic cache of 
windows updates?

I can say my experience was a massive increase in the warm download it was 
delivered in under a couple mins versus 30 or so to download it cold. The warm 
download was almost instant on the second device. Very green energy efficient.


Does squid 5.8 or 6 work better on warm delivery?


There are no significant differences AFAIK. They both come down to what 
you have configured. That said, the ongoing improvements may make v6 
some amount of "better" - even if only trivially.





Is there a way to make 100 percent sure a docker container can’t get inside the 
cache?


For Windows I would expect the only "100% sure" way is to completely 
forbid access to the disk where the cache is stored.



The rest of your questions are about container management and Windows 
configuration. Which are kind of off-topic.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR

2024-04-11 Thread Amos Jeffries

On 11/04/24 08:22, Jonathan Lee wrote:

Could it be related to this ??

"WARNING: Failed to decode EC parameters '/etc/dh-parameters.2048'. 
error:1E08010C:DECODER routines::unsupported”




That would certainly make Squid unable to use EC (Elliptic Curve) ciphers.


Unfortunately OpenSSL is not verbose enough to explain the actual 
problem in an easily understood way.
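One plausible repair (an assumption - the thread does not confirm the 
fix) is to regenerate the parameter file with a current OpenSSL, so it 
is written in a form the OpenSSL 3.x decoders accept:

```shell
# Regenerate the parameter file referenced by tls-dh= in squid.conf
# (output path here is /tmp for illustration; the thread's file is
# /etc/dh-parameters.2048). -dsaparam greatly speeds up generation;
# drop it if you want classic safe-prime DH parameters.
openssl dhparam -dsaparam -out /tmp/dh-parameters.2048 2048 2>/dev/null

# Confirm the new file decodes cleanly before pointing Squid at it:
openssl dhparam -in /tmp/dh-parameters.2048 -noout -text >/dev/null
```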



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid as a http/https transparent web proxy in 2024.... do I still have to build from source?

2024-04-11 Thread Amos Jeffries

On 11/04/24 21:55, PinPin Poola wrote:
I don't care which Linux distro tbh; but would prefer Ubuntu as I have 
most familiarity with it.




Latest Ubuntu provides the "squid-openssl" package, which contains the 
SSL-Bump and other OpenSSL-exclusive features.


Just install that package as you would the "squid" package. It can also 
be installed as a drop-in upgrade for the "squid" package.


One thing to be aware of in both cases, is that the SELinux security 
system does not allow Squid by default to access the /etc/ssl/* config 
area. So you may need to allow that depending on what your desired 
TLS/SSL settings in squid.conf are.




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-04-06 Thread Amos Jeffries

On 6/04/24 18:48, Jonathan Lee wrote:

Correction: I can't access it from the loopback


From the config in the other "Squid cache questions" thread you are 
only intercepting traffic on the loopback 127.0.0.1:3128 port. You 
cannot access it directly on "localhost".


You do have direct proxy (and thus manager) access via the 
192.168.1.1:3128 so this URL should work:

  http://192.168.1.1:3128/squid-internal-mgr/menu


.. or substitute the raw-IP for the visible_hostname setting **if** that 
hostname actually resolves to that IP.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries


On 6/04/24 11:34, Jonathan Lee wrote:
if (empty($settings['sslproxy_compatibility_mode']) || 
($settings['sslproxy_compatibility_mode'] == 'modern')) {

// Modern cipher suites
$sslproxy_cipher = 
"EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS";

$sslproxy_options .= ",NO_TLSv1";
} else {
$sslproxy_cipher = 
"EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS";

}

Should the RC4  be removed or allowed?

https://github.com/pfsense/FreeBSD-ports/pull/1365 






AFAIK it should be removed. What I was intending to point out was that 
its removal via "!RC4" is likely making the prior "EECDH+aRSA+RC4" 
addition pointless. Sorry if that was not clear.


If you check the TLS handshake and find Squid is working fine without 
advertising "EECDH+aRSA+RC4" it would be a bit simpler/easier to read 
the config by removing that cipher and just relying on the "!RC4".
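One quick way to sanity-check a candidate cipher string before putting 
it in squid.conf (the string below is a trimmed illustration, not the 
full pfSense list):

```shell
# Expand a candidate cipher string exactly as OpenSSL (and therefore
# Squid) will interpret it; a non-zero exit status means the string
# selects no ciphers at all.
openssl ciphers -v 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:!aNULL:!eNULL'
```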



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries

On 5/04/24 17:25, Jonathan Lee wrote:

ssl_bump splice https_login
ssl_bump splice splice_only
ssl_bump splice NoSSLIntercept
ssl_bump bump bump_only markBumped
ssl_bump stare all
acl markedBumped note bumped true
url_rewrite_access deny markedBumped


for good hits, should the url_rewrite_access deny apply to spliced 
rather than bumped connections?


I feel I mixed this up



Depends on what the re-write program is doing.

Ideally no traffic should be re-written by your proxy at all. Every 
change you make to the protocol(s) as they go through adds problems to 
traffic behaviour.


Since you have squidGuard...
 * if it only does ACL checks, that is fine. But ideally those checks 
would be done by http_access rules instead.
 * if it is actually changing URLs, that is where the problems start 
and caching is risky.


If you are re-writing URLs just to improve caching, I recommend using 
Store-ID feature instead for those URLs. It does a better job of 
balancing the caching risk vs ratio gains, even though outwardly it can 
appear to have less HITs.
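A hedged sketch of the Store-ID approach (the helper path, rules file, 
and domain are assumptions for illustration, not from this thread):

```
# squid.conf: run the bundled Store-ID helper over selected domains only
store_id_program /usr/local/libexec/squid/storeid_file_rewrite /usr/local/etc/squid/storeid.rules
store_id_children 5 startup=1
acl storeid_domains dstdomain .example-cdn.net
store_id_access allow storeid_domains
store_id_access deny all

# storeid.rules (tab-separated: regex, then internal cache key) maps
# mirror hostnames onto one shared cache entry:
#   ^http://cdn[0-9]+\.example-cdn\.net/(.*)   http://cdn.example-cdn.net.squid.internal/$1
```

The URL sent to the origin server is unchanged; only the cache key is 
normalized, which is what makes this safer than URL rewriting.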



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid cache questions

2024-04-04 Thread Amos Jeffries

On 4/04/24 17:48, Jonathan Lee wrote:

Is there any particular order to squid configuration??



Yes. 



Does this look correct?



Best way to find out is to run "squid -k parse", which should be done 
after upgrades as well to identify and fix changes between versions as 
we improve the output.



I actually get a lot of hits and it functions amazingly, so I wanted to 
share this in case I could improve something. Are there any issues with 
security?


Yes, the obvious one is "DONT_VERIFY_PEER" disabling TLS security 
entirely on outbound connections. That particular option will prevent 
you even being told about suspicious activity regarding TLS.


Also there are a few weird things in your TLS cipher settings, such as 
the sequence "EECDH+aRSA+RC4:...:!RC4", which as I understand it enables 
EECDH with the RC4 cipher, but also forbids all uses of RC4.



I am concerned that an invasive container could become 
installed in the cache and data marshal the network card.




You have a limit of 4 MB for objects allowed to pass through this proxy, 
exception being objects from domains listed in the "windowsupdate" ACL 
(not all Windows related) which are allowed up to 512 MB.


For the general case, any type of file which can store an image of some 
system - i.e. a risk for that type of vulnerability - can be cached.


The place to fix that vulnerability properly is not the cache or Squid. 
It is the OS permissions allowing non-Squid software access to the cache 
files and/or directory.





Here is my config

# This file is automatically generated by pfSense
# Do not edit manually !


Since this file is generated by pfSense there is little that can be done 
about ordering issues, and it is very hard to tell which of the problems 
below are due to pfSense and which are due to your settings.


FWIW, there are no major issues, just some lines that are unnecessary 
because they set things to their default values, and some blocks that 
deny things already blocked previously.





http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname 
cache_mgr 
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options flags=DONT_VERIFY_PEER
sslcrtd_children 10

logfile_rotate 0
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip

acl getmethod method GET

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain 

Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-03 Thread Amos Jeffries
There is no way to configure around this. The error produced by Squid is 
a hard-coded reaction to TLS level errors in the SSL-Bump process.


Squid needs some significant code redesign to do a better job of 
handling the situation. Which I understand is already underway, but 
still some way off usable.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] BWS after chunk-size

2024-04-03 Thread Amos Jeffries

On 2/04/24 16:03, root wrote:

Hi Team,

after an upgrade from Squid 5.4.1 to Squid 5.9, we are unable to parse 
HTTP chunked responses containing whitespace after the chunk size.

I think the following bug was fixed and this worked fine in Squid 5.9 
and earlier.
https://bugs.squid-cache.org/show_bug.cgi?id=4492 





There was no bug. We caved to user pressure and relaxed the protocol 
validation to tolerate and "fix" known-bad syntax. That change is what 
opened the security issue...



However, after the fix for SQUID-2023:1 in 5.9, it seems that it no 
longer works properly.





Indeed. That particular broken syntax is being intentionally rejected as 
a security attack.



I could be wrong, but can you please advise me whether there is a way or 
a patch to fix this issue.




You need to fix (or stop using) the software which is adding BWS (bad 
whitespace) to the protocol syntax.
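For reference, RFC 9112 chunked framing leaves no room between the 
chunk-size and the CRLF; a sketch of the difference (the marked space is 
the BWS Squid now rejects):

```
valid framing:          rejected (BWS after chunk-size):
5\r\n                   5 \r\n
hello\r\n               hello\r\n
0\r\n                   0\r\n
\r\n                    \r\n
```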



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] GCC optimizer is provably junk. Here is the evidence.

2024-03-24 Thread Amos Jeffries

This inflammatory post is not relevant to Squid.

Please do not followup to this thread.


Cheers
Amos Jeffries
The Squid Software Foundation
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] After upgrade from squid6.6 to 6.8 we have a lot of ICAP_ERR_OTHER and ICAP_ERR_GONE messages in icap logfiles

2024-03-13 Thread Amos Jeffries



On 12/03/24 04:31, Dieter Bloms wrote:

Hello,

after an upgrade from squid6.6 to squid6.8 on a debian bookworm we have a lot
of messages from type:

ICAP_ERR_GONE/000
ICAP_ERR_OTHER/200
ICAP_ERR_OTHER/408
ICAP_ERR_OTHER/204

and some of our users complain about bad performance and some get "empty
pages".
Unfortunately it is not deterministic; the page will appear the next
time it is called up. I can't see anything conspicuous in cache.log.



Hmm, there was 
 
changing message I/O in particular. The behavioural changes from that 
might have impacted ICAP in some unexpected way.


Also, if you are using SSL-Bump to enable virus scanning then 
 
might also be having effects.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Manipulating request headers

2024-03-11 Thread Amos Jeffries

On 12/03/24 04:00, Ben Goz wrote:

By the help of God.

Hi all,
I'm using squid with ssl-bump I want to remove br encoding for request 
header Accept-Encoding

currently I'm doing it using the following configuration:
request_header_access Accept-Encoding deny all
request_header_add Accept-Encoding gzip,deflate

Is there a more gentle way of doing it?


You could use q-value to prohibit it instead.


Replace both the above lines with just this one:

 request_header_add Accept-Encoding br;q=0


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Proxy timing out 500/503 errors

2024-03-05 Thread Amos Jeffries

On 6/03/24 07:23, M, Anitha (CSS) wrote:

Hi team,

We are using squid service deployed as a KVM VM on SLES 15 Sp5 os image.

We are using squid. Rpm: *squid-5.7-150400.3.20.1.x86_64*

**

We are seeing too many 503 errors with this version of squid.

This is the squid configuration file. Please review it and let us know 
if there are any issues.




It appears that your configuration file consists of at least 2 different 
configuration files appended to each other.


Please start by running "squid -k parse" and fixing all the warnings it 
should produce.



We are performing Squid scale testing, where every second 200+ requests 
reach Squid, and Squid is spitting out 500/503 errors.




FYI: you have restricted Squid to no more than 3200 file descriptors. 
That is rather low. I recommend at least 64K.
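A sketch of raising that limit (the value is an example; the OS-level 
limit, e.g. LimitNOFILE= in a systemd unit, must also allow it):

```
# squid.conf: raise Squid's descriptor ceiling
max_filedescriptors 65536
```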




Squid.conf:

gl-pcesreblr-squidproxy03:/var/log/squid # cat /etc/squid/squid.conf
# Recommended minimum configuration:
acl localnet src 172.28.1.0/24
acl localnet src 172.28.4.0/24
acl localnet src 172.28.0.0/24
acl localnet src 172.28.0.12/32
connect_timeout 120 seconds
connect_retries 10
#debug_options ALL,5
#connect_retries_delay 5 seconds
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network 
(LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space 
(CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly 
plugged) machines

acl localnet src 172.28.11.0/24
#acl localnet src 172.16.0.0/12 # RFC 1918 local private network 
(LAN)
#acl localnet src 192.168.0.0/16    # RFC 1918 local private 
network (LAN)
#acl localnet src fc00::/7  # RFC 4193 local private network 
range
#acl localnet src fe80::/10 # RFC 4291 link-local (directly 
plugged) machines


acl blocksites url_regex "/etc/squid/blocksites"
http_access deny blocksites

debug_options ALL,7

acl SSL_ports port 443
acl SSL_ports port 8071
acl SSL_ports port 11052
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 53  # pdns
acl Safe_ports port 5300    # pdns
acl Safe_ports port 123 #NTP
acl Safe_ports port 8071
acl Safe_ports port 11052   # pdns web server
acl Safe_ports port 514 # rsyslog
acl CONNECT method CONNECT
acl SSL_ports port 8053
acl Safe_ports port 8053
acl SSL_ports port 3002
acl Safe_ports port 3002
acl SSL_ports port 3006
acl Safe_ports port 3006
acl SSL_ports port 8203
acl Safe_ports port 8203
acl SSL_ports port 8204
acl Safe_ports port 8204
acl SSL_ports port 8071
acl Safe_ports port 8071
acl Safe_ports port 8200
acl SSL_ports port 8099
acl Safe_ports port 8099
tcp_outgoing_address 20.20.30.5

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#


Please notice what the above line says.



# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
#http_access deny all
#http_access allow all

cache_peer proxy-in.its.hpecorp.net parent 443 0 no-query no-delay default


... so a server listening for plain-text HTTP on port 443. That is a bit 
broken. At least consider enabling TLS/SSL on connections to this peer 
so Squid can send it HTTPS traffic.




#cache_peer 16.242.46.11 parent 8080 0 no-query default
#cache_peer 10.132.100.29 parent 3128 0 no-query default

acl parent_proxy src all
http_access allow parent_proxy


The above two lines are identical to:
  http_access allow all

... no http_access lines following this one will ever have any effects.


never_direct allow parent_proxy


Likewise same as:
  never_direct allow all

... however you have always_direct rules later that override this.



# Squid normally listens to port 3128
http_port 3128

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

#
# Add any of your own refresh_pattern entries 

[squid-users] [squid-announce] [ADVISORY] SQUID-2024:1 Denial of Service in HTTP Chunked Decoding

2024-03-04 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2024:1
__

Advisory ID:   | SQUID-2024:1
Date:  | Mar 4, 2024
Summary:   | Denial of Service in HTTP Chunked Decoding
Affected versions: | Squid 3.5.27 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.7
Fixed in version:  | Squid 6.8
__

Problem Description:

 Due to an Uncontrolled Recursion bug, Squid may be vulnerable to
 a Denial of Service attack against HTTP Chunked decoder.

__

Severity:

 This problem allows a remote attacker to perform Denial of
 Service when sending a crafted chunked encoded HTTP Message.

__

Updated Packages:

This bug is fixed by Squid version 6.8.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 3.5.27 are not vulnerable.

 All Squid 3.5.27 to 4.17 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.7 are vulnerable.

__

Workaround:

  **There is no workaround for this issue**
__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-31 11:35:02 UTC Patches Released
 2024-03-04 06:27:00 UTC Fixed Version Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2024:2 Denial of Service in HTTP Header parser

2024-03-04 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2024:2
__

Advisory ID:   | SQUID-2024:2
Date:  | Feb 15, 2024
Summary:   | Denial of Service in HTTP Header parser
Affected versions: | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to a Collapse of Data into Unsafe Value bug,
 Squid may be vulnerable to a Denial of Service
 attack against HTTP header parsing.

__

Severity:

 This problem allows a remote client or a remote server to
 perform Denial of Service when sending oversized headers in
 HTTP messages.

 In versions of Squid prior to 6.5 this can be achieved if the
 request_header_max_size or reply_header_max_size settings are
 unchanged from the default.

 In Squid version 6.5 and later, the default setting of these
 parameters is safe. Squid will emit a critical warning in
 cache.log if the administrator is setting these parameters to
 unsafe values. Squid will not at this time prevent these settings
 from being changed to unsafe values.

__

Updated Packages:

Hardening against this issue is added to Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Run the following command to identify how (and whether)
 your Squid has been configured with relevant settings:

squid -k parse 2>&1 | grep header_max_size

 All Squid-3.0 up to and including 6.4 without header_max_size
 settings are vulnerable.

 All Squid-3.0 up to and including 6.4 with either header_max_size
 setting over 21 KB are vulnerable.

 All Squid-3.0 up to and including 6.4 with both header_max_size
 settings below 21 KB are not vulnerable.

 All Squid-6.5 and later without header_max_size configured
 are not vulnerable.

 All Squid-6.5 and later configured with both header_max_size
 settings below 64 KB are not vulnerable.

 All Squid-6.5 and later configured with either header_max_size
 setting over 64 KB are vulnerable.

__

Workaround:

For Squid older than 6.5, add to squid.conf:

  request_header_max_size 21 KB
  reply_header_max_size 21 KB


For Squid 6.5 and later, remove request_header_max_size
 and reply_header_max_size from squid.conf

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-25 11:47:19 UTC Patches Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:11 Denial of Service in Cache Manager

2024-03-04 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:11
__

Advisory ID:   | SQUID-2023:11
Date:  | Jan 24, 2024
Summary:   | Denial of Service in Cache Manager
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.5
Fixed in version:  | Squid 6.6
__

Problem Description:

 Due to a hanging pointer reference bug Squid is vulnerable to a
 Denial of Service attack against Cache Manager error responses.

__

Severity:

 This problem allows a trusted client to perform Denial of Service
 when generating error pages for Cache Manager reports.

__

Updated Packages:

  This bug is fixed by Squid version 6.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0.5 have not been tested and should be assumed
 to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.5 are vulnerable.

__

Workaround:

 Prevent access to Cache Manager using Squid's main access
 control:

  http_access deny manager

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-11-12 09:33:20 UTC Patches Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:10 Denial of Service in HTTP Request parsing

2024-03-04 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:10
__

Advisory ID:   | SQUID-2023:10
Date:  | Dec 10, 2023
Summary:   | Denial of Service in HTTP Request parsing
Affected versions: | Squid 2.6 -> 2.7.STABLE9
   | Squid 3.1 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.5
Fixed in version:  | Squid 6.6
__

Problem Description:

 Due to an Uncontrolled Recursion bug, Squid may be vulnerable to a
 Denial of Service attack against HTTP Request parsing.

__

Severity:

This problem allows a remote client to perform a Denial of Service
attack by sending a large X-Forwarded-For header when the
follow_x_forwarded_for feature is configured.

__

Updated Packages:

This bug is fixed by Squid version 6.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 To check for follow_x_forwarded_for run the following command:

  `squid -k parse 2>&1 | grep follow_x_forwarded_for`


 All Squid configured without follow_x_forwarded_for are not
 vulnerable.

 All Squid older than 5.0.5 have not been tested and should be
 assumed to be vulnerable when configured with
 follow_x_forwarded_for.

 All Squid-5.x up to and including 5.9 are vulnerable when
 configured with follow_x_forwarded_for.

 All Squid-6.x up to and including 6.5 are vulnerable when
 configured with follow_x_forwarded_for.

__

Workaround:

 Remove all follow_x_forwarded_for lines from squid.conf

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Thomas Leroy of the SUSE security team.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-11-28 07:35:46 UTC Patches Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Missing IPv6 sockets in Squid 6.7 in some servers

2024-03-04 Thread Amos Jeffries

On 5/03/24 08:03, Dragos Pacher wrote:

Hello,

I am a Squid beginner and we would like to use Squid inside our 
organization only as an HTTPS traffic inspection/logging tool for some 
3rd-party apps that we bought, something close to what is called a 
"MITM proxy", but we will not do that; instead we use a self-signed 
certificate and the 3rd-party app owners know this. Everything is 
100% completely legal. (PS: I am the IT lead.)



FYI: "MITM proxy" is a ridiculous term. "MITM" means "intermediary" in 
security terminology, "proxy" means "intermediary" in networking 
terminology.

 So that term just means "intermediary intermediary", yeah.



Any serious HTTPS inspection/logging by Squid needs some form of 
SSL-Bump configuration and those 3rd-party Apps MUST be configured with 
trust for the self-signed root CA you are using.



Without that nothing Squid (or any other proxy) does will allow traffic 
inspection beyond the initial TLS handshake.




Assuming that you have checked that detail, on to your issue ...


We will be using Squid only internally, no outside access. Here is my 
issue with the current knowledge of Squid: POC running well on 3 servers 
but on the 4th I get no IPv6

sockets:
ubuntu@A2-3:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 10.10.0.16:3128         0.0.0.0:*   
LISTEN      2891391/(squid-1)



Your problem is the http(s)_port "port" configuration parameter.


This Squid is configured to listen like:

  http_port 10.10.0.16:3128

or

  http_port example.com:3128

(when example.com has only address 10.10.0.16)


The "http_port" receives port 80 syntax traffic; it may also be an
"https_port", which receives port 443 syntax traffic.




and on the other 3 I have IPv6:
ubuntu@A2-2:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 x.x.x.x:52386    x.x.x.x:443     ESTABLISHED 
997651/(squid-1)
tcp6       0      0 :::3128                 :::*   
  LISTEN      997651/(squid-1)



These Squid are configured to listen like:

 http_port 3128


Ensure that the machine/server the 4th Squid is running on has its 
http(s)_port line matching the other three machines port value.


At this point do not care about the "mode" or options later in the line. 
Your issue is solely the "port" parameter.
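
As an illustrative sketch (the addresses here are examples only): a 
port-only "port" parameter gives a wildcard dual-stack listener, while 
an IP-prefixed one binds only that single address:

  # Listens on all addresses, IPv4 and IPv6 (shows as tcp6 :::3128):
  http_port 3128

  # Binds only the one IPv4 address given (shows as tcp 10.10.0.16:3128):
  #http_port 10.10.0.16:3128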



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP response to avoid backend

2024-02-26 Thread Amos Jeffries

On 26/02/24 06:52, Ed wrote:

On 2024-02-24 17:26+, Ed wrote:

In varnish land this is doable in the vcl_miss hook, but I don't know
how to do that in squid.


I think I found a way, but maybe there's a better method - I'd like to
the cache_peer_access to apply to all backends, but this does seem to do
what I was after:

   acl bad_foo req_header ICAPHEADER -i foobar
   cache_peer_access server_1 deny bad_foo



Assuming that an ICAP service is controlling whether the peers are to be 
used that is the correct way.


However, if you have an ICAP service controlling whether a peer can be 
used consider having the ICAP service just send Squid the final 
response. There is a relatively huge amount of complexity, both in the 
config and what Squid has to do slowing the transaction down just for 
this maybe-a-HIT behaviour.



Alternatives to "cache_peer_access .. deny bad_foo" are:

A) "always_direct allow bad_foo",
  If you want the request to be served, but using servers from a DNS 
lookup instead of the configured cache_peer.


B) "miss_access deny bad_foo",
  If you do not want the cache MISS to be answered at all.


It has been a while since I tested it, but IIRC with miss_access a 
"deny_info" line may be used to change the default 403 error status into 
another in the 200-599 status range. Which includes redirects, 
retry-after, empty responses, and template pages responses ... whichever 
suits your need best.
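
A minimal sketch of option (B) combined with deny_info, reusing the ACL 
from above (the redirect URL is illustrative only):

  acl bad_foo req_header ICAPHEADER -i foobar

  # Refuse to forward cache MISSes for these requests, and redirect
  # instead of serving the default 403 error page:
  deny_info 302:http://example.com/blocked bad_foo
  miss_access deny bad_foo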



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can't verify the signature of squid-6.7.tar.gz

2024-02-26 Thread Amos Jeffries

Excellent news.

Thank you for the feedback on the solution.


Cheers
Amos

On 22/02/24 10:14, Miha Miha wrote:

Hi Amos,

It took me some time to check and verify.
I'm posting my findings here just to complete the thread.

Regarding this one:


On 8/02/24 02:19, Miha Miha wrote:

Hi Francesco,

I still get an issue, although a slightly different one:

#gpg --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: Signature made Tue 06 Feb 2024 10:51:28 PM EET using ? key ID FEF6E865
gpg: Can't check signature: Invalid public key algorithm


On Thu, Feb 8, 2024 at 7:58 AM Amos Jeffries  wrote:

The error mentions algorithm, so also check the ciphers/algorithms
supported by your GPG agent. The new key uses the EDDSA cipher instead
of typical RSA.


Indeed, the problem is with my gpg agent - gpg (GnuPG) 2.0.22 which
doesn't support EDDSA. My system is CentOS7 and it sticks to GnuPG
2.0.x

I did another test on Amazon Linux with gpg (GnuPG) 2.3.7 (it supports
EDDSA) and there I was able to verify the package with the given pub
key.

All questions are clarified. Thank you!

Regards,
Mihail

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Segment Violation with authorization

2024-02-15 Thread Amos Jeffries

On 16/02/24 15:30, Eternal Dreamer wrote:

Hi!
When I'm trying to send curl request with provided basic 
proxy-authorization credentials through my proxy I see Segment Violation 
error in my logs and empty reply from server. Command is:
curl -v --proxy-basic --proxy-user login:password --proxy 
http://192.168.3.19:8080  https://google.com 



In squid.conf I have 3 directives:

http_access allow some_acl
http_access allow some_acl some_acl_user_auth some_special_domain 
http_all_port http_all_proto
http_access allow some_acl some_acl_user_auth some_special_domain 
CONNECT https_port


If I comment out the first one, authorization works fine and it looks good.



Authorize or Authenticate?

Different things and you are mixing them up in these rules.


But 
with all lines I can't even authorize to special domains without a 
Segment Violation error.



The issue is likely somewhere else in what you have configured Squid to 
do. The initial "allow some_acl" line *authorizes* access, without 
*authenticating*. Resulting in there being no credentials for anything 
that Squid needs to do later.



If it helps this arrangement is clearer and does almost the same thing:

 http_access allow some_acl
 http_access deny !some_special_domain
 http_access deny !some_acl_user_auth
 http_access allow CONNECT https_port
 http_access allow http_all_port http_all_proto




I've tried to use different versions of squid from 3.5 to 7.0.
Squid before v5.0.1 ignores Proxy-Authorization header when it's not 
needed and works fine with this configuration.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error files removed from 6.7

2024-02-14 Thread Amos Jeffries



On 15/02/24 05:01, Stephen Borrill wrote:
I see the translations of error messages have been removed from 6.7 
compared to 6.6 (and earlier), but I see no mention of this in the 
changelog:

https://github.com/squid-cache/squid/blob/552c2ceef220f3bbcdbedf194eae419fc791098e/ChangeLog

Was this change intentional and, if so, why isn't it documented?


No it was not intentional, hopefully will be fixed with the next release 
due on 3rd March.


As a workaround the files from 6.6 can be used, or the latest langpack 
available separately at 



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Anyone build Squid for on multiarch ie arm and arm64?

2024-02-13 Thread Amos Jeffries

On 13/02/24 07:22, ngtech1ltd wrote:

I have a couple of RouterOS devices which support containers with the 
following CPU arches:
• x86_64
• arm64
• armv6
• armv7

And I was wondering if someone bothered compiling squid containers for these 
arches?

I know that there are packages for Debian and Ubuntu but these are not 6.x 
squid but rather 5.x.


Debian packages of Squid are up to 6.x if you base the container on the 
"Testing" repository.


FWIW; I'm not sure if publishing built Docker containers is much use 
compared to providing the docker configuration file + extras needed to 
build the container as-needed.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can't verify the signature of squid-6.7.tar.gz

2024-02-07 Thread Amos Jeffries



On 8/02/24 02:19, Miha Miha wrote:

Hi Francesco,

I still get an issue, although a slightly different one:

#gpg --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: Signature made Tue 06 Feb 2024 10:51:28 PM EET using ? key ID FEF6E865
gpg: Can't check signature: Invalid public key algorithm




The error mentions algorithm, so also check the ciphers/algorithms 
supported by your GPG agent. The new key uses the EdDSA signature 
algorithm instead of the typical RSA.





When I try to import the public keys (pgp.asc file) I see:

#gpg --import pgp.asc

...
gpg: key FEF6E865: no valid user IDs
gpg: this may be caused by a missing self-signature
...

All the rest keys have an user and e-mail.

When I list the imported pub keys with   gpg --list-keys I see
multiple keys, but not the FEF6E865

May be the pub key hasn't been properly imported?



Please check the contents of squid-6.7.tar.gz.asc. The full key ID 
should be provided there (FEF6E865 is one of its short-forms).


If you have any doubts about the keyring (pgp.asc file), you can try to 
fetch a fresh copy of it from <http://master.squid-cache.org/pgp.asc>




FTR; this is what I get working from a clean /tmp/squid pseudo-chroot 
directory to avoid my actual trusted+known keys:


## mkdir /tmp/squid

## wget http://master.squid-cache.org/pgp.asc

## gpg --homedir /tmp/squid --import pgp.asc
gpg: WARNING: unsafe permissions on homedir '/tmp/squid'
gpg: keybox '/tmp/squid/pubring.kbx' created
gpg: key B268E706FF5CF463: 1 duplicate signature removed
gpg: key B268E706FF5CF463: 4 signatures not checked due to missing keys
gpg: /tmp/squid/trustdb.gpg: trustdb created
gpg: key B268E706FF5CF463: public key "Amos Jeffries 
" imported

gpg: key 4250AB432402F2F8: 1 signature not checked due to a missing key
gpg: key 4250AB432402F2F8: public key "Duane Wessels 
" imported

gpg: key E75E90C039CC33DB: 202 signatures not checked due to missing keys
gpg: key E75E90C039CC33DB: public key "Henrik Nordstrom 
" imported

gpg: key 867BF9A9FBD3EB8E: 605 signatures not checked due to missing keys
gpg: key 867BF9A9FBD3EB8E: public key "Robert Collins 
" imported

gpg: key CD6DBF8EF3B17D3E: 1 signature not checked due to a missing key
gpg: key CD6DBF8EF3B17D3E: public key "Amos Jeffries (Squid Signing Key) 
" imported
gpg: key 28F85029FEF6E865: public key "Francesco Chemolli (code signing 
key) " imported
gpg: key 3AEBEC6EC66648FD: public key "Francesco Chemolli (kinkie) 
" imported

gpg: Total number processed: 7
gpg:   imported: 7
gpg: no ultimately trusted keys found

## wget http://master.squid-cache.org/Versions/v6/squid-6.7.tar.gz
## wget http://master.squid-cache.org/Versions/v6/squid-6.7.tar.gz.asc

## gpg --homedir /tmp/squid --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: WARNING: unsafe permissions on homedir '/tmp/squid'
gpg: Signature made Wed 07 Feb 2024 09:51:28 NZDT
gpg: using EDDSA key 29B4B1F7CE03D1B1DED22F3028F85029FEF6E865
gpg: Good signature from "Francesco Chemolli (code signing key) 
" [unknown]

gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the 
owner.

Primary key fingerprint: 29B4 B1F7 CE03 D1B1 DED2  2F30 28F8 5029 FEF6 E865



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] stale-if-error returning a 502

2024-02-07 Thread Amos Jeffries

On 8/02/24 07:45, Robin Carlisle wrote:

Hi,

I have just started my enhanced logging journey and have a small snippet 
below that might illuminate the issue ...


/2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507) 
handleIMSReply: origin replied with error 502, forwarding to client due 
to fail_on_validation_err/




Please check the log for the earlier 22,3 line saying "checking 
freshness of URI: ".


All the 22,3 lines between there and your found 88,3 line will tell the 
story of why refresh was done. That will give an hint about why the 
fail_on_validation_err flag was set.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is Squid 6 production ready?

2024-01-31 Thread Amos Jeffries

On 1/02/24 11:22, Miha Miha wrote:

On 10/01/24 12:18, Miha Miha wrote:

Release note of latest Squid 6.6 says: "...not deemed ready for
production use..."  For comparison Squid 5.1 was 'ready'. When v6 is
expected to be ready for prod systems?


On Fri, Jan 12, 2024 at 3:37 PM Amos Jeffries wrote:
Sorry, that is an oversight in the release notes text. Removing it now.

Squid 6 is production ready.


Hi Amos,
I still see the 6.6 release note unchanged. Could you please adjust.




The page is auto-generated from the release source code. It is too late 
to change the 6.6 package. The documentation has been updated already 
for 6.7 release.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Security advisories are not accessible

2024-01-29 Thread Amos Jeffries

Thanks for the notice.

This appears to be a GitHub issue that has been occurring to many other 
projects for at least 5 hrs now. For now we can only hope that it gets 
resolved soon.



Cheers
Amos

On 30/01/24 01:50, Adam Majer wrote:

Hi,

http://www.squid-cache.org/Versions/v6/ lists security advisories link as

    https://github.com/squid-cache/squid/security/advisories

But going there "There aren’t any published security advisories". There 
are links to individual patches.


Thanks,
- Adam
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-20 Thread Amos Jeffries

On 20/01/24 02:05, Robin Carlisle wrote:


I do have one follow-up question which I think is unrelated; let me know 
if etiquette demands I create a new post for this. When I test using the 
Chromium browser, Chromium sends OPTIONS requests, which I think is 
something to do with CORS. These always cause a cache MISS from Squid... 
I think because the return code is 204?




No, the reason is HTTP specification (RFC 9110 section 9.3.7):
   "Responses to the OPTIONS method are not cacheable."

If these actually are CORS (they might be several other things also), 
then there are important per-visitor differences in the response 
headers. These cannot be cached, and Squid does not know how to 
correctly generate those headers. So having Squid auto-respond is not a good idea.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-18 Thread Amos Jeffries

On 19/01/24 03:53, Robin Carlisle wrote:
Hi, Hoping someone can help me with this issue that I have been 
struggling with for days now.   I am setting up squid on an ubuntu PC to 
forward HTTPS requests to an API and an s3 bucket under my control on 
amazon AWS.  The reason I am setting up the proxy is two-fold...


1) To reduce costs from AWS.
2) To provide content to the client on the ubuntu PC if there is a 
networking issue somewhere in between the ubuntu PC and AWS.


Item 1 is going well so far.   Item 2 is not going well.   Setup details ...


...



When network connectivity is BAD, I get errors and a cache MISS.   In 
this test case I unplugged the ethernet cable from the back on the 
ubuntu-pc ...


*# /var/log/squid/access.log*
1705588717.420     11 127.0.0.1 NONE_NONE/200 0 CONNECT 
stuff.amazonaws.com:443  - 
HIER_DIRECT/3.135.162.228  -
1705588717.420      0 127.0.0.1 NONE_NONE/503 4087 GET 
https://stuff.amazonaws.com/api/v1/stuff/stuff.json 
 - HIER_NONE/- 
text/html


*# extract from /usr/bin/proxy-test output*
< HTTP/1.1 503 Service Unavailable
< Server: squid/5.7
< Mime-Version: 1.0
< Date: Thu, 18 Jan 2024 14:38:37 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3692
< X-Squid-Error: ERR_CONNECT_FAIL 101
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from ubuntu-pc
< X-Cache-Lookup: NONE from ubuntu-pc:3129
< Via: 1.1 ubuntu-pc (squid/5.7)
< Connection: close

I have also seen it error in a different way with a 502 but with the 
same ultimate result.


My expectation/hope is that squid would return the cached object on any 
network failure in between ubuntu-pc and the AWS endpoint - and continue 
to return this cached object forever.   Is this something squid can do? 
   It would seem that offline_mode should do this?





FYI,  offline_mode is not a guarantee that a URL will always HIT. It is 
simply a form of "greedy" caching - where Squid will take actions to 
ensure that full-size objects are fetched whenever it lacks one, and 
serve things as stale HITs when a) it is not specifically prohibited, 
and b) a refresh/fetch is not working.



The URL you are testing with should meet your expected behaviour due to 
the "Cache-Control: public, stale-if-error" header alone.

  Regardless of offline_mode configuration.
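
For reference, an origin response header that opts in to this 
stale-on-error behaviour looks like the following (the lifetimes shown 
are illustrative):

  Cache-Control: public, max-age=300, stale-if-error=86400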


That said, getting a 5xx response when there is an object already in 
cache seems like something is buggy to me.


A high level cache.log will be needed to figure out what is going on 
(see https://wiki.squid-cache.org/SquidFaq/BugReporting#full-debug-output).
Be aware this list does not permit large posts so please provide a link 
to download in your reply not attachment.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is Squid 6 production ready?

2024-01-12 Thread Amos Jeffries

On 10/01/24 12:18, Miha Miha wrote:

Release note of latest Squid 6.6 says: "...not deemed ready for
production use..."  For comparison Squid 5.1 was 'ready'. When v6 is
expected to be ready for prod systems?



Sorry, that is an oversight in the release notes text. Removing it now.

Squid 6 is production ready.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid hangs and dies and can not be killed - needs system reboot

2023-12-19 Thread Amos Jeffries


On 19/12/23 16:29, Amish wrote:

Hi Alex,

Thank you for replying.

On 19/12/23 01:14, Alex Rousskov wrote:

On 2023-12-18 09:35, Amish wrote:


I use Arch Linux and today I updated squid from squid 5.7 to squid 6.6.


> Dec 18 13:01:24 mumbai squid[604]: kick abandoning conn199

I do not know whether the above problem is the primary problem in your 
setup, but it is a red flag. Transactions on the same connection may 
get stuck after that message; it is essentially a Squid bug.


I am not sure at all, but this bug might be related to Bug 5187 
workaround that went into Squid v6.2 (commit c44cfe7): 
https://bugs.squid-cache.org/show_bug.cgi?id=5187


Does Squid accept new TCP connections after it enters what you 
describe as a dead state? For example, does "telnet 127.0.0.1 8080" 
establish a connection if executed on the same machine as Squid?


Yes, it establishes a connection, but I do not know what to do next. The 
browser showed a "Connection timed out" message. I believe the browser 
also connected, but nothing happened afterwards.




Ah ... that port is an interception port. It should *not* connect.

Please ensure your firewall contains the "-t mangle" rules for each 
interception port you use. As shown at 






> kill -9 does nothing

Is it possible that you are trying to kill the wrong process? You 
should be killing this process AFAICT:


> root 601  0.0  0.2  73816 22528 ?    Ss   12:59 0:02
> /usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC


I did not clarify but all processes needed SIGKILL and vanished except 
the Dead squid process which still remained.


# systemctl stop squid

Dec 19 08:46:38 mumbai systemd[1]: squid.service: State 'stop-sigterm' 
timed out. Killing.



FWIW, Squid's default shutdown grace period for clients to disconnect is 
longer than systemd is typically willing to wait for a service shutdown.


Please set "shutdown_lifetime 10 seconds" in your squid.conf.
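
Assuming the service is systemd-managed, the two timeouts can also be 
aligned from the systemd side with a drop-in override (created e.g. via 
"systemctl edit squid"; the value shown is illustrative):

  # /etc/systemd/system/squid.service.d/override.conf
  [Service]
  # Allow a little longer than shutdown_lifetime before SIGKILL:
  TimeoutStopSec=15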


Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 601 
(squid) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 604 
(squid) with signal SIGKILL.


This is systemd running the command " kill -9 604 ".

Per the Squid code: "XXX: In SMP mode, uncatchable SIGKILL only kills 
the master process".


You can try SIGTERM instead, and repeat up to 3 times if the first does 
not close the process.




HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IP based user identification/authentication

2023-12-07 Thread Amos Jeffries

On 7/12/23 15:34, Andrey K wrote:

Hello,

I was interested if I can configure some custom external helper that 
will be called before any authentication helpers and can perform user 
identification/authentication based on the client src-IP address.


Well, yes and no.



The order of authentication and authorization helpers is determined by 
what order you configure http_access tests.


So "yes" in that you can call it before authentication, and have it tell 
you what "user" it *thinks* is using that IP.



However, ...

It can look up in the external system information about the user logged 
in to the IP address and return the username and some annotation 
information on success.


Users do not "log into an IP address" and ...



If the user has been identified, no subsequent authentications are required.
Identified users can be authorized later using standard squid mechanisms 
(for example, ldap user groups membership).


This feature can be especially useful in "transparent" proxy 
configurations where 407-"Proxy Authentication Required" response code 
is not applicable.



... with interception the user agent is not aware of the proxy 
existence. So it *will not* provide the credentials necessary for 
authentication. Not to the proxy, nor a helper.


So "no".

This is not a way to authenticate. It is a way to **authorize**. The 
difference is very important.


For more info lookup "captive portal" on how this type of configuration 
is done and used.
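
As an illustration of the *authorization* approach in squid.conf (the 
helper path and name are hypothetical; such a helper reads one client IP 
per line and answers "OK user=..." or "ERR"):

  # Hypothetical helper mapping client src-IP to a username:
  external_acl_type ip_user ttl=60 %SRC /usr/local/bin/ip_user_lookup.sh
  acl known_by_ip external ip_user

  # Authorize (not authenticate) clients whose IP maps to a user:
  http_access allow known_by_ip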



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:9 Denial of Service in HTTP Collapsed Forwarding

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:9
__

Advisory ID:   | SQUID-2023:9
Date:  | December 1, 2023
Summary:   | Denial of Service
   | in HTTP Collapsed Forwarding
Affected versions: | Squid 3.5 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
Fixed in version:  | Squid 6.0.1
__

Problem Description:

 Due to a Use-After-Free bug Squid is vulnerable to a Denial of
 Service attack against collapsed forwarding

__

Severity:

 This problem allows a remote client to perform Denial of
 Service attack on demand when Squid is configured with collapsed
 forwarding.

CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.0.1.

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Run the following command to identify how (and whether)
 your Squid has been configured with collapsed forwarding:

squid -k parse 2>&1 | grep collapsed_forwarding


 All Squid-3.5 up to and including 5.9 configured with
 "collapsed_forwarding on" are vulnerable.

 All Squid-3.5 up to and including 5.9 configured with
 "collapsed_forwarding off" are not vulnerable.

 All Squid-3.5 up to and including 5.9 configured without any
 "collapsed_forwarding" directive are not vulnerable.

__

Workaround:

 Remove all collapsed_forwarding lines from your squid.conf.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

This vulnerability was discovered by Joshua Rogers of Opera
Software.

Fixed by The Measurement Factory.

__

Revision history:

 2022-09-03 18:41:32 UTC Patches Released
 2023-10-12 11:53:02 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:8 Denial of Service in Helper Process management

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:8
__

Advisory ID:   | SQUID-2023:8
Date:  | December 1, 2023
Summary:   | Denial of Service
   | in Helper Process management
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to an Incorrect Check of Function Return Value
 bug Squid is vulnerable to a Denial of Service
 attack against its Helper process management.

__

Severity:

 This problem allows a trusted client or remote server to perform
 a Denial of Service attack when the Squid proxy is under load.


CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.4 are vulnerable.

__

Workaround:

 There is no known workaround for this issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

This vulnerability was discovered by Joshua Rogers of Opera
Software.

Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-27 21:27:20 UTC Patch Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:7 Denial of Service in HTTP Message Processing

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:7
__

Advisory ID:   | SQUID-2023:7
Date:  | December 1, 2023
Summary:   | Denial of Service in HTTP Message Processing
Affected versions: | Squid 2.2 -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to a Buffer Overread bug Squid is vulnerable to a Denial of
 Service attack against Squid HTTP Message processing.

__

Severity:

 This problem allows a remote attacker to perform Denial of
 Service when sending easily crafted HTTP Messages.


CVSS Score of 8.6


__

Updated Packages:

 This bug is fixed by Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5 and older:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid-2.2 up to and including 4.17 have not been tested
 and should be assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.4 are vulnerable.

__

Workaround:

 There is no workaround for this issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-25 19:41:45 UTC Patches Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:4 Denial of Service in SSL Certificate validation

2023-12-01 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2023:4
__

Advisory ID:   | SQUID-2023:4
Date:  | November 2, 2023
Summary:   | Denial of Service in SSL Certificate validation
Affected versions: | Squid 3.3 -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to an Improper Validation of Specified Index
 bug Squid is vulnerable to a Denial of Service
 attack against SSL Certificate validation.

__

Severity:

 This problem allows a remote server to perform Denial of
 Service against Squid Proxy by initiating a TLS Handshake with
 a specially crafted SSL Certificate in a server certificate
 chain.

 This attack is limited to HTTPS and SSL-Bump.

CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid older than 3.3.0.1 are not vulnerable.

 All Squid-3.3 up to and including 3.4.14 compiled without
   --enable-ssl are not vulnerable.

 All Squid-3.3 up to and including 3.4.14 compiled using
   --enable-ssl are vulnerable.

 All Squid-3.5 up to and including 3.5.28 compiled without
   --with-openssl are not vulnerable.

 All Squid-3.5 up to and including 3.5.28 compiled using
   --with-openssl  are vulnerable.

 All Squid-4.x up to and including 4.16 compiled without
   --with-openssl are not vulnerable.

 All Squid-4.x up to and including 4.16 compiled using
   --with-openssl  are vulnerable.

 Squid-5.x up to and including 5.9 compiled without
   --with-openssl are not vulnerable.

 All Squid-5.x up to and including 5.9 compiled using
   --with-openssl  are vulnerable.

 All Squid-6.x up to and including 6.3 compiled without
   --with-openssl are not vulnerable.

 All Squid-6.x up to and including 6.3 compiled using
   --with-openssl  are vulnerable.

__

Workaround:

Either,

 Disable use of SSL-Bump features:
  * Remove all ssl-bump options from http_port and https_port
  * Remove all ssl_bump directives from squid.conf

Or,

  Rebuild Squid using --without-openssl.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Andreas Weigel.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:5 Denial of Service in FTP

2023-12-01 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:5
__

Advisory ID:   | SQUID-2023:5
Date:  | October 22, 2023
Summary:   | Denial of Service in FTP
Affected versions: | Squid 5.0.3 -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to an Incorrect Conversion between Numeric Types
 bug Squid is vulnerable to a Denial of Service
 attack against FTP Native Relay input validation.

 Due to an Incorrect Conversion between Numeric Types
 bug Squid is vulnerable to a Denial of Service
 attack against ftp:// URL validation and access control.

__

Severity:

 This problem allows a remote client to perform Denial of Service
 when sending ftp:// URLs in HTTP Request messages or constructing
 ftp:// URLs from FTP Native input.

 This issue is triggered during access control security checks,
 meaning clients may not have been permitted to use the proxy yet.

 FTP support is always enabled and cannot be disabled completely.

CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0.3 are not vulnerable.

 All Squid-5.0.3 up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.3 are vulnerable.

__

Workaround:

 * The FTP Native Relay input validation vector can be secured by
   removing all ftp_port directives from squid.conf.

 * There are no workarounds to avoid the ftp:// URL validation and
   access control vector.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:1 Request/Response smuggling in HTTP(S) and ICAP

2023-12-01 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2023:1
__


Advisory ID:   | SQUID-2023:1
Date:  | October 22, 2023
Summary:   | Request/Response smuggling in HTTP(S) and ICAP
Affected versions: | Squid 2.6.STABLE10  -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to chunked decoder lenience Squid is vulnerable to
 Request/Response smuggling attacks when parsing HTTP/1.1
 and ICAP messages.

__

Severity:

 This problem allows a remote attacker to perform
 Request/Response smuggling past firewall and frontend security
 systems when the upstream server interprets the chunked encoding
 syntax differently from Squid.

 This attack is limited to the HTTP/1.1 and ICAP protocols which
 support receiving Transfer-Encoding:chunked.

CVSS Score of 9.3
<https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?vector=AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:L/A:N=3.1>

__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 <http://www.squid-cache.org/Versions/v5/SQUID-2023_1.patch>

Squid 6:
 <http://www.squid-cache.org/Versions/v6/SQUID-2023_1.patch>

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.1 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.3 are vulnerable.

__

Workaround:

 * ICAP issues can be reduced by ensuring only trusted ICAP
   services are used, with TLS encrypted connections
   (ICAPS extension).

 *  There is no workaround for the HTTP Request Smuggling issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 <http://www.squid-cache.org/Support/mailing-lists.html>.

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 <https://bugs.squid-cache.org/>.

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Keran Mu and Jianjun Chen,
 from Tsinghua University and Zhongguancun Laboratory.

 Fixed by Amos Jeffries of Treehouse Networks Ltd.

__

Revision history:

 2023-09-01 04:34:00 UTC Initial Report
 2023-10-01 08:43:00 UTC Patch Available
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:2 Multiple issues in HTTP response caching

2023-12-01 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:2
__

Advisory ID:   | SQUID-2023:2
Date:  | October 22, 2023
Summary:   | Multiple issues in HTTP response caching.
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to an Improper Handling of Structural Elements
 bug Squid is vulnerable to a Denial of Service
 attack against HTTP and HTTPS clients.

 Due to an Incomplete Filtering of Special Elements
 bug Squid is vulnerable to a Denial of Service
 attack against HTTP and HTTPS clients.

__

Severity:

 The limits applied for validation of HTTP Response headers are
 applied before caching. Different limits may be in place at the
 later cache HIT usage of that response.

 The limits applied for validation of HTTP Response headers are
 applied to each received server response. Squid may grow a cached
 HTTP Response header with HTTP 304 updates beyond the configured
 maximum header size.

 Subsequent parsing to de-serialize a large header from disk cache
 can stall or crash the worker process, resulting in Denial of
 Service to all clients using the proxy.

CVSS Score of 9.6


__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than v5 have not been tested and are presumed
 vulnerable.

 Squid v5.x up to and including 5.9 are vulnerable.

 Squid v6.x up to and including 6.3 are vulnerable.

__

Workaround:

 Disable disk caching by removing all cache_dir directives from
 squid.conf.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was independently discovered by Joshua Rogers
 of Opera Software and by The Measurement Factory.

 Fixed by The Measurement Factory.

__

Revision history:

2019-09-11: Initial report of header growth caused by HTTP 304.
2021-03-04: Initial report of caching of huge response headers.
2023-04-28 02:40:03 UTC Initial patches released.

__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:3 Denial of Service in HTTP Digest Authentication

2023-12-01 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:3
__

Advisory ID:   | SQUID-2023:3
Date:  | October 22, 2023
Summary:   | Denial of Service in HTTP Digest Authentication
Affected versions: | Squid 3.2 -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to a buffer overflow bug Squid is vulnerable to a Denial of
 Service attack against HTTP Digest Authentication

__

Severity:

 This problem allows a remote client to perform buffer overflow
 attack writing up to 2 MB of arbitrary data to heap memory
 when Squid is configured to accept HTTP Digest Authentication.

 On machines with advanced memory protections this will result
 in a Denial of Service against all users of the Squid proxy.

CVSS Score of 9.9


__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0.5 have not been tested and should be assumed
 to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.3 are vulnerable.

__

Workaround:

  Disable HTTP Digest authentication until Squid can be
  upgraded or patched.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Alex Bason.

__

Revision history:

 2021-03-22 00:59:20 UTC Initial Report
 2023-10-13 17:31:11 UTC Patch Published
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Virtual Hosting Problem

2023-12-01 Thread Amos Jeffries

On 1/12/23 04:55, Mario Theodoridis wrote:

I do have one more problem at this point.

Using openssl I can work with what I have below, but I cannot add a 2nd 
certificate


https_port 0.0.0.0:443 accel defaultsite=regify.com \
     tls-cert=/etc/ssl/certs/regify.com.pem \
     tls-cert=/etc/ssl/certs/foo.com.pem

gives me

ERROR: OpenSSL does not support multiple server certificates. Ignoring 
addional cert= parameters.



If I instead use gnutls, I get dinged for using ssl::server

FATAL: Bungled /etc/squid/squid.conf line 29: acl stest1 
ssl::server_name test1.regify.com


is there a way to get the SNI host with gnutls?


There is a way, but we have not yet implemented it.

If the HTTPS URL domain is acceptable you can use the dstdomain ACL type 
instead as a workaround.
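A minimal sketch of that workaround, replacing the ssl::server_name ACL with a dstdomain test (the domain is taken from the quoted FATAL message):

  acl stest1 dstdomain test1.regify.com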





http://www.squid-cache.org/Doc/config/acl/ did not answer that for me.

Alternatively, can I get openssl to cope with multiple certs somehow?


AFAIK, no.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Module c-icap help

2023-11-30 Thread Amos Jeffries

On 30/11/23 22:22, MIKA wrote:


Hello everyone,
Thank you again for all the work you were able to do on this project.
I am trying to control cookies with squid but it's impossible. The c-icap module 
in the squid.conf file does not seem to work, because the c-icap server does not 
seem to work.
Can you help me please ?



Please describe in a bit more detail what setup you have, what you 
expect it to be doing, and what you see it actually doing.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL Virtual Hosting Problem

2023-11-28 Thread Amos Jeffries

On 28/11/23 23:29, Mario Theodoridis wrote:

Hello everyone,

I'm trying to use squid as a TLS virtual hosting proxy on a system with 
a public IP in front of several internal systems running TLS web servers.


I would like to proxy the incoming connections to the appropriate 
backend servers based on the hostname using SNI.


I'm using the following config to try this with just one backend,
and it already fails


Here the config:

http_port 3128
debug_options ALL,2
pinger_enable off
shutdown_lifetime 1 second
https_port 0.0.0.0:443 tproxy ssl-bump tls-cert=/root/dummy.pem


That should be:

  https_port 443 accel defaultsite=example.com \
tls-cert=/etc/squid/example.com.pem

The PEM file needs to be valid for all the domains served.



acl tlspls ssl::server_name_regex -i test\.regify\.com
cache_peer test.de.regify.com parent 443 0 proxy-only originserver 
no-digest no-netdb-exchange name=test


Missing "tls" option to enable TLS when talking to this peer.
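For example, the same peer line with TLS enabled (an untested sketch):

  cache_peer test.de.regify.com parent 443 0 tls proxy-only originserver \
      no-digest no-netdb-exchange name=test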



ssl_bump peek all
ssl_bump splice all
http_access allow all
cache_peer_access test allow all


I appreciate this is a test. But be sure to keep the default Squid 
security rules ("deny !Safe_ports" etc) and only allow the hosted 
domains instead of "all". These DoS and attack protections are 
particularly important on a reverse-proxy where the general public has 
access.


FYI; "test what you will use" is important for proxies. One of the 
"irrelevant" config details may kill your real-world production setup 
where testing works fine without any security.





...
I've been reading the squid docs and other internet resources, but am 
failing to figure out why this is not working.


Any clue sticks would be appreciated.

Also appreciated would be advice on where to find this documented.



The Squid wiki ConfigExamples section has all the typical configuration 
types and a few of the more uncommon ones as well.
The one you need is 




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread Amos Jeffries



On 27/11/23 23:05, David Komanek wrote:


On 11/27/23 10:40, Amos Jeffries wrote:

On 27/11/23 22:21, David Komanek wrote:
here are the debug logs (IP addresses redacted) after connection 
attempt to https://samba.org/ :



...
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) 
sendStartOfMessage: HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the guy 
on the other side told me so. According to the log, do you think I 
can somehow make it work, or is it definitely a problem on the 
samba.org webserver?



That ERR_PROTOCOL_UNKNOWN indicates that your proxy is trying to 
SSL-Bump the CONNECT tunnel and not understanding the protocol inside 
the TLS layer - which is expected if that protocol is HTTP/2.



For now you should be able to use 
<http://www.squid-cache.org/Doc/config/on_unsupported_protocol/> to 
allow these tunnels. Alternatively use the "splice" action to 
explicitly bypass the SSL-Bump process.



Thank you for the quick response. So I should add

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN
on_unsupported_protocol tunnel foreignProtocol

to the squid.conf, right?


By the point the error exists it is too late, AFAIK.

I was thinking something like:
  acl foo dstdomain samba.org
  on_unsupported_protocol tunnel foo





Still, I don't understand, why is this case handled by my browsers (or 
squid?) differently from usual HTTPS traffic to other sites. I suppose 
that plenty of sites are accepting HTTP/2 nowadays. A huge lack of 
knowledge on my side :-)


I'm not clear exactly why you see this only now, and only with 
samba.org. Squid not supporting HTTP/2 yet is a big part of the problem 
though.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Https from sibling peers does not work

2023-11-27 Thread Amos Jeffries

On 27/11/23 22:38, Mihkel Tammepuu wrote:

Hello!
I am trying to set up a sibling cluster of 4 Squid instances. The purpose of 
the cluster is redundancy AND sharing cache disk space.



FWIW, if these are running on the same machine you may find SMP workers 
with rock type cache_dir easier to manage and more efficient with the 
caching than a traditional cluster.





Everything seems to work fine with http, but with https I cannot see requests 
being forwarded to siblings.
Interestingly, when using HTCP, the siblings do get HTCP_CLR requests, but not 
HTCP_TST requests and https content is NOT loaded from sibling even if it’s 
clearly present there.
I’m of course using SSL Bump, content from origin servers works fine. I’ve 
tried Squid 6.5 and 5.9 with same results.
What might be wrong? Any way to fix it?



I assume/suspect you have the traditional cache_peer setup without TLS 
between them.


Squid intentionally does not send decrypted HTTPS traffic over non-TLS 
connections. That includes your cache_peer.


Try adding the "tls" option to your cache_peer lines and ensure they all 
use https_port listening in forward-proxy mode to receive that traffic.
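A hedged sketch of what that could look like on each node (host names, port numbers and the certificate path are placeholders, not details from the original thread):

  # on every sibling: a TLS forward-proxy listener
  https_port 3129 tls-cert=/etc/squid/proxy.pem

  # one cache_peer line per other node, with TLS to that listener
  cache_peer sibling2.example.net sibling 3129 4827 tls htcp proxy-only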



If you need more assistance, please show us your config. We will 
need its specific details to see if any other changes are useful 
and/or to advise on further troubleshooting.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Intercepted connections are not bumped

2023-11-27 Thread Amos Jeffries

On 23/11/23 23:05, Andrea Venturoli wrote:

Hello.

I've got the following config:


...
http_port 8080 ssl-bump cert=/usr/local/etc/squid/proxyCA.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
https_port 3129 intercept ssl-bump 
cert=/usr/local/etc/squid/proxyCA.pem generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB

...
acl step1 at_step SslBump1
ssl_bump splice !bumphosts
ssl_bump splice splicedom
ssl_bump peek step1
ssl_bump bump all
...


So I've got port 8080 where proxy-aware clients connect and 3129, which 
is fed intercepted https connections by ipfw.


Problem is: if a client connects explicitly via proxy (port 8080) it 
gets SSLBumped; if a client simply connects to its destination https 
port (so directed to 3129) it is tunneled.


Anything wrong in my config?



FYI, Intercepted traffic first gets interpreted as a CONNECT tunnel to 
the TCP dst-IP:port and processed by http_access to see if the client is 
allowed to make that type of connection.


Guessing from the info provided above, I suspect that the 
fake-CONNECT raw-IP does not match your "bumphosts" ACL test, causing 
that "ssl_bump splice !bumphosts" rule to apply.


That behaviour is why we typically recommend doing "peek" first, then 
the splice checks can be based on whatever TLS SNI value is found.
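For example, the quoted ssl_bump rules reordered so the peek at step 1 happens before any splice decision (a sketch, not a drop-in config):

  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice !bumphosts
  ssl_bump splice splicedom
  ssl_bump bump all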



For further assistance please also show your http_access and ACL config 
lines. They will be needed for a better analysis of what is going on.





I think it worked in the past: has anything changed in this regard with 
Squid 6?



Changed since what version? Over time a lot of small changes can add up 
to large differences.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread Amos Jeffries

On 27/11/23 22:21, David Komanek wrote:
here are the debug logs (IP addresses redacted) after connection attempt 
to https://samba.org/ :



...
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the guy on 
the other side told me so. According to the log, do you think I can 
somehow make it work, or is it definitely a problem on the samba.org 
webserver?



That ERR_PROTOCOL_UNKNOWN indicates that your proxy is trying to 
SSL-Bump the CONNECT tunnel and not understanding the protocol inside 
the TLS layer - which is expected if that protocol is HTTP/2.



For now you should be able to use 
 to 
allow these tunnels. Alternatively use the "splice" action to explicitly 
bypass the SSL-Bump process.
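A minimal sketch of the on_unsupported_protocol approach, assuming the tunnels should be allowed per destination domain (the ACL name is hypothetical):

  acl samba dstdomain .samba.org
  on_unsupported_protocol tunnel samba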



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] What's this 'errorno=104' error?

2023-11-22 Thread Amos Jeffries

On 22/11/23 07:01, Wen Yue wrote:
I configured Squid6.3 as a MITM proxy and used Chrome to browse web 
pages through this Squid proxy, such as twitter.com. However,
I noticed these error messages in the 
cache.log:


...
2023/11/22 01:33:38 kid1| ERROR: system call failure while accepting a 
TLS connection on conn8925690 local=10.0.0.5:3128  
remote=171.221.64.188:33454  FD 12 flags=1: 
SQUID_TLS_ERR_ACCEPT+TLS_IO_ERR=5+errno=104



Depends on the OS the proxy was built for.

Assuming Linux it would be POSIX "ECONNRESET" meaning "Connection reset 
by peer".
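A quick way to check such errno mappings, sketched with Python's standard library (the exact message text and numbering can vary between platforms):

```python
import errno
import os

# Translate the numeric errno shown in the Squid log line.
code = 104
print(errno.errorcode[code])  # POSIX name, ECONNRESET on Linux
print(os.strerror(code))      # human-readable message
```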



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-22 Thread Amos Jeffries

On 22/11/23 23:03, David Komanek wrote:

Hello,

I have a strange problem (definitely some kind of my own ignorance) :

If I try to access anything on the site https://www.samba.org WITHOUT 
proxy, my browser negotiate happily for http/2 protocol and receives all 
the data. For http://www.samba.org WITHOUT proxy it starts with http/1.1 
which is auto-redirected from http to https and continues with http/2. 
So far so good.


But WITH proxy, it happens that squid is using http/1.0.


That is odd. Squid should always be sending requests as HTTP/1.1.

Have a look at the debug level "11,2" cache.log records to see if Squid 
is actually sending 1.0 or if it is just relaying CONNECT requests with 
possibly HTTP/1.0 inside.



HTH
Amos


Re: [squid-users] mime.conf path

2023-11-12 Thread Amos Jeffries

On 13/11/23 09:35, Sai Eshwar wrote:
Hello, I am trying to install squid on CentOS without root privilege 
following the information present at 
https://stackoverflow.com/questions/36651091/how-to-install-packages-in-linux-centos-without-root-user-with-automatic-depen 


after running squid, I get the following error:
FATAL: Unable to open configuration file: /etc/squid/squid.conf: (2) No 
such file or directory,
which is resolved by running squid with -f 
~/centos/etc/squid/squid.conf, but now I get the following error:

FATAL: MIME Config Table /etc/squid/mime.conf: (2) No such file or directory
How can I specify mime.conf path to squid?



Add to your squid.conf:

  mime_table ~/centos/etc/squid/mime.conf


There are likely other paths you need to change if you are going to 
continue and try to run it like this. Squid is designed to be run as 
root and automatically drop privileges to a lower user account - default 
"nobody", or whatever the "./configure --with-default-user=" option was 
set to.
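As a sketch of what that can involve, here are some of the path-related 
squid.conf directives that commonly need overriding for a non-root install 
(directive names are from the standard squid.conf documentation; the paths 
are only examples for this user's layout):

```
# Example squid.conf fragment for a home-directory install (paths illustrative)
mime_table     /home/user/centos/etc/squid/mime.conf
pid_filename   /home/user/centos/var/run/squid.pid
access_log     /home/user/centos/var/log/squid/access.log
cache_log      /home/user/centos/var/log/squid/cache.log
coredump_dir   /home/user/centos/var/cache/squid
```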


If/when Squid produces more of these messages you can usually search in 
this list to find the directives related 




PS. You may find creating a chroot for Squid to run inside to be easier 
than explicitly configuring every path.


HTH
Amos


Re: [squid-users] access.log - POST requests

2023-11-04 Thread Amos Jeffries

On 4/11/23 20:53, Stefan Meurer wrote:

Hello,

is there a way to remove out all POST requests from access.log file?



  acl POST method POST
  access_log stdio:/var/log/squid/access.log format=squid !POST

Cheers
Amos


Re: [squid-users] [DMARC] log_db_daemon errors

2023-11-03 Thread Amos Jeffries

On 3/11/23 08:14, jose.rodriguez wrote:

On 2023-11-02 13:46, Brendan Kearney wrote:

list members,

i am trying to log to a mariadb database, and cannot get the 
log_db_daemon script working.  i think i have everything setup, but an 
error is being thrown when i try to run the script manually.


/usr/lib64/squid/log_db_daemon 
/database:3306/squid/access_log/brendan/pass


Connecting... dsn='DBI:mysql:database=squid:database:3306', 
username='brendan', password='...' at /usr/lib64/squid/log_db_daemon 
line 399.



(Replied without looking and it did not go to the list, but to the 
personal email, so will repeat it for completeness...)



That DSN seems wrong; as far as I can tell, it should look like this:

DBI:mysql:database=$database;host=$hostname;port=$port

Something is not being 'fed' right to the script?
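A quick way to see the difference: the DBD::mysql DSN separates its fields 
with semicolons, not extra colons. A tiny helper (illustrative only, not 
part of the script) shows the shape being described above:

```python
# Illustrative sketch of a Perl-DBI-style MySQL DSN string; the layout
# follows the semicolon-separated key=value convention quoted above.
def build_dsn(database: str, host: str, port: int) -> str:
    return f"DBI:mysql:database={database};host={host};port={port}"

print(build_dsn("squid", "database", 3306))
# → DBI:mysql:database=squid;host=database;port=3306
```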



Thank you for the catch. I have now opened this to fix it:



Cheers
Amos


Re: [squid-users] Cache NTLM Authenticaion

2023-10-27 Thread Amos Jeffries

On 27/10/23 14:08, Andre Bolinhas wrote:

Hi

It's possible squid cache NTLM authentication from users?



NTLM tokens are unique per TCP connection. So no, caching is a pointless 
waste of CPU and memory. The best that can be done is already being done.



My goal is to store the credentials in cache in order to reduce the 
request to Active Directory.




The only way to do that is to reduce unique TCP connections between 
clients and Squid.


Check that 
 
directive is either absent or turned "on" explicitly.




I'm trying guide from this squid : auth_param configuration directive 
(squid-cache.org)  
but there is no information relative to cache the authentication / 
credentials.


Also, in NTLM did you recommend to use the keep_alive option?


If it works, yes. Though be aware it only affects the initial request of 
the NTLM handshake.
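A hedged squid.conf sketch of that combination (the helper path and child 
count are examples only; the directive names are from the auth_param and 
persistent-connection documentation):

```
# Keep client TCP connections alive so the per-connection NTLM handshake
# happens as rarely as possible (paths/values here are examples only).
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20
auth_param ntlm keep_alive on

client_persistent_connections on
server_persistent_connections on
```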



Cheers
Amos


Re: [squid-users] [ext] Re: Squid 6.4 assertion errors: FATAL: assertion failed: stmem.cc:98: "lowestOffset () <= target_offset" current master transaction: master655 (backtrace)]

2023-10-24 Thread Amos Jeffries

On 24/10/23 22:26, Ralf Hildebrandt wrote:

I'll add a "me too" to this. 6.3 reliable, 6.4 crashes and this is under
_very_ low load. NetBSD 9.3_STABLE.


You can check the debugging recommendation in
https://bugs.squid-cache.org/show_bug.cgi?id=5309

I'll try 6.4 on my test proxy now (with very low to no load at all),
and will also try 7.0/master



FTR; current workaround is to reverse this patch from 6.4:

https://github.com/squid-cache/squid/commit/a27bf4b84da23594150c7a86a23435df0b35b988

That is a partial removal of the SQUID-2023:2 vulnerability fix. I hope 
we can have this corrected and an updated fix in the scheduled 6.5 
release on Nov 5.



HTH
Amos


Re: [squid-users] Spliced domains tunnel connect is very slow

2023-10-19 Thread Amos Jeffries

On 19/10/23 01:21, Ben Goz wrote:

By the help of God.

Hi,
I saw in my access log a traces that shows that spliced URLs tunneling 
is very slowly:


Please clarify what you mean by "slow" ?

 How have you determined speed ?
 What speed are you expecting / would you call non-slow ?



FYI, Several things to be aware of:

 1) CONNECT tunnel is not a simple thing with a constant "speed" of 
transfer. It represents an entire set of tunneled messages (or other 
opaque data) over an indefinite timespan. Each of those messages has its 
own "speed" of transfer, with possibly empty periods of 0 bytes 
transferred between them.


 2) the SSL-Bump procedure may pause a CONNECT tunnel during TLS 
handshake and/or validation process to asynchronously fetch missing 
certificate details, and/or validate other data with ACLs, etc.
  Each of these subsidiary transactions may add indefinite effects on 
timing of the 'bumped' CONNECT tunnel.


 3) modern networking systems utilize "Happy Eyeballs" algorithms 
wherein they may open multiple TCP connections to various (or same) 
services in parallel and only utilize the fastest to connect. This can 
result in CONNECT tunnel being initiated and unused - either closed 
immediately or left open waiting activity for long periods.



So, while the log snippet shows some details about each tunnel's duration 
of use, you cannot tell "speed" from these logs.



For example:


18/Oct/2023:15:18:50 +0300 240841 192.168.3.98 TCP_TUNNEL/200 6225 CONNECT beacons2.gvt2.com:443 - HIER_DIRECT/172.217.0.67 - beacons2.gvt2.com - splice -



Tunnel was _open_ for 240 seconds. 6225 bytes transferred.

Those bytes may have been transferred in the first millisecond of the 
tunnel being open, with Squid then leaving it open waiting for further 
uses which never came.


... "slow" at 1.4 GB/sec.

 ... or it could have been "slow" at 10 bytes/sec the whole time. One 
cannot tell.



18/Oct/2023:15:18:50 +0300    680 192.168.3.173 TCP_TUNNEL/500 4977 CONNECT mobile.events.data.microsoft.com:443 - HIER_DIRECT/13.89.178.26 - mobile.events.data.microsoft.com - splice -


This tunnel was never open at all. It was *rejected*.
"speed" in that case was 3 KB/sec.


18/Oct/2023:15:18:51 +0300 127307 192.168.3.97 TCP_TUNNEL/500 3101 CONNECT array612.prod.do.dsp.mp.microsoft.com:443 - HIER_DIRECT/20.54.24.148 - array612.prod.do.dsp.mp.microsoft.com - splice -


Tunnel was _open_ for 127 seconds. 3101 bytes transferred.

Those bytes may have been transferred in the first millisecond of the 
tunnel being open, with Squid then leaving it open waiting for further 
uses which never came.


 ... "slow" at 376 MB/sec.

 ... or it could have been "slow" at 25 bytes/sec the whole time. One 
cannot tell.
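To make the point concrete, all an access.log entry lets you compute is a 
range of possible throughputs. A rough sketch (the 1 ms minimum burst 
duration is an arbitrary assumption, not something the log records):

```python
# From a log entry you know only the total bytes and how long the tunnel
# was open, so throughput can merely be bounded, not measured.
def throughput_bounds(total_bytes, open_seconds, min_burst_seconds=0.001):
    slowest = total_bytes / open_seconds       # bytes trickled over the whole lifetime
    fastest = total_bytes / min_burst_seconds  # bytes sent in one brief burst
    return slowest, fastest

lo, hi = throughput_bounds(3101, 127.307)  # values from the log line above
print(f"{lo:.0f} B/s .. {hi:.0f} B/s")     # anywhere from tens of B/s to MB/s
```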





This is my squid configurations:

acl NoSSLInterceptRegexp_always ssl::server_name --client-requested "/usr/local/squid/etc/splice.list"


acl alwaysBump ssl::server_name --client-requested storage.googleapis.com youtubei.googleapis.com www.eset.com eset.com safebrowsing.googleapis.com play.google.com

on_unsupported_protocol tunnel
acl DiscoverSNIHost at_step SslBump1
ssl_bump peek DiscoverSNIHost
ssl_bump bump alwaysBump   # Used to bump certain subdomains before the whole domain is bumped.

ssl_bump splice NoSSLInterceptRegexp_always
ssl_bump stare all



Other CONNECT requests are served normally.
Is this issue could be a root cause for the generally slow internet?

Thanks,
Ben



Re: [squid-users] How to configure a transparent, pass-all, Squid proxy?

2023-10-19 Thread Amos Jeffries

On 20/10/23 07:17, Bud Miljkovic wrote:
Chain EXTERNAL_RULES (2 references) pkts bytes target prot opt in out 
source destination 83158 15M DROP all -- * * 0.0.0.0/0 
 0.0.0.0/0 



FYI,  All of the traffic leaving the machine is being dropped by your 
iptables rules.



Amos


Re: [squid-users] Transparent HTTPS Squid proxy does not work!

2023-10-16 Thread Amos Jeffries


I think your problem is the NAT table rules. You are missing some 
critical exceptions to let squid make the outbound tunnels.



On 17/10/23 07:51, Bud Miljkovic wrote:

Let me try one more time.


Here is my system configuration:

{HW-Box} --> { Local Server (eth0[port 444]) --+
                                               |
        +--------------------------------------+
        |
        +-> ([3129] Transparent Squid proxy) ---> (eth1[port 443]) } --+
                                                                       |
        +--------------------------------------------------------------+
        |
        +->--{ INTERNET Server }

The setup and the problem:
    - The HW box tries to establish an HTTPS transparent connection with 
a server located on the Internet.




Drop the word "transparent" from this thinking. It is a connection.



    - It uses the Local Server and send its request via eth0 interface.

    - The request is Pre-routed from eth0, port 443, to the Transparent 
Squid proxy (v3.5.25), listening at port 3129.




Correction. NAT'ed, not "routed".  The distinction is important and 
impacts which type of configuration will work and which will guarantee 
errors.


 * NAT only works when performed on the machine running Squid.

 * "routed" can be done from a remote machine to the Squid machine, but 
it does need additional NAT or TPROXY on the Squid machine.



    - For testing purposes, the Squid proxy is configured to pass only 
the HTTPS traffic transparently via the eth1 interface, using the 
`tcp_outgoing_address` directive. Please see the squid-ota.conf file 
content below.



To be clear FTR; Squid *cannot* guarantee a particular interface. Only 
the OS can decide that.


Your Squid is setting the TCP src-IP on outbound packets as a *Hint* 
(albeit a strong one) for the OS to use in its routing choices.





    - While testing, I am monitoring the eth1 output via tcpdump and I 
get the following:


      # tcpdump -i eth1 port 443 -n -X -q -w tcp_dump_24
        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture 
size 262144 bytes

        0 packets captured
        1 packet received by filter
        0 packets dropped by kernel
        3 packets dropped by interface

    - But nothing is detected!?



Nod.

FWIW, I have had mixed success with tcpdump when NAT and MASQUERADE are 
happening on the machine. It plugs in at the lowest layer somewhere 
around the point packets are actually going to/from the network. So 
packets with internal temporary values in them may not show up in the dump.


When NAT, MASQUERADE and routing are manipulating things the iptables 
packet counters seem to be a lot more reliable indicator of what traffic 
is (not) going through, even if the dump shows less than expected.
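A sketch of that counter-based approach (the commands are illustrative, 
need root, and the chains shown are just the common ones; substitute your 
own ruleset's chain names):

```shell
# Zero the NAT-table counters, generate some test traffic, then re-read them.
# Rules whose pkts/bytes columns stay at 0 are not matching anything.
iptables -t nat -Z
iptables -t nat -vnL PREROUTING --line-numbers
iptables -vnL FORWARD --line-numbers
```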





    - From the above it appears that there is no eth1 output at port 443?



Possibly. Maybe.


I have included the printouts of the `iptables -nvL` and `iptables -nvL 
-t nat` commands.


Can someone tell me what I have done wrong here and perhaps suggest a 
solution?



Cheers,
Bud


=
Squid configuration file:

# 1) Visible hostname
visible_hostname ctct-r2

# 2) Initialize SSL database first
sslcrtd_program /usr/libexec/ssl_crtd -s /var/lib/ssl_db -M 4MB

# 3) Listen to incoming HTTP traffic
http_port 3128

# 4) Block all HTTP traffic
http_access deny all



Ah, This will block the CONNECT tunnels.

"CONNECT" is an HTTP request method. This (4) rule will both reject the 
CONNECT tunnel being handled, and/or will prevent the "splice" action 
from being allowed to pass it through Squid in its form as a "CONNECT" 
message.



Please keep the basic security rules (http_access deny ...) provided in 
the default squid.conf for your version. You can see them at 



The rules you have for your local policy should go after the line 
"INSERT YOUR OWN RULE(S) HERE".


 You may remove the existing "http_access allow ..." lines as needed if 
you do not want them to happen.





# 5) Listen for incoming HTTPS traffic and intercept it
https_port 3129 intercept ssl-bump cert=/etc/squid/ssl_cert/myCA.pem 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB


# 6) Pass the SSL (HTTPS) traffic transparently through
ssl_bump splice all

# Do not use caching
# cache_dir ufs /var/volatile/log/squid/logs 100 16 256

# 7) Send out all HTTPS traffic to destination server via given IP address
tcp_outgoing_address 10.3.19.92



Again this is not limited to one protocol (HTTPS). **All** TCP outgoing 
packets from your Squid will use this regardless of their protocol.





=
iptables -nvL -t nat

Chain 

Re: [squid-users] 2 year old security bugs not fixed?

2023-10-13 Thread Amos Jeffries

On 14/10/23 04:19, Dieter Bloms wrote:

Hello,

I stumbled across this page
https://joshua.hu/squid-security-audit-35-0days-45-exploits and wonder
if all these security holes are really still there.

Can someone from the developers give a status?

Thank you very much.




We continue to close the vulnerabilities we can, in the order we deem 
most urgent based on what we know of common use cases for Squid.


Some issues listed are missing their fix references, so the situation is 
(slightly) better than first appearances.  Right now I am going through 
the list again cross-checking his given titles against our security team 
records to make sure all of them have had the appropriate triage done 
and get his CVE references updated.




To quote the article:

"
The Squid Team have been helpful and supportive during the process of 
reporting these issues. However, they are effectively understaffed, and 
simply do not have the resources to fix the discovered issues. Hammering 
them with demands to fix the issues won’t get far.

"

If anyone wishes to help please volunteer in squid-dev or squid-bugs 
mailing lists.  has 
all the starter info.




Amos


Re: [squid-users] squid 5.9 Kerberos authentication problem

2023-10-12 Thread Amos Jeffries

On 6/10/23 06:15, Ludovit Koren wrote:

Amos Jeffries writes:


 > On 5/10/23 19:30, Ludovit Koren wrote:
 >> Hello,
 >> I am using squid 5.9 with AD Kerberos authentication and could not
 >> solve
 >> the problem of sending incorrect request according to client
 >> configuration followed by the correct one, i.e.:
 >> 1695983264.808  0 x.y.z TCP_DENIED/407 4135 CONNECT
 >> th.bing.com:443 - HIER_NONE/- text/html
 >> 1695983264.834 21 x.y.z TCP_TUNNEL/200 6080 CONNECT th.bing.com:443 
name@domain FIRSTUP_PARENT/squid-parent -
 >>

 > This looks fine to me. The first request is sent without credentials,
 > then the second contains the correct ones using the correct
 > authentication scheme.

ok, this is little bit longer output:




1695983167.837  0 x.y.z TCP_DENIED/407 4135 CONNECT th.bing.com:443 - 
HIER_NONE/- text/html
1695983167.842  1 x.y.z TCP_DENIED/407 4135 CONNECT th.bing.com:443 - 
HIER_NONE/- text/html
1695983167.873 27 x.y.z TCP_TUNNEL/200 6080 CONNECT th.bing.com:443 
name@domain FIRSTUP_PARENT/squid-parent -


Taking this set of th.bing.com requests as clearly related, they look 
like an NTLM or Negotiate/NTLM authentication sequence.



The rest of the log entries are a little too spread out with a mix of 
domains to tell where the connections are.


Also, the 200 status CONNECT tunnels in this log extract were all 
running from a time before the first line of the log snippet. So we 
cannot see how they reached 200 status.





In the gw1.ris.datacentrum.sk, there is authentication on the site
inside SSL. It is not working.


FYI, "inside SSL" is just opaque bytes to Squid. Any failure there is 
between the client and server at the other end of the CONNECT tunnel. 
Nothing to do with this Squid.




As soon as I exclude
gw1.ris.datacentrum.sk from the authentication in squid, it starts
working.


That is an indication that the client software is unable to handle 
authentication on the CONNECT tunnel properly.




For better troubleshooting there are several steps to take:

* making a custom log format and a debug log for your Squid would be 
useful to get more details about each transaction.


 I suggest adding this to your squid.conf:

 logformat debug %ts.%03tu %6tr %>a cid=%>p_%lp_%ssl::bump_mode \
%Ss/%03>Hs %

The "cid=" entry should be a semi-unique value per TCP connection. It is 
not truly unique since ports get re-used, but should be reliable enough 
to separate overlapping connections with duplicate request URLs.


The user=/login=/token= part should allow you to see what/why the 407 is 
occurring. You can investigate the token value with this tool 
<https://gist.github.com/aseering/829a2270b72345a1dc42> to see if it is 
truly a Negotiate/Kerberos token vs a Negotiate/NTLM one.
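If that tool is unavailable, a rough heuristic (an assumption on my part, 
not part of the tool above): a base64-decoded raw NTLM token starts with 
the literal bytes "NTLMSSP\0", while Kerberos/SPNEGO tokens are 
DER-encoded and typically begin with byte 0x60. A minimal illustrative 
check:

```python
import base64

def looks_like_ntlm(b64token: str) -> bool:
    # Raw NTLM messages carry an 8-byte "NTLMSSP\0" signature up front.
    raw = base64.b64decode(b64token)
    return raw.startswith(b"NTLMSSP\x00")

# Hand-built NTLM Type-1 header fragment, purely for illustration:
fake_ntlm = base64.b64encode(b"NTLMSSP\x00\x01\x00\x00\x00").decode()
print(looks_like_ntlm(fake_ntlm))  # → True
```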




If you need more assistance, I/we will need to see your squid.conf (in 
full but without the "#" comment lines) and the output trace from that 
debug.log.


HTH
Amos


Re: [squid-users] squid 5.9 Kerberos authentication problem

2023-10-10 Thread Amos Jeffries

On 10/10/23 22:23, Ludovit Koren wrote:



Hi,

I am sorry to bother you once again, but I sent you and described just
the problem you were talking about and did not get any answer.



Sorry about that. Following up on the original thread in a short while.


PS. Normally no answer means nobody currently has a solution. In this 
case there is more troubleshooting that can be done to investigate; I 
just ran out of time to write it up for you.



Cheers
Amos


Re: [squid-users] squid 5.9 Kerberos authentication problem

2023-10-05 Thread Amos Jeffries

On 5/10/23 19:30, Ludovit Koren wrote:


Hello,

I am using squid 5.9 with AD Kerberos authentication and could not solve
the problem of sending incorrect request according to client
configuration followed by the correct one, i.e.:

1695983264.808  0 x.y.z TCP_DENIED/407 4135 CONNECT th.bing.com:443 - 
HIER_NONE/- text/html
1695983264.834 21 x.y.z TCP_TUNNEL/200 6080 CONNECT th.bing.com:443 
name@domain FIRSTUP_PARENT/squid-parent -



This looks fine to me. The first request is sent without credentials, 
then the second contains the correct ones using the correct 
authentication scheme.


TL;DR ... I would only be worried if there were sequences of 2+ of these 
407s before the final 200 status.



CONNECT tunnels are typically opened on a brand new TCP connection. Also 
the Negotiate authentication scheme used for Kerberos requires a unique 
token per connection, which is only received in the first HTTP response 
from Squid to the client. Meaning the full 2-stage authentication process 
is needed every time for the client to figure out how it is supposed to 
authenticate on that TCP connection.


Compared to other HTTP requests which often re-use an already 
authenticated TCP connection. So they get away with assuming the offered 
authentication schemes, and/or Kerberos token, will remain unchanged. 
Allowing the negotiation stage to be skipped.



If you are worried; you can try running the testing/troubleshooting 
checks detailed at 





There are some web servers which are not working even when the correct
request follows afterwards.



The TCP connection between client and Squid is different (and 
independent) from the TCP connections between Squid and servers.
The authentication you are using is only between client and Squid, it 
has nothing to do with web servers.



Cheers
Amos


Re: [squid-users] [ext] Squid quits while starting?!

2023-10-02 Thread Amos Jeffries

On 2/10/23 10:28, Dave Blanchard wrote:

Squid's user friendliness could use a major overhaul.


Agreed. As one of the people trying to do that for the past decade ... 
any suggestions of better wording are welcome.




I absolutely despise programs which are designed this way.


Ah, there we have part of the problem: Squid is not exactly designed. As 
Francesco mentioned, these days it is a collection of 40+ years of ad hoc 
community contributions.


The group of long-term developers (we call ourselves the "core team") 
have tried to guide/wrangle some coherency out of that - with variable 
success.




It just silently fails on startup with no obvious reason or explanation given,


That is false. Squid writes as much information as it can about the 
problem to log, stderr, and if possible the system message log. There is 
nothing else a process like Squid can do.



even if one enables debug output to find out why.


If you are not seeing information detailing the problem in the above 
mentioned logs/outputs. Then the problem is either:


 1) something broken in the ability to write those log/output.

This is a major problem. How does one report about problems when the 
reporting method is broken? Sorry.


Using multiple outputs during startup helps reduce this but it is 
possible that all of them have been forbidden. Further troubleshooting 
can be done by running Squid with "squid -k parse" - the output to 
stderr should be printed to the shell for admin to view.



 2) Squid has no known information to display





Instead you have to get online and search, ask questions in a mailing list, etc 
to find out you have to type in some obscure commands.



Hmm. That is the normal process for any software error which you do not 
already fully understand.


For those we believe to be clear enough already we try to document 
troubleshooting procedures in the Squid wiki 
. Suggestions for better organization 
and/or missing texts of that content are always welcome.



"Too few ssl_crtd processes are running" 


 ... means *exactly* what it says.

Note: this is not an error message. It is part of the information Squid 
knows about the problem. It is listed to help inform you about what was 
going on when the error occurred. It may be helpful to your troubleshooting.


Squid needs to use some "ssl_crtd" helpers. There are not enough of them 
running. ... Squid is about to start some to use. Then ...




and "FATAL: The ssl_crtd helpers are crashing too rapidly, need help!"



... also means exactly what it says.


"ssl_crtd" is a binary separate from Squid. We humans know that it is 
one of the Squid Project bundled helpers, but Squid itself cannot tell.


Squid is not getting anything out of the helper. Not even a message 
saying what went wrong for it. That would have appeared as messages in 
your log between the two above lines ... and is what Matus was asking about.


Squid tries approximately 10 times to recover the helper before the 
FATAL message is produced and Squid itself exits instead of staying in 
the infinite loop of "start helper...start helper...start helper..."





are some of the dumbest error messages ever conceived. Reminds me of every web site these 
days with their "Oops something went wrong lol" errors.


This one is more in line with the classic "Keyboard not found. Press any 
key to continue".


It seems dumb, until you actually understand what it means. One must 
plug in a keyboard (aka solve the problem) before the machine can be used.


Or for Squid ... only by researching and fixing the problem will you get 
any better info about the problem.




FYI: Documentation about ssl_crtd can be found at 
.
 The page needs updating to detail this and other errors the helper 
produces and how to solve them. Any volunteers?



HTH
Amos


Re: [squid-users] TLS passthrough

2023-09-30 Thread Amos Jeffries

On 30/09/23 11:06, Fernando Giorgetti wrote:
If someone has already done that, with the client running in a different 
machine, I would love to know how.



There are several ways;

 1) run Squid on the gateway router for your network, or

 2) place Squid in a DMZ between the LAN gateway and WAN gateway.

 3) setup a custom route+gateway for port 80 and 443 LAN traffic as the 
Squid machine. Excluding traffic from that machine itself.





In case Squid runs on the same machine used as a network gateway to the 
client machine, I suppose the config would be similar, but if it's not 
running on the same machine used as the gateway, then it would be nice 
to see how.




That would be (1). See 
 for 
how to configure the gateway router running Squid.


The configuration difference between this and the at-source (aka on the 
client machine) setup you are/were using is just some iptables rules.


HTH
Amos


[squid-users] [squid-announce] SQUID-2021:8 Denial of Service in Gopher gateway

2023-09-27 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2021:8
__

Advisory ID:   | SQUID-2021:8
Date:  | September 27, 2023
Summary:   | Denial of Service in Gopher gateway
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
Fixed in version:  | Squid 6.0.1
__

Problem Description:

 Due to a NULL pointer de-reference bug Squid is vulnerable to
 a Denial of Service attack against Squid's Gopher gateway.

__

Severity:

 The gopher protocol is always available and enabled in Squid
 prior to Squid 6.0.1

 Responses triggering this bug are possible to be received from
 any gopher server, even those without malicious intent.

 CVSS Score of 7.5


__

Updated Packages:

   The gopher support has been removed in Squid version 6.0.1

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid-2.x up to and including 2.7.STABLE9 are vulnerable.

 All Squid-3.x up to and including 3.5.28 are vulnerable.

 All Squid-4.x up to and including 4.16 are vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

__

Workaround:

* Reject all gopher URL requests

  acl gopher proto gopher
  http_access deny gopher

 Note: removing the gopher port 70 from the Safe_ports ACL
   is not sufficient to avoid this vulnerability.
__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

__

Revision history:

 2021-02-20 08:37:38 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce


[squid-users] [squid-announce] SQUID-2020:13 Denial of Service in gopher gateway

2023-09-27 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2020:13
__

Advisory ID:   | SQUID-2020:13
Date:  | September 06, 2023
Summary:   | Denial of Service in gopher gateway
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.14
   | Squid 5.x -> 5.6
Fixed in version:  | Squid 6.0.1
__

Problem Description:

 Due to a buffer overflow bug Squid is vulnerable to a Denial of Service
 attack against Squid's gopher gateway.

__

Severity:

 This problem allows a remote gopher: server to trigger a buffer
 overflow by delivering large gopher protocol responses.
 On most operating systems with memory protection this will
 halt Squid service immediately, causing a denial of service to
 all Squid clients.

 The gopher protocol is always available and enabled in Squid
 prior to Squid 6.0.1

 Responses triggering this bug are possible to be received from
 any gopher server, even those without malicious intent.


CVSS Score of 7.5


__

Updated Packages:

 The gopher support has been removed in Squid version 6.0.1

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid-2.x up to and including 2.7.STABLE9 are vulnerable.

 All Squid-3.x up to and including 3.5.28 are vulnerable.

 All Squid-4.x up to and including 4.14 are vulnerable.

 All Squid-5.x up to and including 5.6 are vulnerable.

__

Workaround:

 * Reject all gopher URL requests

  acl gopher proto gopher
  http_access deny gopher

 Note: removing the gopher port 70 from the Safe_ports ACL
   is not sufficient to avoid this vulnerability.
__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Marco Grassi.

__

Revision history:

 2019-06-15 13:32:50 UTC Initial Report
__
END


Re: [squid-users] Transparent deployment VS web services behind DNS load balancing

2023-09-26 Thread Amos Jeffries

On 26/09/23 05:35, Denis Roy wrote:
My installation is fairly simple: I run Squid 5.8 in transparent mode, 
on a PF-based firewall (FreeBSD 14.0).


I intercept both HTTP 80, and HTTPS 443. Splicing the exceptions I have 
in a whitelist, bumping everything else. Simple.


This is a relatively recent deployment, and it has been working well as 
far as web browser experience is concerned. Nonetheless, I have observed 
a certain amount of 409s sharing similarities (more on that later). Rest 
assured, I have made 100% certain my clients and Squid use the same 
resolver (Unbound), installed on the same box with a fairly basic 
configuration.


When I observe the 409s I am getting, they all share the same 
similarities: the original client request was from an application or OS 
related task,  using  DNS records with very low TTL. 5 minutes or less, 
often 2 minutes.  I could easily identify the vast majority of these 
domains as being load balanced with DNS solutions like Azure Traffic 
Manager, and Akamai DNS.


Now, this makes sense: a thread on the client may essentially initiate a 
long-running task that will last a couple of minutes (more than the 
TTL), during which it may actually establish a few connections without 
calling the gethostbyname function, resulting in squid detecting a 
forgery attempt since it will be unable to validate that the dst IP 
matches the intended destination domain. Essentially, this creates 
"false positives", dropping legitimate traffic.




Aye, pretty good summary of the current issue.


I have searched a lot, and the only reliable way to completely solve 
this issue in a transparent deployment has been to implement a number of 
IP lists for such services (Azure Cloud, Azure Front Door, AWS, Akamai 
and such), bypassing squid completely based on the destination IP address.


I'd be interested to hear what other approaches there might be. Some 
package maintainers have chosen to drop the header check altogether 
(https://github.com/NethServer/dev/issues/5348).



Nod. Thus opening everyone using that package up to the CVE-2009-0801 
effects. This is a high-risk action that IMO should be an explicit 
choice made by the admin, not by a package distributor.



  I believe a better 
approach could be to just validate that the SNI of the TLS Client Hello 
matches the certificate obtained from the remote web server, perform the 
usual certificate validation (is it trusted, valid, etc.), and not rely 
so much on the DNS check


That is just swapping the client-presented Host header for 
client-presented SNI, and remotely-supplied DNS lookup for 
remotely-supplied certificate lookup. All the security considerations 
problems of CVE-2009-0801 open up again, but at the TLS trust layer 
instead of the HTTP(S) trust layer.


which can be expected to fail at times given 
how DNS load balancing is ubiquitous with native cloud solutions and 
large CDNs.  But implementing this change would require serious 
development, which I am completely unable to take care of.


Indeed.

Alternatively the design I have been trying to work towards slowly is 
verifying that all requests on the long-lived connection only go to the 
same origin/server IP. Once trust of that IP has been validated we 
should not have to re-verify every request against new DNS data, just 
against the past history of the connection.


This approach though is also a lot of development.
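In the meantime, the destination-IP bypass described earlier in the 
thread can be done at the pf layer, before traffic ever reaches Squid. 
A rough sketch only -- the interface/network macros, table name, file 
path, and the rdr-based interception form are illustrative assumptions 
about the local setup, not taken from this thread:

```
# pf.conf sketch: exempt known DNS-load-balanced CDN ranges
table <cdn_bypass> persist file "/etc/pf.cdn_bypass"

# CDN-bound traffic skips interception entirely:
no rdr on $int_if inet proto tcp from $lan_net to <cdn_bypass> port { 80, 443 }

# Everything else is still redirected to Squid's intercept/tproxy ports:
rdr on $int_if inet proto tcp from $lan_net to any port 80 -> 127.0.0.1 port 3129
rdr on $int_if inet proto tcp from $lan_net to any port 443 -> 127.0.0.1 port 3130
```

The table file can then be refreshed from the published Azure/AWS/Akamai 
range lists without touching squid.conf.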

HTH
Amos


Re: [squid-users] access_log UDP format

2023-09-25 Thread Amos Jeffries

On 22/09/23 01:15, Matus UHLAR - fantomas wrote:

Hello,

I'm curious if the udp:// logging is syslog-compatible.

Do I just need to configure proper logformat?



The Squid "udp" logging module sends your log lines as opaque UDP packet 
payload to the named UDP server:port.


The Squid "syslog" logging module frames that data into the binary 
syslog protocol.
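In squid.conf terms, the two modules look like this (the hostname, port, 
and syslog facility.priority shown are illustrative):

```
# Raw log lines as opaque UDP payload; the receiver must parse them:
access_log udp://loghost.example.com:514 squid

# Delivered through the syslog API with proper syslog framing:
access_log syslog:daemon.info squid
```

So for a syslog-compatible feed, use the syslog module rather than 
trying to make the udp module emit syslog framing.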



HTH
Amos


Re: [squid-users] A few things about Squid-cache

2023-09-24 Thread Amos Jeffries

On 25/09/23 07:49, Jason Long wrote:

Hello,
Thank you so much for your reply.
1- Regarding security, what parameters should be changed or added in the 
configuration file?




First steps with a new Squid install are to check in squid.conf for the 
"acl localnet" lines and adjust so it lists your LAN ranges. The common 
ones are listed there by default.


Then look for the "http_access" directive. That is the primary means of 
telling Squid what the network policy needs are.




2- How to configure Squid-cache service for 1000 clients?



Apart from the answer to (1) above, Squid does not care about the 
number of clients; it will serve as many as your machine can handle, 
until the hardware hits its CPU, RAM, or disk capacity limits.



For good advice we will need details...

 Forward or Reverse proxy installation?
 LAN or WAN clients?

 What policies do you need to comply with regarding client use of the 
proxy, or access to any special websites?



Cheers
Amos


Re: [squid-users] Squid BUG: assurance failed: tok.skip(WellKnownUrlPathPrefix())

2023-09-16 Thread Amos Jeffries

On 15/09/23 18:23, Loučanský Lukáš wrote:
Ok - thanks for your reply. But this does not clarify it fully. You said 
cachemgr.cgi auto-detects the existence of MGR_INDEX template. But what 
is it supposed to do if none is found? Just displaying the message about 
missing MGR_INDEX? Or doing the old style "Cache Manager menu"? I have 
applied Alex's  patch to re-enable cache_object URI and now it obviously 
works - both cache_object and squid-internal-mgr. I mean - squidclient 
from debian packages (v4.6) works and the one compiled with squid v6.3 
works too. Because of my old network NOC system I use a very old 
cachemgr (in fact version index.cgi/3.1.16) to ask the current v6.3 
squid for its data and now it works too.


Ah, very old, the behaviour I described was for cachemgr.cgi v3.2.0.10 
and later.


The 3.1 version can only display the old style "Cache Manager menu" 
regardless of what the Squid server supports.


That also means it will not work with any Squid v7 or later proxies 
where the cache_object protocol has been dropped entirely.


My testing today found bug 5300, bug 5301, and a few other annoyances in 
the v4+ CGI tool. So right now I am not recommending an upgrade, but it 
will be worth it once they are fixed.



HTH
Amos


Re: [squid-users] Squid BUG: assurance failed: tok.skip(WellKnownUrlPathPrefix())

2023-09-14 Thread Amos Jeffries

On 15/09/23 09:55, Alex Rousskov wrote:

On 2023-09-14 06:40, Loučanský Lukáš wrote:

But - could someone (or you) clarify the next one for me? I've read 
some questions about the "new" cachemgr.cgi and the MGR_INDEX template. 


Sorry, I cannot help with cachemgr.cgi without heroic efforts. IMHO, 
Squid Project should remove cachemgr.cgi feature instead of supporting 
it. I hope that somebody else will help you with it.




FYI, the cachemgr.cgi provided by current Squid tries to auto-detect the 
existence of a working MGR_INDEX and simply provides a URL to that 
instead of doing the "old" CGI interface things.


The reasons to use it are:
 1) when managing very outdated Squid versions, or
 2) when managing a Squid which does not yet have a tool providing 
MGR_INDEX installed.



The recent changes to squidclient "mgr:" support and the cache_object:// 
 URL scheme have had a few issues. I am currently setting up a test 
environment to analyze and re-document the new/current situation. 
Hopefully some detailed info can be added to the wiki in the coming days.



HTH
Amos


Re: [squid-users] Squid 6.2 with WCCP

2023-09-11 Thread Amos Jeffries

On 11/09/23 20:16, ngtech1ltd wrote:

Hey,

What is required for testing the wccp code?



At minimum a Router or Switch with WCCPv2, plus separate machines for 
client and proxy.


Ideally;
 * at least two routers/switches to test the changed code's handling of 
multiple routers.

 * ability to test both Mask and GRE assignment methods.
 * ability to test a mix of the capability and security settings in WCCPv2.



I can try to get a Cisco device for a basic testing.
Is there a specific bug report we can follow on this issue or maybe we should 
follow on the PR?



Test results in the PR please.

Cheers
Amos



Eliezer

-Original Message-
From: Amos Jeffries
Sent: Tuesday, August 22, 2023 15:16

On 22/08/23 01:34, Alex Rousskov wrote:

On 8/21/23 05:06, Callum Haywood wrote:


Does anyone understand what is causing these errors? Are there any
known issues or patches in progress?


A few years ago, several serious problems were discovered in WCCP code,
including security vulnerabilities:

https://github.com/squid-cache/squid/security/advisories/GHSA-rgf3-9v3p-qp82

Some of the WCCP bugs were fixed without testing; developers fixing
those bugs could not easily test WCCP. Some of the old WCCP bugs
remained and some of the new fixes were buggy.

Today, WCCP code remains problematic. If your customers rely on WCCP,
consider investing into revamping that neglected and buggy feature.



This PR <https://github.com/squid-cache/squid/pull/970> has some
progress towards a fix of those. See the latest comment (currently Sept
2022) for issues that still need to be resolved before that PR is ready
for merge attempt.

The major issue remains the ability to test.

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid ssl_bump splice configuration

2023-08-29 Thread Amos Jeffries

On 30/08/23 07:57, Ben Goz wrote:

ב"ה

I managed to get the ssl splice configurations to work but when I'm 
splicing for example: play.google.com 


I see in cache log the following:

2023/08/29 22:54:53.688 kid1| 33,2| client_side.cc(3214) 
fakeAConnectRequest: fake a CONNECT request to force connState to tunnel 
for ssl-bump
2023/08/29 22:54:53.700 kid1| 33,2| client_side.cc(3214) 
fakeAConnectRequest: fake a CONNECT request to force connState to tunnel 
for splice
2023/08/29 22:54:53 kid1| SECURITY ALERT: Host header forgery detected 
on conn3362 local=172.217.22.110:443  
remote=192.168.26.100:55331  FD 540 
flags=17 (local IP does not match any domain IP)

     current master transaction: master2737
2023/08/29 22:54:53 kid1| SECURITY ALERT: on URL: play.google.com:443 



The host header forgery issue for play.google.com is observed only for 
spliced connections, but when this URL is bumped I don't see this error.

Why is splicing making this error?



Likely because splice is emulating a client-generated CONNECT request, 
which then faces the same forgery checks, and so hits the issues that 
Google's DNS TTL choices cause for the forgery detection. That is just 
an educated guess though.







On Mon, 28 Aug 2023 at 13:54, Ben Goz wrote:


ב"ה

I'm using squid version:
nativ@arachimprodsrv3:/usr/local/squid/etc$
/usr/local/squid/sbin/squid -v
Squid Cache: Version 6.1-VCS
Service Name: squid

This binary uses OpenSSL 3.0.2 15 Mar 2022. configure options:
  '--with-large-files' '--with-openssl' '--enable-ssl'


FYI "--enable-ssl" no longer exists.

It was replaced by "--with-openssl".



'--enable-ssl-crtd' '--enable-icap-client'
'--enable-linux-netfilter' '--disable-ident-lookups'

Configured with ssl_bump and tproxy:
http_port 0.0.0.0:3128 
http_port 0.0.0.0:3129  tproxy
https_port 0.0.0.0:3130  tproxy ssl-bump \
   cert=/usr/local/squid/etc/ssl_cert/myCA.pem \
   generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
options=ALL,NO_SSLv3 sslflags=NO_DEFAULT_CA


Use tls-default-ca=off instead of the deprecated sslflags=NO_DEFAULT_CA.
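Applied to the https_port line quoted above, that substitution would 
look roughly like this (paths kept from the original post):

```
https_port 0.0.0.0:3130 tproxy ssl-bump \
    cert=/usr/local/squid/etc/ssl_cert/myCA.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
    options=ALL,NO_SSLv3 tls-default-ca=off
```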





And the following configurations:
acl NoSSLInterceptRegexp_always ssl::server_name "splice.list"
always_direct allow all


The above line tells Squid to never use cache_peer.

Without cache_peer directives to ignore, this is just a pointless waste 
of Squid CPU cycles.





on_unsupported_protocol tunnel
acl DiscoverSNIHost at_step SslBump1
ssl_bump splice NoSSLInterceptRegexp_always
ssl_bump peek DiscoverSNIHost
ssl_bump bump all

the content of the file splice.list:
.prog.co.il
prog.co.il
www.prog.co.il


These latter two patterns are sub-sets of the first pattern. The 
resulting pattern tree may be producing false-negative ACL matches.
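Since the leading-dot pattern already matches the bare domain and all 
of its subdomains, the file could likely be reduced to a single line:

```
.prog.co.il
```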





The tproxy redirections works fine with squid server but
unfortunately the urls in splice.list bumped although they should be
spliced as seen in the access log:

1693219853.255    626 192.168.28.254 TCP_MISS/200 64439 GET
https://www.prog.co.il/ -
HIER_DIRECT/172.67.196.36 text/html

And I see in the browser's certificate viewer my squid self signed
certificate.

What am I missing here?




Not clear. Maybe adding the TLS SNI, server certificate subjectAltName 
field, ssl-bump stage/decision, and Host header (specifically the 
header, not the URI domain) to your log would show something useful.


HTH
Amos


Re: [squid-users] How to upgrade correctly?

2023-08-29 Thread Amos Jeffries

You should only need to:

 * stop squid

 * backup your existing installation (as mentioned by Eliezer)

 * install the current Debian "squid-openssl" package

 * run "squid -k parse" to check for squid.conf settings upgrade

 * manually check what "/opt/squid/var" was being used for;
   - any configuration-related files need to be moved to the
     /etc/squid system location.
   - any logs etc. in there that are used by scripts or other
     non-Squid software require that third-party code be updated to
     the new Squid default log locations.


 * start squid


HTH
Amos


Re: [squid-users] To many ERR_CANNOT_FORWARD

2023-08-23 Thread Amos Jeffries

On 24/08/23 00:42, Andre Bolinhas wrote:

Hi

I'm using squid 5.2 but starting yesterday I'm getting too many errors 
ERR_CANNOT_FORWARD for random websites.




FYI, current Squid is 6.2 with 6.3 due out next week. Squid-5.x are 
officially end-of-life now.




What's could be the issue?

This is an extract of debug log




2023/08/22 15:50:28.564 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: 
leaving TunnelBlindCopyReadHandler(conn6384669 local=10.100.16.37:18006 
remote=20.90.216.0:443 HIER_DIRECT flags=1, data=0x56227564b5a8, size=0, 
buf=0x562275f76900)


One endpoint (20.90.216.0) of a CONNECT tunnel delivered the "end of 
connection" signal to Squid.


It looks like the server disconnected. Likely without having sent any 
actual protocol level response.



2023/08/22 15:50:28.564 kid1| 5,4| AsyncCallQueue.cc(59) fireNext: 
entering tunnelServerClosed(FD -1, data=0x56227564b5a8)
2023/08/22 15:50:28.564 kid1| 5,4| AsyncCall.cc(42) make: make call 
tunnelServerClosed [call56950820]
2023/08/22 15:50:28.564 kid1| 26,3| tunnel.cc(752) noteClosure: 
conn6384669 local=10.100.16.37:18006 remote=20.90.216.0:443 HIER_DIRECT 
flags=1
2023/08/22 15:50:28.564 kid1| 46,5| access_log.cc(308) stopPeerClock: 
First connection started: 1692715828.514861, current total response time 
value: -1000
2023/08/22 15:50:28.564 kid1| 26,4| tunnel.cc(1338) saveError: [none] ? 
ERR_CANNOT_FORWARD



That negative response time looks a bit weird. Maybe just a display bug, 
or your machine's NTP clock doing some "time travel" that makes Squid 
think an error happened.



Amos


Re: [squid-users] Squid 6.2 with WCCP

2023-08-22 Thread Amos Jeffries

On 22/08/23 01:34, Alex Rousskov wrote:

On 8/21/23 05:06, Callum Haywood wrote:


Does anyone understand what is causing these errors? Are there any 
known issues or patches in progress?


A few years ago, several serious problems were discovered in WCCP code, 
including security vulnerabilities:


https://github.com/squid-cache/squid/security/advisories/GHSA-rgf3-9v3p-qp82

Some of the WCCP bugs were fixed without testing; developers fixing 
those bugs could not easily test WCCP. Some of the old WCCP bugs 
remained and some of the new fixes were buggy.


Today, WCCP code remains problematic. If your customers rely on WCCP, 
consider investing into revamping that neglected and buggy feature.




This PR <https://github.com/squid-cache/squid/pull/970> has some 
progress towards a fix of those. See the latest comment (currently Sept 
2022) for issues that still need to be resolved before that PR is ready 
for merge attempt.


The major issue remains the ability to test.

HTH
Amos


Re: [squid-users] Outgoing traffic through certain device instead of IP?

2023-08-12 Thread Amos Jeffries

On 12/08/23 05:23, Robert 'Bobby' Zenz wrote:

I'd like to send all the outgoing traffic from Squid through a certain
network device instead of an IP. There's `tcp_outgoing_address` and
`udp_outgoing_address` which only accepts an IP as parameter, but
there's no way to use a certain device?


Squid is limited to selecting certain details of the TCP packets.
Device routing details are up to the operating system.



I just wanted to verify that there is currently no way to have a
certain network device specified because I couldn't find anything about
it in the documentation. 


You can have Squid set dst-IP or TOS/QoS marking on packets. The OS 
routing services should be able to use those to make their selection.



My use-case here is that I have multiple

OpenVPN tunnels open and use Squid to funnel traffic through them
(including DNS queries which works great!). These OpenVPN tunnels all
have their own network device, but the IP address might or might not
change at some point, and when that happens Squid won't be able to
forward traffic anymore. Of course I can work around that (OpenVPN
`--ipchange` to fire a script when the IP changes), but I just wanted
to check whether I've missed something here.


In this case, leaving the outgoing IP to the OS is best.
<http://www.squid-cache.org/Doc/config/tcp_outgoing_mark/> is available 
to mark (aka classify in QoS terms) Squid traffic for the OS routing.
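A rough squid.conf sketch of that pattern (the ACL, mark value, and the 
Linux policy-routing commands in the comments are all illustrative):

```
# Classify traffic destined for one tunnel with a netfilter mark:
acl via_vpn1 dstdomain .example.net
tcp_outgoing_mark 0x1 via_vpn1

# The OS then routes on the mark instead of any fixed address, e.g.:
#   ip rule add fwmark 0x1 table 101
#   ip route add default dev tun0 table 101
```

Because routing keys on the mark and the device, the tunnel's IP address 
can change without breaking Squid's forwarding.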


HTH
Amos


Re: [squid-users] squid 6.1 esi compile error, ubuntu 22.04

2023-08-07 Thread Amos Jeffries

On 7/08/23 20:00, Dmitry Melekhov wrote:

Hello!


Built  using --disable-esi without problems.



Could you tell me what can cause this?



Seemingly lack of the libxml2 dependency.

Please ensure you run this command before building Squid:
  apt-get build-dep squid


If this issue or others remain see Alex's response.

HTH
Amos


Re: [squid-users] cachemgr.cgi & Internal Error: Missing Template MGR_INDEX

2023-07-28 Thread Amos Jeffries

On 29/07/23 14:42, Alex Rousskov wrote:

On 7/28/23 20:08, Brendan Kearney wrote:

i am running squid 6.1 on fedora 38, and cannot get the cachemgr.cgi 
working on this box.  I am getting the error:


Internal Error: Missing Template MGR_INDEX

when i try to connect using the cache manager interface.




That is the expected output when you are trying to access the manager 
interface directly from Squid, **instead** of via the cachemgr.cgi.


If you want to try the new manager interface I have a prototype 
javascript tool available at .



Amos


Re: [squid-users] Stack overflow with large IP lists

2023-07-27 Thread Amos Jeffries

On 27/07/23 04:22, Alex Rousskov wrote:
* I am curious whether your specific use case (going beyond splay tree 
destruction) be better addressed by a different storage type than splay 
trees. For example, have you considered whether using a IP 
address-friendly hash would be faster for, say, one million IP addresses?




There is a trie algorithm developed for rbldnsd which is extremely 
efficient for IP address storage and lookup on large lists. Using that 
for Squid ACL has been on my TODO list for years.


That software also contains efficient algorithms for automatic 
aggregation of overlapping IP ranges, which is functionality Squid has 
needed for a long time.
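To make the idea concrete, here is a toy sketch (purely illustrative, 
not rbldnsd's code) of how a bitwise trie gives membership tests whose 
cost depends on address length, not on list size:

```python
import ipaddress

class PrefixTrie:
    """Toy bitwise trie over IPv4 prefixes, illustrating the idea only;
    rbldnsd's actual structure is far more compact and cache-friendly."""

    def __init__(self):
        self.root = {}

    def add(self, cidr):
        net = ipaddress.ip_network(cidr)
        node, bits = self.root, int(net.network_address)
        for i in range(net.prefixlen):             # one level per prefix bit
            node = node.setdefault((bits >> (31 - i)) & 1, {})
        node["end"] = True                         # a covering prefix ends here

    def contains(self, addr):
        node, bits = self.root, int(ipaddress.ip_address(addr))
        for i in range(32):
            if "end" in node:                      # an enclosing prefix matched
                return True
            node = node.get((bits >> (31 - i)) & 1)
            if node is None:                       # fell off the trie: no match
                return False
        return "end" in node

t = PrefixTrie()
t.add("192.0.2.0/24")
t.add("198.51.100.0/25")
print(t.contains("192.0.2.55"))     # True: inside the /24
print(t.contains("198.51.100.200")) # False: outside the /25
```

Lookups never scan the list, so a million prefixes cost the same per 
query as ten, and aggregation falls out naturally whenever an "end" 
node makes deeper entries redundant.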


Any volunteers to assist with that are very welcome. Please contact 
squid-dev mailing list for followup.


Amos


Re: [squid-users] How to build Squid 6

2023-07-23 Thread Amos Jeffries

On 23/07/23 11:57, Henning Svane wrote:

Hi Alex

I have now followed the instruction below.
All compiling and building was done without problems.

When I run
sudo systemctl status squid
I get this message
Unit squid.service could not be found.
And /usr/sbin/squid do not exist

What do I miss?



The instructions Alex gave will build Squid under a /usr/local directory.

To have Squid installed where the system package would put it you need 
these ./configure options:


--prefix=/usr \
--localstatedir=/var \
--libexecdir=${prefix}/lib/squid \
--datadir=${prefix}/share/squid \
--sysconfdir=/etc/squid \
--with-default-user=proxy \
--with-logdir=/var/log/squid \
--with-pidfile=/run/squid.pid





I can see that the directory /etc/squid is not created. I guess I have 
to make it myself, correct?
Can I use the old files from the old 5.2 installation?

If you wish to use the old config, yes. Run "squid -k parse" first to see 
if there are any updates needed with the new version.


The /usr/lib/systemd/system/squid.service file from 5.2 might work, or 
you can also try the Debian 13 one attached.


HTH
Amos

## Copyright (C) 1996-2023 The Squid Software Foundation and contributors
##
## Squid software is distributed under GPLv2+ license and includes
## contributions from numerous individuals and organizations.
## Please see the COPYING and CONTRIBUTORS files for details.
##

[Unit]
Description=Squid Web Proxy Server
Documentation=man:squid(8)
After=network.target network-online.target nss-lookup.target

[Service]
Type=notify
PIDFile=/run/squid.pid
Group=proxy
RuntimeDirectory=squid
RuntimeDirectoryMode=0775
ExecStartPre=/usr/sbin/squid --foreground -z
ExecStart=/usr/sbin/squid --foreground -sYC
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
NotifyAccess=all

[Install]
WantedBy=multi-user.target


Re: [squid-users] Dstdomain from external ACL

2023-07-22 Thread Amos Jeffries

On 22/07/23 17:20, Alexeyяр Gruzdov wrote:

Wow…
Thank you so much !

For now I used a simple .py script that checks if the url is in the 
table and sends back OK or ERR, depending on the result.


But allow me to ask you - how does squid parse the url?
I think it uses regexp, is that true?

All parsers in the 'squid' binary perform full parse with validation.




Because, for example, if I add the url to the DB like example.com 
(base url name), and the proxy request is something like 
example.com/page1/ - this will be matched. That's great.




Oh, there are many moving parts involved there.

First is the HTTP request URL that Squid received, it could be any of 
origin-form, authority-form, or relative-url.


(... probably you configured Squid to only send the URL domain name to 
the helper.)


Second is what details you configured the external_acl_type directive to 
pass on.


Third is how the helper receives its input. The helper I suggested uses 
Perl string split to separate the concurrency channel-ID from the UID 
portion and pack("H*",...) for binary safety.


Fourth is how the helper is using its input to lookup the database.
 The helper I suggested uses SQL "=" operator, whose matching is 
string-wise exact equality.


As far as I know only the Perl string split is potentially using regex, 
but not in any way which would cause the behaviour you describe.


If you are still using your own custom helper, look into how it is doing 
those third and fourth things.



HTH
Amos


Re: [squid-users] Dstdomain from external ACL

2023-07-21 Thread Amos Jeffries

On 21/07/23 00:23, Alexeyяр Gruzdov wrote:

Hello.

Looks I found how to do that and this works well for me:

The external helper script must check if the url is in the DB and answer 
OK (if it is) or ERR (if it isn't)




You can probably use the ext_sql_session_acl helper bundled with Squid 
instead of writing your own from scratch.
See 
 
for its parameters.


AIUI, you want the --uidcol to be the table of URLs and leave both 
--usercol and --tagcol unset.
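For illustration only -- the DSN, credentials, and table/column names 
below are assumptions, and the exact option set should be checked 
against the helper's own documentation for your Squid version -- the 
wiring might look something like:

```
external_acl_type urldb ttl=60 children-max=5 %DST \
    /usr/lib/squid/ext_sql_session_acl \
    --dsn "DBI:mysql:database=squid;host=127.0.0.1" \
    --user dbuser --password dbpass \
    --table allowed_urls --uidcol url
acl in_db external urldb
http_access allow in_db
```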



Amos


Re: [squid-users] New blood

2023-07-21 Thread Amos Jeffries

On 18/07/23 17:19, Mark Kenna wrote:
Hi all, I'm very new and have been struggling to learn how to do all of 
this. Can I get a few pointers please?




Hi Mark,
 welcome to the Squid community.

First off, do you have any particular goals you are trying to make Squid 
perform?


For general knowledge about Squid capabilities and features you can 
explore the Squid wiki at . Its pages vary widely in 
age, but they can give you a basic idea of the types of things you 
might use Squid for in your network.


Amos

