Re: [squid-users] Squid as a http/https transparent web proxy in 2024.... do I still have to build from source?

2024-04-11 Thread David Komanek

Date: Thu, 11 Apr 2024 09:55:14 +
From: PinPin Poola
To:"squid-users@lists.squid-cache.org"

Subject: [squid-users] Squid as a http/https transparent web proxy in
2024 do I still have to build from source?
Message-ID:



Content-Type: text/plain; charset="iso-8859-1"

I have put this off for a while, as I find everything about squid very 
intimidating. The fact that you still use an email mailing list and not a web 
forum amazes and scares me in equal measure.

I am probably using the wrong terminology here, but I now desperately need to 
build an http/https transparent web proxy with two interfaces, so that clients 
on an isolated/non-Internet-routable subnet can download some large (25GB+) 
packages.

I don't care which Linux distro tbh; but would prefer Ubuntu as I have most 
familiarity with it.

I have watched a few old YouTube videos of people explaining that, at the time, 
to do this you had to build from source and add switches like "--enable-ssl 
--enable-ssl-crtd --with-openssl \" before compiling the code.


At least for the FreeBSD binary-packaged squid these three switches should 
be enabled, but I don't know whether they are sufficient.
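
A quick shell sketch to check a packaged binary for those switches (it just 
filters the configure options printed by "squid -v"):

  squid -v | tr ' ' '\n' | grep -i ssl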


# uname -vm
FreeBSD 13.3-RELEASE-p1 GENERIC amd64

# squid -v
Squid Cache: Version 6.6
Service Name: squid

This binary uses OpenSSL 1.1.1w-freebsd  11 Sep 2023. For legal 
restrictions on distribution see https://www.openssl.org/source/license.html


configure options:  '--with-default-user=squid' 
'--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' 
'--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' 
'--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' 
'--with-swapdir=/var/squid/cache' '--without-gnutls' 
'--with-included-ltdl' '--enable-build-info' 
'--enable-removal-policies=lru heap' '--disable-epoll' 
'--disable-arch-native' '--disable-strict-error-checking' 
'--without-systemd' '--without-netfilter-conntrack' '--without-cap' 
'--enable-eui' '--without-ldap' '--enable-cache-digests' 
'--enable-delay-pools' '--disable-ecap' '--disable-esi' 
'--without-expat' '--without-xml2' '--enable-follow-x-forwarded-for' 
'--with-pthreads' '--with-heimdal-krb5=/usr' 'CFLAGS=-I/usr/include -O2 
-pipe -fstack-protector-strong -isystem /usr/local/include 
-fno-strict-aliasing ' 'LDFLAGS=  -pthread -fstack-protector-strong 
-L/usr/local/lib ' 'LIBS=-lkrb5 -lgssapi -lgssapi_krb5 ' 
'KRB5CONFIG=/usr/bin/krb5-config' 'krb5_config=/usr/bin/krb5-config' 
'--enable-htcp' '--enable-icap-client' '--enable-icmp' 
'--enable-ident-lookups' '--enable-ipv6' '--enable-kqueue' 
'--with-large-files' '--enable-http-violations' '--without-nettle' 
'--enable-snmp' '--enable-ssl' '--with-openssl' 
'--enable-security-cert-generators=file' 
'LIBOPENSSL_CFLAGS=-I/usr/include' 'LIBOPENSSL_LIBS=-lcrypto -lssl' 
'--enable-ssl-crtd' '--disable-stacktraces' '--without-tdb' 
'--disable-ipf-transparent' '--enable-ipfw-transparent' 
'--disable-pf-transparent' '--without-nat-devpf' '--enable-forw-via-db' 
'--enable-wccp' '--enable-wccpv2' '--enable-auth-basic=DB NCSA PAM POP3 
RADIUS SMB_LM fake getpwnam NIS' '--enable-auth-digest=file' 
'--enable-auth-negotiate=kerberos wrapper' '--enable-auth-ntlm=fake 
SMB_LM' '--enable-log-daemon-helpers=file DB' 
'--enable-external-acl-helpers=file_userip unix_group delayer' 
'--enable-url-rewrite-helpers=fake LFS' 
'--enable-security-cert-validators=fake' 
'--enable-storeid-rewrite-helpers=file' '--enable-storeio=aufs diskd 
rock ufs' '--enable-disk-io=DiskThreads DiskDaemon AIO Blocking IpcIo 
Mmapped' '--prefix=/usr/local' '--mandir=/usr/local/man' 
'--disable-silent-rules' '--infodir=/usr/local/share/info/' 
'--build=amd64-portbld-freebsd13.2' 
'build_alias=amd64-portbld-freebsd13.2' 'CC=cc' 'CPPFLAGS=-isystem 
/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -pipe 
-fstack-protector-strong -isystem /usr/local/include 
-fno-strict-aliasing  -isystem /usr/local/include ' 'CPP=cpp' 
'PKG_CONFIG_LIBDIR=/wrkdirs/usr/ports/www/squid/work/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig' 
--enable-ltdl-convenience


# pkg info squid
squid-6.6
Name   : squid
Version    : 6.6
Installed on   : Thu Feb 22 10:57:12 2024 CET
Origin : www/squid
Architecture   : FreeBSD:13:amd64
Prefix : /usr/local
Categories : www
Licenses   : GPLv2
Maintainer : tim...@gmail.com
WWW    : http://www.squid-cache.org/
Comment    : HTTP Caching Proxy
Options    :
    ARP_ACL    : on
    AUTH_LDAP  : off
    AUTH_NIS   : on
    AUTH_SASL  : off
    AUTH_SMB   : off
    AUTH_SQL   : off
    CACHE_DIGESTS  : on
    DEBUG  : off
    DELAY_POOLS    : on
    DOCS   : on
    ECAP   : off
    ESI    : off
    EXAMPLES   : on
    FOLLOW_XFF : on
    FS_AUFS    : on
    FS_DISKD   : on
    FS_ROCK    : on
    

Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-04 Thread David Komanek



Date: Wed, 3 Apr 2024 11:05:02 -0400
From: Alex Rousskov
To:squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to
http
Message-ID:

Content-Type: text/plain; charset=UTF-8; format=flowed

On 2024-04-03 02:14, Loučanský Lukáš wrote:


this has recently started winding me up more than I can let it go. For a while
Chrome has been upgrading in-page links to https.

Just to add two more pieces of related information to this thread:

Some Squid admins report that their v6-based code does not suffer from
this issue while their v5-based code does. I have not verified those
reports, but there may be more to the story here. What Squid version are
_you_ using?

One way to track progress on this annoying and complex issue is to
follow the pull request below. The current code cannot be officially
merged as is, and I would not recommend using it in production (because
of low-level bugs that will probably crash Squid in some cases), but
testing it in the lab and providing feedback to authors may be useful:

https://github.com/squid-cache/squid/pull/1668

HTH,

Alex.





Hello,

fortunately, I do not observe this problem accessing sites running only 
on port 80 (no 443 at all), but my configuration is simple:


squid 6.6 as FreeBSD binary package

there is not much about ssl in the config file, though; I just pass it 
through, with no ssl juggling:


acl SSL_ports port
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow 
http_access allow 
http_access allow 
http_access allow 
http_access allow 
http_access deny all

I don't think it was different with squid 5.9, which I used till 
November 2023.


Occasionally, I see another problem, which may or may not be related to 
squid's ssl handling configuration: PR_END_OF_FILE_ERROR (Firefox) / 
ERR_CONNECTION_CLOSED (Chrome), typically when accessing samba.org. But they 
use a permanent redirect from http to https, so it is a different situation 
from an http-only site.


David

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid as an education tool

2024-02-12 Thread David Touzeau
being used as a policy enforcer 
rather than an education tool.
I believe education is one of the top priorities, ahead of enforcing 
policies.
The nature of the policies depends on the environment and the risks, but 
ultimately, understanding the meaning of a policy
contributes a lot to the cooperation of the user or employee.

I have yet to see a solution like the following:
each user has a profile which, on receiving a policy block, is 
prompted with an option to temporarily allow
the specific site or domain.
Also, I have not seen an implementation which allows the user to disable or 
lower the policy strictness for a short period of time.

I am looking for such implementations, if they already exist, to run education 
sessions with teenagers.

Thanks,
Eliezer

___
squid-users mailing list
mailto:squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
mailto:squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users





--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau

Thanks, Alex

This will fix the issue!

Le 31/01/2024 à 17:43, Alex Rousskov a écrit :

On 2024-01-31 09:23, David Touzeau wrote:


Hi, %note is used by our external_acls and for logging other tokens, 
and we also use Group as a token.
It can be disabled by directly removing the Kerberos source code before 
compiling, but I would like to know if there is another way.


In most cases, one does not have to (and does not really want to) log 
_all_ transaction annotations. It is possible to specify annotations 
that should be logged by using the annotation name as a %note parameter.


For example, to just log annotation named foo, use %note{foo} instead 
of %note.


In many cases, folks that log multiple annotations prepend the 
annotation name so that it is easier (especially for humans) to 
extract the right annotation from the access log record:


    ... foo=%note{foo} bar=%note{bar} ...
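
For instance, a minimal squid.conf sketch (the format name "mylog" and the 
log path are placeholders):

    logformat mylog %ts.%03tu %>a %Ss/%03>Hs %rm %ru group=%note{group}
    access_log /var/log/squid/access.log mylog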


HTH,

Alex.



Le 31/01/2024 à 14:36, Andrey K a écrit :

Hello, David,

> Any way to remove these entries from the log?
I think you should correct the logformat directive in your squid 
configuration to disable annotation logging (%note): 
http://www.squid-cache.org/Doc/config/logformat/


Kind regards,
      Ankor.





ср, 31 янв. 2024 г. в 15:51, David Touzeau :

    Anyway to remove these entries from the log ?

    Le 31/01/2024 à 10:01, Andrey K a écrit :

    Hello, David,

    group values in your logs are BASE64-encoded binary AD group SIDs.

    You can try to decode them with the simple perl script sid-reader.pl
    (see below):

    echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA==  | base64 -d | perl sid-reader.pl

    And finally convert the SID to a group name:
    wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

    Kind regards,
          Ankor


    sid-reader.pl:
    #!/usr/bin/perl
    # https://lists.samba.org/archive/linux/2005-September/014301.html

    # read the raw binary SID from stdin
    my $binary_sid = join('', <>);

    # revision (1 byte), sub-authority count (1 byte), then the 48-bit
    # identifier authority (big-endian) and the sub-authorities (little-endian)
    my ($sid_rev, $num_auths, $id1, $id2, @ids) =
        unpack("H2 H2 n N V*", $binary_sid);
    my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);

    print "$sid_string\n";


    вт, 30 янв. 2024 г. в 18:49, David Touzeau :


    Hi, when using Kerberos with Squid, the access log contains long
    Group tags:

    I would like to know how to stop Squid from grabbing groups
    during authentication verification, and also how to
    decode the Group value

    example of an access.log

    |1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443
    CONNECT eu-mobile.events.data.microsoft.com:443 leblud
    HIER_DIRECT/13.69.239.72:443 -
    mac="00:00:00:00:00:00"
user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAU

Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau





Hi, %note is used by our external_acls and for logging other tokens, 
and we also use Group as a token.
It can be disabled by directly removing the Kerberos source code before 
compiling, but I would like to know if there is another way.


Le 31/01/2024 à 14:36, Andrey K a écrit :

Hello, David,

> Any way to remove these entries from the log?
I think you should correct the logformat directive in your squid 
configuration to disable annotation logging (%note): 
http://www.squid-cache.org/Doc/config/logformat/


Kind regards,
      Ankor.





ср, 31 янв. 2024 г. в 15:51, David Touzeau :

Any way to remove these entries from the log?

Le 31/01/2024 à 10:01, Andrey K a écrit :

Hello, David,

group values in your logs are BASE64-encoded binary AD group SIDs.
You can try to decode them with the simple perl script sid-reader.pl
(see below):

echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA==  | base64 -d | perl sid-reader.pl

And finally convert the SID to a group name:
wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

Kind regards,
      Ankor


sid-reader.pl:
#!/usr/bin/perl
# https://lists.samba.org/archive/linux/2005-September/014301.html

my $binary_sid = join('', <>);

my ($sid_rev, $num_auths, $id1, $id2, @ids) =
    unpack("H2 H2 n N V*", $binary_sid);
my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);
print "$sid_string\n";


вт, 30 янв. 2024 г. в 18:49, David Touzeau :


Hi, when using Kerberos with Squid, the access log contains long
Group tags:

I would like to know how to stop Squid from grabbing groups
during authentication verification, and also how to
decode the Group value

example of an access.log

|1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443
CONNECT eu-mobile.events.data.microsoft.com:443 leblud
HIER_DIRECT/13.69.239.72:443 -
mac="00:00:00:00:00:00"

user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEST/MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESLaEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlvQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESPLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES98IAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShPgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaHsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmegAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESiRgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/tgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5IEAAA==%0D%0Agroup:%20AQUAAAU

Re: [squid-users] Long Group TAG in access.log when using kerberos

2024-01-31 Thread David Touzeau

Any way to remove these entries from the log?

Le 31/01/2024 à 10:01, Andrey K a écrit :

Hello, David,

group values in your logs are BASE64-encoded binary AD group SIDs.
You can try to decode them with the simple perl script sid-reader.pl
(see below):

echo AQUAAAUVCkdDGG1JBGW2KqEShhgBAA==  | base64 -d | perl sid-reader.pl

And finally convert the SID to a group name:
wbinfo -s S-01-5-21-407062282-1694779757-312552118-71814

Kind regards,
      Ankor


sid-reader.pl:
#!/usr/bin/perl
# https://lists.samba.org/archive/linux/2005-September/014301.html

my $binary_sid = join('', <>);

my ($sid_rev, $num_auths, $id1, $id2, @ids) =
    unpack("H2 H2 n N V*", $binary_sid);
my $sid_string = join("-", "S", $sid_rev, ($id1 << 32) + $id2, @ids);
print "$sid_string\n";


вт, 30 янв. 2024 г. в 18:49, David Touzeau :


Hi, when using Kerberos with Squid, the access log contains long Group
tags:

I would like to know how to stop Squid from grabbing groups during
authentication verification, and also how to decode the Group
value

example of an access.log

|1706629424.779 130984 10.1.12.120 TCP_TUNNEL/500 5443 CONNECT
eu-mobile.events.data.microsoft.com:443 leblud
HIER_DIRECT/13.69.239.72:443 -
mac="00:00:00:00:00:00"

user:%20leblud%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBsMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESBa==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESj34AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQbcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlPQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNZUAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/MMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESh5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuc4AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESl8QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGnsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESihgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESnsEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8QYBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNtcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESX+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8KMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShMcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0XgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMwIBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQSUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESAQIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESufYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESNAkBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESccMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEStdYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFXkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESb6EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESFc==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESluoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESxY8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2cEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJ5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEST/MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESLaEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESlvQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESPLkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShxgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES98IAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShPgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESaHsAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmegAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESiRgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES/tgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5IEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESN9cAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESbQEBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESjZwAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESmsQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESvtIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESGAEBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESePYAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESfp0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESuj0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESA8gAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES7p8AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQu==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESZ50AAA==%0D%0Agroup:%20AQUAAAUVAA

[squid-users] Long Group TAG in access.log when using kerberos

2024-01-30 Thread David Touzeau
0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESZ3sAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESTvMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES3HgAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESJdkAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES5YcAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES6AUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESd/YAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESUsQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESz3gAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES2+0AAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShhgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESMLEAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESP+==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESk/QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESTfoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESixgBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqEShccAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESVwoAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQuwAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESA9==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQcMAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES0QUBAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESQO==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESu5wAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESYcIAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESE9MAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES7oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES9YQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES9oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESd5EAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES84QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES8oQAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqES74QAAA==%0D%0Agroup:%20AQUAAAUVCkdDGG1JBGW2KqESgHsAAA==%0D%0Agroup:%20AQEAABIB%0D%0Aaccessrule:%20final_allow%0D%0Afirst:%20ERROR%0D%0Awebfilter:%20pass%0D%0Aexterr:%20invalid_code_431%0D%0A 
ua="-" exterr="-|-"|


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread David Komanek


On 11/27/23 11:36, Amos Jeffries wrote:


On 27/11/23 23:05, David Komanek wrote:


On 11/27/23 10:40, Amos Jeffries wrote:

On 27/11/23 22:21, David Komanek wrote:
here are the debug logs (IP addresses redacted) after connection 
attempt to https://samba.org/ :



...
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) 
sendStartOfMessage: HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the 
guy on the other side told me so. According to the log, do you 
think I can somehow make it work, or is it definitely a problem on 
the samba.org webserver?



That ERR_PROTOCOL_UNKNOWN indicates that your proxy is trying to 
SSL-Bump the CONNECT tunnel and not understanding the protocol 
inside the TLS layer - which is expected if that protocol is HTTP/2.



For now you should be able to use 
<http://www.squid-cache.org/Doc/config/on_unsupported_protocol/> to 
allow these tunnels. Alternatively use the "splice" action to 
explicitly bypass the SSL-Bump process.



Thank you for the quick response. So I should add

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN
on_unsupported_protocol tunnel foreignProtocol

to the squid.conf, right?


By the time that error exists, it is too late AFAIK.

I was thinking something like:
  acl foo dstdomain samba.org
  on_unsupported_protocol tunnel foo
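
For the "splice" alternative, a minimal peek-and-splice sketch (the acl 
names are placeholders):

  acl step1 at_step SslBump1
  acl noBumpSites ssl::server_name .samba.org
  ssl_bump peek step1
  ssl_bump splice noBumpSites
  ssl_bump bump all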





Still, I don't understand why this case is handled by my browsers 
(or squid?) differently from usual HTTPS traffic to other sites. I 
suppose that plenty of sites accept HTTP/2 nowadays. A huge 
lack of knowledge on my side :-)


I'm not clear exactly why you see this only now, and only with 
samba.org. Squid not supporting HTTP/2 yet is a big part of the 
problem though.



Cheers
Amos



Hello,

I managed to google some curl options useful in this context, and the 
results are quite interesting:


working: curl --http2 -x cache.my.domain:3128 https://www.samba.org/

working: curl --http1.1 -x cache.my.domain:3128 https://www.samba.org/

rejected by samba.org: curl --http1.0 -x cache.my.domain:3128 
https://www.samba.org/

    this returns a simple html page with code 403:
  403 Forbidden
  Request forbidden by administrative rules.

not working: chrome, firefox via proxy
   chrome returns "ERR_CONNECTION_CLOSED"
   firefox returns "PR_END_OF_FILE_ERROR"

So, it seems to me that squid doesn't like something about the 
heavy-duty browsers in this case. Even if I disable http/2 in firefox, 
it makes no difference. I'm really confused.


Best regards,
David


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread David Komanek


On 11/27/23 11:36, Amos Jeffries wrote:


On 27/11/23 23:05, David Komanek wrote:


On 11/27/23 10:40, Amos Jeffries wrote:

On 27/11/23 22:21, David Komanek wrote:
here are the debug logs (IP addresses redacted) after connection 
attempt to https://samba.org/ :



...
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) 
sendStartOfMessage: HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the 
guy on the other side told me so. According to the log, do you 
think I can somehow make it work, or is it definitely a problem on 
the samba.org webserver?



That ERR_PROTOCOL_UNKNOWN indicates that your proxy is trying to 
SSL-Bump the CONNECT tunnel and not understanding the protocol 
inside the TLS layer - which is expected if that protocol is HTTP/2.



For now you should be able to use 
<http://www.squid-cache.org/Doc/config/on_unsupported_protocol/> to 
allow these tunnels. Alternatively use the "splice" action to 
explicitly bypass the SSL-Bump process.



Thank you for the quick response. So I should add

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN
on_unsupported_protocol tunnel foreignProtocol

to the squid.conf, right?


doesn't work


By the time that error exists, it is too late AFAIK.

I was thinking something like:
  acl foo dstdomain samba.org
  on_unsupported_protocol tunnel foo


doesn't work either


Regards,
David





Still, I don't understand why this case is handled by my browsers 
(or squid?) differently from usual HTTPS traffic to other sites. I 
suppose that plenty of sites accept HTTP/2 nowadays. A huge 
lack of knowledge on my side :-)


I'm not clear exactly why you see this only now, and only with 
samba.org. Squid not supporting HTTP/2 yet is a big part of the 
problem though.



Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread David Komanek


On 11/27/23 10:40, Amos Jeffries wrote:

On 27/11/23 22:21, David Komanek wrote:
here are the debug logs (IP addresses redacted) after connection 
attempt to https://samba.org/ :



...
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) 
sendStartOfMessage: HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the guy 
on the other side told me so. According to the log, do you think I 
can somehow make it work, or is it definitely a problem on the 
samba.org webserver?



That ERR_PROTOCOL_UNKNOWN indicates that your proxy is trying to 
SSL-Bump the CONNECT tunnel and not understanding the protocol inside 
the TLS layer - which is expected if that protocol is HTTP/2.



For now you should be able to use 
<http://www.squid-cache.org/Doc/config/on_unsupported_protocol/> to 
allow these tunnels. Alternatively use the "splice" action to 
explicitly bypass the SSL-Bump process.



Thank you for the quick response. So I should add

acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN
on_unsupported_protocol tunnel foreignProtocol

to the squid.conf, right?


Still, I don't understand why this case is handled by my browsers (or 
squid?) differently from usual HTTPS traffic to other sites. I suppose 
that plenty of sites accept HTTP/2 nowadays. A huge lack of 
knowledge on my side :-)



Sincerely,

  David


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-27 Thread David Komanek



Date: Thu, 23 Nov 2023 01:44:30 +1300
From: Amos Jeffries 
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] how to avoid use http/1.0 between squid and
the target
Message-ID: 
Content-Type: text/plain; charset=UTF-8; format=flowed

On 22/11/23 23:03, David Komanek wrote:

Hello,

I have a strange problem (definitely some kind of my own ignorance) :

If I try to access anything on the site https://www.samba.org WITHOUT
proxy, my browser happily negotiates the http/2 protocol and receives all
the data. For http://www.samba.org WITHOUT proxy it starts with http/1.1,
which is auto-redirected from http to https and continues with http/2.
So far so good.

But WITH proxy, it happens that squid is using http/1.0.

That is odd. Squid should always be sending requests as HTTP/1.1.

Have a look at the debug level "11,2" cache.log records to see if Squid
is actually sending 1.0 or if it is just relaying CONNECT requests with
possibly HTTP/1.0 inside.
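
For reference, a minimal squid.conf sketch enabling that logging (debug 
section 11 covers HTTP traffic; ALL,1 keeps the rest quiet):

  debug_options ALL,1 11,2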


Hello,

here are the debug logs (IP addresses redacted) after connection attempt 
to https://samba.org/ :


--
2023/11/27 09:58:07.345 kid1| 11,2| client_side.cc(1332) 
parseHttpRequest: HTTP Client conn21570 local=195.113.x.y:3128 
remote=10.10.a.b:53868 FD 666 flags=1
2023/11/27 09:58:07.345 kid1| 11,2| client_side.cc(1336) 
parseHttpRequest: HTTP Client REQUEST:

-
CONNECT samba.org:443 HTTP/1.1
Host: samba.org:443
Proxy-Connection: keep-alive
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, 
like Gecko) Chrome/119.0.0.0 Safari/537.36



--
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(273) sendStartOfMessage: 
HTTP Client conn21576 local=195.113.x.y:3128 remote=10.10.a.b:16730 FD 
1267 flags=1
2023/11/27 09:58:07.370 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:

-
HTTP/1.1 400 Bad Request
Server: squid/6.5
Mime-Version: 1.0
Date: Mon, 27 Nov 2023 08:58:07 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3363
X-Squid-Error: ERR_PROTOCOL_UNKNOWN 0
Cache-Status: pteryx.natur.cuni.cz
Via: 1.1 pteryx.natur.cuni.cz (squid/6.5)
Connection: close

So, it seems it's not true that squid is using http/1.0, but the guy on 
the other side told me so. According to the log, do you think I can 
somehow make it work, or is it definitely a problem on the samba.org 
webserver?


Thanks again,

  David



HTH
Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] how to avoid use http/1.0 between squid and the target

2023-11-22 Thread David Komanek

Hello,

I have a strange problem (definitely some kind of my own ignorance) :

If I try to access anything on the site https://www.samba.org WITHOUT 
proxy, my browser happily negotiates the http/2 protocol and receives all 
the data. For http://www.samba.org WITHOUT proxy it starts with http/1.1, 
which is auto-redirected from http to https and continues with http/2. 
So far so good.


But WITH proxy, it appears that squid is using http/1.0. The remote site 
blocks this protocol, requiring at least http/1.1 (confirmed by the 
samba.org website maintainer), so the site remains inaccessible. But 
this is the only site where I have encountered this problem: 
if I connect WITH proxy to other sites, squid uses http/1.1 as expected.


So I'm lost here, unable to find the reason why http/1.1 couldn't be 
used by squid in some rare cases. What am I missing here? I am not aware 
of any configuration directive which could cause this.


browsers: chrome, firefox (both updated)
squid: freebsd package (now version 6.5, but I had the same problem 
with 5.9 before)


Thanks in advance for some hints here.

Best regards,

  David Komanek
  Charles University in Prague
  Faculty of Science


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to start Squid 6.3 "earlyMessages->size() < 1000"

2023-10-02 Thread David Touzeau

Thank you, you've enlightened me;
I had the GlobalWhitelistDSTNet directive declared twice in two 
different includes.
This meant that an identical acl declared in two different places would 
contradict itself on the same addresses and generate mass warnings.
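
For anyone hunting the same problem, a small shell sketch to spot duplicate 
entries across included acl files (the path is just a placeholder for 
wherever the includes live):

  cat /etc/squid/includes/*.acl | sort | uniq -d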


On 02/10/2023 22:01, Alex Rousskov wrote:



Since Squid 6.x we have this strange behavior on acl dst.
Many warnings are generated:

2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.226' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.226' is a subnetwork of 
(A) '64.34.72.226'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.226' is 
ignored to keep splay tree searching predictable


(B) '64.34.72.226' is a subnetwork of (A) '64.34.72.226' --> 
Sure, this is the IP address.


Is it possible that you have two 64.34.72.226 entries in that 
GlobalWhitelistDSTNet ACL? Perhaps in another included configuration 
file or something like that?



You should probably remove '64.34.72.226' from the ACL named 
'GlobalWhitelistDSTNet' --> Why? This is the only IP address in the 
acl ???


Squid thinks that there is more than one copy of the 64.34.72.226 address 
in the GlobalWhitelistDSTNet ACL. It could be a Squid bug, of course. Please 
share a configuration that reproduces the issue, or a pointer to 
compressed "squid -N -X -d9 ..." output while reproducing the problem.



2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"

Aborted


This assert is a side effect of the above ACL problem/bug - you 
probably have many IPs in that ACL, and the corresponding WARNINGs 
exceed Squid's hard-coded message accumulation limit. Now that we know 
how a broken(*) configuration can produce so many early cache.log 
messages, we should probably modify Squid to quit without asserting, 
but let's focus on the root cause of your problems -- those WARNING 
messages.


(*) I am not implying that _your_ configuration is broken.


Cheers,

Alex.


2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of 
(A) '64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is 
ignored to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.230' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of 
(A) '64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is 
ignored to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove 
'64.34.72.230' from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.232' is a subnetwork of 
(A) '64.34.72.232'


Following all these warnings, Squid won't start, with this error:

2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"

Aborted

How can this be avoided?

--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Unable to start Squid 6.3 "earlyMessages->size() < 1000"

2023-10-02 Thread David Touzeau


Hi

Since Squid 6.x we have this strange behavior on acl dst.
Many warnings are generated:

2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.226' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.226' is a subnetwork of (A) 
'64.34.72.226'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.226' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.226' 
from the ACL named 'GlobalWhitelistDSTNet'



(B) '64.34.72.226' is a subnetwork of (A) '64.34.72.226' --> Sure, 
this is the IP address.


You should probably remove '64.34.72.226' from the ACL named 
'GlobalWhitelistDSTNet' --> Why? This is the only IP address in the acl ???



2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of (A) 
'64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.230' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.230' is a subnetwork of (A) 
'64.34.72.230'
2023/10/02 20:18:50| WARNING: because of this '64.34.72.230' is ignored 
to keep splay tree searching predictable
2023/10/02 20:18:50| WARNING: You should probably remove '64.34.72.230' 
from the ACL named 'GlobalWhitelistDSTNet'
2023/10/02 20:18:50| WARNING: (B) '64.34.72.232' is a subnetwork of (A) 
'64.34.72.232'


Following all these warnings, Squid won't start, with this error:

2023/10/02 20:20:09| FATAL: assertion failed: debug.cc:606: 
"earlyMessages->size() < 1000"

Aborted

How can this be avoided?

--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-09-18 Thread David Touzeau

Many thanks Francesco !!


On 17/09/2023 16:55, Francesco Chemolli wrote:

Hi David,
PR 1481 <https://github.com/squid-cache/squid/pull/1481> should 
address your problem; it needs to be reviewed,
merged to trunk, and backported to v6, so don't hold your breath,
but it should be just a matter of time.
Once done, you will also have to add a configuration line to your 
squid.conf (manual 
<http://www.squid-cache.org/Doc/config/cache_log_message/>)


On Mon, Aug 28, 2023 at 10:59 PM Francesco Chemolli 
 wrote:


That's a good question; not right now, unless you're willing to
patch the squid sources.
In that case, just remove the debugs() statement in lines 200-203
of file src/helper/Reply.cc .



On Mon, Aug 28, 2023 at 9:52 PM David Touzeau
 wrote:

Thanks You

As these changes affect many things for us (we use tags for
statistics / elasticsearch) and it seems this behavior is
just a warning (squid still seems to work as expected, e.g. note acls):

is there a way to remove these warnings, because they increase
I/O and cache.log size dramatically?

regards

On 28/08/2023 22:46, Francesco Chemolli wrote:

Hi David,
   you should use
itchart_=PASS

The trailing underscore signals Squid that this is a custom
header.

On Mon, Aug 28, 2023 at 3:54 PM David Touzeau
 wrote:


Hi

Since 6.2 (i.e. after migrating from 5.8)

Squid complains about tokens sent by the external acl helper

the external acl helper sends
"OK itchart=PASS user=dtouzeau category=143
category-name=Trackers clog=cinfo:143-Trackers;"

squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to
add a trailing underscore: itchart_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: category=143
    advice: If this is a custom annotation, rename it to
add a trailing underscore: category_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: category-name=Trackers
    advice: If this is a custom annotation, rename it to
add a trailing underscore: category-name_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to
add a trailing underscore: clog_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or
unexpected from-helper annotation with a name reserved
for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to
add a trailing underscore: itchart_
    current master transaction: master278

Should the helper, instead of "itchart=PASS", send

"itchart_=PASS"
or
"itchart_PASS"

?




-- 
David Touzeau - Artica Tech France

Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



-- 
    Francesco


-- 
David Touzeau - Artica Tech France

Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  




-- 
    Francesco




--
    Francesco


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-08-28 Thread David Touzeau

Thanks You

As these changes affect many things for us (we use tags for statistics / 
elasticsearch) and it seems this behavior is just a warning (squid 
still seems to work as expected, e.g. note acls):

is there a way to remove these warnings, because they increase I/O and 
cache.log size dramatically?


regards

On 28/08/2023 22:46, Francesco Chemolli wrote:

Hi David,
   you should use
itchart_=PASS

The trailing underscore signals Squid that this is a custom header.
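
Applying that advice to the helper reply quoted below would presumably give 
(user= is a standard from-helper annotation and keeps its name):

OK itchart_=PASS user=dtouzeau category_=143 category-name_=Trackers 
clog_=cinfo:143-Trackers;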

On Mon, Aug 28, 2023 at 3:54 PM David Touzeau  
wrote:



Hi

Since 6.2 (i.e. after migrating from 5.8)

Squid complains about tokens sent by the external acl helper

the external acl helper sends
"OK itchart=PASS user=dtouzeau category=143 category-name=Trackers
clog=cinfo:143-Trackers;"

squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
itchart=PASS
    advice: If this is a custom annotation, rename it to add a
trailing underscore: itchart_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
category=143
    advice: If this is a custom annotation, rename it to add a
trailing underscore: category_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
category-name=Trackers
    advice: If this is a custom annotation, rename it to add a
trailing underscore: category-name_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to add a
trailing underscore: clog_
    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected
from-helper annotation with a name reserved for Squid use:
itchart=PASS
    advice: If this is a custom annotation, rename it to add a
trailing underscore: itchart_
    current master transaction: master278

Should the helper, instead of "itchart=PASS", send

"itchart_=PASS"
or
"itchart_PASS"

?




-- 
David Touzeau - Artica Tech France

Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



--
    Francesco


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] 6.2: Unsupported or unexpected from-helper annotation with a name reserved for Squid use

2023-08-28 Thread David Touzeau


Hi

Since 6.2 (i.e. after migrating from 5.8)

Squid complains about tokens sent by the external acl helper

the external acl helper sends
"OK itchart=PASS user=dtouzeau category=143 category-name=Trackers 
clog=cinfo:143-Trackers;"


squid complains:
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: itchart_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: category=143
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: category_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: category-name=Trackers
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: category-name_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: clog=cinfo:143-Trackers;
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: clog_

    current master transaction: master278
2023/08/28 16:47:02 kid1| WARNING: Unsupported or unexpected from-helper 
annotation with a name reserved for Squid use: itchart=PASS
    advice: If this is a custom annotation, rename it to add a trailing 
underscore: itchart_

    current master transaction: master278

Should the helper, instead of "itchart=PASS", send

"itchart_=PASS"
or
"itchart_PASS"

?




--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] %LOGIN place in squid 5.8 acls

2023-04-24 Thread David Touzeau

Thanks Amos for pointing out the mistake; yes, my explanation was wrong.
You are right: the first object !allowed_domains matches, so squid 
then computes the second object. This is expected behavior.


Following your suggestion, my problem was having the rule "http_access allow 
noauth_sites" in first place.
Yes, it will allow requests, but requests will then be allowed past all 
other rules too.

It makes sense: why compute all the other rules if the first one already allows?

If I add office365.com to the noauth_sites object but I do not want 
office365.com allowed for limited_users, then noauth_sites in first place 
will bypass all the "deny" rules.

Am I wrong?


On 24/04/2023 11:22, Amos Jeffries wrote:

On 24/04/2023 11:33 am, David Touzeau wrote:
We have a "problem" with ACLs, and I don't know how to address this 
situation in Squid 5.8

Let me explain:
We have an Active Directory group named limited_users that is only 
allowed to surf on a very limited list of websites.
These users are therefore forbidden to surf on all sites not listed 
in allowed_domains
On the other hand, we have websites in noauth_sites that do not need 
to be authenticated by squid but are not allowed to be used by the 
limited_users group.


In logic, we would write the following ACLs.

external_acl_type ads_group ttl=3600 negative_ttl=1 concurrency=50 
children-startup=1 children-idle=1 children-max=20 ipv4 %LOGIN 
/lib/squid3/groups.pl


acl limited_users external ads_group limited_users


This acl requires both login to succeed and group to match in order to 
return MATCH.




acl allowed_domains dstdomain siteallowed.com
acl allowed_domains dstdomain siteallowed.fr
acl allowed_domains dstdomain siteallowed.ch

acl noauth_sites dstdomain office365.com


http_access deny !allowed_domains limited_users all #ACL1
http_access allow noauth_sites #ACL2

But in this case, accessing office365.com forces Squid to send the 
407 Authentication request in order to evaluate limited_users 
in #ACL1; the second ACL is then never effective because the request 
is blocked first by the 407.


Sounds correct.

The %LOGIN switch in the external ACL ads_group activates the 
identification mode.


Yes.

If we use the %un switch instead, it works, but we get the opposite 
problem: ACL#1 is not processed anymore, since authentication is 
never requested because the %un switch is too soft.


Yes. The login is not existing, therefore has no group.


What I don't understand is why SQUID tries to evaluate the 
limited_users object when the first allowed_domains object already 
returns FALSE.


You configured the "!" (not) operator to invert the match result.
Returning FALSE becomes a MATCH.


Whatever the result of the objects that follow allowed_domain, the 
rule will always fail.


Not quite. A request that provides credentials associated with the 
expected group will pass.


In the case where limited_users comes first, the logic is 
correct.


Two questions:

Is there a way for SQUID to not compute all http_access objects if 
the first one fails?


No. Because there is more than one HTTP request going on here. Each 
request is independent for Squid.




What would be the best rule that could meet this goal?


Structure your access lines as such;

  # things not requiring login are checked first
  http_access allow noauth_sites

  # then do the login
  http_access deny !login

  # then check things that need login
  http_access deny limited_users !allowed_sites
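
For completeness, a sketch of the "login" acl assumed above (it is not 
defined in the message; this is the usual authentication acl form):

  acl login proxy_auth REQUIRED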


HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www:https://wiki.articatech.com
www:http://articatech.net  
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] %LOGIN place in squid 5.8 acls

2023-04-23 Thread David Touzeau
We have a "problem" with ACLs, and I don't know how to address this 
situation in Squid 5.8

Let me explain:
We have an Active Directory group named limited_users that is only 
allowed to surf on a very limited list of websites.
These users are therefore forbidden to surf on all sites not listed in 
allowed_domains
On the other hand, we have websites in noauth_sites that do not need to 
be authenticated by squid but are not allowed to be used by the 
limited_users group.


In logic, we would write the following ACLs.

external_acl_type ads_group ttl=3600 negative_ttl=1 concurrency=50 
children-startup=1 children-idle=1 children-max=20 ipv4 %LOGIN 
/lib/squid3/groups.pl

acl limited_users external ads_group limited_users
acl allowed_domains dstdomain siteallowed.com
acl allowed_domains dstdomain siteallowed.fr
acl allowed_domains dstdomain siteallowed.ch

acl noauth_sites dstdomain office365.com


http_access deny !allowed_domains limited_users all #ACL1
http_access allow noauth_sites #ACL2


But in this case, accessing office365.com forces Squid to send the 407 
Authentication request in order to evaluate limited_users in 
#ACL1; the second ACL is then never effective because the request is 
blocked first by the 407.
The %LOGIN switch in the external ACL ads_group activates the 
identification mode.
If we use the %un switch instead, it works, but we get the opposite problem: 
ACL#1 is not processed anymore, since authentication is never requested 
because the %un switch is too soft.


What I don't understand is why SQUID tries to evaluate the 
limited_users object when the first allowed_domains object already returns 
FALSE.
Whatever the result of the objects that follow allowed_domains, the rule 
will always fail.

In the case where limited_users comes first, the logic is correct.

Two questions:

Is there a way for SQUID to not compute all http_access objects  if the 
first one fails?


What would be the best rule that could meet this goal?

regards

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5: server_cert_fingerprint not working fine...

2022-11-19 Thread David Touzeau

Thanks Amos for this clarification,

We also have the same needs and, indeed, face the same issue.

It is possible that the structure of Squid cannot, in some cases, 
recover this type of information,
although the concept of a proxy is neither more nor less than a big 
browser that surfs on behalf of the client browsers.


Receiving the SHA1 and certificate information is very valuable because 
it enables better detection of compromised sites (many malicious sites 
use the same information in their certificates).

This allows detecting "nests" of malicious sites automatically.

Unfortunately, there is madness in the approach to security: there is a
race to strengthen the security of tunnels (driven by Google and
browser vendors).

What is the advantage of encrypting Wikipedia and YouTube traffic?

On the other hand, it is crucial to look inside these streams to detect
threats.

These two goals are contradictory...

So TLS 1.3, and soon QUIC on UDP 80/443, will make a
proxy useless as these features are rolled out (trust Google to
push them).

Unless the proxy manages to follow this protocol madness race...

For this reason, firewall manufacturers offer client software
that fills the protocol-visibility gap in their gateway
products, and you can see a growth of workstation protections, such as
the EDR concept.


Just an ideological, non-technical observation...

Regards

On 19/11/2022 at 16:50, Amos Jeffries wrote:

On 19/11/2022 2:55 am, UnveilTech - Support wrote:

Hi Amos,

We have tested with a "ssl_bump bump" ("ssl_bump all" and "ssl_bump 
bump sslstep1"), it does not solve the problem.
According to Alex, we can also confirm it's a bug with Squid 5.x and 
TLS 1.3.


Okay.

It seems Squid is only compatible with TLS 1.2, it's not good for the 
future...


One bug (or lack of ability) does not make the entire protocol 
"incompatible". It only affects people trying to do the particular 
buggy action.
Unfortunately for you (and others) it happens to be accessing this 
server cert fingerprint.


I/we have been clear from the beginning that *when used properly* 
TLS/SSL cannot be "bump"ed - that is true for all versions of TLS and 
SSL before it. In that same "bump" use-case the server does not 
provide *any* details, it just rejects the proxy attempted connection. 
In some paranoid security environments the server can reject even for 
"splice" when the clientHello is passed on unchanged by the proxy. 
HTTPS use on the web is typically *neither* of those "proper" setups 
so SSL-Bump "bump" in general works and "splice" almost always.


Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Kerberos - Cannot decrypt ticket for HTTP

2022-11-16 Thread David Touzeau

Hi

Perhaps this one will help:
https://wiki.articatech.com/en/proxy-service/troubleshooting/gss-cannot-decrypt-ticket
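A quick check that often narrows this down (a sketch assuming MIT Kerberos client tools and a valid TGT from kinit; the principal is taken from the log below): the key version number (KVNO) the KDC issues for the service must match the KVNO stored in the keytab, and the encryption types must overlap.

  # KVNO the KDC currently uses for the service principal
  kvno HTTP/uisproxy-rop.***.***.corp
  # KVNO and enctypes stored in the keytab
  klist -ke /etc/squid/keytab/uisproxy-rop-t.keytab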


On 16/11/2022 at 05:11, Михаил wrote:

Hi everybody,
Could you help me to set up my new Squid server? I have a problem with
keytab authorization.
2022/11/16 11:35:39| ERROR: Negotiate Authentication validating user. 
Result: {result=BH, notes={message: gss_accept_sec_context() failed: 
Unspecified GSS failure.  Minor code may provide more information. 
Cannot decrypt ticket for HTTP/uisproxy-rop.***.***.corp@***.***.CORP 
using keytab key for HTTP/uisproxy-rop.***.***.corp@***.**.CORP; }}

Got NTLMSSP neg_flags=0xe2088297
2022/11/16 11:35:40| ERROR: Negotiate Authentication validating user. 
Result: {result=BH, notes={message: gss_accept_sec_context() failed: 
Unspecified GSS failure.  Minor code may provide more information. 
Cannot decrypt ticket for HTTP/uisproxy-rop.***.***.corp@***.***.CORP 
using keytab key for HTTP/uisproxy-rop.***.***.corp@***.***.CORP; }}
# kinit -V -k -t /etc/squid/keytab/uisproxy-rop-t.keytab 
HTTP/uisproxy-rop.***.***.corp

Using default cache: /tmp/krb5cc_0
Using principal: HTTP/uisproxy-rop.***.***.corp@***.***.CORP
Using keytab: /etc/squid/keytab/uisproxy-rop-t.keytab
Authenticated to Kerberos v5
# klist -ke /etc/squid/keytab/uisproxy-rop-t.keytab
Keytab name: FILE:/etc/squid/keytab/uisproxy-rop-t.keytab
KVNO Principal
---- --------------------------------------------------------------------------

   3 uisproxy-rop-t$@***.***.CORP (arcfour-hmac)
   3 uisproxy-rop-t$@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 uisproxy-rop-t$@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 UISPROXY-ROP-T$@***.***.CORP (arcfour-hmac)
   3 UISPROXY-ROP-T$@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 UISPROXY-ROP-T$@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (arcfour-hmac)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 HTTP/uisproxy-rop.***.***.corp@***.***.CORP (aes256-cts-hmac-sha1-96)
   3 host/uisproxy-rop@***.***.CORP (arcfour-hmac)
   3 host/uisproxy-rop@***.***.CORP (aes128-cts-hmac-sha1-96)
   3 host/uisproxy-rop@***.***.CORP (aes256-cts-hmac-sha1-96)
# klist -kt
Keytab name: FILE:/etc/squid/keytab/uisproxy-rop-t.keytab
KVNO Timestamp           Principal
---- ------------------- --------------------------------------------------

   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 uisproxy-rop-t$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 UISPROXY-ROP-T$@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 HTTP/uisproxy-rop.***.***.corp@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP
   3 11/16/2022 11:30:50 host/uisproxy-rop@***.***.CORP

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www: https://wiki.articatech.com
www: http://articatech.net
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ACL based DNS server list

2022-11-02 Thread David Touzeau

It would be a good feature request for the Squid DNS client to support EDNS.
EDNS can be used to send the source client IP address received by Squid
to a remote DNS server.
In that case the DNS server will be able to change its behavior depending on
the source IP address.
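For illustration, this is what the EDNS Client Subnet option already does in standard resolver tools (a sketch; the subnet and resolver are examples):

  # ask a resolver for an answer as seen from the 192.0.2.0/24 client subnet
  dig +subnet=192.0.2.0/24 www.example.com @8.8.8.8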


Amos, Alex ?

On 30/10/2022 at 18:00, Grant Taylor wrote:

On 10/25/22 7:27 PM, Sneaker Space LTD wrote:

Hello,


Hi,

Is there a way to use specific DNS servers based on the user or 
connecting IP address that is making the connection by using acls or 
any other method? If so, can someone send an example.


"Any other method" covers a LOT of things.  Including things outside 
of Squid's domain.


You could probably do some things with networking such that different 
clients connected to different instances of Squid each configured to 
use different DNS servers.  --  This is a huge hole in the ground and 
can cover a LOT of things.  All of which are outside of Squid's domain.





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--
David Touzeau - Artica Tech France
Development team, level 3 support
--
P: +33 6 58 44 69 46
www: https://wiki.articatech.com
www: http://articatech.net
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.7 + bump ERR_READ_ERROR|WITH_SERVER

2022-10-12 Thread David Touzeau
On 12/10/2022 at 20:00, Alex Rousskov wrote:

On 10/12/22 12:45, David Touzeau wrote:

Hi

We are using Squid 5.7; after adding ssl-bump we sometimes get several
502 errors with the extended error ERR_READ_ERROR|WITH_SERVER


1665589818.831 11 192.168.1.13 NONE_NONE/502 192616 OPTIONS 
https://www2.deepl.com/jsonrpc?method=LMT_split_text - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589839.288 11 192.168.1.13 NONE_NONE/502 506759 POST 
https://pollserver.lastpass.com/poll_server.php - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589719.879 44 192.168.1.13 NONE_NONE/502 506954 GET 
https://contile.services.mozilla.com/v1/tiles - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"



What does it mean?


502 with ERR_READ_ERROR|WITH_SERVER may mean several things 
(unfortunately). Given HIER_NONE, I would suspect that Squid could not 
find a valid destination for the request. There is a similar recent 
squid-users thread at 
http://lists.squid-cache.org/pipermail/squid-users/2022-October/025289.html




how can we fix it ?


The first step is to identify what causes these errors.

Can you reproduce this problem at will? Perhaps by trying to go to
https://dnslabeldoesnotexist.com mentioned at the above thread? If you
can, consider sharing (a pointer to) a compressed debugging cache.log 
from a test box that does not expose any internal secrets, as detailed 
at 
https://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction
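The single-transaction debugging referenced there boils down to something like this (a sketch, assuming a test box where full debug logging is acceptable):

  # squid.conf on the test box: maximum detail for all sections
  debug_options ALL,9
  # reproduce the one failing request, then compress the log for sharing
  gzip -k /var/log/squid/cache.log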



HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support


*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.7 + bump ERR_READ_ERROR|WITH_SERVER

2022-10-12 Thread David Touzeau

Hi

We are using Squid 5.7; after adding ssl-bump we sometimes get several 502
errors with the extended error ERR_READ_ERROR|WITH_SERVER


1665589818.831 11 192.168.1.13 NONE_NONE/502 192616 OPTIONS 
https://www2.deepl.com/jsonrpc?method=LMT_split_text - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589839.288 11 192.168.1.13 NONE_NONE/502 506759 POST 
https://pollserver.lastpass.com/poll_server.php - HIER_NONE/-:- 
text/html mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"
1665589719.879 44 192.168.1.13 NONE_NONE/502 506954 GET 
https://contile.services.mozilla.com/v1/tiles - HIER_NONE/-:- text/html 
mac="68:54:5a:94:e7:56" - exterr="ERR_READ_ERROR|WITH_SERVER"


What does it mean?

How can we fix it?

regards


--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance recommendation

2022-09-24 Thread David Touzeau

Hi

We have some experience with cluster configuration.

https://wiki.articatech.com/en/proxy-service/hacluster

Using Kubernetes for Squid, for 40K users, is a very "risky adventure".

Squid requires very high disk performance (I/O), which means both a
good hard disk drive and a decent controller card.


You will reach a functional limit of Kubernetes, which by design is not
suited to this type of service.


Of course you can continue down this path,

but we see this a lot from experience:

"To take the load, you're going to install a lot of instances on
multiple virtualization servers,

whereas 2 or 3 physical machines could handle it all."


On 20/09/2022 at 21:52, Pintér Szabolcs wrote:


Hi squid community,

I need to find the best and most sustainable way to build a stable
high-availability Squid cluster/solution for about 40k users.


Parameters: I need HA, caching (small objects only, not big
Windows updates), scaling (a secondary concern), and I want to use and
modify (in production, during working hours) complex black- and whitelists.


I have some idea:

1. A huge kubernetes cluster

pro: Easy to scale, change the config and update.

contra: I'm afraid of the network latency (because of the extra
layers, e.g. the VM network stack, the Kubernetes network stack with VXLAN,
etc.).


2. Simple VMs with HAProxy in TCP mode (sketched below)

pro: less network latency (I think)

contra: more administration time
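A minimal sketch of option 2, assuming two Squid back-ends; the names and addresses are illustrative only:

  frontend proxy_in
      bind :3128
      mode tcp
      default_backend squids

  backend squids
      mode tcp
      balance source            # keep a given client on the same Squid
      server squid1 10.0.0.11:3128 check
      server squid2 10.0.0.12:3128 check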


Does anybody have experience with Squid in Kubernetes (or a similar
technology) with a large number of users?


Which do you think is the best solution, or do you have
another idea for the implementation?


Thanks!

Best, Szabolcs

--
*Pintér Szabolcs Péter*
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600
+36 30 471-3827
spin...@npsh.hu


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid][v5.6] : problem with "slow" or "fast" acl

2022-09-06 Thread David Touzeau

Hi Eric,

We ran into the same restrictions with the fast vs. slow ACLs.
Have you thought about creating a Squid helper that computes what you need?
You may then be able to get around this by using the "note" ACL (acl note
xxx xxx), which turns your (slow) helper results into a "fast" check.
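A hedged sketch of that idea (the helper name and annotation key are assumptions): an external ACL helper may return key=value pairs, which become transaction annotations that the fast "note" ACL can match without re-running the helper.

  # slow part: the helper runs once per lookup and annotates the
  # transaction, e.g. by answering "OK group=limited"
  external_acl_type tagger ttl=60 %LOGIN /usr/local/bin/tag_helper
  acl tagged external tagger
  # fast part: later rules match the annotation
  acl is_limited note group limited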




On 05/09/2022 at 14:56, PERROT Eric DNUM SDCAST BST SSAIM wrote:

Hello,

We use the directives "reply_body_max_size", "request_body_max_size" and
"delay_access" to limit upload, download and bandwidth in our infrastructure.


This configuration has existed for a while, but we noticed that
with Squid v4.16 our delay pool no longer reacted as we wanted. We
were expecting improvement by upgrading Squid to v5.6, but it got worse:

- the restriction still didn't work
- and Squid had a segmentation fault each time certain ACLs were used

Thanks to Alex Rousskov (bug 5231), after some investigation, it
appears that we used "slow" ACLs (proxy_auth and time ACLs) where only
"fast" ACLs are authorized. The bug is still open, as Squid does not
flag the problem in the cache logs.


My email is to show you our configuration, the behaviour we
expect, and the behaviour we actually get.
1 - Squid v4.12: we expect to limit download/upload and bandwidth during
working hours for all logins except those starting with cg_*

"
|## Gestion de bande passante ##
acl bureau time 09:00-12:00
acl bureau time 14:00-17:00
# Comptes generiques
|||acl my_ldap_auth proxy_auth REQUIRED
|acl cgen proxy_auth_regex cg_
reply_body_max_size 800 MB *bureau !cgen*
request_body_max_size 5 MB
# La limite de bande passante ne fonctionne plus avec le BUMP
# A tester ...
delay_pools 1
# Pendant time sauf cgen, emeraude
delay_class 1 4
delay_access 1 allow**||*||my_ldap_auth !cgen||***!emeraude
delay_access 1 deny all
# 512000 = 5120 kbits/user 640 ko
# 307200 = 3072 kbits/user 384 ko
delay_parameters 1 -1/-1 -1/-1 -1/-1 107200/107200
##|
"
=> with this configuration, the delay pool seemed not to work anymore, 
so we upgraded squid to v5.6. Which caused the squid segmentation 
fault...


2 - Squid v5.6: to solve the segmentation fault, we had to remove
my_ldap_auth/cgen (proxy_auth ACLs) and bureau (time ACL). The
limitation works again, but we are no longer able to limit restrictions
to working hours, or to specific logins...

"
|## Gestion de bande passante ##
acl bureau time 09:00-12:00
acl bureau time 14:00-17:00
# Comptes generiques
acl userrgt src 10.0.0.0/8
|||acl my_ldap_auth proxy_auth REQUIRED
|acl cgen proxy_auth_regex cg_
reply_body_max_size 800 MB *userrgt*
request_body_max_size 5 MB
# La limite de bande passante ne fonctionne plus avec le BUMP
# A tester ...
delay_pools 1
# Pendant time sauf cgen, emeraude
delay_class 1 4
delay_access 1 allow||****!emeraude
delay_access 1 deny all
# 512000 = 5120 kbits/user 640 ko
# 307200 = 3072 kbits/user 384 ko
delay_parameters 1 -1/-1 -1/-1 -1/-1 107200/107200
##|
"

Can you tell me if what we want to do is still possible? Limiting
upload/download/bandwidth for all logged-in users except those starting
with cg_*?


Thank you for the time reading, and thank you for your answers.

Regards,

Eric Perrot




For an exemplary administration, let's preserve the environment.
Let's print only if necessary.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2 TCP_MISS_ABORTED/100 errors when uploading

2022-08-30 Thread David Ferreira
(forgot to reply all)
Hi Alex,

I did a new capture like you said; since there are only a few HTTP messages
I'm going to post them here:

Squid 5.2:
tcpdump -i ens224 -nn port 80 -s0 -w squid52output

POST /webserver/index.php HTTP/1.1
Accept-Encoding: deflate, gzip
Cookie: tickets[InDesign]=9ce90349BDxzBw6TeaGM1sYvG6dDlABl0NqbswO9;
tickets[]=716c37c9qD4eeMFPHZvbfAa8S7VMmLh3Skkgrb31;
AWSELB=0D3B27870CA3C45CF463C76E69DC284A499EFD0DF6EE047B11D31BB6D9B01943D41E6D72FB8A97227A031F20EAAC9364FE0968EA5AAEE1102734343F2F0133CD3A0C6A4A0C
Accept: */*
SOAPAction: "urn:#SaveObjects"
Content-Type: multipart/form-data; boundary=7d123
Content-Length: 4277021
Expect: 100-continue
Host: webserverhost
Via: 1.1 squid5.2host (squid/5.2)
X-Forwarded-For: 172.19.222.132
Cache-Control: max-age=259200
Connection: keep-alive

HTTP/1.1 100 Continue

HTTP/1.1 503 Service Unavailable.
Content-length:0

POST /webserver/index.php HTTP/1.1
Accept-Encoding: deflate, gzip
Cookie: tickets[InDesign]=9ce90349BDxzBw6TeaGM1sYvG6dDlABl0NqbswO9;
tickets[]=716c37c9qD4eeMFPHZvbfAa8S7VMmLh3Skkgrb31;
AWSELB=0D3B27870CA3C45CF463C76E69DC284A499EFD0DF6EE047B11D31BB6D9B01943D41E6D72FB8A97227A031F20EAAC9364FE0968EA5AAEE1102734343F2F0133CD3A0C6A4A0C
Accept: */*
SOAPAction: "urn:#GetDialog"
Content-Type: multipart/form-data; boundary=7d123
Content-Length: 1120
Expect: 100-continue
Host: webserverhost
Via: 1.1 squid5.2host (squid/5.2)
X-Forwarded-For: 172.19.222.132
Cache-Control: max-age=259200
Connection: keep-alive

HTTP/1.1 100 Continue

--
Squid 4.15:

tcpdump -i ens224 -nn port 80 -s0 -w squid4.15output

POST /webserver/index.php HTTP/1.1
Accept-Encoding: deflate, gzip
Cookie: tickets[InDesign]=1ae95903t3jY2HDSgfvoEsfpsibbkf9mlNZ4eDjA;
tickets[]=716c37c9qD4eeMFPHZvbfAa8S7VMmLh3Skkgrb31;
AWSELB=0D3B27870CA3C45CF463C76E69DC284A499EFD0DF6EE047B11D31BB6D9B01943D41E6D72FB8A97227A031F20EAAC9364FE0968EA5AAEE1102734343F2F0133CD3A0C6A4A0C
Accept: */*
SOAPAction: "urn:#SaveObjects"
Content-Type: multipart/form-data; boundary=7d123
Content-Length: 4272865
Expect: 100-continue
Host: webserverhost
Via: 1.1 squid4.15host (squid/4.15)
X-Forwarded-For: 172.19.222.132
Cache-Control: max-age=259200
Connection: keep-alive

HTTP/1.1 100 Continue

POST /webserver/index.php HTTP/1.1
Accept-Encoding: deflate, gzip
Cookie: tickets[InDesign]=1ae95903t3jY2HDSgfvoEsfpsibbkf9mlNZ4eDjA;
tickets[]=716c37c9qD4eeMFPHZvbfAa8S7VMmLh3Skkgrb31;
AWSELB=0D3B27870CA3C45CF463C76E69DC284A499EFD0DF6EE047B11D31BB6D9B01943D41E6D72FB8A97227A031F20EAAC9364FE0968EA5AAEE1102734343F2F0133CD3A0C6A4A0C
Accept: */*
SOAPAction: "urn:#UnlockObjects"
Content-Type: multipart/form-data; boundary=7d123
Content-Length: 644
Host: webserverhost
Via: 1.1 squid4.15host (squid/4.15)
X-Forwarded-For: 172.19.222.132
Cache-Control: max-age=259200
Connection: keep-alive


HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Date: Tue, 30 Aug 2022 10:52:05 GMT
Server: Apache/2.4.6 (CentOS) PHP/7.1.26
Set-Cookie: tickets[InDesign]=1ae95903t3jY2HDSgfvoEsfpsibbkf9mlNZ4eDjA;
expires=Wed, 31-Aug-2022 10:52:05 GMT; Max-Age=86400; path=/webserver;
HttpOnly
X-Powered-By: PHP/7.1.26
Content-Length: 266
Connection: keep-alive

Again, thank you for your time.
David

On Mon, 29 Aug 2022 at 18:18, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 8/29/22 12:17, David Ferreira wrote:
>
> > I tried to capture the http trafic with the following tcpdump:
> >
> > tcpdump -i any -nn port 80|grep -i http
>
> I am not cool enough to easily grok this kind of output. Please share (a
> link to) the packet capture file instead (tcpdump -s0 -w filename ...).
>
> Thank you,
>
> Alex.
>
>
> > Notes:
> > - 1.2.3.4 is the webserver ip
> > - 10.185.23.202 is the squid machine outbound interface
> >
> > Here's the results:
> >
> > Squid 4.15(Working one):
> >
> > ---
> > Im not that familiar with tcpdump, if there's a better way to capture
> > please say so, im also gonna build a squid v5 to test it out.
> >
> > Again, thanks for your time
> >
> >
> > On Mon, 29 Aug 2022 at 13:52, Alex Rousskov
> >  > <mailto:rouss...@measurement-factory.com>> wrote:
> >
> > On 8/29/22 06:17, David Ferreira wrote:
> >
> >  > I have some squid's running on rocky linux 8 with verion 4.15,
> > recently
> >  > been testing squid version 5.2(stable version that comes with
> > Rocky 9)to
> >  > upgrade the current ones and most of the configs/acls seem to
> > work fine.
> >  >
> >  > Unfortualy theres an application that we use that everytime it
> > tries to
> >  > upload files it fails on squid 5.2, on 4.15 is works completly
> > fine, so
> >

Re: [squid-users] Squid 5.2 TCP_MISS_ABORTED/100 errors when uploading

2022-08-29 Thread David Ferreira
Running squid -k parse does not show anything on either Squid; I'm going to
tcpdump the traffic in both environments and build a Squid v5 to test it out
as suggested by Alex. I will post the results after.

Thank you again.

On Mon, 29 Aug 2022 at 15:00, Amos Jeffries  wrote:

> On 30/08/22 01:31, David Ferreira wrote:
> > Hi Amos,
> >
> > Thank you for the reply,
> >
> > here's my squid.conf, by default our client's(localnet) do not have
> > internet access and only match windows services acl's unless they are in
> > authorizednet.conf, in this case that's the only match acl for the
> > clients using this application, i also removed some of the includes i
> > have, it's mostly random src to random dstdomain, the clients in
> > question do not match this acl's at all.
> >
>
> Thanks. I do not see anything in those configurations.
>
> Are there any directives in the included files whose name mentions
> "response_" or "reply_" ?
>
> You could do a full check quickly with:
>squid -k parse 2>&1 | grep "Processing" | grep -E "response_|reply_"
>
> If there is any difference between the two versions that is something to
> look at closer for the reasons outlined by Alex already.
>
> The traffic details asked for by Alex already are the next step I would
> move on to checking.
>
>
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
With best regards,

David Ferreira
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2 TCP_MISS_ABORTED/100 errors when uploading

2022-08-29 Thread David Ferreira
Hi Amos,

Thank you for the reply,

Here's my squid.conf. By default our clients (localnet) do not have
internet access and only match the Windows services ACLs unless they are in
authorizednet.conf; in this case that's the only matching ACL for the clients
using this application. I also removed some of the includes I have; they are
mostly random src to random dstdomain, and the clients in question do not
match those ACLs at all.

---
squid 4.15 squid.conf:
---

logformat timereadable %tl %6tr %>a %Ss/%03Hs ...

Rocky 8 :
https://almalinux.pkgs.org/8/almalinux-appstream-x86_64/squid-4.15-3.module_el8.6.0+3010+383bc947.1.x86_64.rpm.html
Rocky 9 :
https://almalinux.pkgs.org/9/almalinux-appstream-x86_64/squid-5.2-1.el9_0.1.x86_64.rpm.html

Thank you!


On Mon, 29 Aug 2022 at 13:36, Amos Jeffries  wrote:

> On 29/08/22 22:17, David Ferreira wrote:
> > hi,
> >
> > First time using mailing lists, sorry about anything.
> >
>
> Welcome, and thanks for using Squid.
>
> Do not worry about mistakes. Helping with that type of thing is what
> this list is here for whether expert or beginner.
>
>
>
> >
> > Squid 4.15:
> > 26/Aug/2022:15:36:08 +0100273 172.19.222.132TCP_MISS/200 745 POST
> > http://websiteurl/index.php <http://websiteurl/index.php> -
> > HIER_DIRECT/websitedomain text/xml
> >
> > Squid 5.2:
> > 25/Aug/2022:15:10:00 +0100139 172.19.222.132 TCP_MISS_ABORTED/100 0
> > POST http://websiteurl <http://websiteurl>/index.php -
> > HIER_DIRECT/websitedomain -
> >
> > anyone has an ideia of what may be happening here?, been searching about
> > http errors 100 and so far i did not find anything that points me to the
> > problem.
> >
> > On the application side the error it shows when it tries to upload is:
> > "
> > Error storing the document on the server
> > Detail HTTP error 100
> > Send failure: Connection was aborted (55)
> > "
> >
>
> This is very odd.
>
>   * The "ABORTED" tag hints strongly that the client closed the
> connection here.
>
>
>   * Status code "100 Continue" is a normal part of HTTP/1.1.
>
> There is something wrong with the client application to be reporting
> that as an error code at all. Likely that bug is what triggered the abort.
>
>   * The difference in result between Squid v4 and v5 is also extremely
> odd. I do not think handling of status 100 had any significant changes
> since the Squid-3 days.
>
>
> Can you show us your config for both versions?
>   Omit lines that are commented out to reduce the sizes.
>   Take care to obscure private details while keeping it clear that
> detail A and B are different (eg don't use same symbol X for replacing
> both).
>
>
> Also FME, where can I/we find details of the Rocky Squid packages?
>
>
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
With best regards,

David Ferreira
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.2 TCP_MISS_ABORTED/100 errors when uploading

2022-08-29 Thread David Ferreira
hi,

First time using a mailing list, sorry in advance for any mistakes.

I have some Squids running on Rocky Linux 8 with version 4.15; recently
I have been testing Squid 5.2 (the stable version that comes with Rocky 9) to
upgrade the current ones, and most of the configs/ACLs seem to work fine.

Unfortunately there's an application we use that fails on Squid 5.2 every
time it tries to upload files; on 4.15 it works completely fine. So far
I've tested Squid 5.2 and 5.5 and it's the same behavior. I'm testing this
with default configurations and it always works on 4.15. The access log only
shows this:

Squid 4.15:
26/Aug/2022:15:36:08 +0100273 172.19.222.132 TCP_MISS/200 745 POST
http://websiteurl/index.php - HIER_DIRECT/websitedomain text/xml

Squid 5.2:
25/Aug/2022:15:10:00 +0100139 172.19.222.132 TCP_MISS_ABORTED/100 0
POST http://websiteurl/index.php - HIER_DIRECT/websitedomain -

Does anyone have an idea of what may be happening here? I've been searching
about HTTP 100 errors and so far I did not find anything that points me to the
problem.

On the application side the error it shows when it tries to upload is:
"
Error storing the document on the server
Detail HTTP error 100
Send failure: Connection was aborted (55)
"

Thanks in advance,

David
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] the free domains blacklists are gone..

2022-07-02 Thread David Touzeau


Hi Eliezer,

Here is a set of lists:

https://github.com/KeyofBlueS/hBlock-Launcher/blob/master/list.txt
https://lists.noads.online/lists/compilation.txt
https://github.com/GlacierSheep/DomainBlockList/tree/5bfcb0c2eabed2f9c82f0bac260e1d88550b5789
https://github.com/maravento/blackweb/blob/master/blackweb.tar.gz
https://github.com/ShadowWhisperer/BlockLists/tree/master/Lists
https://github.com/jerryn70/GoodbyeAds
https://blocklist.site/
https://www.blocked.org.uk
https://blocklist-tools.developerdan.com/blocklists
https://blocklistproject.github.io/Lists/#lists
https://blokada.org/blocklists/ddgtrackerradar/standard/hosts.txt
https://github.com/LINBIT/csync2
https://github.com/StevenBlack/hosts/blob/master/data/KADhosts/hosts
https://github.com/stamparm/maltrail
https://raw.githubusercontent.com/notracking/hosts-blocklists/master/dnscrypt-proxy/dnscrypt-proxy.blacklist.txt
https://blocklist-tools.developerdan.com/entries/search?q=nettflix.website
https://github.com/Import-External-Sources/hosts-sources/tree/master/data
https://hosts.gameindustry.eu/abusive-adblocking/
https://www.bentasker.co.uk/adblock/autolist.txt
https://github.com/VenexGit/DeepGuard
https://firebog.net/


On 30/06/2022 at 19:00, ngtech1...@gmail.com wrote:


Hey,

I have tried to download blacklists from a couple of sites that
published them in the past, and all of them are gone.


The only free resource I have found was DNS blacklists.

I just wrote a dstdomain external helper that can work with an SQL DB,
and it seems to run pretty nicely.


Until now I have tried MySQL, MariaDB, MSSQL and PostgreSQL, and all of
them work pretty nicely.


There is an overhead in storing the data in a DB compared to a plain
text file, but the benefits are worth it (a minimal helper sketch follows).
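A minimal sketch of such a helper, assuming SQLite and a table blocked(domain); the DB path and schema are illustrative, and a real deployment would also match parent domains:

  #!/usr/bin/env python3
  # Reads one domain per line from Squid, answers OK if it is in the DB.
  import sqlite3, sys

  db = sqlite3.connect("/var/lib/squid/blocklist.db")
  for line in sys.stdin:
      domain = line.strip().lower()
      row = db.execute("SELECT 1 FROM blocked WHERE domain = ?",
                       (domain,)).fetchone()
      sys.stdout.write("OK\n" if row else "ERR\n")
      sys.stdout.flush()  # Squid expects one reply per request line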


The only lists I have found are for Pi-hole, for example at:

https://github.com/blocklistproject/Lists

So now I just need to convert these to dstdomain format and it will
work with Squid pretty nicely.


Any recommendations for free lists are welcome.

Thanks,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com>

Web: https://ngtech.co.il/ <https://ngtech.co.il/>

My-Tube: https://tube.ngtech.co.il/ <https://tube.ngtech.co.il/>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support
    

*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-26 Thread David Touzeau

Hi Eliezer

WCCP is useful if you want to do transparent mode without having to put
a Squid box in front of your Fortinet.


If you want to do transparent mode while your Fortinet aggregates
several VLANs, WCCP mode is necessary.


This way you can control everything through your FortiGate.

By the way, Fortinet offers its own proxy based on WCCP to ensure
consistent integration with the FortiGate.


My configuration is very simple to replicate:

We added a service ID 80 on the FortiGate, but it failed because of the
Squid bug.


config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash 
priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1


On 24/06/2022 at 13:17, ngtech1...@gmail.com wrote:


I am not sure, but I can spin up my Forti; from what I remember there
are PBR functions in the Forti.


Why would WCCP be required? To pass only ports 80 and 443 instead of
all traffic?



--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-24 Thread David Touzeau

Hi Eliezer

No, the Fortinet is good.

In this case, connecting HTTP/HTTPS with WCCP from the Fortinet to Squid
did not work, because Squid refuses to communicate with the Fortinet
due to the "Ignoring WCCPv2 message: truncated record" issue.


As a result, the Fortinet reports that no WCCP server is available.


On 23/06/2022 at 18:33, ngtech1...@gmail.com wrote:


Hey David,

Just trying to understand something:

Isn't a Fortinet something that should replace Squid?

I assumed that it would do a much better job than Squid in many areas.

What is a Fortinet (I have one…) not covering?

Thanks,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Web: https://ngtech.co.il/

My-Tube: https://tube.ngtech.co.il/

From: squid-users On Behalf Of David Touzeau
Sent: Thursday, 23 June 2022 19:12
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring
WCCPv2 message: truncated record


Hi Alex,

Is the v5 commit 7a73a54 already included in the latest 5.5/5.6 versions?

This is very unfortunate, because WCCP is used by default by Fortinet
firewall devices, so it should be very popular.

Indeed, Fortinet is flooding the market.
I can volunteer for the funding and the necessary testing.

On 23/06/2022 at 14:44, Alex Rousskov wrote:

On 6/21/22 07:43, David Touzeau wrote:


We are trying to use WCCP with a FortiGate, without success; Squid
version 5.5 always claims "Ignoring WCCPv2 message: truncated
record".

What can be the cause?


The most likely cause is bugs in untested WCCP fixes (v5 commit
7a73a54). Dormant draft PR 970 contains unfinished fixes for the
problems in that previous attempt:
https://github.com/squid-cache/squid/pull/970

IMHO, folks that need WCCP support should invest into that
semi-abandoned Squid feature or risk losing it. WCCP code needs
serious refactoring and proper testing. There are currently no
Project volunteers that have enough resources and capabilities to
do either.


https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F



HTH,

Alex.



We have added a service ID 80 on fortigate

config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp protocol=tcp
flags=src_ip_hash priority=240 ports=80,443
wccp2_address 0.0.0.0
wccp2_weight 1

Squid claims in the debug log:

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206)
wccp2HandleUdp: wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect:
FD 38, type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230)
wccp2HandleUdp: Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message:
truncated record
 exception location: wccp2.cc(1133) CheckSectionLength



-- 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--

Technical Support




David Touzeau

Orgerus, Yvelines, France

Artica Tech


P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support


*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-23 Thread David Touzeau

Hi Alex,

Is the v5 commit 7a73a54 already included in the latest 5.5/5.6 versions?

This is very unfortunate, because WCCP is used by default by Fortinet
firewall devices, so it should be very popular.

Indeed, Fortinet is flooding the market.
I can volunteer for the funding and the necessary testing.

On 23/06/2022 at 14:44, Alex Rousskov wrote:

On 6/21/22 07:43, David Touzeau wrote:

We are trying to use WCCP with a FortiGate, without success; Squid version
5.5 always claims "Ignoring WCCPv2 message: truncated record".


What can be the cause?


The most likely cause is bugs in untested WCCP fixes (v5 commit
7a73a54). Dormant draft PR 970 contains unfinished fixes for the
problems in that previous attempt:

https://github.com/squid-cache/squid/pull/970

IMHO, folks that need WCCP support should invest into that 
semi-abandoned Squid feature or risk losing it. WCCP code needs 
serious refactoring and proper testing. There are currently no Project 
volunteers that have enough resources and capabilities to do either.


https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F 




HTH,

Alex.



We have added a service ID 80 on fortigate

config system wccp
 edit "80"
 set router-id 10.10.50.1
 set group-address 0.0.0.0
 set server-list 10.10.50.2 255.255.255.255
 set server-type forward
 set authentication disable
 set forward-method GRE
 set return-method GRE
 set assignment-method HASH
 next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash 
priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1

Squid claims in the debug log:

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp:
wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, 
type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: 
Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: 
truncated record

 exception location: wccp2.cc(1133) CheckSectionLength



--

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support


*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] WCCPV2 with fortigate ERROR: Ignoring WCCPv2 message: truncated record

2022-06-21 Thread David Touzeau

Hi

We are trying to use WCCP with a FortiGate, without success; Squid version
5.5 always claims "Ignoring WCCPv2 message: truncated record".


What can be the cause?

We have added a service ID 80 on fortigate

config system wccp
    edit "80"
    set router-id 10.10.50.1
    set group-address 0.0.0.0
    set server-list 10.10.50.2 255.255.255.255
    set server-type forward
    set authentication disable
    set forward-method GRE
    set return-method GRE
    set assignment-method HASH
    next
end

Squid wccp configuration

wccp2_router 10.10.50.1
wccp_version 3
# tested v4 do the same behavior
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_assignment_method hash
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp protocol=tcp flags=src_ip_hash 
priority=240 ports=80,443

wccp2_address 0.0.0.0
wccp2_weight 1

Squid claims in the debug log:

2022/06/21 13:15:38.780 kid4| 80,6| wccp2.cc(1206) wccp2HandleUdp:
wccp2HandleUdp: Called.
2022/06/21 13:15:38.781 kid4| 5,5| ModEpoll.cc(118) SetSelect: FD 38, 
type=1, handler=1, client_data=0, timeout=0
2022/06/21 13:15:38.781 kid4| 80,3| wccp2.cc(1230) wccp2HandleUdp: 
Incoming WCCPv2 I_SEE_YOU length 112.
2022/06/21 13:15:38.781 kid4| ERROR: Ignoring WCCPv2 message: truncated 
record

    exception location: wccp2.cc(1133) CheckSectionLength



--
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid load simulation tools for performance testing

2022-05-25 Thread David Touzeau

Use "siege" it can simulate x users for x urls

You can also use our free of charge appliance that allows you to easily 
use siege.


https://wiki.articatech.com/en/proxy-service/tuning/stress-your-proxy-server
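A typical invocation looks like this (a sketch; the concurrency, duration and URL file are examples, and the proxy host/port go into ~/.siegerc via the proxy-host and proxy-port directives):

  # 500 concurrent simulated users for 5 minutes, URLs taken from a file
  siege -c 500 -t 5M -f urls.txt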



On 10/05/2022 at 07:33, Punyasloka Arya wrote:

Dear ALL,

We have just installed Squid 5.5 (stable version) from source on Ubuntu
20.04.
Before putting it in the production network, we want to test the performance of
Squid by monitoring critical parameters like response time, cache hits, cache
misses, etc.
We would like to know of tools/software/scripts to simulate load conditions for
500 users with at least 1K connections.

Any help is greatly appreciated.

From
Punyasloka Arya
PUNYASLOKA ARYAपुण्यश्लोक आर्या
Staffno:3880,Netops,TS(B)
Senior Research Engineer   वरिष्ठ अनुसंधान अभियंता
C-DOT  सी-डॉट
Electronics City,Phase-1   इलैक्ट्रॉनिक्स सिटी फेज़ I
Hosur Road,Bangalore   होसूर रोड, बेंगलूरु
560100 560100
### Please consider the environment and print this email only if necessary.
Go Green ###

Disclaimer:
This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you are not the intended recipient you are notified that disclosing,
copying, distributing or taking any action in reliance on the contents of
this information is strictly prohibited. The sender does not accept liability
for any errors or omissions in the contents of this message, which arise as
a result.

--
Open WebMail Project (http://openwebmail.org)

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

--
Technical Support

    
*David Touzeau*
Orgerus, Yvelines, France
*Artica Tech*

P: +33 6 58 44 69 46
www: wiki.articatech.com <https://wiki.articatech.com>
www: articatech.net <http://articatech.net>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.4 : ERR_PROTOCOL_UNKNOWN and exception=18686e4e

2022-03-05 Thread David Touzeau

Hi

We added exterr="%err_code|%err_detail" to the logging, and the result returns
some requests with ERR_PROTOCOL_UNKNOWN|exception=18686e4e


1646498399.887 46 176.12.1.2 NONE_NONE/000 0 CONNECT 62.67.238.138:443 - 
HIER_NONE/-:- exterr="ERR_PROTOCOL_UNKNOWN|exception=18686e4e"


What does "exception=18686e4e" means, how to avoid/force squid to 
forward data ?


Does /on_unsupported_protocol /should fix this behavior ?
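If it applies, that directive would look like this (a sketch; whether it helps depends on why the protocol was not recognized):

  # tunnel unrecognized traffic instead of rejecting it
  on_unsupported_protocol tunnel all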

regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid plugin sponsor

2022-02-14 Thread David Touzeau

Eliezer,

First of all, thank you for racking your brain at our request.
I know your skills, and your time is very valuable.

HotSpot + cookies can be interesting, but it has a constraint that
Kerberos/NTLM SSO fixes:


1)  Redirecting connections to a hotspot requires Squid to be able to
forward the redirection.
When using SSL sites without man-in-the-middle, we run into structural
issues.


2)  Even if this problem can be circumvented, the user still has to
identify himself on the splash screen so we know who he is,

while this user is already identified by his Windows session.


Forget about NTLMv2, which no longer allows the "fake" approach.
The advantage of fake_ntlm is that when Squid performs its 407,
the browser naturally sends its Windows session username, whether it is
connected to an Active Directory or not.

This is what we want to capture in the end.

The hotspot way is a half-solution: it circumvents the identification
limit but adds the new network constraints you mention.

The dream is a plugin that forces Squid to generate a 407, asks browsers
"give me your user account, whatever it is", and allows access in any
case, placing the user=xxx token for the next processing step.


It almost looks like the "ident" method
http://www.squid-cache.org/Misc/ident.html
without having to install a piece of software and a listening port on
all the computers in the network.


Le 14/02/2022 à 19:50, Eliezer Croitoru a écrit :


Hey David,

Transparent authentication using Kerberos can only be used with a 
directory service.


There are a couple of ways to authenticate…

You can use an "automatic" hotspot website that uses cookies to
authenticate the client once in a very long while.


If the client request is not recognized, or the client is not
recognized for any reason, it is reasonable to redirect him to a
captive portal.


I can try to work on a demo, but I need to know more details about the
network structure, and to verify what is possible and what is not.


Every device, i.e. each switch, router, AP, etc., should be mentioned in
order to understand the scenario.


While you assume it is a chimera, I still believe it is just a
three-headed Kerberos, which… was proved to exist… in the movies and in
the virtual world.


Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

From: David Touzeau
Sent: Monday, February 14, 2022 03:21
To: Eliezer Croitoru
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid plugin sponsor


Thank you for your answer, Eliezer, for all these details; I've done
some research to avoid soliciting the community with simple questions.


The objective is to ask nothing of the user and not to break his
browsing with a login request.
To summarize: SSO identification like Kerberos, with the following
constraints:


 1. unknown MAC addresses
 2. DHCP IP with a short lease
 3. no Active Directory connection.




The network is VLANed (MAC addresses masked) and uses DHCP with a short lease.
Even the notion of a hotspot is complicated when you can't rely on a
network attribute.

I am trying to find a way directly in the HTTP protocol.
This is the reason why a fake helper could be a solution.

But I think I'm chasing a chimera and we'll have to redesign
the network architecture.


regards

On 12/02/2022 at 06:27, Eliezer Croitoru wrote:

Hey David,

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal
way possible.
(taking into account money, coding complexity and other humane parts)

You will need to authenticate the client against the main AUTH
service.

There is a definitive way or statistical way to implement this
solution.

With AD or Kerberos it’s possible to implement the solution in
such a way that windows will
“transparently” authenticate to the proxy service.

However you must understand that all of this requires an
infrastructure that will provide every piece of the setup.

If your setup doesn’t contains RDP like servers then it’s possible
that you can authenticate a user with an IP compared
to pinning every connection to a specific user.

Also, the “cost” of non-transparent authentication is that the
user will be required to enter (manually or automatically)
the username and the password.

An HotSpot like setup is called “Captive Portal” and it’s a very
simple setup to implement with active directory.

It’s also possible to implement a transparent authentication for
such a setup based on session tokens.

You actually don’t need to create a “fake” helper for such a setup
but you can create one that is based on Linux.

It’s an “Advanced” topic but if you do ask me it’s possible that
you can take this in steps.

The first step wo

Re: [squid-users] Squid plugin sponsor

2022-02-13 Thread David Touzeau


Thank you for your answer, Eliezer, for all these details; I've done
some research to avoid soliciting the community with simple questions.


The objective is to ask nothing of the user and not to break his
browsing with a login request.
To summarize: SSO identification like Kerberos, with the following
constraints:


1. unknown MAC addresses
2. DHCP IP with a short lease
3. no Active Directory connection.




The network is VLANed (MAC addresses masked) and uses DHCP with a short lease.
Even the notion of a hotspot is complicated when you can't rely on a
network attribute.

I am trying to find a way directly in the HTTP protocol.
This is the reason why a fake helper could be a solution.

But I think I'm chasing a chimera and we'll have to redesign the
network architecture.


regards

On 12/02/2022 at 06:27, Eliezer Croitoru wrote:


Hey David,

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal way
possible

(taking into account money, coding complexity and other human factors).

You will need to authenticate the client against the main AUTH service.

There is a definitive way or statistical way to implement this solution.

With AD or Kerberos it's possible to implement the solution in such a
way that Windows will

"transparently" authenticate to the proxy service.

However you must understand that all of this requires an 
infrastructure that will provide every piece of the setup.


If your setup doesn't contain RDP-like servers, then it's possible
to authenticate a user by IP, as opposed

to pinning every connection to a specific user.

Also, the "cost" of non-transparent authentication is that the user
will be required to enter (manually or automatically)

the username and the password.

A hotspot-like setup is called a "captive portal", and it's a very
simple setup to implement with Active Directory.


It’s also possible to implement a transparent authentication for such 
a setup based on session tokens.


You don't actually need to create a "fake" helper for such a setup;
you can create one that is based on Linux.


It's an "advanced" topic, but if you ask me, it's possible that
you can take this in steps.


The first step would be to use a session helper that authenticates
the user and identifies the user

based on its IP address (a minimal sketch follows).

If it’s a wireless setup you can use a radius based authentication ( 
can also be implemented on a wired setup).


Once you authenticate the client, transparently or in another way,
you can limit the usage of the username to
a specific client, which guarantees that a
username will not be used from two sources.


I don't know about your experience, but the usage of a captive portal
is very common in such situations.


The other option is to create an agent on the client side that
identifies the user against the proxy/auth service,
so that authorization is acquired
based on some degree of authentication.


In most SSO environments it's possible to perform a transparent
validation per request/domain/other.


In all the above scenarios that require authentication, the right way
to do it would be to use the proxy as

a configured proxy rather than a transparent one.

I believe that one thing to consider is that once you authenticate
against a RADIUS service you would

minimize the user interaction.

The main point, from what I understand, is to minimize the
authentication steps for the client.


My suggestion for you is to first try to assess the complexity of a
session helper, RADIUS and a captive portal.


These are steps that you will need to take in order to assess the
necessity of transparent SSO.


Also take your time to compare how a captive portal is configured in
the following general products:


  * Palo Alto
  * FortiGate
  * Untangle
  * Others

From the documentation you will see the different ways and "grades" in
which they implement the solutions.


Once you know what the market offers and the equivalent costs, you
will probably understand what
you want and what you can afford to invest in the development process
of each part of the setup.


All The Bests,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

From: squid-users On Behalf Of David Touzeau
Sent: Friday, February 11, 2022 17:03
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid plugin sponsor

Hello

Thank you, but this is not the objective, and this is the reason for
needing the "fake".
Access to the Kerberos or NTLM ports of the AD is not possible. An LDAP
server would be present with account replication.

The idea is to do silent authentication without joining the AD.
We do not need the full user/password credential; only the username
sent by the browser is required.


If the user has

Re: [squid-users] Squid plugin sponsor

2022-02-11 Thread David Touzeau

Hello

Thank you, but this is not the objective, and this is the reason for
needing the "fake".
Access to the Kerberos or NTLM ports of the AD is not possible. An LDAP
server would be present with account replication.

The idea is to do silent authentication without joining the AD.
We do not need the full user/password credential; only the username sent
by the browser is required.


If the user has an Active Directory session, then his account is
automatically sent without him having to take any action.
If the user is in a workgroup, then the account sent will not be in the
LDAP database and will be rejected.
I don't need to argue about the security value of this method. It saves
us from building an over-engineered contraption to make a kind of hotspot.


Le 11/02/2022 à 05:55, Dieter Bloms a écrit :

Hello David,

for me it looks like you want to use kerberos authentication.
With kerberos authentication the user don't have to authenticate against
the proxy. The authentication is done in the background.

Mayb this link will help:

https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

On Thu, Feb 10, David Touzeau wrote:


Hi

What we are looking for is to retrieve a "user" token without having to ask
anything of the user.
That's why we're looking at Active Directory credentials.
Once the user account is retrieved, a helper would be in charge of checking
whether the user exists in the LDAP database.
This is to avoid any connection to an Active Directory.
Maybe this is impossible.


Le 10/02/2022 à 05:03, Amos Jeffries a écrit :

On 10/02/22 01:43, David Touzeau wrote:

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support
new protocols

ntlm_* helpers are specific to NTLM authentication. All LanManager (LM)
protocols should already be supported as well as currently possible.
NTLM is formally discontinued by MS and *very* inefficient.

NP: NTLMv2 with encryption does not *work* because that encryption step
requires secret keys the proxy is not able to know.


or go further produce a new negotiate_kerberos_auth_fake


With current Squid this helper only needs to produce an "OK" response
regardless of the input. The basic_auth_fake does that.
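
For illustration, the whole job of such a "fake" helper is a loop like
this (a minimal sketch in Python, not the shipped basic_auth_fake
source):

#!/usr/bin/env python3
# Fake auth helper sketch: unconditionally accept whatever Squid sends.
# Squid writes one request line per credential; the helper must answer
# each line and flush, otherwise Squid will wait forever.
import sys

for line in sys.stdin:
    sys.stdout.write("OK\n")   # accept regardless of the input
    sys.stdout.flush()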

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid plugin sponsor

2022-02-10 Thread David Touzeau

Hi

What we are looking for is to retrieve a "user" token without having to
ask anything of the user.

That's why we're looking at Active Directory credentials.
Once the user account is retrieved, a helper would be in charge of
checking whether the user exists in the LDAP database.

This is to avoid any connection to an Active Directory.
Maybe this is impossible.


Le 10/02/2022 à 05:03, Amos Jeffries a écrit :

On 10/02/22 01:43, David Touzeau wrote:

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support 
new protocols


ntlm_* helpers are specific to NTLM authentication. All LanManager 
(LM) protocols should already be supported as well as currently 
possible. NTLM is formally discontinued by MS and *very* inefficient.


NP: NTLMv2 with encryption does not *work* because that encryption 
step requires secret keys the proxy is not able to know.



or go further produce a new negotiate_kerberos_auth_fake



With current Squid this helper only needs to produce an "OK" response 
regardless of the input. The basic_auth_fake does that.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid plugin sponsor

2022-02-09 Thread David Touzeau

Hi

I would like to sponsor the improvement of ntlm_fake_auth to support new
protocols, or to go further and produce a new negotiate_kerberos_auth_fake.


Who should start the challenge?

regards___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-07 Thread David Touzeau

You are the best!
We will launch a benchmark to see the difference.

On 07/02/2022 at 16:14, Eliezer Croitoru wrote:


Hey David,

Since handle_stdout runs in its own thread, its sole purpose is to send
results to stdout.


If I run the following code in a simple program without the 0.5-second
sleep:


while RUNNING:
    if quit > 0:
        return
    while len(queue) > 0:
        item = queue.pop(0)
        sys.stdout.write(item)
        sys.stdout.flush()
    time.sleep(0.5)

what will happen is that the software will run at 100% CPU, looping
over and over on the size of the queue, while only sometimes spitting
some data to stdout.

Adding a small delay of 0.5 seconds allows some "idle" time for the CPU
in the loop, preventing it from consuming all the CPU time.

It's a very old technique and there are others which are more
efficient, but it's enough to demonstrate that a simple threaded helper
is much better than any PHP code that was not meant to run as a
STDIN/STDOUT daemon/helper.
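
One of those more efficient options is to block on a thread-safe queue
instead of polling; a minimal sketch (my illustration here, not part of
the helper below):

#!/usr/bin/env python3
# Writer thread driven by queue.Queue: get() blocks until an item
# arrives, so the thread consumes no CPU while idle and needs no sleep.
import sys
import queue
import threading

out_queue = queue.Queue()

def handle_stdout():
    while True:
        item = out_queue.get()   # blocks; no busy-wait
        if item is None:         # None is our shutdown sentinel
            break
        sys.stdout.write(item)
        sys.stdout.flush()

writer = threading.Thread(target=handle_stdout)
writer.start()
out_queue.put("0 OK\n")          # worker threads enqueue results here
out_queue.put(None)              # signal shutdown
writer.join()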


All The Bests,

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*David Touzeau 
*Sent:* Monday, February 7, 2022 02:42
*To:* Eliezer Croitoru ; 
squid-users@lists.squid-cache.org

*Subject:* Re: [squid-users] external helper development

Sorry Eliezer

It was a mistake... No, your code is clean.
Impressive for a first shot.
Many thanks for your example; we will run our stress tool to see the
difference...


Just a question:

Why did you put 500 milliseconds of sleep in handle_stdout? Is it to
let Squid close the pipe?



On 06/02/2022 at 11:46, Eliezer Croitoru wrote:

Hey David,

Not a fully complete helper, but it seems to work pretty nicely and
might be better than what exists already:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []      # results waiting to be written to stdout
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    # one worker thread per request line; the channel-ID (arr[0]) is
    # echoed back so Squid can match the answer to the request
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users 
<mailto:squid-users-boun...@lists.squid-cache.org> *On Behalf Of
*David Touzeau
*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Eliezer,

Thanks for all this advice and indeed your arguments are valid
between opening a socket, sending data, receiving data and closing
the socket unlike direct access to a regex or a memory entry even
if the calculation has alre

Re: [squid-users] external helper development

2022-02-06 Thread David Touzeau

Sorry Eliezer

It was a mistake... No, your code is clean.
Impressive for a first shot.
Many thanks for your example; we will run our stress tool to see the
difference...


Just a question:

Why did you put 500 milliseconds of sleep in handle_stdout? Is it to
let Squid close the pipe?




On 06/02/2022 at 11:46, Eliezer Croitoru wrote:


Hey David,

Not a fully complete helper, but it seems to work pretty nicely and
might be better than what exists already:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []      # results waiting to be written to stdout
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    # one worker thread per request line; the channel-ID (arr[0]) is
    # echoed back so Squid can match the answer to the request
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Eliezer,

Thanks for all this advice; indeed your arguments are valid about the
cost of opening a socket, sending data, receiving data and closing the
socket, compared to direct access to a regex or a memory entry, even if
the calculation has already been done.


But what surprises me the most is that we have produced a threaded
Python plugin, for which I provide the code below.
The PHP code is like your mentioned example (no thread, just a loop
that outputs OK).


The results: after 6k requests Squid freezes and no surfing is
possible, whereas with the PHP code we get up to 10k requests and
Squid is happy.

Really, we do not understand why Python is so slow.

Here is a Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                LenOfline = len(line)

                if LenOfline == 0:
                    self._exiting = True
                    break

                if line[-1] == '\n': line = line[:-1]
                channel = None
                options = line.split()

                try:
                    if options[0].isdigit(): channel = options.pop(0)
                except IndexError:
                    self.stdout("0 OK first=ERROR\n")
                    continue

                # Processing here

                try:
                    self.stdout("%s OK\n" % channel)
                except:
                    self.stdout("%s ERROR first=ERROR\n" % channel)

Re: [squid-users] external helper development

2022-02-06 Thread David Touzeau

Thanks Eliezer!!

I have tested your code as-is as the /lib/squid3/external_acl_first
process, but it takes 100% CPU and Squid freezes requests.

Seems there is a crazy loop somewhere...

root 105852  0.0  0.1  73712  9256 ?    SNs  00:27   0:00 squid
squid    105854  0.0  0.3  89540 27536 ?    SN   00:27   0:00 
(squid-1) --kid squid-1
squid    105855 91.6  0.5 219764 47636 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first
squid    105856 91.8  0.5 219768 47672 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first
squid    105857 92.9  0.5 293488 47696 ?    SNl  00:27   2:54 python 
/lib/squid3/external_acl_first
squid    105858 91.8  0.6 367228 49728 ?    SNl  00:27   2:52 python 
/lib/squid3/external_acl_first


I could not find where it is...


On 06/02/2022 at 11:46, Eliezer Croitoru wrote:


Hey David,

Not a fully complete helper, but it seems to work pretty nicely and
might be better than what exists already:


https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []      # results waiting to be written to stdout
threads = []
RUNNING = True
quit = 0

rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    # one worker thread per request line; the channel-ID (arr[0]) is
    # echoed back so Squid can match the answer to the request
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return
    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()
    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")

for thread in threads:
    thread.join()

print("All threads stopped.")

## END

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

*From:*squid-users  *On 
Behalf Of *David Touzeau

*Sent:* Friday, February 4, 2022 16:29
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] external helper development

Eliezer,

Thanks for all this advice; indeed your arguments are valid about the
cost of opening a socket, sending data, receiving data and closing the
socket, compared to direct access to a regex or a memory entry, even if
the calculation has already been done.


But what surprises me the most is that we have produced a threaded
Python plugin, for which I provide the code below.
The PHP code is like your mentioned example (no thread, just a loop
that outputs OK).


The results: after 6k requests Squid freezes and no surfing is
possible, whereas with the PHP code we get up to 10k requests and
Squid is happy.

Really, we do not understand why Python is so slow.

Here is a Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                LenOfline = len(line)

                if LenOfline == 0:
                    self._exiting = True
                    break

Re: [squid-users] external helper development

2022-02-04 Thread David Touzeau

Eliezer,

Thanks for all this advice; indeed your arguments are valid about the
cost of opening a socket, sending data, receiving data and closing the
socket, compared to direct access to a regex or a memory entry, even if
the calculation has already been done.


But what surprises me the most is that we have produced a threaded
Python plugin, for which I provide the code below.
The PHP code is like your mentioned example (no thread, just a loop
that outputs OK).


The results: after 6k requests Squid freezes and no surfing is
possible, whereas with the PHP code we get up to 10k requests and
Squid is happy.

Really, we do not understand why Python is so slow.

Here is a Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            # poll stdin with a 0.5 s timeout so the exit flag is honoured
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                LenOfline = len(line)

                if LenOfline == 0:
                    self._exiting = True
                    break

                if line[-1] == '\n': line = line[:-1]
                channel = None
                options = line.split()

                try:
                    # concurrency support: the leading number is the channel-ID
                    if options[0].isdigit(): channel = options.pop(0)
                except IndexError:
                    self.stdout("0 OK first=ERROR\n")
                    continue

                # Processing here

                try:
                    self.stdout("%s OK\n" % channel)
                except:
                    self.stdout("%s ERROR first=ERROR\n" % channel)


class Main(object):
    def __init__(self):
        self._threads = []
        self._exiting = False
        self._reload = False
        self._config = ""

        for sig, action in (
            (signal.SIGINT, self.shutdown),
            (signal.SIGQUIT, self.shutdown),
            (signal.SIGTERM, self.shutdown),
            (signal.SIGHUP, lambda s, f: setattr(self, '_reload', True)),
            (signal.SIGPIPE, signal.SIG_IGN),
        ):
            try:
                signal.signal(sig, action)
            except AttributeError:
                pass

    def shutdown(self, sig=None, frame=None):
        self._exiting = True
        self.stop_threads()

    def start_threads(self):
        sThread = ClienThread()
        t = threading.Thread(target=sThread.run)
        t.start()
        self._threads.append((sThread, t))

    def stop_threads(self):
        for p, t in self._threads:
            p.exit()
        for p, t in self._threads:
            t.join(timeout=1.0)
        self._threads = []

    def run(self):
        """ main loop """
        ret = 0
        self.start_threads()
        return ret


if __name__ == '__main__':
    # set C locale
    locale.setlocale(locale.LC_ALL, 'C')
    os.environ['LANG'] = 'C'
    ret = 0
    try:
        main = Main()
        ret = main.run()
    except SystemExit:
        pass
    except KeyboardInterrupt:
        ret = 4
    except:
        sys.exit(ret)

On 04/02/2022 at 07:06, Eliezer Croitoru wrote:


And about the cache in each helper: the cost of a cache in a single
helper is not much in terms of memory compared to network access.


Again, it's possible to test and verify this on a loaded system to get
results. The delay itself can be seen from the Squid side in the cache
manager statistics.


You can also try to compare the following Ruby helper:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

About a shared "base" which allows helpers to avoid recomputing the
query: it's a good argument, however it depends on the cost of pulling
from the cache compared to calculating the answer.

A very simple string comparison or regex match would probably be
faster than reaching shared storage in many cases.


Also take into account the "concurrency" support on the helper side.

A helper that supports parallel processing of requests/lines can do
better than many single helpers in more than one use case.


In any case I would suggest enabling request concurrency on the Squid
side, since the STDIN buffer will emulate some level of concurrency by
itself and will allow Squid to keep moving forward faster.
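
As an illustration, concurrency is enabled with the concurrency= option
on the external_acl_type line (a sketch; the helper path and ACL name
are hypothetical):

# squid.conf sketch: Squid prefixes each request line with a channel-ID
# and the helper may answer out of order, echoing the channel-ID back.
external_acl_type ext_check children-max=5 concurrency=50 %SRC %DST \
    /usr/local/bin/helper.py
acl checked external ext_check
http_access allow checked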

Just to mention that SquidGuard has used a per-helper cache for a very
long time, i.e. every single SquidGuard helper has its own copy of the
whole configuration and database files in memory.

And again, if you do have any 

Re: [squid-users] external helper development

2022-02-03 Thread David Touzeau

Hi Eliezer

You are right in a way, but when Squid loads multiple helpers, each
helper will use its own cache.
Using a shared "base" allows a helper to avoid having to compute a
query already resolved by another helper that has the answer.


Concerning PHP, what we find strange is that in our tests, with a
simple loop and an "echo OK", PHP runs about 1.5x faster than Python.


On 03/02/2022 at 07:09, Eliezer Croitoru wrote:

Hey Andre,

Every language has a "cost" for its qualities.
For example, Golang is a very nice language that offers a relatively simple way
to support concurrency and cross-hardware compilation/compatibility.
One cost in Golang is that the binary is the size of an OS kernel.
In Python you must write everything with a specific position and indentation, and
threading is not simple for a novice to implement.
However, when you see what has been written in Python, you can see that most of
the OpenStack APIs and systems are written in... Python, and that means something.
I like Ruby very much, but it doesn't support threading by nature; it supports
"concurrency".
Squid doesn't implement threading but implements "concurrency".

Don't touch PHP as a helper!!! (+1 to Alex)

Also take into account that Redis or Memcached is less preferable in many cases
if the library doesn't re-use the existing connection for multiple queries.
Squid also implements caching of helper answers, so it's possible to implement
the helper and ACLs in such a way that Squid's own caching will
help you lower the access to the external API and/or Redis/Memcached/DB.
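
For example, that helper-answer cache is tuned on the external_acl_type
line (a sketch with hypothetical names and values):

# squid.conf sketch: cache positive helper answers for 1 hour and
# negative ones for 1 minute, keeping at most 10000 cached results.
external_acl_type categ ttl=3600 negative_ttl=60 cache=10000 %DST \
    /usr/local/bin/categ-helper
acl categ_ok external categ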
I also have good experience with some libraries implementing a cache with a
limited size, which I have used inside a helper as a "level 1" cache.
It's possible that if you implement both the helper and the server side of the
solution, like ufdbguard, you will be able to optimize the system
to take a very high load.

I hope the above will help you.
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email:ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of 
André Bolinhas
Sent: Wednesday, February 2, 2022 00:09
To: 'Alex Rousskov'; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

Hi
Thanks for the reply.
I will take a look at Rust as you recommend.
Also, between Python and Go, which is the best for multithreading and concurrency?
Does Rust support multithreading and concurrency?
Best regards

-Original Message-
From: squid-users  On Behalf Of Alex
Rousskov
Sent: 1 February 2022 22:01
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

On 2/1/22 16:47, André Bolinhas wrote:

Hi

I'm building an external helper to get the categorization of a
website. I know how to build it, but I need your opinion about the best
language for the job in terms of performance, bottlenecks, I/O blocking...

The helper will work like this:

1) It will check hot memory for a faster response (memcached or Redis).

2) If the result does not exist in hot memory, it will query an external
API to fetch the category and save it in hot memory.
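
That two-tier lookup could look roughly like this in Python (a sketch;
the Redis key scheme and the API URL are illustrative placeholders):

#!/usr/bin/env python3
# Sketch of a categorization helper with a Redis "hot memory" tier in
# front of an external API. Requires the redis-py package.
import sys
import json
import urllib.request

import redis

hot = redis.Redis(host="127.0.0.1", port=6379)
API = "https://categories.example.com/lookup?domain="  # placeholder

def categorize(domain):
    cached = hot.get("cat:" + domain)          # 1) check hot memory first
    if cached is not None:
        return cached.decode()
    with urllib.request.urlopen(API + domain) as r:   # 2) fall back to API
        category = json.load(r)["category"]
    hot.setex("cat:" + domain, 3600, category)  # keep in hot memory for 1h
    return category

# external_acl helper loop: "<channel> <domain>" in, "<channel> OK ..." out
for line in sys.stdin:
    channel, domain = line.split()[:2]
    sys.stdout.write("%s OK tag=%s\n" % (channel, categorize(domain)))
    sys.stdout.flush()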

In what language do you recommend developing such a helper? PHP, Python, Go..

If this helper is for long-term production use, and you are willing to learn 
new things, then use Rust[1]. Otherwise, use whatever language you are the most 
comfortable with already (except PHP), especially if that language has good 
libraries/wrappers for the external APIs you will need to use.

Alex.
[1]https://www.rust-lang.org/
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-31 Thread David Touzeau
Does adapted_http_access support url_rewrite_program? It seems it only
supports eCAP/ICAP.


On 31/01/2022 at 03:52, Amos Jeffries wrote:

On 31/01/22 13:20, David Touzeau wrote:

But that makes 2 connections to Squid just to stop queries.
It does not seem really optimized.



The joys of using URL modification to decide security access.



I note that for several reasons I cannot switch to an external_acl



:(



Is there a way / an idea?



<http://www.squid-cache.org/Doc/config/adapted_http_access/>
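
A sketch of how that could look, assuming your Squid stores the
rewriter's kv-pair answers as annotations (the "block" note name and
its value are hypothetical):

# the url_rewrite helper answers e.g.:  0 OK block=yes
acl marked_block note block yes
adapted_http_access deny marked_block
deny_info TCP_RESET marked_block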


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-30 Thread David Touzeau

Hi

I have built my own squid url_rewrite_program.

The protocol requires answering with:

# OK status=301|302 url=
Or
# OK rewrite-url="http://blablaba"

In my case, especially for trackers/ads, I would like to say to browsers:
"Go away!" without needing to redirect them.


Sure, I can use these methods, but...

1) 127.0.0.1 - the browser is in charge of getting out

OK status=302 url="http://127.0.0.1" - but this isn't clean or polished.


2) 127.0.0.1 - Squid is in charge of getting out

OK rewrite-url="http://127.0.0.1" - but this really isn't clean or
polished.

Squid complains in the logs about an unreachable URL and pollutes the
events.


3) Redirect to a dummy page with a deny ACL

OK status=302 url="http://dummy.com"
acl dummy dstdomain dummy.com
http_access deny dummy
deny_info TCP_RESET dummy

But that makes 2 connections to Squid just to stop queries.
It does not seem really optimized.

I note that for several reasons I cannot switch to an external_acl.

Is there a way / an idea?


Regards







___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] security_file_certgen I/O

2021-12-01 Thread David Touzeau


Hi

We use Squid 5.2 and we see that security_file_certgen consumes I/O.
Is there any way to put the ssl_db in memory without needing to mount a tmpfs?

regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] %notes in error pages

2021-11-27 Thread David Touzeau

Hi

Working like a charm !!!

Many thanks!!

On 26/11/2021 at 17:43, Alex Rousskov wrote:

On 11/25/21 4:46 PM, David Touzeau wrote:


We need to display a %note added by an external helper, using deny_info and
a specific Squid error page.

We tried with %o or %m without success.

Is there a token to build an error page with external ACL helper output?

Use the @Squid{%code} syntax to add a logformat %code (including %note) to
your error page. The feature is available in v5 and beyond. More details
may be available at https://github.com/squid-cache/squid/commit/7e6eabb
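
For example, a custom error page template could embed the helper's
annotations like this (a minimal sketch; ERR_BLOCKED and the acl/helper
names are hypothetical):

# squid.conf (sketch): helper kv-pair answers become annotations
external_acl_type categ %DST /usr/local/bin/categ-helper
acl blocked external categ
http_access deny blocked
deny_info ERR_BLOCKED blocked

# errors/ERR_BLOCKED (custom template; @Squid{%note} expands to the
# transaction's annotations)
<html><body>
<p>Access denied. Details: @Squid{%note}</p>
</body></html>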


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] %notes in error pages

2021-11-25 Thread David Touzeau


Hi,

We need to display a %note added by an external helper, using deny_info
and a specific Squid error page.


We tried with %o or %m without success.

Is there a token to build an error page with external ACL helper output?

Regards___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-23 Thread David Touzeau

Hi

According to your documentation, with a rock cache_dir objects larger
than 32,000 bytes cannot be cached. If aufs cannot be used in an SMP
configuration, how can we handle larger files in the cache?


On 23/11/2021 at 11:01, David Touzeau wrote:

Ok thanks, we will investigate in this way

On 22/11/2021 at 19:33, Alex Rousskov wrote:

On 11/22/21 12:48 PM, David Touzeau wrote:

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir aufs /home/squid/Caches/disk 50024 16 256 min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


Where is the wrong setting?

I am limiting my answer to the problems in this email thread scope: aufs
cache_dirs are UFS-based cache_dirs. UFS-based cache_dirs are not
SMP-aware and are not supported in SMP configurations. Your choices include:

* drop SMP (i.e. remove "workers" and ARA)
* drop aufs (i.e. remove "cache_dir aufs" and ARA)

... where ARA is "adjust the rest of the configuration accordingly".
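
For the "drop aufs" option, a sketch of an SMP-safe variant (assuming
Squid 5's large-rock support; the sizes here are illustrative, not a
recommendation):

workers 2
cache_dir rock /home/squid/cache/rock 10000 min-size=0 max-size=131072
cache_dir rock /home/squid/cache/rock-large 50024 min-size=131072 max-size=3221225472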


HTH,

Alex.



On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does this error mean:

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service; it always crashes.
How can we fix it (purge cache, reboot)...?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] tlu.dl.delivery.mp.microsoft.com and HTTP range header

2021-11-23 Thread David Touzeau

Hi community,

tlu.dl.delivery.mp.microsoft.com is used by the app store, and we are
hitting an issue of high bandwidth usage.
We think it is caused by Squid filtering the HTTP Range header out of
the HTTP requests.

This causes the app store to download everything in an endless loop.

We know that Squid does not currently fully support HTTP Range requests:
https://wiki.squid-cache.org/Features/HTTP11#Range_Requests

Is there any workaround to avoid the high bandwidth usage of the
Microsoft clients without needing to cache the objects?


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-23 Thread David Touzeau

Ok thanks, we will investigate in this way

On 22/11/2021 at 19:33, Alex Rousskov wrote:

On 11/22/21 12:48 PM, David Touzeau wrote:

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir aufs /home/squid/Caches/disk 50024 16 256 min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


Where is the wrong setting?

I am limiting my answer to the problems in this email thread scope: aufs
cache_dirs are UFS-based cache_dirs. UFS-based cache_dirs are not
SMP-aware and are not supported in SMP configurations. Your choices include:

* drop SMP (i.e. remove "workers" and ARA)
* drop aufs (i.e. remove "cache_dir aufs" and ARA)

... where ARA is "adjust the rest of the configuration accordingly".


HTH,

Alex.



On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does this error mean:

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service; it always crashes.
How can we fix it (purge cache, reboot)...?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-22 Thread David Touzeau

Here our SMP configuration:

workers 2

cache_dir rock /home/squid/cache/rock 0 min-size=0 max-size=131072 slot-size=32000

if ${process_number} = 1
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=1
cache_dir aufs /home/squid/Caches/disk 50024 16 256 min-size=131072 max-size=3221225472
endif

if ${process_number} = 2
memory_cache_mode always
cpu_affinity_map process_numbers=${process_number} cores=2
endif


Where is the wrong setting?
A missing cache_dir?


On 22/11/2021 at 18:18, Alex Rousskov wrote:

On 11/22/21 11:55 AM, David Touzeau wrote:


What does this error mean:

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930:
"!transients || e.hasTransients()"
We are unable to start the service; it always crashes.
How can we fix it (purge cache, reboot)...?

This is a Squid bug or misconfiguration. If you are using a UFS-based
cache_dir with multiple workers, then it is a misconfiguration. If you
want to use SMP disk caching, please use rock store instead.

HTH,

Alex.
P.S. This assertion has been reported several times, including for Squid
v4, but it was probably always due to a Squid misconfiguration. We need
to find a good way to explicitly reject such configurations instead of
asserting (while not rejecting similar unsupported configurations that
still "work" from their admins point of view).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.2: assertion failed: Controller.cc:930: "!transients || e.hasTransients()"

2021-11-22 Thread David Touzeau

Hi, community

What does this error mean:

2021/11/21 17:23:06 kid1| assertion failed: Controller.cc:930: 
"!transients || e.hasTransients()"

    current master transaction: master69


We are unable to start the service; it always crashes.
How can we fix it (purge cache, reboot)...?___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Stable Squid Version for production on Linux

2021-11-16 Thread David Touzeau

Hi,

For us it is Squid v4.17

On 16/11/2021 at 17:40, Graminsta wrote:


Hey folks  ;)

What is the most stable squid version for production on Ubuntu 18 or 20?

Marcelo


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-16 Thread David Touzeau

Any tips?

Is anyone using fake NTLM with modern browsers?

On 11/11/2021 at 13:16, David Touzeau wrote:

Thanks Amos, it helps to understand something.

I think modern browsers send NTLMv2, while ntlm_fake_auth understands
only NTLMv1 (perhaps).


Using curl with the --proxy-ntlm option works with Squid, whereas a
browser always gets a 407 back.

Do you know the limitations of ntlm_fake_auth regarding the NTLM version?
Is there a way to fix it?

* CURL 

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 06 82 08 00  NTLMSSP. 


[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 06 82 08 00  15 3A CC 83 0B 80 7B 45  ...E
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'KK' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  03 00 00 00 18 00 18 00 NTLMSSP. 
[0010]  40 00 00 00 30 00 30 00  58 00 00 00 00 00 00 00 0.0. X...
[0020]  88 00 00 00 04 00 04 00  88 00 00 00 09 00 09 00  
[0030]  8C 00 00 00 00 00 00 00  00 00 00 00 06 82 08 00  
[0040]  EB C7 B7 11 26 62 FD 82  B0 45 68 62 E0 6C E6 A3 .b.. .Ehb.l..
[0050]  57 A7 E6 76 1C 7B 79 74  17 71 72 5B 72 38 DA 30 W..v..yt .qr.r8.0
[0060]  06 4D 15 1F 9B D1 A2 A5  01 01 00 00 00 00 00 00 .M.. 
[0070]  80 38 3C 2A EA D6 D7 01  57 A7 E6 76 1C 7B 79 74 .8.. W..v..yt
[0080]  00 00 00 00 00 00 00 00  74 6F 74 6F 6E 74 6C 6D  totontlm
[0090]  70 72 6F 78 79 proxy
ntlmauth.cc(244): pid=31874 :ntlm_unpack_auth: size of 149
ntlmauth.cc(245): pid=31874 :ntlm_unpack_auth: flg 00088206
ntlmauth.cc(246): pid=31874 :ntlm_unpack_auth: lmr o(64) l(24)
ntlmauth.cc(247): pid=31874 :ntlm_unpack_auth: ntr o(88) l(48)
ntlmauth.cc(248): pid=31874 :ntlm_unpack_auth: dom o(136) l(0)
ntlmauth.cc(249): pid=31874 :ntlm_unpack_auth: usr o(136) l(4)
ntlmauth.cc(250): pid=31874 :ntlm_unpack_auth: wst o(140) l(9)
ntlmauth.cc(251): pid=31874 :ntlm_unpack_auth: key o(0) l(0)
ntlmauth.cc(257): pid=31874 :ntlm_unpack_auth: Domain 't' (len=1).
*ntlmauth.cc(268): pid=31874 :ntlm_unpack_auth: Username 'toton' (len=5).*
ntlm_fake_auth.cc(210): pid=31874 :sending 'AF toton' to squid


* But when connecting any modern browser to squid ***

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2  NTLMSSP. 


[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  C9 F0 4C 07 E0 79 9F CF  ..L..y..
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  49 12 A5 8A C8 17 3E 9D  I...
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  09 6D 48 E6 12 9C 4B 30  .mH...K0
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  F5 F6 8C B4 16 B9 20 CD  
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU



On 11/11/2021 at 08:40, Amos Jeffries wrote:

On 11/11/21 14:12, David Touzeau wrote:

Hi,
i would like to use ntlm_fake_auth but it seems

Re: [squid-users] Squid 5.2 unstable in production mode

2021-11-11 Thread David Touzeau

Hi

Max filedescriptors is defined in squid.conf.
Yes, in some cases c-icap was installed and the proxy became more
stable for a while.

But the file-descriptor issue is still unstable... I really do not know why.


Running Debian 11 is difficult; it is a very new OS and we consider
Debian 10 as currently stable.

Also, Squid 4 works very well on Debian 10.


Le 11/11/2021 à 20:58, Flashdown a écrit :

Hi David,

Well, I am curious: where did you set the max filedescriptors? Only in
the OS configuration? If so, you also need to define it in squid.conf
as well ->
http://www.squid-cache.org/Versions/v5/cfgman/max_filedescriptors.html
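
For example (a sketch; the value must also be permitted by the OS
limit, e.g. via ulimit or the systemd unit):

# squid.conf
max_filedescriptors 65536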


Regarding the memory leak, do you use an adaptation service such as c-icap?
If so, what is the result of: ss -ant | grep CLOSE_WAIT | wc -l

Maybe you should try to build Squid 5 against Debian 11, to have the
latest version of every needed dependency, and see whether the memory
leak is gone or not.


I run multiple Squid 5.2 servers on Debian 11 in production and do not
have any issues.

---
Best regards,
Enrico Heine

On 2021-11-11 20:08, David Touzeau wrote:

Hi

Just for information, and I hope it will help.

We have installed Squid 5.1 and Squid 5.2 in production mode.
It seems that after several days, Squid becomes very unstable.
We note that when switching back to 4.x we did not encounter these
errors with the same configuration, same users, same network (replacing
the binaries and keeping the same configuration).

All production servers are installed in a virtual environment (ESXi
or Nutanix) on Debian 10.x with about 4 to 8 vCPUs and 8 GB of memory,
and from 20 to 5000 users.

After several tests we see that the number of users has no impact on
the stability.
We encounter the same errors on a 20-user proxy and in the same way on
a 5000-user proxy.

1) Memory leak
-
This was encountered on machines with more than 10 GB of memory:
squid eats more than 8 GB of memory.
After eating all the memory, squid is unable to load helpers and
freezes the listening ports.
A service restart frees the memory and fixes the issue.

2) Max filedescriptors issues:

This is a strange behavior where Squid does not accept the defined
parameter:
for example we set 65535 filedescriptors, but squidclient mgr:info
reports 4096 and sometimes drops back to 1024.

Several times squid report

    current master transaction: master15881
2021/11/11 17:10:09 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:10:29 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:10:51 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:11:56 kid1| TCP connection to 127.0.0.1/2320 failed
    current master transaction: master15881
2021/11/11 17:13:02 kid1| WARNING! Your cache is running out of
filedescriptors
    listening port: MyPortNameID1
2021/11/11 17:13:19 kid1| WARNING! Your cache is running out of
filedescriptors

But a mgr:info report:

    memPoolFree calls:    4295601
File descriptor usage for squid:
    Maximum number of file descriptors:   10048
    Largest file desc currently in use:    262
    Number of file desc currently in use:  135
    Files queued for open:   0
    Available number of file descriptors: 9913
    Reserved number of file descriptors:  9789

After these errors the listening port is frozen and nobody is able to
surf.
A simple "squid -k reconfigure" fixes the issue and the proxy returns
to normal for several minutes, then falls back into the filedescriptor
issues.

There is no relationship between the filedescriptor issues and the
number of clients.
Sometimes the issue is discovered during the night when no user is
using the proxy (just some robots like Windows Update).

Is there something else we can investigate to help improve the
stability of the 5.x branch?
Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 5.2 unstable in production mode

2021-11-11 Thread David Touzeau

Hi

Just for information, and I hope it will help.

We have installed Squid 5.1 and Squid 5.2 in production mode.
It seems that after several days, Squid becomes very unstable.
We note that when switching back to 4.x we did not encounter these
errors with the same configuration, same users, same network (replacing
the binaries and keeping the same configuration).


All production servers are installed in a virtual environment (ESXi
or Nutanix) on Debian 10.x with about 4 to 8 vCPUs and 8 GB of memory,

and from 20 to 5000 users.

After several tests we see that the number of users has no impact on
the stability.
We encounter the same errors on a 20-user proxy and in the same way on
a 5000-user proxy.



1) Memory leak
-
This was encountered on machines with more than 10 GB of memory:
squid eats more than 8 GB of memory.
After eating all the memory, squid is unable to load helpers and
freezes the listening ports.

A service restart frees the memory and fixes the issue.

2) Max filedescriptors issues:

This is a strange behavior where Squid does not accept the defined
parameter:
for example we set 65535 filedescriptors, but squidclient mgr:info
reports 4096 and sometimes drops back to 1024.


Several times squid report

    current master transaction: master15881
2021/11/11 17:10:09 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:10:29 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:10:51 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:11:56 kid1| TCP connection to 127.0.0.1/2320 failed
    current master transaction: master15881
2021/11/11 17:13:02 kid1| WARNING! Your cache is running out of 
filedescriptors

    listening port: MyPortNameID1
2021/11/11 17:13:19 kid1| WARNING! Your cache is running out of 
filedescriptors


But a mgr:info report:

    memPoolFree calls:    4295601
File descriptor usage for squid:
    Maximum number of file descriptors:   10048
    Largest file desc currently in use:    262
    Number of file desc currently in use:  135
    Files queued for open:   0
    Available number of file descriptors: 9913
    Reserved number of file descriptors:  9789

After these errors the listening port is frozen and nobody is able to surf.
A simple "squid -k reconfigure" fixes the issue and the proxy returns
to normal for several minutes, then falls back into the filedescriptor
issues.


There is no relationship between the filedescriptor issues and the
number of clients.
Sometimes the issue is discovered during the night when no user is
using the proxy (just some robots like Windows Update).




Is there something else we can investigate to help improve the
stability of the 5.x branch?

Regards
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-11 Thread David Touzeau

Thanks Amos, it helps to understand something.

I think modern browsers send NTLMv2, while ntlm_fake_auth understands
only NTLMv1 (perhaps).


Using curl with the --proxy-ntlm option works with Squid, whereas a
browser always gets a 407 back.

Do you know the limitations of ntlm_fake_auth regarding the NTLM version?
Is there a way to fix it?

* CURL 

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 06 82 08 00  NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 06 82 08 00  15 3A CC 83 0B 80 7B 45  ...E
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'KK' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  03 00 00 00 18 00 18 00 NTLMSSP. 
[0010]  40 00 00 00 30 00 30 00  58 00 00 00 00 00 00 00 0.0. X...
[0020]  88 00 00 00 04 00 04 00  88 00 00 00 09 00 09 00  
[0030]  8C 00 00 00 00 00 00 00  00 00 00 00 06 82 08 00  
[0040]  EB C7 B7 11 26 62 FD 82  B0 45 68 62 E0 6C E6 A3 .b.. .Ehb.l..
[0050]  57 A7 E6 76 1C 7B 79 74  17 71 72 5B 72 38 DA 30 W..v..yt .qr.r8.0
[0060]  06 4D 15 1F 9B D1 A2 A5  01 01 00 00 00 00 00 00 .M.. 
[0070]  80 38 3C 2A EA D6 D7 01  57 A7 E6 76 1C 7B 79 74 .8.. W..v..yt
[0080]  00 00 00 00 00 00 00 00  74 6F 74 6F 6E 74 6C 6D  totontlm
[0090]  70 72 6F 78 79 proxy
ntlmauth.cc(244): pid=31874 :ntlm_unpack_auth: size of 149
ntlmauth.cc(245): pid=31874 :ntlm_unpack_auth: flg 00088206
ntlmauth.cc(246): pid=31874 :ntlm_unpack_auth: lmr o(64) l(24)
ntlmauth.cc(247): pid=31874 :ntlm_unpack_auth: ntr o(88) l(48)
ntlmauth.cc(248): pid=31874 :ntlm_unpack_auth: dom o(136) l(0)
ntlmauth.cc(249): pid=31874 :ntlm_unpack_auth: usr o(136) l(4)
ntlmauth.cc(250): pid=31874 :ntlm_unpack_auth: wst o(140) l(9)
ntlmauth.cc(251): pid=31874 :ntlm_unpack_auth: key o(0) l(0)
ntlmauth.cc(257): pid=31874 :ntlm_unpack_auth: Domain 't' (len=1).
*ntlmauth.cc(268): pid=31874 :ntlm_unpack_auth: Username 'toton' (len=5).*
ntlm_fake_auth.cc(210): pid=31874 :sending 'AF toton' to squid


* But when connecting any modern browser to squid ***

[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2  NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  C9 F0 4C 07 E0 79 9F CF  ..L..y..
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  49 12 A5 8A C8 17 3E 9D  I...
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  09 6D 48 E6 12 9C 4B 30  .mH...K0
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU
[0030]  50                                                  P
ntlm_fake_auth.cc(170): pid=31874 :Got 'YR' from Squid with data:
[]  4E 54 4C 4D 53 53 50 00  01 00 00 00 07 82 08 A2 NTLMSSP. 
[0010]  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  
[0020]  0A 00 63 45 00 00 00 0F ..cE
ntlm_fake_auth.cc(197): pid=31874 :sending 'TT' to squid with data:
[]  4E 54 4C 4D 53 53 50 00  02 00 00 00 09 00 09 00 NTLMSSP. 
[0010]  AE AA AA AA 07 82 08 A2  F5 F6 8C B4 16 B9 20 CD  
[0020]  00 00 00 00 00 00 3A 00  57 4F 52 4B 47 52 4F 55  WORKGROU



On 11/11/2021 at 08:40, Amos Jeffries wrote:

On 11/11/21 14:12, David Touzeau wrote:

Hi,
i would like to use ntlm_fake_auth but it seems Squid refuse to 
switch to authenticated user and return a 407 to the browser and 
squid never accept

[squid-users] squid 5.2: ntlm_fake_auth refuse to valid credentials

2021-11-10 Thread David Touzeau

Hi,
I would like to use ntlm_fake_auth, but it seems Squid refuses to switch
to an authenticated user; it returns a 407 to the browser and never
accepts the credentials.


What am I missing?

Configuration seems simple:
auth_param ntlm program /lib/squid3/ntlm_fake_auth -v
auth_param ntlm children 20 startup=5 idle=1 concurrency=0 queue-size=80 
on-persistent-overload=ERR

acl AUTHENTICATED proxy_auth REQUIRED
http_access deny  !AUTHENTICATED

Here the debug mode;

2021/11/11 01:36:16.862 kid1| 14,3| ipcache.cc(614) 
ipcache_gethostbyname: ipcache_gethostbyname: 'www.squid-cache.org', flags=1
2021/11/11 01:36:16.862 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'212.199.163.170' NOT found
2021/11/11 01:36:16.862 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'196.200.160.70' NOT found
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
NetworksBlackLists = 0
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access#29 = 0
2021/11/11 01:36:16.862 kid1| 28,5| Checklist.cc(397) bannedAction: 
Action 'DENIED/0' is not banned
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
http_access#30
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
NormalPorts
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(212) append: from c-string 
to id SBuf1021843
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 13 
for SBuf1021843
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf1021843 
new store capacity: 40
2021/11/11 01:36:16.862 kid1| 28,3| StringData.cc(33) match: 
aclMatchStringList: checking 'MyPortNameID1'
2021/11/11 01:36:16.862 kid1| 28,3| StringData.cc(36) match: 
aclMatchStringList: 'MyPortNameID1' found
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
NormalPorts = 1
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
!AUTHENTICATED
2021/11/11 01:36:16.862 kid1| 28,5| Acl.cc(124) matches: checking 
AUTHENTICATED
2021/11/11 01:36:16.862 kid1| 29,4| UserRequest.cc(354) authenticate: No 
connection authentication type
2021/11/11 01:36:16.862 kid1| 29,5| User.cc(36) User: Initialised 
auth_user '0x5570e8c4d240'.
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(99) UserRequest: 
initialised request 0x5570e8cdacf0
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(212) append: from c-string 
to id SBuf1021846
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 61 
for SBuf1021846
2021/11/11 01:36:16.862 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf1021846 
new store capacity: 128
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 33,2| client_side.cc(507) setAuth: Adding 
connection-auth to local=192.168.90.170:3128 remote=192.168.90.10:50746 
FD 12 flags=1 from new NTLM handshake request
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 28,3| AclProxyAuth.cc(131) checkForAsync: 
checking password via authenticator
2021/11/11 01:36:16.862 kid1| 29,5| UserRequest.cc(77) valid: Validated. 
Auth::UserRequest '0x5570e8cdacf0'.
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1292) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: Running servers 5
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1309) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: returning srv-Hlpr66
2021/11/11 01:36:16.862 kid1| 5,5| AsyncCall.cc(26) AsyncCall: The 
AsyncCall helperStatefulDispatchWriteDone constructed, 
this=0x5570e8c8f8e0 [call581993]
2021/11/11 01:36:16.862 kid1| 5,5| Write.cc(35) Write: local=[::] 
remote=[::] FD 10 flags=1: sz 60: asynCall 0x5570e8c8f8e0*1
2021/11/11 01:36:16.862 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 10, 
type=2, handler=1, client_data=0x7f9e5d8a75a8, timeout=0
2021/11/11 01:36:16.862 kid1| 84,5| helper.cc(1430) 
helperStatefulDispatch: helperStatefulDispatch: Request sent to 
ntlmauthenticator #Hlpr66, 60 bytes
2021/11/11 01:36:16.862 kid1| 28,4| Acl.cc(72) AuthenticateAcl: 
returning 2 sending credentials to helper.
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
AUTHENTICATED = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
!AUTHENTICATED = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access#30 = -1 async
2021/11/11 01:36:16.862 kid1| 28,3| Acl.cc(151) matches: checked: 
http_access = -1 async
2021/11/11 01:36:16.862 kid1| 33,4| Server.cc(90) readSomeData: 
local=192.168.90.170:3128 remote=192.168.90.10:50746 FD 12 flags=1: 
reading request...
2021/11/11 01:36:16.862 kid1| 33,5| AsyncCall.cc(26) AsyncCall: The 
AsyncCall Server::doClientRead constructed, this=0x5570e87cfd50 [call581994]
2021/11/11 01:36:16.862 kid1| 5,5| Read.cc(57) comm_read_base: 
comm_read, queueing read for local=192.168.90.170:3128 

Re: [squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-02 Thread David Touzeau
OK, we will investigate the parent proxy, but it seems that when the child
Squid reports the TCP failure, it concludes that the peer is dead and all
browsing stops for quite a while (a squid -k reconfigure fixes the issue
quickly) because it has no other path to forward requests.






On 02/11/2021 at 16:17, Alex Rousskov wrote:

On 11/2/21 10:40 AM, David Touzeau wrote:

2021/11/01 16:50:48.787 kid1| 93,3| Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT)
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: proto HTTP/1.1
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: status-code 503
2021/11/01 16:50:48.787 kid1| 74,5| parse: status-line: reason-phrase Service 
Unavailable
Server: squid
Date: Mon, 01 Nov 2021 15:50:48 GMT
X-Squid-Error: ERR_CONNECT_FAIL 110
2021/11/01 16:50:48.787 kid1| 83,3| bailOnResponseError: unsupported CONNECT 
response status code
2021/11/01 16:50:48.787 kid1| TCP connection to 127.0.0.1/2320 failed


A parent[^1] proxy is a Squid proxy that cannot connect to the server in
question. That Squid proxy responds with an HTTP 503 Error to your Squid
CONNECT request. Your Squid logs the "TCP connection to ... failed"
error that you were wondering about.

This sequence highlights a deficiency in Squid CONNECT error handling
code (and possibly cache_peer configuration abilities). Ideally, Squid
should recognize Squid error responses coming from a parent HTTP proxy
and avoid complaining about remote Squid-origin errors as if they are
local Squid-parent errors. IIRC, some folks still insist on Squid
complaining about the latter "within hierarchy" errors, but the former
"external Squid-origin" errors are definitely not supposed to be
reported to admins via level-0/1 messages in cache.log.


HTH,

Alex.

[^1]: Direct or indirect parent -- I could not tell quickly but you
should be able to tell by looking at addresses, configurations, and/or
access logs. My bet is that it is an indirect parent if you are still
using a load balancer between Squids.




On 01/11/2021 at 15:53, Alex Rousskov wrote:

On 11/1/21 7:55 AM, David Touzeau wrote:


The Squid uses the loopback as a parent.

The same problem occurs:
06:19:47 kid1| TCP connection to 127.0.0.1/2320 failed
06:15:13 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:41 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:38 kid1| TCP connection to 127.0.0.1/2320 failed
06:13:15 kid1| TCP connection to 127.0.0.1/2320 failed
06:11:23 kid1| TCP connection to 127.0.0.1/2320 failed
cache_peer 127.0.0.1 parent 2320 0 name=Peer11 no-query default
connect-timeout=3 connect-fail-limit=5 no-tproxy

It is impossible to tell for sure what is going on because Squid does
not (unfortunately; yet) report the exact reason behind these connection
establishment failures or even the context in which a failure has
occurred. You may be able to tell more by collecting/analyzing packet
captures. Developers may be able to tell more if you share, say, ALL,5
debugging logs that show what led to the failure report.

Alex.


Re: [squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-02 Thread David Touzeau

Hi,

It took some time to enable the debug log and parse the 10GB of logs.

Here is the relevant piece of the log:

2021/11/01 16:50:48.786 kid1| 33,5| AsyncCall.cc(30) AsyncCall: The 
AsyncCall Server::clientWriteDone constructed, this=0x55849cb132b0 
[call252226641]
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(37) Write: conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1: sz 4529: 
asynCall 0x55849cb132b0*1
2021/11/01 16:50:48.786 kid1| 5,5| ModEpoll.cc(118) SetSelect: FD 95, 
type=2, handler=1, client_data=0x7f1caaa1a2d0, timeout=0
2021/11/01 16:50:48.786 kid1| 20,3| store.cc(467) unlock: 
store_client::copy unlocking key 115EFC0099150100 
e:=sXIV/0x55849dfec190*4
2021/11/01 16:50:48.786 kid1| 20,3| store.cc(467) unlock: 
ClientHttpRequest::doCallouts-sslBumpNeeded unlocking key 
115EFC0099150100 e:=sXIV/0x55849dfec190*3
2021/11/01 16:50:48.786 kid1| 28,4| FilledChecklist.cc(67) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x55849316fc88
2021/11/01 16:50:48.786 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x55849316fc88
2021/11/01 16:50:48.786 kid1| 84,5| helper.cc(1319) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: Running servers 4
2021/11/01 16:50:48.786 kid1| 84,5| helper.cc(1344) 
StatefulGetFirstAvailable: StatefulGetFirstAvailable: returning srv-Hlpr469
2021/11/01 16:50:48.786 kid1| 5,4| AsyncCall.cc(30) AsyncCall: The 
AsyncCall helperStatefulHandleRead constructed, this=0x55848ad88730 
[call252226642]
2021/11/01 16:50:48.786 kid1| 5,5| Read.cc(58) comm_read_base: 
comm_read, queueing read for conn9811325 local=[::] remote=[::] FD 49 
flags=1; asynCall 0x55848ad88730*1
2021/11/01 16:50:48.786 kid1| 5,5| ModEpoll.cc(118) SetSelect: FD 49, 
type=1, handler=1, client_data=0x7f1caaa18a20, timeout=0
2021/11/01 16:50:48.786 kid1| 5,4| AsyncCallQueue.cc(61) fireNext: 
leaving helperStatefulHandleRead(conn9811325 local=[::] remote=[::] FD 
49 flags=1, data=0x5584982781c8, size=300, buf=0x558498dde700)
2021/11/01 16:50:48.786 kid1| 1,5| CodeContext.cc(60) Entering: 
master25501192
2021/11/01 16:50:48.786 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn9812727 local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT 
FD 85 flags=1 (0, 0)
2021/11/01 16:50:48.786 kid1| 93,3| AsyncCall.cc(97) ScheduleCall: 
IoCallback.cc(131) will call Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, data=0x55849b747e18) [call252202273]
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(69) HandleWrite: conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1: off 0, 
sz 4529.
2021/11/01 16:50:48.786 kid1| 5,5| Write.cc(89) HandleWrite: write() 
returns 4529
2021/11/01 16:50:48.787 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn9813869 local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 
flags=1 (0, 0)
2021/11/01 16:50:48.787 kid1| 33,5| AsyncCall.cc(97) ScheduleCall: 
IoCallback.cc(131) will call Server::clientWriteDone(conn9813869 
local=10.33.50.22:3128 remote=10.33.50.109:50157 FD 95 flags=1, 
data=0x55849e4c8218) [call252226641]
2021/11/01 16:50:48.787 kid1| 1,5| CodeContext.cc(60) Entering: 
master25501192
2021/11/01 16:50:48.787 kid1| 93,3| AsyncCallQueue.cc(59) fireNext: 
entering Http::Tunneler::handleReadyRead(conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, data=0x55849b747e18)
2021/11/01 16:50:48.787 kid1| 93,3| AsyncCall.cc(42) make: make call 
Http::Tunneler::handleReadyRead [call252202273]
2021/11/01 16:50:48.787 kid1| 93,3| AsyncJob.cc(123) callStart: 
Http::Tunneler status in: [state:w FD 85 job26507207]
2021/11/01 16:50:48.787 kid1| 5,3| Read.cc(93) ReadNow: conn9812727 
local=127.0.0.1:23408 remote=127.0.0.1:2320 FIRSTUP_PARENT FD 85 
flags=1, size 65535, retval 7782, errno 0

2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 1 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
3 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
1 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
19 bytes
2021/11/01 16:50:48.787 kid1| 24,5| Tokenizer.cc(27) consume: consuming 
2 bytes
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(224) parse: 
status-line: retval 1
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(225) parse: 
status-line: proto HTTP/1.1
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(226) parse: 
status-line: status-code 503
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(227) parse: 
status-line: reason-phrase Service Unavailable
2021/11/01 16:50:48.787 kid1| 74,5| ResponseParser.cc(228) parse: 
Parser: bytes processed=34
2021/11/01 16:50:48.787 kid1| 74,5| Parser.cc(192) grabMimeBlock: mime 
header (0-171) {Server: squid^M

Mime-Version: 1.0^M
Date: Mon, 01 Nov 2021 15:50:48 GMT^M
Content-Type: text/html;charset=utf-8^M
Content-Length: 7577^M

[squid-users] Squid 5.2 Peer parent TCP connection to x.x.x.x/x failed

2021-11-01 Thread David Touzeau

Hello Community,

We use child Squid proxies that connect to boxes acting as parents.
In version 4.x this configuration did not pose any problem.
In version 5.2, however, we get a lot of errors like:

01h 47mn kid1| TCP connection to 10.32.0.18/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed
01h 47mn kid1| TCP connection to 10.32.0.17/3150 failed

However, we are sure that the parent proxies are available.
To make sure this is the case, we installed a local HAProxy in front of
the parent proxies.


The Squid uses the loopback as a parent.

The same problem occurs:
06:19:47 kid1| TCP connection to 127.0.0.1/2320 failed
06:15:13 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:41 kid1| TCP connection to 127.0.0.1/2320 failed
06:14:38 kid1| TCP connection to 127.0.0.1/2320 failed
06:13:15 kid1| TCP connection to 127.0.0.1/2320 failed
06:11:23 kid1| TCP connection to 127.0.0.1/2320 failed

But at no point was the local HAProxy service down.

This leads us to believe that the child Squid randomly marks the parent as
stalled when in fact there is no reason for it to do so.

It looks like a software problem rather than a network problem.

It is possible that the configuration is wrong, but we have tried many
variations.


Here is our last configuration

cache_peer 127.0.0.1 parent 2320 0 name=Peer11 no-query default 
connect-timeout=3 connect-fail-limit=5 no-tproxy


Maybe we forgot something?

Regards


Re: [squid-users] Squid 5.1 memory usage

2021-10-08 Thread David Touzeau

Hi,
Just to mention, we see high memory usage too, without ICAP and SSL bump;
after several days we need to restart the service.

On 08/10/2021 at 10:56, Steve Hill wrote:


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so 
I'd expect memory usage to be fairly low (and it was under Squid 3.5), 
but some workers are growing pretty large.  I'm using ICAP and SSL bump.


I've got a worker using 5 GB which I've collected memory stats from - 
the things which stand out are:

 - Long Strings: 220 MB
 - Short Strings: 2.1 GB
 - Comm::Connection: 217 MB
 - HttpHeaderEntry: 777 MB
 - MemBlob: 773 MB
 - Entry: 226 MB

What's the best way of debugging this?  Is there a way to list all of
the Comm::Connection objects?


Thanks.
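
For what it is worth, counters like the ones above come from Squid's cache
manager, which can be queried on the running instance; a minimal sketch,
assuming squidclient is installed and manager access is allowed from
localhost:

# per-pool memory accounting (the table the figures above come from)
squidclient mgr:mem
# one line per open descriptor with its remote endpoint -- a rough way
# to enumerate live connections
squidclient mgr:filedescriptors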




Re: [squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-21 Thread David Touzeau

Thanks amos !!

I think auth_schemes can be a workaround.
I will try it !



On 21/09/2021 at 02:49, Amos Jeffries wrote:

On 21/09/21 11:49 am, David Touzeau wrote:


When Edge, Chrome and IE try to establish a session, Squid claims:

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}


This leads us to understand that these three browsers try NTLM instead
of Basic authentication.

I do not know why these browsers use NTLM, as they are not joined to
the Windows domain.


Unlike Kerberos, NTLM does not require the machine to be connected to 
a domain to have credentials. AFAIK the browser still has access to 
the localhost user credentials for use in NTLM. Or the machine may 
even be trying to use the Basic auth credentials as LM tokens with 
NTLM scheme.




Why does Squid never get the Basic authentication credentials?



That is a Browser decision. All Squid can do is offer the schemes it 
supports and they have to choose which is used.



Did I miss something?


With Squid-5 you can use the auth_schemes directive to work around
issues like this.

 <http://www.squid-cache.org/Versions/v5/cfgman/auth_schemes.html>
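
A hedged illustration of that directive, assuming the off-domain machines
can be identified by an ACL (the subnet below is hypothetical):

# machines known to be outside the AD domain
acl offdomain src 192.168.91.0/24
# offer only Basic to them, so Edge/Chrome/IE cannot pick Negotiate
auth_schemes basic offdomain
# everyone else sees the full scheme list (the default behaviour)
auth_schemes ALL all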


Amos


Re: [squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-21 Thread David Touzeau
Thanks Louis for the tips, but we do not want to use NTLM, as it is a
legacy method.

It requires Samba on the Squid box.

As Amos said, this is mostly a browser issue (they use the Microsoft API).

The best outcome would be for these browsers to replicate the correct
Firefox behavior, i.e. switch to Basic auth instead of trying NTLM.

On 21/09/2021 at 09:38, L.P.H. van Belle wrote:


in your smb.conf add
 # Added to enforce NTLMv2; must be set on all Samba AD-DCs and the
 # needed members.
 # This is used in combination with ntlm_auth --allow-mschapv2
 ntlm auth = mschapv2-and-ntlmv2-only

In squid use:
auth_param negotiate program /usr/lib/squid/negotiate_wrapper_auth \
 --kerberos /usr/lib/squid/negotiate_kerberos_auth -k 
/etc/squid/krb5-squid-HTTP.keytab \
 -s HTTP/proxy.fq.dn@my.realm.tld \
 --ntlm /usr/bin/ntlm_auth --allow-mschapv2 --helper-protocol=gss-spnego 
--domain=ADDOM

  
If you are connecting for LDAP, don't use -h 192.168.90.10.

Use -H ldaps://host.name.fq.dn instead.

Also push the root CA of the domain to the PCs, with a GPO for example; in
that GPO you can set the parts you need to enable for the users/PCs to
make it all work.

But you're close, you're almost there.

One thing I have not looked at myself yet is ext_kerberos_ldap_group_acl:
https://fossies.org/linux/squid/src/acl/external/kerberos_ldap_group/ext_kerberos_ldap_group_acl.8
That's the one I'll be using with squid 5.1. I'm still compiling everything
I need, but once I set it up I'll document it and make a howto of it.

Greetz,

Louis





From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org]
On behalf of David Touzeau
Sent: Tuesday, 21 September 2021 1:49
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid 5.1: Kerberos: Unable to switch to basic
auth with Edge - IE - Chrome


Hi all

I have set up Kerberos authentication with a Windows 2019 domain using
Squid 5.1 (the Squid version does not affect the issue; tested 4.x and 5.x).
In some cases, some computers are not joined to the domain and we need
to allow them to authenticate to Squid.

To allow this, Basic authentication is defined in Squid, and we expect
browsers to prompt for a login so users can authenticate and access the
Internet.

But the behavior is strange.

On a computer outside the Windows domain:
Firefox is able to authenticate successfully to Squid using Basic auth.
Edge, Chrome and IE keep trying the NTLM method and are always rejected
with a 407.

When Edge, Chrome and IE try to establish a session, Squid claims:

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}

This leads us to understand that these three browsers try NTLM instead
of Basic authentication.

I do not know why these browsers use NTLM, as they are not joined to
the Windows domain.
Why does Squid never get the Basic authentication credentials?

Did I miss something?

Here is my configuration.

auth_param negotiate program /lib/squid3/negotiate_kerberos_auth -r -s 
GSS_C_NO_NAME -k /etc/squid3/PROXY.keytab
auth_param negotiate children 20 startup=5 idle=1 concurrency=0 
queue-size=80 on-persistent-overload=ERR
auth_param negotiate keep_alive on

auth_param basic program /lib/squid3/basic_ldap_auth -v -R -b "DC=articatech,DC=int" -D
"administra...@articatech.int" -W /etc/squid3/ldappass.txt -f sAMAccountName=%s -v 3 -h 192.168.90.10
auth_param basic children 3
auth_param basic realm Active Directory articatech.int
auth_param basic credentialsttl 7200 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 1 seconds
authenticate_cache_garbage_interval 3600 seconds

acl AUTHENTICATED proxy_auth REQUIRED






[squid-users] squid 5.1: Kerberos: Unable to switch to basic auth with Edge - IE - Chrome

2021-09-20 Thread David Touzeau

Hi all

I have set up Kerberos authentication with a Windows 2019 domain using
Squid 5.1 (the Squid version does not affect the issue; tested 4.x and 5.x).
In some cases, some computers are not joined to the domain and we need
to allow them to authenticate to Squid.

To allow this, Basic authentication is defined in Squid, and we expect
browsers to prompt for a login so users can authenticate and access the
Internet.

But the behavior is strange.

On a computer outside the Windows domain:
Firefox is able to authenticate successfully to Squid using Basic auth.
Edge, Chrome and IE keep trying the NTLM method and are always rejected
with a 407.


When Edge, Chrome and IE try to establish a session, Squid claims:

2021/09/21 01:17:27 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: received type 1 NTLM token; }}


This leads us to understand that these three browsers try NTLM instead
of Basic authentication.

I do not know why these browsers use NTLM, as they are not joined to
the Windows domain.

Why does Squid never get the Basic authentication credentials?

Did I miss something?

Here is my configuration.

auth_param negotiate program /lib/squid3/negotiate_kerberos_auth -r -s 
GSS_C_NO_NAME -k /etc/squid3/PROXY.keytab
auth_param negotiate children 20 startup=5 idle=1 concurrency=0 
queue-size=80 on-persistent-overload=ERR

auth_param negotiate keep_alive on

auth_param basic program /lib/squid3/basic_ldap_auth -v -R -b 
"DC=articatech,DC=int" -D "administra...@articatech.int" -W 
/etc/squid3/ldappass.txt -f sAMAccountName=%s -v 3 -h 192.168.90.10

auth_param basic children 3
auth_param basic realm Active Directory articatech.int
auth_param basic credentialsttl 7200 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 1 seconds
authenticate_cache_garbage_interval 3600 seconds

acl AUTHENTICATED proxy_auth REQUIRED



Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks, I will try it this way.

On 16/09/2021 at 21:03, Alex Rousskov wrote:

On 9/16/21 2:52 PM, David Touzeau wrote:


It is true that it would be possible to use an external_acl in
http_reply_access.

Do you think that by adding it at this position I would be able to use
Squid's resolution results?

Yes, bugs notwithstanding, an external ACL evaluated at
http_reply_access time should have access to %
On 16/09/2021 at 19:43, Alex Rousskov wrote:

On 9/16/21 1:30 PM, David Touzeau wrote:


I'm turning to writing my own DNS resolution code, and I'm giving up on
retrieving this information through Squid.

Please note that if you do your own DNS resolution, then Squid DNS
resolution results will probably mismatch your results in some cases.
There have been many complaints about associated problems from folks
that went this route...

I am not sure what you are trying to do with that a %
On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for the quick answer.

Is there any hope of a workaround with Squid?

This means I will have to develop a function that performs DNS resolution
inside the helper, with the performance consequences that this will impose.


I would be looking at a design where a helper classifies requests and
using that later on when the server is known to match up the IP vs the
classification. I'm struggling to think of a flow that works
efficiently though.

Amos


Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks for the clarification and I agree with you completely.

A multipath or round-robin DNS setup will return different records for our
own DNS lookup and for Squid's final resolution.

It is true that it would be possible to use an external_acl in
http_reply_access.

Do you think that by adding it at this position I would be able to use
Squid's resolution results?



On 16/09/2021 at 19:43, Alex Rousskov wrote:

On 9/16/21 1:30 PM, David Touzeau wrote:


I'm turning to writing my own DNS resolution code, and I'm giving up on
retrieving this information through Squid.

Please note that if you do your own DNS resolution, then Squid DNS
resolution results will probably mismatch your results in some cases.
There have been many complaints about associated problems from folks
that went this route...

I am not sure what you are trying to do with that a %
On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for the quick answer.

Is there any hope of a workaround with Squid?

This means I will have to develop a function that performs DNS resolution
inside the helper, with the performance consequences that this will impose.


I would be looking at a design where a helper classifies requests and
using that later on when the server is known to match up the IP vs the
classification. I'm struggling to think of a flow that works
efficiently though.

Amos


Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Amos,

Thank you for your response and kindness.
I'm turning to writing my own DNS resolution code, and I'm giving up on
retrieving this information through Squid.


On 16/09/2021 at 19:13, Amos Jeffries wrote:

On 17/09/21 2:42 am, David Touzeau wrote:

Thanks Amos for the quick answer.

Is there any hope of a workaround with Squid?

This means I will have to develop a function that performs DNS resolution
inside the helper, with the performance consequences that this will impose.




I would be looking at a design where a helper classifies requests and 
using that later on when the server is known to match up the IP vs the 
classification. I'm struggling to think of a flow that works 
efficiently though.


Amos


Re: [squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Thanks Amos for the quick answer.

Is there any hope of a workaround with Squid?

This means I will have to develop a function that performs DNS resolution
inside the helper, with the performance consequences that this will impose.




On 16/09/2021 at 16:21, Amos Jeffries wrote:

On 16/09/21 10:09 pm, David Touzeau wrote:

Hi community, Squid fans,

I would like to use an external ACL process for GeoIP processing.

I have tried to set up Squid to send the remote peer address using a %code,
but it always replies with a "-".


external_acl_type MyGeopip ttl=3600 negative_ttl=3600 
children-startup=2 children-idle=2 children-max=20 concurrency=1 ipv4 
%un %SRC %SRCEUI48 %>ha{X-Forwarded-For} %DST %ssl::>sni 
%USER_CERT_CN %note %

acl MyGeopip_acl external MyGeopip
http_access deny !MyGeopip_acl

I was thinking that Squid calls the helper before resolving the remote
route.




The problem is there is no server/peer connection at all for a 
transaction that has only been received and not yet processed by Squid.



So to force it, I added a "fake" ACL to force Squid to compute the remote
address.


acl fake_dst dst 127.0.0.2
http_access deny !fake_dst !MyGeopip_acl

But that failed too; the external ACL still receives "-" instead of the
remote public IP address of the server.




Aye. There is still no server.

All this dst ACL changed was that Squid knows a group of IPs it 
*might* select from. The decision whether to use one of them (or 
somewhere entirely different) has not yet been made, so there is still 
no server.


The "%when automated retries are done, and is "-" at all points before any 
server contact.
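
One hedged way to observe this from the outside is to log the server IP
Squid actually used and compare it with what the helper received (standard
logformat codes; the log path is illustrative):

logformat withsrv %ts.%03tu %>a %<a %ru
access_log /var/log/squid/access_srv.log withsrv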



Amos


[squid-users] squid 5.1: external_acl_type: Get public remote address

2021-09-16 Thread David Touzeau

Hi community, Squid fans,

I would like to use an external ACL process for GeoIP processing.

I have tried to set up Squid to send the remote peer address using a %code,
but it always replies with a "-".


external_acl_type MyGeopip ttl=3600 negative_ttl=3600 children-startup=2 
children-idle=2 children-max=20 concurrency=1 ipv4 %un %SRC %SRCEUI48 
%>ha{X-Forwarded-For} %DST %ssl::>sni %USER_CERT_CN %note %/lib/squid3/squid-geoip


acl MyGeopip_acl external MyGeopip
http_access deny !MyGeopip_acl

I was thinking that Squid calls the helper before resolving the remote route.

So to force it, I added a "fake" ACL to force Squid to compute the remote
address.


acl fake_dst dst 127.0.0.2
http_access deny !fake_dst !MyGeopip_acl

But that failed too; the external ACL still receives "-" instead of the
remote public IP address of the server.



Where is the mistake?

Regards



Re: [squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-09-15 Thread David Touzeau

Many thanks!

It fixed the issue!

On 15/09/2021 at 13:08, Graham Wharton wrote:
You see this when starting Squid as a non-root user. Squid should be
started as root; it then changes identity to the cache effective user
defined in the config when it forks.
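
For illustration, the relevant piece (the "proxy" user name is the Debian
packaging convention and is an assumption here):

# in squid.conf: after being started as root, Squid drops privileges to this user
cache_effective_user proxy

The daemon is then started as root (directly or via the init system), never
as the cache user itself.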


Graham Wharton
Lube Finder
Tel (UK) : 0800 955  0922
Tel (Intl) : +44 1305 898033
https://www.lubefinder.com

*From:* squid-users  on 
behalf of David Touzeau 

*Sent:* Wednesday, September 15, 2021 11:40:04 AM
*To:* squid-users@lists.squid-cache.org 

*Subject:* [squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): 
(1) Operation not permitted

On Debian 10 64-bit with squid 5.1 we get thousands of warnings like this:

2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation 
not permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation 
not permitted


The warnings appear when Squid tries to load the external ACL binaries.

Adding chmod 04755 to the binaries did not resolve the issue.

There is no such issue with the same configuration on the squid 3.5.x branch.

Any tips?




[squid-users] squid 5.1/Debian WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-09-15 Thread David Touzeau

On Debian 10 64-bit with squid 5.1 we get thousands of warnings like this:

2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid2| WARNING: no_suid: setuid(0): (1) Operation not 
permitted
2021/09/15 08:00:18 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted


The warnings appear when Squid tries to load the external ACL binaries.

Adding chmod 04755 to the binaries did not resolve the issue.

There is no such issue with the same configuration on the squid 3.5.x branch.

Any tips?


Re: [squid-users] Log to statsd

2021-08-11 Thread David Touzeau

Basically syslogd can do what you want: send via TCP, HTTP, UDP.

So the deal is to use:

logformat my_metrics      [statsd] %icap::tt %
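
An alternative sketch, assuming a statsd-style collector (the address and
port below are hypothetical): Squid's access_log accepts udp:// and tcp://
destinations directly, so a metrics format can be shipped without any file
parsing.

access_log udp://127.0.0.1:8125 my_metrics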
Hi,

Is there a way to configure Squid to output the logs to statsd rather
than a file?

Today I have this:

logformat my_metrics  %icap::tt %

However I would like to avoid the overhead of parsing the log file, by
using statsd or something similar.

Thanks,
Moti



Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-21 Thread David Mills
Hi Amos,

Upgrading to Squid 4.1 resolved the issue. I had to run

> /usr/lib64/squid/security_file_certgen -c -s /var/spool/squid/ssl_db -M 4MB
>

to get squid to start. But after that all worked well. We'll do a bit more
testing before we roll out to our production servers.

Thanks very much for your help.

Regards,

David Mills

Senior DevOps Engineer

 E: david.mi...@acusensus.com

 M: +61 411 513 404

 W: acusensus.com





On Sun, 18 Jul 2021 at 16:45, Amos Jeffries  wrote:

> On 16/07/21 4:38 pm, David Mills wrote:
> > Hi Amos,
> >
> > sorry for the big delay here - I've had lots of other things to attend
> > to. It turned on the logging you suggested. For a failed "apt update"
> > attempt on the client I get the following attached access.log and
> cache.log.
> >
> > Are any of the lines
> >
> > 2021/07/16 04:28:01.423 kid1| 83,5| bio.cc(396) adjustSSL: Extension
> > 13 does not supported!
> >
> > ...
> >
> > 2021/07/16 04:28:32.465 kid1| 83,2| client_side.cc(3749)
> > Squid_SSL_accept: Error negotiating SSL connection on FD 11: Aborted
> > by client: 5
> > ...
> >
> > 2021/07/16 04:28:02.452 kid1| Error negotiating SSL on FD 17:
> > error:140920F8:SSL routines:ssl3_get_server_hello:unknown cipher
> > returned (1/-1/0)
> >
> > ...
> >
> > 2021/07/16 04:28:01.413 kid1| 83,2| client_side.cc(4293)
> > clientPeekAndSpliceSSL: SSL_accept failed.
> >
> >
> > important?
> >
>
> Very. It means the libssl that Squid is built with and using is not able
> to understand the TLS the server is sending.
>
> Squid-4 should be more tolerant of this particular issue, or at least
> able to follow the on_unsupported_protocol directive when it is
> encountered.
>
> Older Squid depend more directly on the library TLS parsing - which
> cannot handle unknown values well.
>
> Amos
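
For reference, a sketch of the directive Amos mentions, available from
Squid 4 (this exact rule is an illustration, not part of his reply):

# tunnel traffic whose TLS bytes Squid cannot parse, instead of erroring out
on_unsupported_protocol tunnel all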


Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-07 Thread David Mills
Hi Amos,

You said

> The traffic from Squid to the AArnet server is apparently using IPv6. Is
> that routing setup properly too?
>

The output of "ip address" shows both IPv4 and IPv6. What led you to make
the above conclusion?

Regards,

David Mills

Senior DevOps Engineer

 E: david.mi...@acusensus.com

 M: +61 411 513 404

 W: acusensus.com





On Thu, 8 Jul 2021 at 12:19, Amos Jeffries  wrote:

>
>
> On 8/07/21 11:44 am, David Mills wrote:
> > Hi Eliezer,
> >
> > We have:
> >
> > /etc/apt/apt.conf:
> >
> > Acquire::http::proxy "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128/";
> > Acquire::https::proxy "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128/";
> >
> >
> > /etc/apt/sources.list (comment lines removed for brevity)
> >
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates universe
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-backports main restricted universe multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security universe
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security multiverse
> >
> >
> > squid.conf
> >
> ...
> > #
> > # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> > #
> >
> > # Redirect HTTP to HTTPS
> > acl port_80 port 80
> > acl gstatic dstdomain www.gstatic.com
> > http_access deny port_80 gstatic
> > deny_info 301:https://%H%R gstatic
> >
> > acl avpc dstdomain crop-assessment.acusensus-vpc
> > http_access deny port_80 avpc
> > deny_info 302: avpc
> >
> >
> > # Deny HTTP
> > http_access deny port_80
> >
> > # Whitelist of allowed sites
> > acl allowed_http_sites dstdomain "/etc/squid/squid.allowed.sites.txt"
> > http_access allow allowed_http_sites vpc_cidr
> >
>
> Is the "mirror.aarnet.edu.au" or a wildcard matching it listed in file
> squid.allowed.sites.txt ?
>
> (I assume so, but checking in case it is that simple).
>
>
> > # And finally deny all other access to this proxy
> > http_access deny all
> >
> > # Squid normally listens to port 3128
> > http_port 3128 ssl-bump cert=/etc/squid/cert.pem
> > acl allowed_https_sites ssl::server_name
> > "/etc/squid/squid.allowed.sites.txt"
> > acl step1 at_step SslBump1
> > acl step2 at_step SslBump2
> > acl step3 at_step SslBump3
> > ssl_bump peek step1 all
> > ssl_bump peek step2 allowed_https_sites
> > ssl_bump splice step3 allowed_https_sites
> > ssl_bump terminate step2 all
> >
> > # Uncomment and adjust the following to add a disk cache directory.
> > #cache_dir ufs /var/spool/squid 100 16 256
> >
> > # Leave coredumps in the first cache dir
> > coredump_dir /var/spool/squid
> > #
> > # Add any of your own refresh_pattern entries above these.
> > #
> > refresh_pattern ^ftp: 1440 20% 10080
> > refresh_pattern ^gopher: 1440 0% 1440
> > refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> > refresh_pattern . 0 20% 4320
> >
> >
> >
> > Squid 3.5 is running on an EC2 instance running Amazon Linux 2. I'll
> > answer the questions you asked Ben for extra info.
> > ip address:
> >
> > 1: lo:  mtu 65536 qdisc

Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-07 Thread David Mills
Hi Amos,

Thanks for the info.

Yes, "mirror.aarnet.edu.au" is in the whitelist. IPv6 could be an issue as
I believe AWS ELBs may not support.

We'll try the logging you suggest and perhaps an upgrade to 4.0 if we have
no joy with 3.5.

Regards,

David Mills

Senior DevOps Engineer

 E: david.mi...@acusensus.com

 M: +61 411 513 404

 W: acusensus.com





On Thu, 8 Jul 2021 at 12:19, Amos Jeffries  wrote:

>
>
> On 8/07/21 11:44 am, David Mills wrote:
> > Hi Eliezer,
> >
> > We have:
> >
> > /etc/apt/apt.conf:
> >
> > Acquire::http::proxy "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128/";
> > Acquire::https::proxy "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128/";
> >
> >
> > /etc/apt/sources.list (comment lines removed for brevity)
> >
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates universe
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-updates multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu/ focal-backports main restricted universe multiverse
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security main restricted
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security universe
> > deb https://mirror.aarnet.edu.au/ubuntu focal-security multiverse
> >
> >
> > squid.conf
> >
> ...
> > #
> > # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> > #
> >
> > # Redirect HTTP to HTTPS
> > acl port_80 port 80
> > acl gstatic dstdomain www.gstatic.com
> > http_access deny port_80 gstatic
> > deny_info 301:https://%H%R gstatic
> >
> > acl avpc dstdomain crop-assessment.acusensus-vpc
> > http_access deny port_80 avpc
> > deny_info 302: avpc
> >
> >
> > # Deny HTTP
> > http_access deny port_80
> >
> > # Whitelist of allowed sites
> > acl allowed_http_sites dstdomain "/etc/squid/squid.allowed.sites.txt"
> > http_access allow allowed_http_sites vpc_cidr
> >
>
> Is the "mirror.aarnet.edu.au" or a wildcard matching it listed in file
> squid.allowed.sites.txt ?
>
> (I assume so, but checking in case it is that simple).
>
>
> > # And finally deny all other access to this proxy
> > http_access deny all
> >
> > # Squid normally listens to port 3128
> > http_port 3128 ssl-bump cert=/etc/squid/cert.pem
> > acl allowed_https_sites ssl::server_name
> > "/etc/squid/squid.allowed.sites.txt"
> > acl step1 at_step SslBump1
> > acl step2 at_step SslBump2
> > acl step3 at_step SslBump3
> > ssl_bump peek step1 all
> > ssl_bump peek step2 allowed_https_sites
> > ssl_bump splice step3 allowed_https_sites
> > ssl_bump terminate step2 all
> >
> > # Uncomment and adjust the following to add a disk cache directory.
> > #cache_dir ufs /var/spool/squid 100 16 256
> >
> > # Leave coredumps in the first cache dir
> > coredump_dir /var/spool/squid
> > #
> > # Add any of your own refresh_pattern entries above these.
> > #
> > refresh_pattern ^ftp: 1440 20% 10080
> > refresh_pattern ^gopher: 1440 0% 1440
> > refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> > refresh_pattern . 0 20% 4320
> >
> >
> >
> > Squid 3.5 is running on an EC2 instance running Amazon Linux 2. I'll
> > answer the questions you asked Ben for extra info.
> > ip address:
> >
> > 1: lo:  mtu 65536

Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-07 Thread David Mills
ft forever preferred_lft forever
>

ip rule

> 0: from all lookup local
> 32766: from all lookup main
> 32767: from all lookup default
>

ip route show

> default via 10.0.12.1 dev eth0
> 10.0.12.0/24 dev eth0 proto kernel scope link src 10.0.12.111
> 169.254.169.254 dev eth0
>

ip route show table 100

>
>
iptables-save

>
>
squid -v

> Squid Cache: Version 3.5.20
> Service Name: squid
> configure options:  '--build=x86_64-koji-linux-gnu'
> '--host=x86_64-koji-linux-gnu' '--program-prefix=' '--prefix=/usr'
> '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
> '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
> '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
> '--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
> '--infodir=/usr/share/info' '--disable-strict-error-checking'
> '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
> '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
> '--with-logdir=$(localstatedir)/log/squid'
> '--with-pidfile=$(localstatedir)/run/squid.pid'
> '--disable-dependency-tracking' '--enable-eui'
> '--enable-follow-x-forwarded-for' '--enable-auth'
> '--enable-auth-basic=DB,LDAP,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,SMB_LM,getpwnam'
> '--enable-auth-ntlm=smb_lm,fake'
> '--enable-auth-digest=file,LDAP,eDirectory'
> '--enable-auth-negotiate=kerberos'
> '--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group'
> '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
> '--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups'
> '--enable-linux-netfilter' '--enable-removal-policies=heap,lru'
> '--enable-snmp' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,rock,ufs'
> '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio'
> '--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads'
> '--disable-arch-native' 'build_alias=x86_64-koji-linux-gnu'
> 'host_alias=x86_64-koji-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches-m64 -mtune=generic
> -fpie' 'LDFLAGS=-Wl,-z,relro  -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2
> -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
> -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
>  -m64 -mtune=generic -fpie'
> 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
>

uname -a

> Linux ip-10-0-12-111.ap-southeast-2.compute.internal
> 4.14.231-173.361.amzn2.x86_64 #1 SMP Mon Apr 26 20:57:08 UTC 2021 x86_64
> x86_64 x86_64 GNU/Linux
>

Regards,

David Mills

Senior DevOps Engineer

 E: david.mi...@acusensus.com

 M: +61 411 513 404

 W: acusensus.com





On Wed, 7 Jul 2021 at 20:53, Eliezer Croitoru  wrote:

> Hey David,
>
> Just wondering if you have seen the apt related docs at:
>
> https://help.ubuntu.com/community/AptGet/Howto/#Setting_up_apt-get_to_use_a_http-proxy
>
> Eliezer
>
> From: squid-users  On Behalf
> Of David Mills
> Sent: Wednesday, July 7, 2021 2:26 AM
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and
> Squid proxy
>
> Hi,
>
> We've got a collection of Ubuntu 18.04 boxes out in the field. They
> connect to an AWS OpenVPN VPN and use a Squid 3.5 AWS hosted Proxy. They
> work fine.
>
> We have tried upgrading one to 20.04. Same setup. From the command line
> curl or wget can happily download an Ubuntu package from the Ubuntu Mirror
> site we use. But "apt update" gets lots of "IGN:" timeouts and errors.
>
> The package we test curl with is
> https://mirror.aarnet.edu.au/ubuntu/pool/main/c/curl/curl_7.68.0-1ubuntu2.5_amd64.deb
>
> The Squid log shows a line that doesn't occur for the successful 18.04 "apt
> updates":
> 1625190959.233 81 10.0.11.191 TAG_NONE/200 0 CONNECT
> mirror.aarnet.edu.au:443 - HIER_DIRECT/2001:388:30bc:cafe::beef -
>
> The full output of an attempt to update is:
> Ign:1 https://mirror.aarnet.edu.au/ubuntu focal InRelease
>
> Ign:2 https://mirror.aarnet.edu.au/ubuntu focal-updates InRelease
>
> Ign:3 https://mirror.aarnet.edu.au/ubuntu focal-backports InRelease
>
> Ign:4 https://mirror.aarnet.edu.au/ubuntu focal-security InRelease
>
> Err:5 https://mirror.aarnet.edu.au/ubuntu focal Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.11.82 3128]
> Err:6 https://mirror.aarnet.edu.au/ubuntu focal-updates Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.11.82 3128]
> Err:7 https://mirror.aarne

[squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-06 Thread David Mills
Hi,

We've got a collection of Ubuntu 18.04 boxes out in the field. They connect
to an AWS OpenVPN VPN and use a Squid 3.5 AWS hosted Proxy. They work fine.

We have tried upgrading one to 20.04. Same setup. From the command line
curl or wget can happily download an Ubuntu package from the Ubuntu Mirror
site we use. But "apt update" gets lots of "IGN:" timeouts and errors.

The package we test curl with is
https://mirror.aarnet.edu.au/ubuntu/pool/main/c/curl/curl_7.68.0-1ubuntu2.5_amd64.deb

The Squid log shows a line that doesn't occur for the successful 18.04 "apt
updates":
1625190959.233 81 10.0.11.191 TAG_NONE/200 0 CONNECT
mirror.aarnet.edu.au:443 - HIER_DIRECT/2001:388:30bc:cafe::beef -
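
Notably, the HIER_DIRECT destination above is an IPv6 address. A hedged
thing to try on Squid 3.5, if the VPN/ELB path has no IPv6 connectivity, is
preferring IPv4 answers:

# resolve and try IPv4 addresses before IPv6 ones
dns_v4_first on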

The full output of an attempt to update is:

> Ign:1 https://mirror.aarnet.edu.au/ubuntu focal InRelease
>
> Ign:2 https://mirror.aarnet.edu.au/ubuntu focal-updates InRelease
>
> Ign:3 https://mirror.aarnet.edu.au/ubuntu focal-backports InRelease
>
> Ign:4 https://mirror.aarnet.edu.au/ubuntu focal-security InRelease
>
> Err:5 https://mirror.aarnet.edu.au/ubuntu focal Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.11.82 3128]
> Err:6 https://mirror.aarnet.edu.au/ubuntu focal-updates Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.11.82 3128]
> Err:7 https://mirror.aarnet.edu.au/ubuntu focal-backports Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.11.82 3128]
> Err:8 https://mirror.aarnet.edu.au/ubuntu focal-security Release
>
>   Could not wait for server fd - select (11: Resource temporarily
> unavailable) [IP: 10.0.1.26 3128]
> Reading package lists... Done
>
> N: Ignoring file 'microsoft-prod.list-keep' in directory
> '/etc/apt/sources.list.d/' as it has an invalid filename extension
> E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal Release'
> does not have a Release file.
> N: Updating from such a repository can't be done securely, and is
> therefore disabled by default.
> N: See apt-secure(8) manpage for repository creation and user
> configuration details.
> E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-updates
> Release' does not have a Release file.
> N: Updating from such a repository can't be done securely, and is
> therefore disabled by default.
> N: See apt-secure(8) manpage for repository creation and user
> configuration details.
> E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-backports
> Release' does not have a Release file.
> N: Updating from such a repository can't be done securely, and is
> therefore disabled by default.
> N: See apt-secure(8) manpage for repository creation and user
> configuration details.
> E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-security
> Release' does not have a Release file.
> N: Updating from such a repository can't be done securely, and is
> therefore disabled by default.
> N: See apt-secure(8) manpage for repository creation and user
> configuration details.
>

While running, the line

> 0% [Connecting to HTTP proxy (
> http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128)]
>
appears often and hangs for a while.

I've tried upping the Squid logging and allowing all, but that didn't offer
any additional information about the issue.

Any advice would be greatly appreciated.

Regards,

David Mills

Senior DevOps Engineer

 E: david.mi...@acusensus.com

 M: +61 411 513 404

 W: acusensus.com



Re: [squid-users] Squid 4.14 : no_suid: setuid(0): (1) Operation not permitted

2021-02-28 Thread David Touzeau

Thanks Alex

This bug is a real "fog", given that I'm using Debian 10.x.

https://superuser.com/questions/731104/squid-proxy-cache-server-no-suid-setuid0-1-operation-not-permitted
https://forum.netgate.com/topic/67220/squid3-dev-transparente-con-clamav-64-bit-1a-prueba/2

Your answers from several years back:
http://www.squid-cache.org/mail-archive/squid-users/201301/0399.html
https://www.mail-archive.com/search?l=squid-us...@squid-cache.org=subject:"\[squid\-users\]+Warning+in+cache.log"=newest=1

My last discussion of this, on squid 4.13:
https://www.spinics.net/lists/squid/msg93659.html


Many users say there is no impact on helpers or performance, as it is
just a warning...


Can you confirm that?


On 28/02/2021 at 01:58, Alex Rousskov wrote:

On 2/27/21 7:22 PM, David Touzeau wrote:


Hi, regularly I get this error:

2021/02/28 01:18:43 kid1| helperOpenServers: Starting 5/32
'security_file_certgen' processes
2021/02/28 01:18:43 kid1| WARNING: no_suid: setuid(0): (1) Operation not
permitted

I have set the setuid permission:

chown root:squid security_file_certgen
chmod 04755 security_file_certgen

or
chown squid:squid security_file_certgen
chmod 0755 security_file_certgen

in both cases, squid still claims "no_suid: setuid(0): (1) Operation not
permitted"

Sounds like bug 3785: https://bugs.squid-cache.org/show_bug.cgi?id=3785
That bug was filed many years ago and for a different helper/OS, but I
suspect it applies to your situation as well.



How can I fix it?

Unfortunately, I do not know the answer to that question. If it is
indeed bug 3785, then its current status is reflected by comment #5 at
https://bugs.squid-cache.org/show_bug.cgi?id=3785#c5


HTH,

Alex.




[squid-users] Squid 4.14 : no_suid: setuid(0): (1) Operation not permitted

2021-02-27 Thread David Touzeau


Hi, regularly I get this error:

2021/02/28 01:18:43 kid1| helperOpenServers: Starting 5/32 
'security_file_certgen' processes
2021/02/28 01:18:43 kid1| WARNING: no_suid: setuid(0): (1) Operation not 
permitted


I have set the setuid permission:

chown root:squid security_file_certgen
chmod 04755 security_file_certgen

or
chown squid:squid security_file_certgen
chmod 0755 security_file_certgen

in both cases, squid still claims "no_suid: setuid(0): (1) Operation not
permitted"


How can I fix it?


Re: [squid-users] WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-01-14 Thread David Touzeau
Yes, it seems to be the same bug, but the ticket is not quite relevant
(FreeBSD), as I'm on Debian and on a modern kernel.

The most incomprehensible part is that the issue only occurs sometimes,
even though setuid is a sticky-bit permission.


squid -v output:

Squid Cache: Version 4.13
Service Name: squid

This binary uses OpenSSL 1.1.1d  10 Sep 2019. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html


configure options:  '--prefix=/usr' '--build=x86_64-linux-gnu' 
'--includedir=/include' '--mandir=/share/man' '--infodir=/share/info' 
'--localstatedir=/var' '--libexecdir=/lib/squid3' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' 
'--enable-gnuregex' '--enable-removal-policy=heap' 
'--enable-follow-x-forwarded-for' '--enable-removal-policies=lru,heap' 
'--enable-arp-acl' '--enable-truncate' '--with-large-files' 
'--with-pthreads' '--enable-esi' '--enable-storeio=aufs,diskd,ufs,rock' 
'--enable-x-accelerator-vary' '--with-dl' '--enable-linux-netfilter' 
'--with-netfilter-conntrack' '--enable-wccpv2' '--enable-eui' 
'--enable-auth' '--enable-auth-basic' '--enable-snmp' '--enable-icmp' 
'--enable-auth-digest' '--enable-log-daemon-helpers' 
'--enable-url-rewrite-helpers' '--enable-auth-ntlm' 
'--with-default-user=squid' '--enable-icap-client' 
'--disable-cache-digests' '--enable-poll' '--enable-epoll' 
'--enable-async-io=128' '--enable-zph-qos' '--enable-delay-pools' 
'--enable-http-violations' '--enable-url-maps' '--enable-ecap' 
'--enable-ssl' '--with-openssl' '--enable-ssl-crtd' 
'--enable-xmalloc-statistics' '--enable-ident-lookups' 
'--with-filedescriptors=65536' '--with-aufs-threads=128' 
'--disable-arch-native' '--with-logdir=/var/log/squid' 
'--with-pidfile=/var/run/squid/squid.pid' 
'--with-swapdir=/var/cache/squid' 'build_alias=x86_64-linux-gnu'



On 14/01/2021 at 05:43, Amos Jeffries wrote:

On 14/01/21 3:17 am, David Touzeau wrote:


Hi

This error is generated every 15 minutes when using any authenticator 
helper (NTLM, Kerberos, ...).


Is there a way to investigate this issue?

kidxx| WARNING: no_suid: setuid(0): (1) Operation not permitted



This looks like <https://bugs.squid-cache.org/show_bug.cgi?id=3785>


Amos


[squid-users] WARNING: no_suid: setuid(0): (1) Operation not permitted

2021-01-13 Thread David Touzeau


Hi

This error is generated every 15 minutes when using any authenticator 
helper (NTLM, Kerberos, ...).


Is there a way to investigate this issue?

kidxx| WARNING: no_suid: setuid(0): (1) Operation not permitted

Sometimes, after rebooting the system, the issue goes away for an 
undetermined period.
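
One way to investigate (a sketch, assuming strace is installed and that 
the helpers are forked by the Squid master process) is to attach to that 
process and watch which privilege-dropping calls fail:

    # Trace setuid/setgid/capset in Squid and in every helper it forks;
    # lines ending in EPERM show exactly which call triggers the warning.
    strace -f -e trace=setuid,setgid,capset -p "$(pgrep -o squid)" 2>&1 | grep EPERM

Comparing the failing call with the helper binary's owner and mode 
(ls -l security_file_certgen) should show whether the setuid bit is 
actually taking effect.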


Using Squid 4.13 on Debian 10, Intel 64-bit.

regards




Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau
Yes, these are the IP addresses packed into integers (an hton/ip2long 
form); remove the .addr extension and convert back with long2ip.
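
For example, a minimal shell sketch of the long2ip direction, decoding 
the sample value from the file name above (plain shell arithmetic, no 
extra tools assumed):

    n=1490677018
    # Unpack the 32-bit integer into its four dotted-quad octets (long2ip)
    printf '%d.%d.%d.%d\n' $((n>>24&255)) $((n>>16&255)) $((n>>8&255)) $((n&255))
    # prints 88.217.237.26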


On 04/01/2021 at 14:56, ngtech1...@gmail.com wrote:


Thanks David,

I don’t understand something:

1490677018.addr

Are these integer representations of IP addresses?

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

*From:*David Touzeau 
*Sent:* Monday, January 4, 2021 3:25 PM
*To:* ngtech1...@gmail.com; squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] PCI Certification compliance lists


Hi Eliezer:

http://articatech.net/tmpf/categories/banking.gz
http://articatech.net/tmpf/categories/cleaning.gz



On 04/01/2021 at 10:27, ngtech1...@gmail.com wrote:


Hey David.

Indeed, it should be done with the local websites; however, these
sites are pretty static.

Would it be OK to publish these lists online as a file/files?

The main issue is that ssl-bump requires a couple of “fast” ACLs.

I believe it should be a “fast” ACL, but we also need the option to
use an external helper, as for many other functions.

If I can choose between “fast” as the default and the ability to run a
“slow” external ACL helper, I can choose what is right for my environment.

Currently I cannot program a helper that decides programmatically
whether a CONNECT connection should be spliced or bumped.

It forces me to reload this list manually, which might take a couple
of seconds.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

*From:* squid-users <squid-users-boun...@lists.squid-cache.org> *On Behalf Of* David Touzeau
*Sent:* Monday, January 4, 2021 10:23 AM
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] PCI Certification compliance lists

Hi Eliezer,

I can help you by giving you a list, but just using "main domains":

 1. Banking/transactions: 27,646 websites.
 2. AV software and update sites (firmware, routers...): 133,295 websites


I can give you the lists, but they are incomplete, and loading such
huge databases would decrease Squid's performance.
Perhaps it is better for the Squid administrator to build their own
list according to their country or company activity.
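
For reference, a minimal squid.conf sketch of how such a list could be
plugged in as an ssl_bump exception (the file path and ACL name here are
hypothetical; ssl::server_name matches the SNI learned while peeking):

    # One domain per line in the file, e.g. ".mybank.example"
    acl nobump ssl::server_name "/etc/squid/no_bump_domains.txt"
    acl step1 at_step SslBump1
    ssl_bump peek step1      # read the TLS client hello to learn the SNI
    ssl_bump splice nobump   # pass listed sites through untouched
    ssl_bump bump all        # decrypt everything else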




On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domain lists that can be used for Squid to be PCI
Certified.

  


I have read this article:

https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's
ssl-bump exception rules should contain.

So technically we need:

- Banks

- Health care

- Credit cards (Visa, Mastercard, others)

- Payment sites

- Antivirus (updates and portals)

- OS and software update signatures (ASC, MD5, SHAx, etc.)

  


* https://support.kaspersky.com/common/start/6105

* https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-eset-product-with-a-third-party-firewall

* https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fcp=TS100291&_afrLoop=641093247174514=0%25=false=false=0%25=100%25#!%40%40%3FshowFooter%3Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9

  

  


If someone has documents that specify which domains not to inspect,
that would also help a lot.

  


Thanks,

Eliezer

  




Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

  

  

  



Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau


Hi Eliezer:

http://articatech.net/tmpf/categories/banking.gz
http://articatech.net/tmpf/categories/cleaning.gz



On 04/01/2021 at 10:27, ngtech1...@gmail.com wrote:


Hey David.

Indeed, it should be done with the local websites; however, these sites 
are pretty static.


Would it be OK to publish these lists online as a file/files?

The main issue is that ssl-bump requires a couple of “fast” ACLs.

I believe it should be a “fast” ACL, but we also need the option to use 
an external helper, as for many other functions.


If I can choose between “fast” as the default and the ability to run a 
“slow” external ACL helper, I can choose what is right for my environment.

Currently I cannot program a helper that decides programmatically 
whether a CONNECT connection should be spliced or bumped.


It forces me to reload this list manually, which might take a couple of seconds.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

*From:* squid-users <squid-users-boun...@lists.squid-cache.org> *On Behalf Of* David Touzeau

*Sent:* Monday, January 4, 2021 10:23 AM
*To:* squid-users@lists.squid-cache.org
*Subject:* Re: [squid-users] PCI Certification compliance lists

Hi Eliezer,

I can help you by giving you a list, but just using "main domains":

  * Banking/transactions: 27,646 websites.
  * AV software and update sites (firmware, routers...): 133,295 websites


I can give you the lists, but they are incomplete, and loading such 
huge databases would decrease Squid's performance.
Perhaps it is better for the Squid administrator to build their own list 
according to their country or company activity.




On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:


I am looking for domain lists that can be used for Squid to be PCI
Certified.

I have read this article:

https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's
ssl-bump exception rules should contain.

So technically we need:

- Banks

- Health care

- Credit cards (Visa, Mastercard, others)

- Payment sites

- Antivirus (updates and portals)

- OS and software update signatures (ASC, MD5, SHAx, etc.)

* https://support.kaspersky.com/common/start/6105

* https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-eset-product-with-a-third-party-firewall

* https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fcp=TS100291&_afrLoop=641093247174514=0%25=false=false=0%25=100%25#!%40%40%3FshowFooter%3Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9

If someone has documents that specify which domains not to inspect,
that would also help a lot.

Thanks,

Eliezer



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon






Re: [squid-users] PCI Certification compliance lists

2021-01-04 Thread David Touzeau

Hi Eliezer,

I can help you by giving you a list, but just using "main domains":

 * Banking/transactions: 27,646 websites.
 * AV software and update sites (firmware, routers...): 133,295 websites


I can give you the lists, but they are incomplete, and loading such 
huge databases would decrease Squid's performance.
Perhaps it is better for the Squid administrator to build their own list 
according to their country or company activity.





On 03/01/2021 at 15:12, ngtech1...@gmail.com wrote:

I am looking for domain lists that can be used for Squid to be PCI
Certified.

I have read this article:
https://www.imperva.com/learn/data-security/pci-dss-certification/

And a couple of others, to try to understand what a Squid proxy's
ssl-bump exception rules should contain.
So technically we need:
- Banks
- Health care
- Credit cards (Visa, Mastercard, others)
- Payment sites
- Antivirus (updates and portals)
- OS and software update signatures (ASC, MD5, SHAx, etc.)

* https://support.kaspersky.com/common/start/6105
* https://support.eset.com/en/kb332-ports-and-addresses-required-to-use-your-eset-product-with-a-third-party-firewall
* https://service.mcafee.com/webcenter/portal/oracle/webcenter/page/scopedMD/s55728c97_466d_4ddb_952d_05484ea932c6/Page29.jspx?wc.contextURL=%2Fspaces%2Fcp=TS100291&_afrLoop=641093247174514=0%25=false=false=0%25=100%25#!%40%40%3FshowFooter%3Dfalse%26_afrLoop%3D641093247174514%26articleId%3DTS100291%26leftWidth%3D0%2525%26showHeader%3Dfalse%26wc.contextURL%3D%252Fspaces%252Fcp%26rightWidth%3D0%2525%26centerWidth%3D100%2525%26_adf.ctrl-state%3D3wmxkd4vc_9


If someone has documents that specify which domains not to inspect, that
would also help a lot.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon





[squid-users] squid 4/5 feature request send login informations to peers

2020-11-19 Thread David Touzeau


Thanks Amos

You mean using "login=PASS" in the peer settings, and on parent proxies 
B and C using the "basic_fake_auth" helper to "simulate" the requested auth?




On 17/11/2020 at 11:43, Amos Jeffries wrote:

On 17/11/20 9:27 pm, David Touzeau wrote:


Hi,

We have a first Squid using Kerberos + Active Directory authentication.
This first Squid is used to limit access using ACLs and Active 
Directory groups.


This first Squid uses parents as peers in order to access the Internet 
in this way:


            | --> SQUID B --> Internet 1
SQUID A --> |
            | --> SQUID C --> Internet 2

1) We want to use ACLs on Squid B and C too (for delegation purposes)
2) We need this for legal log compliance.

In this case, the username discovered on Squid A must be transmitted 
to Squid B and C, and Squid B/C must accept that information in order 
to use it as login information when evaluating ACLs.


Is it possible?


You can send the username. But the security token is tied to the 
client<->SquidA TCP connection - it cannot be validated by other 
servers than SquidA.


This should not matter though. Since Squid A is only permitting 
authenticated traffic, you can *authorize* at Squid B and C based only 
on the source being one of your Squids with a valid username.
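
As a sketch of that setup (the host names, the source address and the 
Debian helper path are assumptions; login=*:password, rather than 
login=PASS, substitutes the username that Squid A authenticated):

    # Squid A: forward the authenticated username to each parent
    cache_peer squidb.example.lan parent 3128 0 no-query login=*:dummy
    cache_peer squidc.example.lan parent 3128 0 no-query login=*:dummy

    # Squid B and C: accept the forwarded credentials without validating
    # them, so the username is usable in ACLs and appears in access.log
    auth_param basic program /usr/lib/squid/basic_fake_auth
    acl from_squida src 192.0.2.10
    acl authed proxy_auth REQUIRED
    http_access allow from_squida authed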





If not: we have seen that the PROXY protocol can transmit the 
source IP/login information to peers that are compliant with that 
protocol, but the cache_peer mechanism in Squid does not allow using 
the PROXY protocol.
Is it possible to add "PROXY protocol" support to cache_peer?



It is possible to implement (for Squid-6 at the earliest) PROXYv2 for 
cache_peer. But the credentials security token remains tied to the 
SquidA service.



Amos


[squid-users] squid 4/5 feature request send login informations to peers

2020-11-17 Thread David Touzeau


Hi,

We have a first Squid using Kerberos + Active Directory authentication.
This first Squid is used to limit access using ACLs and Active Directory 
groups.


This first Squid uses parents as peers in order to access the Internet in 
this way:


            | --> SQUID B --> Internet 1
SQUID A --> |
            | --> SQUID C --> Internet 2

1) We want to use ACLs on Squid B and C too (for delegation purposes)
2) We need this for legal log compliance.

In this case, the username discovered on Squid A must be transmitted to 
Squid B and C, and Squid B/C must accept that information in order to use 
it as login information when evaluating ACLs.


Is it possible?

If not: we have seen that the PROXY protocol can transmit the 
source IP/login information to peers that are compliant with that 
protocol, but the cache_peer mechanism in Squid does not allow using 
the PROXY protocol.
Is it possible to add "PROXY protocol" support to cache_peer?








[squid-users] Squid4/5: Feature request identify access rules.

2020-11-07 Thread David Touzeau

When having several *_access directives (http_access, reply_access...),
it is difficult in a busy environment to hunt down an issue or a wrong rule.

Debug mode is impractical, because a proxy in production writes far too
many logs.

But if we could identify each rule and add a pointer to it in the log, it
would be possible to spot a wrong rule, or to see that a request passed
through correctly.

Currently we have to do:

acl acl1 src 1.2.3.4
http_access deny acl1



We suggest using the same token used in http_port:

acl acl1 src 1.2.3.4
http_access deny acl1 rulename=Rule.access1

And add a template token, e.g. %RULENAME, and a logformat token %rname, to
help identify the rule that matched.

Added to the bug tracker:

https://bugs.squid-cache.org/show_bug.cgi?id=5087




Re: [squid-users] Simple REGEX not working...

2020-07-22 Thread David A. Gershman
Thanks, Amos.  Ironically, I just found that out through testing, and 
then a search pointed me here:


    https://wiki.squid-cache.org/Features/HTTPS

Sadly, I should have thought of that.  Been a long day I guess.

Thanks again!

--David

On 7/22/20 8:58 PM, Amos Jeffries wrote:

On 23/07/20 3:27 pm, David A. Gershman wrote:

Hello again,

After further testing, it looks like the only thing being regexed
against is the domain name.  I shrunk the RE down to just:

     acl user_allowed url_regex http  # nothing more, just 'http'

and it /*still*/ failed!!!  It's as if the "whole URL" (claimed by the
docs) is /not/ being compared against.  I'm just posting this here as an
FYI... no solution has been found. :(


Squid uses basic regex without extensions - the basic operators that
work in both GNU regex and POSIX regex can be expected to work.

Your mistake is thinking that the URL always looks like "https://example.com/".

For HTTPS traffic going through an HTTP proxy the URL is in
authority-form which looks like "example.com:443".
<https://tools.ietf.org/html/rfc7230#section-5.3.3>
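
So, without SSL-Bump, a rule has to match that authority-form instead; a
sketch of both options (example.com standing in for the real site, and
dstdomain usually being the more robust choice):

    # Option 1: match the CONNECT authority-form directly
    acl user_allowed url_regex ^example\.com:443$
    # Option 2 (usually better): match by destination domain, with subdomains
    # acl user_allowed dstdomain .example.com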



On 7/22/20 7:22 PM, David A. Gershman wrote:

Hello,

I have the following in my config file:

     acl user_allowed url_regex ^https://example\.com/

but surfing to that site fails (authentication works fine).  My
ultimate goal is to have an RE comparable to the PCRE of:

     ^https?:\/\/.*?example\.com\/

While the PCRE works just fine in other tools (my own scripts, online,
etc.), I was unable to get it to work within Squid3.  As I stripped
away pieces of the RE in the config file, the only RE which seemed to
work was:

     example\.com

...not even having the ending '/'.  However, this obviously does not
meet my needs.


To get to the scheme and path information for HTTPS traffic you need
SSL-Bump functionality built into the proxy and configured to decrypt
the TLS traffic layer.

OpenSSL license currently (soon to change, yay!) does not permit Debian
to distribute a Squid binary package with that feature enabled so you
will have to rebuild the squid package yourself with relevant additions
or install a package from an independent repository.




I'm on Debian 10 and am unable to determine which RE library Debian
compiled Squid3 against (I've got a Tweet out to them to see if they
can point me in the right direction).

Squid3 was removed from Debian long ago. You should be using the
"squid" package these days, which is Squid-4 on all current Debian releases.


HTH
Amos


Re: [squid-users] Simple REGEX not working...

2020-07-22 Thread David A. Gershman

Hello again,

After further testing, it looks like the only thing being regexed 
against is the domain name.  I shrunk the RE down to just:


    acl user_allowed url_regex http  # nothing more, just 'http'

and it /*still*/ failed!!!  It's as if the "whole URL" (claimed by the 
docs) is /not/ being compared against.  I'm just posting this here as an 
FYI... no solution has been found. :(


--David

On 7/22/20 7:22 PM, David A. Gershman wrote:

Hello,

I have the following in my config file:

    acl user_allowed url_regex ^https://example\.com/

but surfing to that site fails (authentication works fine).  My 
ultimate goal is to have an RE comparable to the PCRE of:


    ^https?:\/\/.*?example\.com\/

While the PCRE works just fine in other tools (my own scripts, online, 
etc.), I was unable to get it to work within Squid3.  As I stripped 
away pieces of the RE in the config file, the only RE which seemed to 
work was:


    example\.com

...not even having the ending '/'.  However, this obviously does not 
meet my needs.


I'm on Debian 10 and am unable to determine which RE library Debian 
compiled Squid3 against (I've got a Tweet out to them to see if they 
can point me in the right direction).


Ultimately, I would like to get Squid to use PCREs.

Ideas?

Thanks!

--David



[squid-users] Simple REGEX not working...

2020-07-22 Thread David A. Gershman

Hello,

I have the following in my config file:

    acl user_allowed url_regex ^https://example\.com/

but surfing to that site fails (authentication works fine).  My ultimate 
goal is to have an RE comparable to the PCRE of:


    ^https?:\/\/.*?example\.com\/

While the PCRE works just fine in other tools (my own scripts, online, 
etc.), I was unable to get it to work within Squid3.  As I stripped away 
pieces of the RE in the config file, the only RE which seemed to work was:


    example\.com

...not even having the ending '/'.  However, this obviously does not 
meet my needs.


I'm on Debian 10 and am unable to determine which RE library Debian 
compiled Squid3 against (I've got a Tweet out to them to see if they can 
point me in the right direction).


Ultimately, I would like to get Squid to use PCREs.

Ideas?

Thanks!

--David


[squid-users] Not working: http://www.squid-cache.org/cgi-bin/swish-query.cgi

2020-07-22 Thread David A. Gershman

Hello,

The mailing list site

    http://www.squid-cache.org/Support/mailing-lists.html

states a search engine is available at

    http://www.squid-cache.org/cgi-bin/swish-query.cgi

However, going there results in a 404 Not Found.  Is there another search 
engine?


--David


Re: [squid-users] squid 4.10: ssl-bump on https_port requires tproxy/intercept which is missing in secure proxy method

2020-05-20 Thread David Touzeau

Thanks for the detailed answer.

How does one become a sponsor of such a feature, and at what cost?
Do you think it could be planned for 5.x?
I think it should become a "future" "standard", in the same way as DNS over SSL.

On 19/05/2020 at 16:46, Alex Rousskov wrote:

On 18/05/20 10:15 am, David Touzeau wrote:

Hi, we want to use Squid as a * * * Secure Proxy * * * using https_port.
We have tested major browsers and it seems to work well.

To make it work, we had to deploy the proxy certificate on all browsers
to get the secure connection running.

I hope that deployment is not necessary -- an HTTPS proxy should be
using a certificate issued for its domain name and signed by a
well-known CA already trusted by browsers. An HTTPS proxy is not faking
anything. If browsers do require CA certificate import in this
environment, it is their limitation.
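
For the record, a minimal sketch of the Squid side of such an HTTPS proxy
(the port number and file paths are assumptions; the certificate should
match the proxy's public host name and be signed by a CA the browsers
already trust):

    # Clients connect to the proxy itself over TLS (an "HTTPS proxy")
    https_port 3129 cert=/etc/squid/proxy.example.net.pem key=/etc/squid/proxy.example.net.key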


On 5/19/20 9:24 AM, Matus UHLAR - fantomas wrote:

David, note that requiring browsers to connect to your proxy over an
encrypted (https) connection, and then decrypting the tunnels to the real
server, will lower the clients' security

A proper SslBump implementation for HTTPS proxy will not be "decrypting
tunnels to real server". The security of such an implementation will be
the same as of SslBump supported today (plus the additional protections
offered by securing the browser-proxy communication).

Cheers,

Alex.


Re: [squid-users] Squid 4.x acl server_cert_fingerprint for bump no matches

2020-05-19 Thread David Touzeau


Thanks Alex, I tried this one on Squid 4.10:


acl TestFinger server_cert_fingerprint 77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9

acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3
ssl_bump peek ssl_step2
ssl_bump splice ssl_step3 TestFinger
ssl_bump stare ssl_step2 all
ssl_bump bump all

But no luck; the website is still being decrypted.




On 13/05/2020 at 21:33, Alex Rousskov wrote:

On 5/12/20 7:42 AM, David Touzeau wrote:

ssl_bump peek ssl_step1
ssl_bump splice TestFinger
ssl_bump stare ssl_step2 all
ssl_bump bump all
It seems the TestFinger ACL does not match in any case.

You are trying to use step3 information (i.e., the server certificate)
during SslBump step2: The "splice TestFinger" line is tested during
step2 and mismatches because the server certificate is still unknown
during that step. That mismatch results in Squid staring during step2.
The "splice TestFinger" line is not tested during step3 because splicing
is not possible after staring. Thus, Squid reaches "bump all" and bumps.

For a detailed description of what happens (and what information is
available) during each SslBump step, please see
https://wiki.squid-cache.org/Features/SslPeekAndSplice
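
In other words, a decision based on server_cert_fingerprint has to be
postponed until step3, for example (a sketch only; note the trade-off
that bumping is generally no longer possible after peeking at step2):

    acl TestFinger server_cert_fingerprint 77:F6:8D:C1:0A:DF:94:8B:43:1F:8E:0E:91:5E:0C:32:42:8B:99:C9
    acl step1 at_step SslBump1
    acl step2 at_step SslBump2
    ssl_bump peek step1
    ssl_bump peek step2          # peek (not stare) so the server cert is known at step3
    ssl_bump splice TestFinger   # evaluated at step3, when the fingerprint can match
    ssl_bump terminate all       # after a step2 peek, bumping is not an option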

Also, if you are running v4.9 or earlier, please upgrade. We fixed one
server_cert_fingerprint bug, and that fix became a part of the v4.10
release (commit e0eca4c).


HTH,

Alex.



