[squid-users] Squid-Cache Zoom is coming

2022-06-11 Thread Eliezer Croitoru
Hey Everyone,

 

I have been working on a series of Zoom meetings about Squid-Cache, from zero
to hero.

It's sort of a meet-up with a basic agenda.

I would like to find the right time for these meetings and to put up a list
of topics that I will cover in them.

I would also like to set up some registration process so the meetings can be
efficient.

 

If you are willing to participate, please give a thumbs-up here on the list
or privately, and I will try to set a date for these meetings.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] The usage of extended SNMPD commands to monitor squid.

2022-05-24 Thread Eliezer Croitoru
Since, as far as I know, the Squid-Cache project doesn't maintain its SNMP
support anymore, I was thinking about using extended SNMPD, i.e. in
/etc/snmp/snmpd.conf:

 

extend squid_x_stats /bin/bash /usr/local/bin/squid_x_stats.sh

 

The backend itself would probably be a single command/script with symlinks to
itself under different names (the way busybox provides its binaries).

With a set of these commands it would be possible to monitor squid via the
Linux SNMPD, with a script as the backend.

To overcome a DoS from the SNMP side I can build a layer of caching in
files.
It would not be like the current squid SNMP tree, of course, but as long as
the data is there it can be used in any system that supports it.

I have used nagios/cacti/others to create graphs based on this concept.
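The extend backend described above can be sketched as one small shell script. The following is a hypothetical sketch, not a published tool: the script name, cache path, metric names, and the parsing of squidclient's mgr:info output are all assumptions, and it presumes squidclient is installed. It shows the two ideas from this message: busybox-style dispatch on the symlink name, and a file-cache layer that absorbs SNMP polling bursts.

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/squid_x_stats.sh -- one script, many symlinks.
# FETCH_CMD and the grep pattern are assumptions; adjust to your mgr:info output.
FETCH_CMD=${FETCH_CMD:-"squidclient -h 127.0.0.1 mgr:info"}
TTL=${TTL:-60}   # seconds a cached value stays valid (the anti-DoS layer)

fetch_metric() {
    metric=$1
    cache="/tmp/squid_${metric}.cache"
    # Serve from the file cache while it is fresh enough.
    if [ -f "$cache" ]; then
        age=$(( $(date +%s) - $(stat -c %Y "$cache") ))
        [ "$age" -lt "$TTL" ] && { cat "$cache"; return 0; }
    fi
    # Regenerate: pull the matching "name: value" line out of the manager report.
    $FETCH_CMD 2>/dev/null | grep -i "$metric" | awk -F: '{print $2}' | tr -d ' ' > "$cache"
    cat "$cache"
}

# Busybox-style dispatch: when invoked via a symlink such as
# squid_clients -> squid_x_stats.sh, the symlink's own name picks the metric.
case "${0##*/}" in
    squid_*) fetch_metric "${0##*/}" ;;
esac
```

With a symlink per metric, each snmpd.conf line would look like `extend squid_clients /usr/local/bin/squid_clients`, and the cached value would then be reachable under NET-SNMP-EXTEND-MIB.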

 

I am currently working on the PHP re-testing project, and it seems that PHP
7.4 does not blow up its memory or crash the way older versions did.

I still need a more stressed system to test the scripts.

I have created the following scripts so far:

*   Fake helper
*   Session helper based on Redis
*   Session helper based on FS in /var/spool/squid/session_helper_fs
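As an illustration of the first item in the list, a "fake helper" can be tiny. This is a hypothetical sketch (not the actual script from the list) of a squid helper that answers OK to every request on stdin, handling the optional numeric channel-ID prefix that squid prepends when helper concurrency is enabled:

```shell
#!/bin/sh
# Hypothetical fake helper: unconditionally approves every lookup.
# Useful as a stub when stress-testing the helper plumbing itself.

fake_answer() {
    # Split the request line; with concurrency enabled the first token is a
    # numeric channel ID that must be echoed back in the reply.
    set -- $1
    case "${1:-}" in
        ''|*[!0-9]*) echo "OK" ;;     # no channel ID in use
        *)           echo "$1 OK" ;;  # echo the channel ID back
    esac
}

# Main loop: squid feeds one request per line on stdin.
if [ "${1:-}" = "--serve" ]; then
    while read -r line; do fake_answer "$line"; done
fi
```

Run with `--serve` as an external_acl or auth helper command; without arguments it only defines the function, which keeps it easy to test in isolation.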

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 



[squid-users] linuxize.com and other sites captcha

2022-05-18 Thread Eliezer Croitoru
I have seen that many sites are against MITM, since they want to be able to
reach the client directly, without any ICAP proxy in the middle.

There are services that serve captcha pages when their pages are MITMed by
squid, for example:

https://linuxize.com

 

@Alex, can we please try to define what causes this and whether it is at all
possible to avoid it? (eventually.)

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 



Re: [squid-users] disable https inspection for licensing some apps

2022-05-18 Thread Eliezer Croitoru
 Hey Alex,

I have started working on an external_acl helper that will probe the server
certificate, like ufdbGuard does, but written in another language than C++,
i.e. a scripting language, Go, or Rust.
The idea is that there will be some cache or DB that stores information about
an IP+port pair together with its SNI.
A cache-like storage engine would help to "know" enough about the server to
ultimately decide whether there is a risk in splicing this specific
connection.
It's also possible that the first time a request passes through the proxy it
will be bumped, to probe the connection for more information when possible.

In general, commercial products use either a CDN service or a dedicated
service.
These are usually not a risk for the proxy users and can be spliced.
The main issue is when one service on a specific IP serves more than one
domain with different content.
The best example is Google's CDN network, which might serve different domains
on the same IP, certificate, and SNI (because of HTTP/2.0).

Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Alex Rousskov
Sent: Wednesday, May 18, 2022 21:39
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] disable https inspection for licensing some apps

On 5/18/22 12:28, robert k Wild wrote:

> acl DiscoverSNIHost at_step SslBump1
> acl NoSSLIntercept ssl::server_name "/usr/local/squid/etc/nointercept.txt"
> ssl_bump peek DiscoverSNIHost
> ssl_bump splice NoSSLIntercept
> ssl_bump bump all

OK, the above configuration makes the splice/bump decision based on 
plain text information provided by the TLS client.


> and in the nointercept.txt
> i have the url in there

ssl::server_name needs a host/domain name, not a regular URL. No URLs 
are exchanged in plain text between TLS client and the origin server.

Please note that, even after adjusting nointercept.txt to contain domain 
name(s), the above configuration may not always work in modern Squids: 
It will work when the client sends a matching domain name

* in the CONNECT request headers (and sends no TLS SNI at all)
* in the CONNECT request headers and in TLS SNI
* in TLS SNI (the CONNECT request headers should not matter).

It will also work when a CONNECT request is using an IP address that 
reverse-resolves to a matching domain name (which is not overwritten by 
a mismatching SNI).

In all other cases, Squid will bump traffic even if it is ultimately 
going to the server named in nointercept.txt.

There is no configuration that will address all possible cases in 
general. TLS makes that impossible (at least not without probing TLS 
origin servers which is something Squid does not do yet).
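To make the distinction concrete, here is a hypothetical nointercept.txt (the
domain names are invented) in the host/domain form that ssl::server_name
expects:

```
# nointercept.txt -- ssl::server_name matches host/domain names, one per line.
# A leading dot matches the domain and all of its subdomains.
.example-licensing.com
license.vendor.example

# NOT valid -- a URL is never visible to the ssl::server_name ACL:
# https://license.vendor.example/activate
```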


HTH,

Alex.


>, also i have it in the url white list so it can actually see the url
> 
> is there something else i need to add for this to work
> 
> or maybe some websites ie license website just dont like it going through
a proxy
> 
> 
> On Wed, 18 May 2022 at 16:57, robert k Wild <robertkw...@gmail.com> wrote:
> 
> hi all,
> 
> i have squid proxy configured as ssl bump and i white list some
> websites only
> 
> but for some websites i dont want to inspect https traffic as it
> breaks the cert when i want to license some apps via the url
> (whitelist url)
> 
> how can i disable https inspection for some websites please
> 
> many thanks,
> rob
> 
> -- 
> Regards,
> 
> Robert K Wild.
> 
> 
> 
> -- 
> Regards,
> 
> Robert K Wild.
> 


Re: [squid-users] is there a way to tell squid to write external ip even that external ip not attached into the machine ?

2022-05-13 Thread Eliezer Croitoru
Hey Ahmad,

 

To be clear, a simple forward proxy that fakes the source IP address is
quite a simple task with today's libraries.

I do not know your exact scenario and use case, but it's possible to write
such a proxy in roughly 200 lines of Go code.
Take a peek at:

https://github.com/LiamHaworth/go-tproxy

 

It's a nice library for tproxy alone.

Squid is great, but if you are going to touch its source code anyway, you may
be better off writing your own customized proxy.
 

Good Luck,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Ahmad Alzaeem <0xf...@gmail.com> 
Sent: Friday, May 13, 2022 18:34
To: Eliezer Croitoru ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] is there a way to tell squid to write external ip
even that external ip not attached into the machine ?

 

Hello Eliezer

I thought it could be done by editing a squid src file, e.g. to skip the
inet address lookup.

 

Thanks 

 

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> on behalf of
Eliezer Croitoru <ngtech1...@gmail.com>
Date: Friday, May 13, 2022 at 8:21 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] is there a way to tell squid to write external ip
even that external ip not attached into the machine ?

Hey Ahmad,

 

You should use a tproxy port with PROXY protocol support and ACLs.

With these you can push traffic to the network from a local process that
writes the right details to squid, which will then generate a fake source
IP.

And since you have asked, I assume you are not familiar enough with this kind
of setup, so it's crucial that you understand what you are doing before
trying and testing it, since it might not work as you expect.

 

All the best,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of
Ahmad Alzaeem
Sent: Friday, May 13, 2022 16:13
To: squid-users@lists.squid-cache.org; Amos Jeffries <squ...@treenet.co.nz>
Subject: [squid-users] is there a way to tell squid to write external ip
even that external ip not attached into the machine ?

 

 

Hello guys,

We are testing squid in a project where we need squid to write and proceed
with a tcp_outgoing_address even if it is not attached to the machine via
ifconfig or "ip addr".

 

After some tests we found that squid won't use the external IP for traffic
pushed out the network card interface if the IP address is not added to the
machine.

 

Is there any way to bypass this check and let squid skip verifying whether
the external IPs are attached or not?

Not sure if it's doable from the config, or maybe by editing src files.

 

 

Many Thanks 

 

 

 



Re: [squid-users] is there a way to tell squid to write external ip even that external ip not attached into the machine ?

2022-05-13 Thread Eliezer Croitoru
Hey Ahmad,

 

You should use a tproxy port with PROXY protocol support and ACLs.

With these you can push traffic to the network from a local process that
writes the right details to squid, which will then generate a fake source
IP.

And since you have asked, I assume you are not familiar enough with this kind
of setup, so it's crucial that you understand what you are doing before
trying and testing it, since it might not work as you expect.

 

All the best,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of
Ahmad Alzaeem
Sent: Friday, May 13, 2022 16:13
To: squid-users@lists.squid-cache.org; Amos Jeffries 
Subject: [squid-users] is there a way to tell squid to write external ip
even that external ip not attached into the machine ?

 

 

Hello guys,

We are testing squid in a project where we need squid to write and proceed
with a tcp_outgoing_address even if it is not attached to the machine via
ifconfig or "ip addr".

 

After some tests we found that squid won't use the external IP for traffic
pushed out the network card interface if the IP address is not added to the
machine.

 

Is there any way to bypass this check and let squid skip verifying whether
the external IPs are attached or not?

Not sure if it's doable from the config, or maybe by editing src files.

 

 

Many Thanks 

 

 

 



Re: [squid-users] Thinking out loud about "applications" definition for squid

2022-05-10 Thread Eliezer Croitoru
OK, so: an update.
I wrote a basic application that implements just the basic features.

I am looking for someone who wants to help me enhance it.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Eliezer Croitoru  
Sent: Sunday, March 27, 2022 04:33
To: squid-users@lists.squid-cache.org
Subject: Thinking out loud about "applications" definition for squid

Hey,

I have been thinking about defining a specific way to tag connections with an
APP ID, for simplicity.
For example, I have just seen a couple of support websites of web-system
vendors that provide their domains and IP addresses.
The basic example would be:
https://help.pluralsight.com/help/ip-allowlist

which provides the following basic info:
*.pluralsight.com
*.typekit.com

# Video CDN
vid.pluralsight.com
vid5.pluralsight.com
vid20.pluralsight.com
vid21.pluralsight.com
vid30.pluralsight.com

# Exercise files
ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com

So it means that, technically, if I have this defined somewhere I can run an
external acl helper that gets all the details of the request and tags the
request and/or connection with an APP ID, which can then be allowed or denied
by the next external acl helper in the pipeline.
The following access log:
https://www.ngtech.co.il/squid/pluralsight-access-log.txt

is a bit redacted but still contains the relevant log lines.

So the relevant ACL options are:
http_access Allow/deny
TLS Splice/bump
Dst_ip - APP ID
Src_ip - Allow/Deny/others
Cache allow/deny
 
I would assume that every request with the dstdomain:
.pluralsight.com
ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com

Or SNI regex:
\.pluralsight\.com$
^ip-video-course-exercise-files-us-west-2\.s3\.us-west-2\.amazonaws\.com$

Should 100% be tagged with a pluralsight APP ID tag.

It would be a similar idea with Google/Gmail/Microsoft/AV/others.
And since it's a very simple and reproducible APP ID tagging technique, it
can be distilled into a set of helpers.

So first, what do you as a squid user think about it?
Can you and others help me work on a simple project around this specific
idea?
A list of application IDs might be a good starting point for the first
POC/development process.

One place I have seen a similar implementation would be:
https://github.com/ntop/nDPI/blob/dev/src/include/ndpi_protocol_ids.h

I think the goal would be an API that can change a rule or a ruleset per
client, paired with a protocol.
Much like firewall rules, the helper would run a query against a small
embedded JSON/other database that contains all the relevant details of the
apps, and another part of it would contain the ruleset itself.

So, for example, definitions like:
Match: client, appID, verdict(allow/deny)
Match: client, appID, verdict(bump/splice)
Match: dst, appID, verdict(allow/deny)

would be pretty simple for the proxy admin to write.
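A minimal sketch of the APP ID tagging helper described above could look like the following. Everything here is an assumption to illustrate the idea, not a published implementation: the squid.conf wiring in the comment (the %ssl::>sni and %DST format codes and the tag= reply keyword should be verified against your squid version), the helper path, and the hard-coded pluralsight mapping.

```shell
#!/bin/sh
# Hypothetical APP ID helper. Assumed wiring in squid.conf:
#   external_acl_type appid ttl=60 %ssl::>sni %DST /usr/local/bin/appid_helper.sh
#   acl app_tagged external appid
# One lookup per stdin line: "<sni> <dst>"; reply "OK tag=<appid>" or "ERR".

# Map a single name (SNI or dstdomain) to an APP ID, empty when unknown.
app_for() {
    case "$1" in
        *.pluralsight.com|pluralsight.com) echo pluralsight ;;
        ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com) echo pluralsight ;;
    esac
}

# Decide on a lookup: prefer the SNI, fall back to the destination.
answer() {
    app=$(app_for "$1")
    [ -z "$app" ] && app=$(app_for "$2")
    if [ -n "$app" ]; then echo "OK tag=$app"; else echo "ERR"; fi
}

# Main loop: squid feeds one lookup per line on stdin.
if [ "${1:-}" = "--serve" ]; then
    while read -r sni dst _; do answer "$sni" "$dst"; done
fi
```

In a real version the case statement would be replaced by a query against the embedded app database, so the mapping can change without touching the helper.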

Let me know how you can help with this project.

Thanks,
Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com




Re: [squid-users] Squid Reconfigure Downtime

2022-05-06 Thread Eliezer Croitoru
Hey Mani,

 

With "squid -k reconfigure" there shouldn't be downtime.

There are special scenarios in which the complexity or the length of the
configuration files will result in a slowdown of the overall performance of
the service during a reconfiguration.

There is a very remote possibility of dropped connections while running the
reconfiguration.

I will offer a rule of thumb: if "squid -k parse" doesn't take too long (a
couple of seconds at most, which is a lot for most use cases), it's safe to
assume that a reconfigure won't affect the service at all.

Just take into account that the proper "ratio" of reconfiguration should be
no more than once per hour, and the most widely used cadence is once per day.

If you want to make the service dynamic you can use external_acl and other
helpers, which will allow you to avoid reconfiguring the service when it is
not really required.
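The parse-time rule of thumb above can be scripted before automated reconfigures. A hypothetical sketch; PARSE_CMD is overridable so it can be tried without a squid install, and the 2-second budget is the assumption taken from the advice above:

```shell
#!/bin/sh
# parse_ok: succeed when the config parse finishes within the 2-second budget,
# i.e. a "squid -k reconfigure" is unlikely to be felt by clients.
parse_ok() {
    start=$(date +%s)
    ${PARSE_CMD:-squid -k parse} >/dev/null 2>&1
    [ $(( $(date +%s) - start )) -le 2 ]
}
```

A cron job could then gate the reload, e.g. `parse_ok && squid -k reconfigure`.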

 

All the best,

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
Manikandan Swaminathan
Sent: Friday, May 6, 2022 04:33
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid Reconfigure Downtime

 

Hello,

 

I'm new to Squid and am currently researching the use/effects of running 
reconfigurations. I've come across a couple of links and forums that talk about 
this, but since some of those are a bit dated I wanted to make sure I have the 
right info...

 

We're currently running Squid 4.8, and I want to know, what is the expected 
downtime when running "squid -k reconfigure"? How does this affect existing and 
incoming connections?

 

I ran a simple test in my machine where I reconfigure squid, while separately 
running multiple proxy requests. As far as I could tell, there wasn't any 
disruption, but I'd like to get some input from more experienced folks.

 

Thanks,

Mani



Re: [squid-users] squid3/4 compilation error with Centos8/RH8

2022-05-02 Thread Eliezer Croitoru
Try using the following SRPM:

https://www.ngtech.co.il/repo/centos/8/SRPMS/squid-4.17-8.el8.src.rpm

 

Good Luck,

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of
Ahmad Alzaeem
Sent: Monday, May 2, 2022 21:25
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid3/4 compilation error with Centos8/RH8

 

 

 

 

Hello Team,

I found that I was only able to build squid 5.x on CentOS 8/RHEL 8 (not able
to build 3.x or 4.x).

I was able to build squid 3.x and 4.x on RHEL 7/CentOS 7.

It seems to be a libssl error, based on the compilation errors below (not
sure whether I need to upgrade or downgrade GCC).

 

// 

cache_cf.o: In function `parseOneConfigFile(char const*, unsigned int)':

cache_cf.cc:(.text+0x805): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.cc:(.text+0xc2b): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.cc:(.text+0xd78): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.cc:(.text+0x10a4): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o: In function `parseConfigFileOrThrow(char const*)':

cache_cf.cc:(.text+0x1295): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o:cache_cf.cc:(.text+0x142e): more undefined references to
`Debug::Start[abi:cxx11](int, int)' follow

cache_cf.o: In function `dump_acl(StoreEntry*, char const*, ACL*)':

cache_cf.cc:(.text+0x3bc5): undefined reference to
`ACL::dumpOptions[abi:cxx11]()'

cache_cf.o: In function `parse_address(Ip::Address*)':

cache_cf.cc:(.text+0x3f7a): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o: In function `parse_acl_tos(acl_tos**)':

cache_cf.cc:(.text+0x432e): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o: In function `parse_http_header_access(HeaderManglers**)':

cache_cf.cc:(.text+0x49d7): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.cc:(.text+0x4a6d): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o: In function `parse_http_header_replace(HeaderManglers**)':

cache_cf.cc:(.text+0x4cc5): undefined reference to
`Debug::Start[abi:cxx11](int, int)'

cache_cf.o:cache_cf.cc:(.text+0x4d5b): more undefined references to
`Debug::Start[abi:cxx11](int, int)' follow

client_side.o: In function `EVP_PKEY_up_ref':

client_side.cc:(.text.EVP_PKEY_up_ref[EVP_PKEY_up_ref]+0x34): undefined
reference to `CRYPTO_add_lock'

client_side.o: In function `X509_up_ref':

client_side.cc:(.text.X509_up_ref[X509_up_ref]+0x34): undefined reference to
`CRYPTO_add_lock'

anyp/.libs/libanyp.a(PortCfg.o): In function
`Security::ServerOptions::sk_X509_NAME_free_wrapper::operator()(stack_st_X50
9_NAME*)':

PortCfg.cc:(.text._ZN8Security13ServerOptions25sk_X509_NAME_free_wrapperclEP
18stack_st_X509_NAME[_ZN8Security13ServerOptions25sk_X509_NAME_free_wrapperc
lEP18stack_st_X509_NAME]+0x22): undefined reference to `sk_pop_free'

security/.libs/libsecurity.a(PeerOptions.o): In function
`Security::PeerOptions::createBlankContext() const':

PeerOptions.cc:(.text+0x1896): undefined reference to `SSLv23_client_method'

security/.libs/libsecurity.a(ServerOptions.o): In function
`Security::ServerOptions::createBlankContext() const':

ServerOptions.cc:(.text+0xb4a): undefined reference to
`SSLv23_server_method'

security/.libs/libsecurity.a(ServerOptions.o): In function
`X509_CRL_up_ref':

ServerOptions.cc:(.text.X509_CRL_up_ref[X509_CRL_up_ref]+0x36): undefined
reference to `CRYPTO_add_lock'

security/.libs/libsecurity.a(Session.o): In function `tls_write_method(int,
char const*, int)':

Session.cc:(.text+0x677): undefined reference to `SSL_state'

ssl/.libs/libsslsquid.a(support.o): In function
`Ssl::MaybeSetupRsaCallback(std::shared_ptr&)':

support.cc:(.text+0x6c9): undefined reference to
`SSL_CTX_set_tmp_rsa_callback'

ssl/.libs/libsslsquid.a(support.o): In function
`Ssl::matchX509CommonNames(x509_st*, void*, int (*)(void*,
asn1_string_st*))':

support.cc:(.text+0x855): undefined reference to `sk_num'

support.cc:(.text+0x872): undefined reference to `sk_value'

support.cc:(.text+0x8c2): undefined reference to `sk_pop_free'

support.cc:(.text+0x8eb): undefined reference to `sk_pop_free'

ssl/.libs/libsslsquid.a(support.o): In function `ssl_verify_cb(int,
x509_store_ctx_st*)':

support.cc:(.text+0x19be): undefined reference to `sk_pop_free'

ssl/.libs/libsslsquid.a(support.o): In function `ssl_free_CertChain(void*,
void*, crypto_ex_data_st*, int, long, void*)':

support.cc:(.text+0x1ead): undefined reference to `sk_pop_free'

ssl/.libs/libsslsquid.a(support.o): In function `Ssl::Initialize()':

support.cc:(.text+0x2084): undefined reference to `SSL_get_ex_new_index'

support.cc:(.text+0x20b0): undefined reference to `SSL_CTX_get_ex_new_index'

support.cc:(.text+0x20df): undefined reference to `SSL_get_ex_new_index'

support.cc:(.text+0x210c):

Re: [squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

2022-05-01 Thread Eliezer Croitoru
Moving to Squid-Dev.

From my tests, this issue is the same with the latest daily autogenerated
sources package:
http://master.squid-cache.org/Versions/v6/squid-6.0.0-20220501-re899e0c27.tar.bz2
## START
n -fPIC -c -o DelayBucket.o DelayBucket.cc
AclRegs.cc: In lambda function:
AclRegs.cc:165:50: error: unused parameter 'name' [-Werror=unused-parameter]
RegisterMaker("clientside_mark", [](TypeName name)->ACL* { return new
Acl::ConnMark; });
 ~^~~~
AclRegs.cc: In lambda function:
AclRegs.cc:166:57: error: unused parameter 'name' [-Werror=unused-parameter]
 RegisterMaker("client_connection_mark", [](TypeName name)->ACL* {
return new Acl::ConnMark; });
~^~~~
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include
-I../libltdl -I../src -I../libltdl  -I/usr/include/libxml2  -Wextra
-Wno-unused-private-field -Wimplicit-fallthrough=2 -Wpointer-arith
-Wwrite-strings -Wcomments -Wshadow -Wmissing-declarations
-Woverloaded-virtual -Werror -pipe -D_REENTRANT -I/usr/include/libxml2
-I/usr/include/p11-kit-1   -O2 -g -pipe -Wall -Werror=format-security
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
-fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelayConfig.o DelayConfig.cc
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include   -I../libltdl -I../src -I../libltdl
-I/usr/include/libxml2  -Wextra -Wno-unused-private-field
-Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
-I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelayPool.o DelayPool.cc
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include   -I../libltdl -I../src -I../libltdl
-I/usr/include/libxml2  -Wextra -Wno-unused-private-field
-Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
-I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelaySpec.o DelaySpec.cc
At global scope:
cc1plus: error: unrecognized command line option '-Wno-unused-private-field'
[-Werror]
cc1plus: all warnings being treated as errors

## END

I will try to publish my podman build later on.


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Monday, May 2, 2022 00:32
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

On 2/05/22 07:55, Eliezer Croitoru wrote:
> I have tried to build couple RPMs for the V6 beta but found that the 
> current daily autogenerated releases cannot be built.
> 
> Is there any specific git commit I should try to use?
> 

There is a new daily tarball out now. Can you try with that one please?


Also, squid-dev for beta and experimental code issues.


Cheers
Amos


[squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

2022-05-01 Thread Eliezer Croitoru
strings -Wcomments -Wshadow
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
-I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelaySpec.o DelaySpec.cc

At global scope:

cc1plus: error: unrecognized command line option '-Wno-unused-private-field'
[-Werror]

cc1plus: all warnings being treated as errors

make[3]: *** [Makefile:5929: AclRegs.o] Error 1

make[3]: *** Waiting for unfinished jobs

make[3]: Leaving directory '/home/builder/BUILD/squid-6.0.1/src'

make[2]: *** [Makefile:6046: all-recursive] Error 1

make[2]: Leaving directory '/home/builder/BUILD/squid-6.0.1/src'

make[1]: *** [Makefile:5037: all] Error 2

make[1]: Leaving directory '/home/builder/BUILD/squid-6.0.1/src'

make: *** [Makefile:591: all-recursive] Error 1

error: Bad exit status from /var/tmp/rpm-tmp.kiBaSw (%build)

 

 

RPM build errors:

Bad exit status from /var/tmp/rpm-tmp.kiBaSw (%build)

ufdio:   1 reads,17154 total bytes in 0.07 secs

ufdio:   1 reads, 5442 total bytes in 0.003573 secs

ufdio:   1 reads,17154 total bytes in 0.04 secs 

 

## END

 

Thanks,

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 



Re: [squid-users] squid compilation error in Docker

2022-04-30 Thread Eliezer Croitoru
I assume that findutils is not a dependency of libtool, since the required
utilities can be supplied from a couple of sources.

One of them is busybox, but I assumed that RedHat or CentOS would make it a
requirement.

 

A good catch!

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of Ivan 
Larionov
Sent: Tuesday, April 26, 2022 23:24
To: Alex Rousskov 
Cc: Squid Users 
Subject: Re: [squid-users] squid compilation error in Docker

 

I think, based on the compilation log, that it's not used by squid directly but 
by libtool. I went through the whole log again and found the following errors, 
which I missed originally:

 

"libtool: line 4251: find: command not found"

 

On Mon, Apr 25, 2022 at 1:08 PM Alex Rousskov <rouss...@measurement-factory.com> wrote:

On 4/25/22 15:41, Ivan Larionov wrote:
> Seems like "findutils" is the package which fixes the build.
> 
> Binaries in this package:
> 
> # rpm -ql findutils | grep bin
> /bin/find
> /usr/bin/find
> /usr/bin/oldfind
> /usr/bin/xargs
> 
> If build depends on some of these then configure script should probably 
> check that they're available.


... and/or properly fail when their execution/use fails. I do not know 
whether this find/xargs dependency is inside Squid or inside something 
that Squid is using though, but I could not quickly find any direct uses 
by Squid sources (that would fail the build).


Alex.


> On Wed, Apr 13, 2022 at 9:38 PM Amos Jeffries wrote:
> 
> On 14/04/22 14:59, Ivan Larionov wrote:
>  > There were no errors earlier.
>  >
>  > Seems like installing openldap-devel fixes the issue.
>  >
>  > There were other dependencies installed together with it, not
> sure if
>  > they also affected the build or not.
> 
> 
> I suspect one or more of those other components is indeed the source of
> the change. Some of them are very low-level OS functionality updates
> (eg
> /proc and filesystem  utilities).
> 
> FWIW, The gist you posted looks suspiciously like reports we used to
> see
> when BSD people were having issues with the linker not receiving all
> the
> arguments passed to it. I would focus on the ones which interact
> with OS
> filesystem or the autotools / compiler/ linker.
> 
> 
> HTH
> Amos
> 
> 
> 
> -- 
> With best regards, Ivan Larionov.
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




 

-- 

With best regards, Ivan Larionov.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-24 Thread Eliezer Croitoru
Hey Amos,

I have been testing the session helper for quite some time now, and it seems
that there is no memory leak; the helpers seem to run quite stably over the
last few days.
I will continue to run the test since it doesn't cause any issues for my
proxy at all, neither in performance nor in any helper crashes.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Thursday, April 14, 2022 07:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid-Cache VS PHP, put some things in
perspective

On 13/04/22 10:30, Eliezer Croitoru wrote:
> 
> I am looking for adventurous Squid users who want to help me test whether 
> PHP 7.4+ still has the same old 5.x STDIN bugs.
> 


Hi Eliezer, thanks for taking on a re-investigation.


FTR, the old problem was not stdin itself. The issue was that PHP was 
designed with the fundamental assumption that it was used for scripts 
with very short execution times. Implying that all resources used would 
be freed quickly.

This assumption resulted in scripts (like helpers) which need to run for 
very long times having terrible memory and resource consumption side 
effects. Naturally that effect alone compounds badly when Squid attempts 
to run dozens or hundreds of helpers at once.

Later versions (PHP-3/4/5) that I tested had various attempts at 
internal Zend engine controls to limit the memory problems. Based on the 
same assumption though, so they chose to terminate helpers early. Which 
still causes Squid issues. PHP config settings for that Zend timeout 
were unreliable.


AFAIK, That is where we are with knowledge of PHP vs helper usage. 
PHP-6+ have not had any serious testing to check if the language updates 
have improved either resource usage or the Zend timeout/abort behaviour.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-17 Thread Eliezer Croitoru
OK, so I have created a simple Vagrant-based proxy:
https://github.com/elico/squid-php-helper-tests

It works with Redis and can be tuned for production use.
It's based on Oracle Enterprise Linux 8 and seems to do the job.
Is anyone interested in helping to test whether PHP is still leaking?

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Thursday, April 14, 2022 07:18
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid-Cache VS PHP, put some things in
perspective

On 13/04/22 10:30, Eliezer Croitoru wrote:
> 
> I am looking for adventurous Squid users who want to help me test whether 
> PHP 7.4+ still has the same old 5.x STDIN bugs.
> 


Hi Eliezer, thanks for taking on a re-investigation.


FTR, the old problem was not stdin itself. The issue was that PHP was 
designed with the fundamental assumption that it was used for scripts 
with very short execution times. Implying that all resources used would 
be freed quickly.

This assumption resulted in scripts (like helpers) which need to run for 
very long times having terrible memory and resource consumption side 
effects. Naturally that effect alone compounds badly when Squid attempts 
to run dozens or hundreds of helpers at once.

Later versions (PHP-3/4/5) that I tested had various attempts at 
internal Zend engine controls to limit the memory problems. Based on the 
same assumption though, so they chose to terminate helpers early. Which 
still causes Squid issues. PHP config settings for that Zend timeout 
were unreliable.


AFAIK, That is where we are with knowledge of PHP vs helper usage. 
PHP-6+ have not had any serious testing to check if the language updates 
have improved either resource usage or the Zend timeout/abort behaviour.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squid compilation error in Docker

2022-04-13 Thread Eliezer Croitoru
For CentOS 7, use the following:

RUN yum install -y epel-release \
    && yum clean all \
    && yum update -y \
    && yum install -y gcc gcc-c++ libtool libtool-ltdl make cmake \
       git pkgconfig sudo automake autoconf yum-utils rpm-build \
    && yum install -y libxml2 expat-devel openssl-devel libcap ccache \
       libtool-ltdl-devel cppunit cppunit-devel bzr git autoconf \
       automake libtool gcc-c++ perl-Pod-MinimumVersion bzip2 ed \
       make openldap-devel pam-devel db4-devel libxml2-devel \
       libcap-devel screen vim nettle-devel redhat-lsb-core \
       autoconf-archive libtdb-devel libtdb redhat-rpm-config rpm-build rpm-devel \
    && yum install -y perl-libwww-perl ruby ruby-devel \
    && yum clean all

RUN yum update -y \
    && yum install -y systemd-units openldap-devel pam-devel \
       openssl-devel krb5-devel db4-devel expat-devel \
       libxml2-devel libcap-devel libtool libtool-ltdl-devel \
       redhat-rpm-config libdb-devel libnetfilter_conntrack-devel \
       gnutls-devel rpmdevtools wget \
    && yum clean all

For CentOS 8 Stream:

RUN dnf install -y epel-release dnf-plugins-core \
   &&  dnf config-manager --set-enabled powertools \
   &&  dnf clean all \
   &&  dnf update -y \
   &&  dnf install -y gcc gcc-c++ libtool libtool-ltdl make cmake \
   git pkgconfig sudo automake autoconf yum-utils rpm-build \
   &&  dnf install -y libxml2 expat-devel openssl-devel libcap ccache \
   libtool-ltdl-devel git autoconf \
   automake libtool gcc-c++ bzip2 ed \
   make openldap-devel pam-devel libxml2-devel \
   libcap-devel screen vim nettle-devel redhat-lsb-core \
   libtdb-devel libtdb redhat-rpm-config rpm-build rpm-devel \
   libnetfilter_conntrack-devel \
   &&  dnf install -y perl-libwww-perl ruby ruby-devel \
   &&  dnf clean all
 
RUN dnf update -y \
   &&  dnf install -y systemd-units openldap-devel pam-devel \
   openssl-devel krb5-devel expat-devel \
   libxml2-devel libcap-devel libtool libtool-ltdl-devel \
       redhat-rpm-config libdb-devel \
   gnutls-devel rpmdevtools wget \
   &&  dnf clean all

 

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of Ivan 
Larionov
Sent: Thursday, April 14, 2022 01:34
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid compilation error in Docker

 

Hi.

 

I have no issues building squid normally, but when I try to do exactly the same 
steps in docker I'm getting the following errors:

 

https://gist.github.com/xeron/5530fe9aa1f5bdcb6a72c6edd6476467

 

Example from that log:

 

cache_cf.o: In function `configFreeMemory()':

/root/build/src/cache_cf.cc:2982: undefined reference to 
`Adaptation::Icap::TheConfig'

 

I can't figure out what exactly is wrong. Doesn't look like any dependencies 
are missing.


 

Here's my build script:

 

  yum install -y autoconf automake file gcc72 gcc72-c++ libtool 
libtool-ltdl-devel pkgconfig diffutils \
libxml2-devel libcap-devel openssl-devel

  autoreconf -ivf

  ./configure --program-prefix= --prefix=/usr --exec-prefix=/usr \
--bindir=/usr/sbin --sbindir=/usr/sbin --sysconfdir=/etc/squid \
--libdir=/usr/lib --libexecdir=/usr/lib/squid \
--includedir=/usr/include --datadir=/usr/share/squid \
--sharedstatedir=/usr/com --localstatedir=/var \
--mandir=/usr/share/man --infodir=/usr/share/info \
--enable-epoll --enable-removal-policies=heap,lru \
--enable-storeio=aufs,rock \
--enable-delay-pools --with-pthreads --enable-cache-digests \
--with-large-files --with-filedescriptors=65536 \
--enable-htcp

  make -j$(nproc) install DESTDIR=$PWD/_destroot

 

Any ideas?

 

-- 

With best regards, Ivan Larionov.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Fine tuning of Squid and OS configuration for handling more loads

2022-04-13 Thread Eliezer Croitoru
Thanks Alex for catching this!

I was sending the wrong response.
This response was related to bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=4117

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Alex Rousskov
Sent: Wednesday, April 13, 2022 23:32
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Fine tuning of Squid and OS configuration for
handling more loads

On 4/13/22 16:24, Eliezer Croitoru wrote:
> Can you please verify whether the following is a workaround for your issue:
> cache_mem 16 MB
> cache deny all

Just to avoid misunderstanding, bug #5055 and virtually any other bug 
that manifests itself in FATAL messages from noteDestinationsEnd code is 
unrelated to HTTP caching and is unlikely to be mitigated by changing 
the cache memory size and/or denying HTTP caching.

Alex.


> -Original Message-
> From: squid-users  On Behalf Of
pvs
> Sent: Wednesday, April 13, 2022 19:23
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Fine tuning of Squid and OS configuration for
handling more loads
> 
> Thanks Alex.
> 
> We will upgrade the squid and see if the problem recurs
> 
> 
> 
> O 12-04-2022 19:57, Alex Rousskov wrote:
>> On 4/11/22 07:26, pvs wrote:
>>
>>> cache.log file is attached with contents narrowed down to the time
>>> just before squid process oscillating (going up and down), ultimately
>>> dying.
>>
>> Your Squid v5.2 is probably suffering from bug #5055:
>> https://bugs.squid-cache.org/show_bug.cgi?id=5055
>>
>> Upgrade your Squid to v5.4.1 or later.
>>
>>> We tried to analyze the syslog file, squid access/cache log files,
>>> not got any concrete clues.
>>
>> FWIW, look for messages mentioning words "FATAL":
>>
>>> 2022/03/02 09:05:42 kid1| FATAL: check failed: opening()
>>
>>>  exception location: tunnel.cc(1300) noteDestinationsEnd
>>
>>
>>
>> HTH,
>>
>> Alex.
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Fine tuning of Squid and OS configuration for handling more loads

2022-04-13 Thread Eliezer Croitoru
Can you please verify whether the following is a workaround for your issue:
cache_mem 16 MB
cache deny all

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of pvs
Sent: Wednesday, April 13, 2022 19:23
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Fine tuning of Squid and OS configuration for 
handling more loads

Thanks Alex.

We will upgrade the squid and see if the problem recurs



O 12-04-2022 19:57, Alex Rousskov wrote:
> On 4/11/22 07:26, pvs wrote:
>
>> cache.log file is attached with contents narrowed down to the time 
>> just before squid process oscillating (going up and down), ultimately 
>> dying.
>
> Your Squid v5.2 is probably suffering from bug #5055:
> https://bugs.squid-cache.org/show_bug.cgi?id=5055
>
> Upgrade your Squid to v5.4.1 or later.
>
>> We tried to analyze the syslog file, squid access/cache log files,
>> not got any concrete clues.
>
> FWIW, look for messages mentioning words "FATAL":
>
>> 2022/03/02 09:05:42 kid1| FATAL: check failed: opening()
>
>> exception location: tunnel.cc(1300) noteDestinationsEnd
>
>
>
> HTH,
>
> Alex.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Regards,

पं. विष्णु शंकर P. Vishnu Sankar
टीम लीडरTeam Leader-Network Operations
सी-डॉट  C-DOT
इलैक्ट्रॉनिक्स सिटी फेज़ IElectronics City Phase I
होसूर रोड बेंगलूरु  Hosur Road Bengaluru – 560100
फोन  Ph91 80 25119466
--
Disclaimer :
This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed.
If you are not the intended recipient you are notified that disclosing, 
copying, distributing or taking any action in reliance on the contents of this 
information is strictly prohibited.
The sender does not accept liability for any errors or omissions in the 
contents of this message, which arise as a result.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



[squid-users] Squid-Cache VS PHP, put some things in perspective

2022-04-12 Thread Eliezer Croitoru
Hey Everybody,

 

For as long as I have known Squid-Cache, I remember hearing over and over not
to use PHP, even though it's a great language.

I had the pleasure of hearing a talk from the creator of PHP; it answered a
couple of my doubts, and I wanted to say a couple of good words about PHP.

 

First, the talk is available at the following link:

https://youtu.be/wCZ5TJCBWMg

 

Title: 25 Years of PHP (by the Creator of PHP)

Description: PHP has been around for almost as long as the Web. 25 years!

Join me for a fun look at the highlights (and lowlights) of this crazy trip.
But I will also be trying to convince you to upgrade your PHP version. 

The performance alone should be enough, if not, I have a few other tricks up
my sleeve to try to win you over.

Performance optimization, static analysis, zero-cost profiling, dead code
elimination and escape analysis are just some of the concepts that will be
covered.

 

EVENT:

 

phpday 2019 | Verona, May 10-11th | phpday.it

 

SPEAKER:

 

Rasmus Lerdorf

 

PUBLICATION PERMISSIONS:

 

Original video was published with the Creative Commons license.

## END OF SECTION

 

PHP is a good language, if not one of the best languages ever made.

And I can see daily how it lets many parts of the internet and the world
just work, making the world a better place.
(There are bad uses for anything good.)

I have been using Squid-Cache for the last ~14 years for many things, and I
am really not a programmer.

I actually didn't even like to code, and I have seen uses of PHP that have
amazed me all these years.

For those who want to run a Squid helper with PHP, you just need to
understand that PHP was not built for this purpose.

I assume that the availability of PHP helper examples and the simplicity of
the language's technical resources might be the cause of this.

 

I want to run a test of a PHP helper with PHP 7.4 and PHP 8.0; they both
contain a couple of amazing improvements, but this needs to be tested.

The following skeleton:

https://gist.githubusercontent.com/elico/5d1cc6dceebbe7ae8f6cedf158396905/raw/1655125419b5063477723f9f1687167afd003665/fake-helper.php

 

Is a fake PHP helper for the tests.

I really recommend other languages and other ways to implement a helper
solution, but if we are able to test this, the conclusions may well be
satisfying enough to prove whether the language issues were fixed.
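For context, the core of a line-based Squid helper, in any language, is just a read-answer-flush loop. Here is a minimal sketch in shell (an illustration of the idea only, not the code of the linked fake-helper.php):

```shell
# Minimal "fake helper" loop (illustrative assumption, not the gist's code).
# Squid writes one request per line on stdin; the helper must answer each
# line immediately on stdout.
helper_loop() {
    while IFS= read -r request; do
        # A real helper would inspect "$request" here; unconditionally
        # answering OK is enough for memory-leak testing.
        echo "OK"
    done
}
```

The leak question is whether a long-lived loop like this, written in PHP, now keeps its memory flat under PHP 7.4+.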

 

I need an idea for a testing helper and was thinking about a basic session
helper.

 

In my last take on a session helper, I wrote the following:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/SplashPageTemplate

 

I have also seen that there are a couple of examples for the

readline_callback_handler_install

function in PHP, which might lead to a solution for the problem.

 

I am looking for adventurous Squid users who want to help me test whether PHP
7.4+ still has the same old 5.x STDIN bugs.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3-5 CPU optimization and best practise .

2022-03-31 Thread Eliezer Croitoru
Hey Ahmad,

 

It's possible to use other proxy software.

If you do not need any specific functions that Squid has, you can easily run
a forward proxy that will probably be simpler to operate, unless you have
something specific that only Squid provides.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of
Ahmad Alzaeem
Sent: Thursday, March 31, 2022 19:19
To: Alex Rousskov ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 3-5 CPU optimization and best practise .

 

Hello Alex , 

Thanks for your reply ,

 

I thought that as long as Squid acts only as a forward proxy, with no HTTPS,
we could disable built-in Squid features that are not required for my purpose
and use a minimum of Squid functions to get lower CPU consumption.

 

We don't have any bottleneck in Squid.

The only issue is that very high traffic pushes CPU usage up considerably.

So my only goal is to decrease Squid's CPU consumption as much as I can.

 

So I built a local DNS server to speed up lookups, but I still don't see any
rich coverage of this topic online for my goal.

 

 

Thanks 

 

 

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> on behalf of
Alex Rousskov <rouss...@measurement-factory.com>
Date: Thursday, March 31, 2022 at 8:59 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 3-5 CPU optimization and best practise .

On 3/31/22 11:04, Ahmad Alzaeem wrote:

> My main question is , is there any major changes in squid 5 that make it 
> faster than squid 3 or squid 4 in terms of low CPU usage?

I do not recall any _major_ changes in that area, but the http_port 
worker-queues option may be of interest to those looking for performance 
optimizations.


> Is there any best practice I can use to lower the cpu usage or response 
> time ?

YMMV, but I would start by using (the right number of) SMP workers with 
cpu_affinity_map and worker-queues. More on that at
https://wiki.squid-cache.org/Features/SmpScale#How_to_configure_SMP_Squid_for_top_performance.3F
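To make that concrete, a starting point along those lines might look like the squid.conf fragment below; the worker count and core numbers are illustrative assumptions, not recommendations:

```
# Hypothetical SMP tuning sketch; all values are examples to adapt.
workers 4
# Pin each worker process to its own CPU core.
cpu_affinity_map process_numbers=1,2,3,4 cores=2,3,4,5
# Let the kernel distribute new connections across workers.
http_port 3128 worker-queues
```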

Beyond that, one would have to analyze your Squid performance to find 
out performance bottleneck(s) and then try to eliminate them or reduce 
their impact.


> Like Deny caching on the HDD or server_persistent_connections off 
>   similar directives

Disabling persistent connections will make things _worse_ in many cases 
but YMMV. Whether cache_dirs (and even shared memory cache) slow down or 
speed up an average response depends on your environment -- measure and 
adjust/remove accordingly.


HTH,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org

http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

2022-03-31 Thread Eliezer Croitoru
Hey Carl,

 

I didn't even try to push it into the Squid code, since it's a very simple patch.

The process of getting patches accepted into the project, compared to basic
sense and testing, is too much for me at the moment.

I supplied the patch because Squid is unusable without it in many environments.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Carl Meunier  
Sent: Thursday, March 31, 2022 13:06
To: Eliezer Croitoru 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Thank you Eliezer.

 

I will test it today and let you know.

 

Just one question: Is this patch included in the next version of squid? That 
would be really interesting.

 

Thanks

Carl

 

On Wed, 30 Mar 2022, 21:25 Eliezer Croitoru, <ngtech1...@gmail.com> wrote:

Hey Carl,

 

The next SRPM:

https://www.ngtech.co.il/repo/centos/7/SRPMS/squid-4.17-8.el7.src.rpm

 

Should work on ARM based systems as well.

This however is not guaranteed since I do not have any option to verify this.

Once you will rebuild this let me know if it works for you.


Please note that this version is patched with a series of 3 patches 
which allow intercepted CONNECT/SSL connections
to have a different IP than the one the Squid DNS responds with.

This patch is unique and can be disabled with a squid.conf directive.

 

This specific version is running in production for the last month or a bit more 
and seems to be doing the job for my use case.

 

All The Bests,

Eliezer

 

----

Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Carl Meunier <carlimeun...@gmail.com>
Sent: Wednesday, March 30, 2022 15:34
To: Eliezer Croitoru <ngtech1...@gmail.com>
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Thank you for your reply.

 

Yes you can publish the SRPM. I can build it. I have an ARM based machine that 
is ready.

 

Please let me know when it will be published.

 

 

 

On Wed, 30 Mar 2022, 14:12 Eliezer Croitoru, <ngtech1...@gmail.com> wrote:

Hey Carl,

 

I do not have an ARM based machine so I cannot build for it.

However it might be possible to build for ARM in the build services of Fedora:

https://copr.fedorainfracloud.org/

 

or OpenSUSE

https://build.opensuse.org/

 

The other option is to build the RPMs inside a container in your environment.

I will try to publish the latest SRPM for CentOS 7 which should work on ARM 
based systems.

 

Let me know if you can try to build for ARM locally using rpmbuild or a 
container.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Carl Meunier
Sent: Wednesday, March 30, 2022 12:10
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Hello,

 

I want to install squid rpms on Centos 7 on ARM arch. 

Are there any packages for this?

 

Thanks

Carl  

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

2022-03-30 Thread Eliezer Croitoru
Hey Carl,

 

The next SRPM:

https://www.ngtech.co.il/repo/centos/7/SRPMS/squid-4.17-8.el7.src.rpm

 

Should work on ARM based systems as well.

This however is not guaranteed since I do not have any option to verify this.

Once you will rebuild this let me know if it works for you.


Please note that this version is patched with a series of 3 patches 
which allow intercepted CONNECT/SSL connections
to have a different IP than the one the Squid DNS responds with.

This patch is unique and can be disabled with a squid.conf directive.

 

This specific version is running in production for the last month or a bit more 
and seems to be doing the job for my use case.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Carl Meunier  
Sent: Wednesday, March 30, 2022 15:34
To: Eliezer Croitoru 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Thank you for your reply.

 

Yes you can publish the SRPM. I can build it. I have an ARM based machine that 
is ready.

 

Please let me know when it will be published.

 

 

 

On Wed, 30 Mar 2022, 14:12 Eliezer Croitoru, <ngtech1...@gmail.com> wrote:

Hey Carl,

 

I do not have an ARM based machine so I cannot build for it.

However it might be possible to build for ARM in the build services of Fedora:

https://copr.fedorainfracloud.org/

 

or OpenSUSE

https://build.opensuse.org/

 

The other option is to build the RPMs inside a container in your environment.

I will try to publish the latest SRPM for CentOS 7 which should work on ARM 
based systems.

 

Let me know if you can try to build for ARM locally using rpmbuild or a 
container.

 

Thanks,

Eliezer

 

----

Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Carl Meunier
Sent: Wednesday, March 30, 2022 12:10
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Hello,

 

I want to install squid rpms on Centos 7 on ARM arch. 

Are there any packages for this?

 

Thanks

Carl  

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

2022-03-30 Thread Eliezer Croitoru
Hey Carl,

 

I do not have an ARM based machine so I cannot build for it.

However it might be possible to build for ARM in the build services of Fedora:

https://copr.fedorainfracloud.org/

 

or OpenSUSE

https://build.opensuse.org/

 

The other option is to build the RPMs inside a container in your environment.

I will try to publish the latest SRPM for CentOS 7 which should work on ARM 
based systems.

 

Let me know if you can try to build for ARM locally using rpmbuild or a 
container.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of Carl 
Meunier
Sent: Wednesday, March 30, 2022 12:10
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Are there centos 7 rpm's on ARM of squid 4 or 5?

 

Hello,

 

I want to install squid rpms on Centos 7 on ARM arch. 

Are there any packages for this?

 

Thanks

Carl  

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl aclname server_cert_fingerprint

2022-03-30 Thread Eliezer Croitoru
So, just to illustrate the usage of server_cert_fingerprint:

What is this ACL's purpose? Can it be documented?

Also, there are more things regarding SSL-Bump and certificates.
Is it possible today to decide whether or not to bump a connection based on 
some part of the certificate, other than the SNI or domain?
Let's say I want to see the certificate and only then decide whether it's valid for me.
The best example is Let's Encrypt certificates.
These are technically valid, but for specific sites I would like to do 
something else.
Also, can a tag be sent back from an sslcrtd_program?
I want to be able to do something with the certificate and respond accordingly.

I have a use case in which I want to differentiate between general 
domain-validation certificates and EV ones.
Usually EV certificates are valid for PCI compliance and therefore should not 
be bumped at all.

For example:
https://www.leumi.co.il/

Thanks,
Eliezer



Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Alex Rousskov  
Sent: Wednesday, January 27, 2021 22:07
To: squid-users@lists.squid-cache.org
Cc: Eliezer Croitoru 
Subject: Re: [squid-users] acl aclname server_cert_fingerprint

On 1/27/21 1:50 PM, Eliezer Croitoru wrote:

> I am still missing a way to make this work with the fingerprint.

I do not know what you are trying to accomplish (i.e. what "this" is).


> We first need to know the fingerprint but when squid "knows" about
> it, it's already too late. In what config scenario can it work?

Knowing the fingerprint (or any other server-sent detail!) is indeed not
useful for making bump-vs-splice decisions. Fingerprint knowledge can be
useful for many other decisions, including whether to allow an HTTP
request, whether to cache an HTTP response, and whether to terminate a
TLS connection.


HTH,

Alex.


> -Original Message-
> From: Alex Rousskov  
> Sent: Wednesday, January 27, 2021 8:43 PM
> To: squid-users@lists.squid-cache.org
> Cc: Eliezer Croitoru 
> Subject: Re: [squid-users] acl aclname server_cert_fingerprint
> 
> On 1/27/21 11:45 AM, Eliezer Croitoru wrote:
> 
> I'm not sure I understood what these err_code and err_detail are.
> 
> FWIW, access log fields are configured using logformat %codes. Search
> squid.conf.documented for the words "err_code" and "err_detail" (no quotes).
> 
> 
>> acl tls_to_splice any-of ... NoBump_certificate_fingerprint
> 
>> acl tls_s1_connect at_step SslBump1
>> acl tls_s2_client_hello at_step SslBump2
> 
>> ssl_bump peek tls_s1_connect
>> ssl_bump splice tls_to_splice
>> ssl_bump stare tls_s2_client_hello
>> ssl_bump bump tls_to_bump
> 
> Bugs notwithstanding, the NoBump_certificate_fingerprint ACL will never
> match in the above configuration AFAICT:
> 
> * step1 is excluded by the earlier "peek if tls_s1_connect" rule. The
> server certificate is not yet available during that step anyway.
> 
> * step2 is reachable for a "splice" action, but the server certificate
> is still not yet available during that step.
> 
> * step3 is unreachable for a "splice" action because the only non-final
> action during step2 is "stare". Starting precludes splicing.
> 
> 
> HTH,
> 
> Alex.
> 
> 
>> -Original Message-
>> From: Alex Rousskov  
>> Sent: Wednesday, January 27, 2021 5:12 PM
>> To: Eliezer Croitoru ; 
>> squid-users@lists.squid-cache.org
>> Subject: Re: [squid-users] acl aclname server_cert_fingerprint
>>
>> On 1/26/21 2:09 AM, Eliezer Croitoru wrote:
>>
>>> I'm trying to understand what I'm doing wrong in the config that stil
>>> lets edition.cnn.com be decrypted instead of spliced?
>>
>> If you still need help, please share the relevant parts of your
>> configuration and logs. I would start with ssl_bump rules and access log
>> records containing additional %error_code/%err_detail fields.
>>
>> Alex.
>>
>>
>>
>>> -Original Message-
>>> From: Alex Rousskov  
>>> Sent: Tuesday, January 26, 2021 6:22 AM
>>> To: Eliezer Croitoru ; 
>>> squid-users@lists.squid-cache.org
>>> Subject: Re: [squid-users] acl aclname server_cert_fingerprint
>>>
>>> On 1/25/21 6:03 AM, Eliezer Croitoru wrote:
>>>> I'm trying to use:
>>>> acl aclname server_cert_fingerprint [-sha1] fingerprint
>>>>
>>>>
>>>> I have cerated the next file:
>>>> /etc/squid/no-ssl-bump-server-fingerprint.list
>>>>
>>>> And trying to use the next line:
>>>> acl NoBump_certificate_fingerprint server_cert_finge

[squid-users] Thinking out loud about "applications" definition for squid

2022-03-26 Thread Eliezer Croitoru
Hey,

I have been thinking about defining a specific way that will tag connections
with an APP ID for simplicity.
For example, I have just seen a couple of support websites of web system vendors
that provide their domains and IP addresses.
The basic example would be:
https://help.pluralsight.com/help/ip-allowlist

Which provides the next basic info:
*.pluralsight.com
*.typekit.com

# Video CDN
vid.pluralsight.com
vid5.pluralsight.com
vid20.pluralsight.com
vid21.pluralsight.com
vid30.pluralsight.com

# Exercise files
ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com

So it means that, technically, if I have this defined somewhere, I can run an
external acl helper that will get all the details of the request and will
tag the request and/or connection with an APP ID, which can then be allowed
or denied by the next external acl helper in the pipeline.
The next access log:
https://www.ngtech.co.il/squid/pluralsight-access-log.txt

is a bit redacted but still contains the relevant log lines.

So the relevant ACL options are:
http_access Allow/deny
TLS Splice/bump
Dst_ip - APP ID
Src_ip - Allow/Deny/others
Cache allow/deny
 
I would assume that every request with the dstdomain:
.pluralsight.com
ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com

Or SNI regex:
\.pluralsight\.com$
^ip-video-course-exercise-files-us-west-2\.s3\.us-west-2\.amazonaws\.com$

Should 100% be tagged with a pluralsight APP ID tag.

It would be a similar idea with google/gmail/Microsoft/AV/others.
And since it's a very simple and reproducible APP ID tagging technique, it
can be simplified into a set of helpers.
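As a rough sketch of the helper side (my own illustration, not an existing
NgTech tool; the domain-to-app mapping and the tag names are made up, using
only the Pluralsight example above), an external_acl_type helper that tags
requests could be as small as:

```shell
#!/bin/sh
# classify_app: map a host (dstdomain or SNI) to a hypothetical APP ID.
classify_app() {
    case "$1" in
        *.pluralsight.com|*.typekit.com) echo "pluralsight" ;;
        ip-video-course-exercise-files-us-west-2.s3.us-west-2.amazonaws.com) echo "pluralsight" ;;
        *) echo "unknown" ;;
    esac
}

# Helper loop: with concurrency enabled, Squid sends "<channel-id> <host>"
# per line and expects "<channel-id> OK tag=<app-id>" back.
if [ "${1:-}" = "--helper" ]; then
    while read -r id host; do
        echo "$id OK tag=$(classify_app "$host")"
    done
fi
```

The returned tag could then be matched by a fast ACL later in the
configuration; this is only one possible wire-up of the idea.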

So first, what do you as a squid user think about it?
Can you and others help me work on a simple project that will help with this
specific idea?
A list of applications ID might be a good starter for the first
POC/Development process.

One place I have seen a similar implementation would be:
https://github.com/ntop/nDPI/blob/dev/src/include/ndpi_protocol_ids.h

I think that the goal would be that it would be possible to use an API that
will be able to change a rule or a ruleset per client paired with a
protocol.
Much like with firewall rules, the helper would be able to run a query
against a small embedded JSON/other database that contains all the relevant
details of the apps,
and another part of it would contain the ruleset itself.

So for example a definition of:
Match: client, appID, verdict(allow/deny)
Match: client, appID, verdict(bump/splice)
Match: dst, appID, verdict(allow/deny)..

Would be pretty simple to define by the proxy admin.
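On the squid.conf side, a minimal wire-up of such a tagging helper might
look like this (a sketch only; the helper path and ACL names are
assumptions, and the exact external_acl_type options and the "tag" ACL
behaviour should be checked against squid.conf.documented):

```
# hypothetical helper that prints "OK tag=<app-id>" per request
external_acl_type app_tag ttl=60 children-max=5 %ssl::>sni /usr/local/bin/app_tag_helper.sh

# evaluating the external ACL sets the request tag as a side effect
acl tagged external app_tag
acl app_pluralsight tag pluralsight

# example verdict: allow requests that were tagged as pluralsight
http_access allow tagged app_pluralsight
```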

Let me know how you can help with this project.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reconfiguring Squid every few seconds

2022-03-20 Thread Eliezer Croitoru
Hey Roee,

 

If Tiny-proxy works for you then it’s great.

 

All The Bests,

Eliezer

 

*   There are many ways to offer the same solution; however, the best 
solution is what works for you.

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: roee klinger  
Sent: Monday, March 21, 2022 02:34
To: Squid Users ; Eliezer Croitoru 

Subject: Re: [squid-users] Reconfiguring Squid every few seconds

 

Thank you everyone for your advice.

As far as I can tell, there is no graceful and easy way to do it in Squid out 
of the box,
I will have to use namespaces + virtual interfaces or mark outgoing traffic 
from Squid,
I am currently looking into these 2 solutions that you suggested, I will 
implement them
and update here how it goes after testing.

However, for now as much as I love Squid I need a fast and easy solution, so I 
decided to
use Tiny-proxy transparent proxy instead, where I can simply run the service 40 
times in parallel since it is so light.
Then, if there is a reboot of the modem, I can simply restart the specific 
service I need,
without affecting the other services and users.

Of course, this only works if you have a really simple configuration, for 
example like my case:
traffic from port 8001 -> out from modem1
traffic from port 8002 -> out from modem2
...
...
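In Squid terms, that port-to-modem mapping could be expressed roughly like
this (a sketch; the port names are made up and the RFC 5737 example
addresses stand in for the real modem IPs, which is exactly the part that
breaks when a modem reboots with a new dynamic address):

```
http_port 8001 name=modem1_port
http_port 8002 name=modem2_port

acl via_modem1 myportname modem1_port
acl via_modem2 myportname modem2_port

# placeholder outgoing addresses; these are what currently force a
# "squid -k reconfigure" whenever a modem gets a new IP
tcp_outgoing_address 192.0.2.11 via_modem1
tcp_outgoing_address 192.0.2.12 via_modem2
```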

I will update shortly when I find a Squid solution,
Roee

 

On 20 Mar 2022, 14:33 +0200, Eliezer Croitoru <ngtech1...@gmail.com> wrote:



To give some perspective you can see the next example:

https://github.com/elico/mwan-nft-lb-example

 

but you first need to learn how network namespaces work in Linux.

You will probably need to run squid in its own namespace, which will be managed 
from the “main” or “root” namespace.

It will probably be similar to a management interface and virtual routers on 
products like Palo Alto.

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Eliezer Croitoru <ngtech1...@gmail.com>
Sent: Sunday, March 20, 2022 00:20
To: 'Squid Users' <squid-users@lists.squid-cache.org>
Subject: RE: [squid-users] Reconfiguring Squid every few seconds

 

Hey Roee,

 

The best solution for your case is to use a network namespace router between the 
squid instance and the actual modem interface.

You can attach each modem to a network namespace and leave squid to do its 
thing with a static IP address.
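For context, the namespace approach usually boils down to something like the
following (illustrative commands only, run as root; the names, interface
"modem1-if", and the 10.200.0.0/24 transfer network are all made up, and the
per-client steering into the right namespace is omitted):

```
# create a namespace per modem and a veth pair into it
ip netns add modem1
ip link add veth-m1 type veth peer name veth-m1-ns
ip link set veth-m1-ns netns modem1

# static transfer addresses, so Squid's side never changes
ip addr add 10.200.0.1/24 dev veth-m1
ip link set veth-m1 up
ip netns exec modem1 ip addr add 10.200.0.2/24 dev veth-m1-ns
ip netns exec modem1 ip link set veth-m1-ns up

# move the modem interface into the namespace and NAT out through it
ip link set modem1-if netns modem1
ip netns exec modem1 dhclient modem1-if
ip netns exec modem1 sysctl -w net.ipv4.ip_forward=1
ip netns exec modem1 iptables -t nat -A POSTROUTING -o modem1-if -j MASQUERADE
```

Squid would then use a tcp_outgoing_address such as 10.200.0.1, which stays
static even when the modem's own DHCP address changes inside the namespace.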

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of roee klinger
Sent: Saturday, March 19, 2022 02:48
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: [squid-users] Reconfiguring Squid every few seconds

 

Hello,

 

I have a server with multiple 4G modems with Squid running on it, the 4G modems 
get an internal private IP that is dynamic (unfortunately this can't be 
changed),

 

I set up Squid to use the interfaces as follows:

tcp_outgoing_address 

 

The configuration works well and everything works great, however, whenever I 
restart one of the modems (I have many, and I restart them a lot), I get a new 
internal private IP, and I need to reconfigure Squid, this means that I will be 
running "squid -k reconfigure" multiple times a minute.

 

Will this have a bad effect on Squid and traffic (I understand this does not 
cause Squid to restart)? What is my alternative?

 

Thanks,

Roee

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reconfiguring Squid every few seconds

2022-03-20 Thread Eliezer Croitoru
To give some perspective you can see the next example:

https://github.com/elico/mwan-nft-lb-example

 

but you first need to learn how network namespaces work in Linux.

You will probably need to run squid in its own namespace, which will be managed 
from the “main” or “root” namespace.

It will probably be similar to a management interface and virtual routers on 
products like Palo Alto.

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: Eliezer Croitoru  
Sent: Sunday, March 20, 2022 00:20
To: 'Squid Users' 
Subject: RE: [squid-users] Reconfiguring Squid every few seconds

 

Hey Roee,

 

The best solution for your case is to use a network namespace router between the 
squid instance and the actual modem interface.

You can attach each modem to a network namespace and leave squid to do its 
thing with a static IP address.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of roee klinger
Sent: Saturday, March 19, 2022 02:48
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: [squid-users] Reconfiguring Squid every few seconds

 

Hello,

 

I have a server with multiple 4G modems with Squid running on it, the 4G modems 
get an internal private IP that is dynamic (unfortunately this can't be 
changed),

 

I set up Squid to use the interfaces as follows:

tcp_outgoing_address 

 

The configuration works well and everything works great, however, whenever I 
restart one of the modems (I have many, and I restart them a lot), I get a new 
internal private IP, and I need to reconfigure Squid, this means that I will be 
running "squid -k reconfigure" multiple times a minute.

 

Will this have a bad effect on Squid and traffic (I understand this does not 
cause Squid to restart)? What is my alternative?

 

Thanks,

Roee

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reconfiguring Squid every few seconds

2022-03-19 Thread Eliezer Croitoru
Hey Roee,

 

The best solution for your case is to use a network namespace router between the 
squid instance and the actual modem interface.

You can attach each modem to a network namespace and leave squid to do its 
thing with a static IP address.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of roee 
klinger
Sent: Saturday, March 19, 2022 02:48
To: Squid Users 
Subject: [squid-users] Reconfiguring Squid every few seconds

 

Hello,

 

I have a server with multiple 4G modems with Squid running on it, the 4G modems 
get an internal private IP that is dynamic (unfortunately this can't be 
changed),

 

I set up Squid to use the interfaces as follows:

tcp_outgoing_address 

 

The configuration works well and everything works great, however, whenever I 
restart one of the modems (I have many, and I restart them a lot), I get a new 
internal private IP, and I need to reconfigure Squid, this means that I will be 
running "squid -k reconfigure" multiple times a minute.

 

Will this have a bad effect on Squid and traffic (I understand this does not 
cause Squid to restart)? What is my alternative?

 

Thanks,

Roee

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5 Proxy Return 503 Error When Set as Proxy in Oracle Yum.conf

2022-03-16 Thread Eliezer Croitoru
We need to know what the 503 actually means.

It can be a gateway or another issue.

Use curl -v to try and access https://yum.oracle.com/ and see what the error 
page shows.
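Concretely, that test could look like this (the proxy host and port are
placeholders for your regional cache proxy):

```
curl -v -x http://cache-proxy.example.net:3128 https://yum.oracle.com/
```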

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
Johnny Lam
Sent: Wednesday, March 16, 2022 05:59
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid 3.5 Proxy Return 503 Error When Set as Proxy in 
Oracle Yum.conf

 

Hi all,

 

To reduce the bandwidth, we put a Cache Proxy in each regional office. 
However, the proxy returns a 503 error when we try to access yum.oracle.com. 
Attached are the error log and the squid.conf file for your reference. Do I 
need to do any SSL inspection to make it work?

We already tried to set the company proxy in the Cache Proxy settings and also 
at the Windows Server connection layer. But the result is the same.

The connection is like below. The Cache Proxy 3.5 is installed on a Windows Server:

Oracle Linux with yum.conf set > Cache Proxy > Company Proxy > Redhat/Oracle

 



Best Regards,

 

Johnny Lam

AWS Certified Solutions Architect 

Security Consultant

 

UDS Data Systems Ltd 

Office: +852 2851 0271, Fax: +852 2851 0155

Mobile: +852 9504 5848, China Mobile: +86 14715379352

URL: http://www.udshk.com

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-14 Thread Eliezer Croitoru
Hey Antony and Ben,

I have verified that the Pinger process is at fault.
I don't know if it's a bug or not.
You can disable pinger and it will work:
http://www.squid-cache.org/Doc/config/pinger_enable/

pinger_enable off

will resolve this issue.

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Antony Stone
Sent: Monday, March 14, 2022 11:27
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port

On Monday 14 March 2022 at 05:42:35, ben wrote:

> Hi Eliezer,
> 
> SQUID started listening only after I ran "ip6tables -P INPUT ACCEPT".

Without seeing the rest of your iptables rules, it's not clear whether this 
really does apply to every interface and every protocol, or whether there 
are exception rules which override this default policy rule.

However, there are quite a number of applications which will refuse to start, 
or not operate correctly, if you do not permit loopback traffic (IPv4 as well 
as IPv6), so this may be the cause of your problems.
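A common pattern that avoids this is to explicitly permit loopback and reply
traffic before applying a restrictive default policy (an illustration only;
adapt it to your actual rule set before relying on it):

```
# allow loopback and reply traffic first...
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# ...then it is safe to default-deny
ip6tables -P INPUT DROP
```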


Antony.

-- 
"Once you have a panic, things tend to become rather undefined."

 - murble

   Please reply to the list; please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-14 Thread Eliezer Croitoru
Hey Ben,

As I suspected it's the pinger that is causing this issue.
In my RPM builds it's disabled by default to avoid SELinux complications.
I think it's a very special case you have found.
Try using:
pinger_enable off

and see if it fixes your issue.

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Eliezer Croitoru  
Sent: Monday, March 14, 2022 10:49
To: 'squid-users@lists.squid-cache.org' 
Subject: RE: [squid-users] SQUID refuses to listen on any TCP Port

Hey Ben,

I do not know if it won't listen.
I can try to reproduce locally and see the results.
I assume it might be related to the pinger process or maybe something else but 
not 100% sure.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of ben
Sent: Monday, March 14, 2022 06:43
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port


Hi Eliezer,

SQUID started listening only after I ran "ip6tables -P INPUT ACCEPT". I 
had searched everywhere on the internet regarding this issue and no one 
mentioned that IPv6 iptables rules could stop squid from listening on 
TCP ports. And Squid just started without any complaint. To my limited 
knowledge, it seems no other software behaves like this?
At first I was indeed suspecting that the issue was related to IPv6. 
I had tried recompiling SQUID without IPv6 and disabled IPv6 using 
sysctl.conf; nothing worked. It never occurred to me that my IPv6 INPUT 
POLICY could be the culprit.
   Again, many thanks for your help
> Hey Ben,
>
> The next step in this situation is to try and connect with netcat from a
> remote host and also locally.
>  From what I understood the issue was a firewall issue so Squid was
> listening.
> I can also assume that squid was listening.
> Just For future reference:
> Try to run the next commands:
> sudo netstat -ntlp
> sudo ss -ntlp
>
> and send their output to us.
> In ubuntu you will not see any "squid" when running plain netstat or ss.
>
> All The Bests,
> Eliezer
>
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
> -Original Message-
> From: squid-users  On Behalf Of
> ben
> Sent: Sunday, March 13, 2022 12:24
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port
>
>
> Hello,
>
> It looks like that if I start installing it right after OS installation,
> it works. But not if I install any packages, for example, make,
> gcc, strongswan and so on. Once it stops working, it will not work anymore
> even if I remove all the aforementioned packages. A reboot also doesn't help.
> I also tried it on Debian 11. No luck either! It only works on old Ubuntu 16.
> I can be 100% sure that squid is not listening on any TCP ports. running
> these commands looks fine
>
> Thanks for your being patient and kind
>
> root@host8d66c97880:~# netcat -v -l -4 34568
> Listening on 0.0.0.0 34568
> ^C
> root@host8d66c97880:~# netcat -v -l -6 34568
> Listening on :: 34568
>
> I configured it to listen on 34568
>
>
>> netcat -v -l -6 
>> netcat -v -l -4 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-14 Thread Eliezer Croitoru
Hey Ben,

I do not know if it won't listen.
I can try to reproduce locally and see the results.
I assume it might be related to the pinger process or maybe something else but 
not 100% sure.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of ben
Sent: Monday, March 14, 2022 06:43
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port


Hi Eliezer,

SQUID started listening only after I ran "ip6tables -P INPUT ACCEPT". I 
had searched everywhere on the internet regarding this issue and no one 
mentioned that IPv6 iptables rules could stop squid from listening on 
TCP ports. And Squid just started without any complaint. To my limited 
knowledge, it seems no other software behaves like this?
At first I was indeed suspecting that the issue was related to IPv6. 
I had tried recompiling SQUID without IPv6 and disabled IPv6 using 
sysctl.conf; nothing worked. It never occurred to me that my IPv6 INPUT 
POLICY could be the culprit.
   Again, many thanks for your help
> Hey Ben,
>
> The next step in this situation is to try and connect with netcat from a
> remote host and also locally.
>  From what I understood the issue was a firewall issue so Squid was
> listening.
> I can also assume that squid was listening.
> Just For future reference:
> Try to run the next commands:
> sudo netstat -ntlp
> sudo ss -ntlp
>
> and send their output to us.
> In ubuntu you will not see any "squid" when running plain netstat or ss.
>
> All The Bests,
> Eliezer
>
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
> -Original Message-
> From: squid-users  On Behalf Of
> ben
> Sent: Sunday, March 13, 2022 12:24
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port
>
>
> Hello,
>
> It looks like that if I start installing it right after OS installation,
> it works. But not if I install any packages, for example, make,
> gcc, strongswan and so on. Once it stops working, it will not work anymore
> even if I remove all the aforementioned packages. A reboot also doesn't help.
> I also tried it on Debian 11. No luck either! It only works on old Ubuntu 16.
> I can be 100% sure that squid is not listening on any TCP ports. running
> these commands looks fine
>
> Thanks for your being patient and kind
>
> root@host8d66c97880:~# netcat -v -l -4 34568
> Listening on 0.0.0.0 34568
> ^C
> root@host8d66c97880:~# netcat -v -l -6 34568
> Listening on :: 34568
>
> I configured it to listen on 34568
>
>
>> netcat -v -l -6 
>> netcat -v -l -4 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-13 Thread Eliezer Croitoru
Hey Ben,

The next step in this situation is to try and connect with netcat from a
remote host and also locally.
From what I understood, the issue was a firewall issue, so Squid was
listening.
I can also assume that squid was listening.
Just For future reference:
Try to run the next commands:
sudo netstat -ntlp
sudo ss -ntlp

and send their output to us.
In ubuntu you will not see any "squid" when running plain netstat or ss.

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
ben
Sent: Sunday, March 13, 2022 12:24
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port


Hello,

It looks like if I start installing it right after OS installation, 
it works. But not if I install any packages, for example, make, 
gcc, strongswan and so on. Once it stops working, it will not work anymore 
even if I remove all the aforementioned packages. A reboot also doesn't help. 
I also tried it on Debian 11. No luck either! It only works on old Ubuntu 16.
I can be 100% sure that squid is not listening on any TCP ports. Running 
these commands looks fine:

Thanks for your being patient and kind

root@host8d66c97880:~# netcat -v -l -4 34568
Listening on 0.0.0.0 34568
^C
root@host8d66c97880:~# netcat -v -l -6 34568
Listening on :: 34568

I configured it to listen on 34568


> netcat -v -l -6 
> netcat -v -l -4 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-12 Thread Eliezer Croitoru
Hey Ben,

You should try the next test:
install netcat with "apt install netcat"

Then run the next two commands:
netcat -v -l -6 
netcat -v -l -4 

and send us the output of these.
You can change the ports to 3128 which doesn't work and see what might cause
this issue.
It's possible that your VPS hosting service is using a custom kernel or
another thing happens there.
It's not squid by itself that is at fault in this issue, since it was tested
on more than one system over the last 10 years, including Ubuntu 20.04.

All The Bests,
Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
ben
Sent: Saturday, March 12, 2022 05:25
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port


Hi Eliezer,
Thanks for your help. I tried your method. It worked in a few VM 
instances but not on others. All have the same version of Ubuntu 20.04. 
I just have no idea why Squid is so glitchy. Same methods, same OS; 
sometimes it did start accepting connections on the specified TCP port, 
but most of the time it didn't. I also tried to install from a PPA repo 
and it is the same story. Any suggestions?
> Hey Ben,
>
> I cannot tell you if there is something wrong with what you are doing or
> the OS.
> What I did was to install a basic squid on Ubuntu 20.04, which comes with
> version 4.10.
> I shut down the currently running squid.
> I took the /etc/squid/squid.conf file and left it with a single line.
> After that I started squid with "systemctl start squid" and it seemed to
> work fine.
> I shut down squid, downloaded my version, and again left the squid.conf
> with a single line.
> Just adding the mime.conf into /etc/squid/ and it worked like a charm.
> Try to use "ss -ntlp" instead of netstat.
> You should first try this on a VM on your local desktop in VirtualBox or
> Hyper-V.
>
> All The Bests,
> Eliezer
>
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
> -Original Message-
> From: squid-users  On Behalf Of
> ben
> Sent: Sunday, March 6, 2022 07:51
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port
>
>
> Hi Eliezer,
>
> It is a KVM VPS server with ubuntu 20.04. I just reinstalled the whole
> operating system and started it from scratch but got the same result: No
> tcp port listening. I don't know if it is something wrong with the OS
> template. Please let me know what I need to do. Thank you!
>> Hey Ben,
>>
>> If it doesn't work for you then you are clearly doing something wrong
>>
>> I can try to give you instructions on how to make it work 100% unless
your
> setup is messed up or is not a plain ubuntu 20.04.
>> Is it a simple VM?
>>
>> Eliezer
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-06 Thread Eliezer Croitoru
Hey Ben,

I cannot tell you if there is something wrong with what you are doing or the
OS.
What I did was to install a basic squid on Ubuntu 20.04, which comes with
version 4.10.
I shut down the currently running squid.
I took the /etc/squid/squid.conf file and left it with a single line.
After that I started squid with "systemctl start squid" and it seemed to work
fine.
I shut down squid, downloaded my version, and again left the squid.conf
with a single line.
Just adding the mime.conf into /etc/squid/ and it worked like a charm.
Try to use "ss -ntlp" instead of netstat.
You should first try this on a VM on your local desktop in VirtualBox or
Hyper-V.

All The Bests,
Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
ben
Sent: Sunday, March 6, 2022 07:51
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port


Hi Eliezer,

It is a KVM VPS server with Ubuntu 20.04. I just reinstalled the whole 
operating system and started it from scratch but got the same result: no 
TCP port listening. I don't know if it is something wrong with the OS 
template. Please let me know what I need to do. Thank you!
> Hey Ben,
>
> If it doesn't work for you then you are clearly doing something wrong
>
> I can try to give you instructions on how to make it work 100% unless your
setup is messed up or is not a plain ubuntu 20.04.
> Is it a simple VM?
>
> Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-05 Thread Eliezer Croitoru
Hey Ben,

If it doesn't work for you then you are clearly doing something wrong

I can try to give you instructions on how to make it work 100% unless your 
setup is messed up or is not a plain ubuntu 20.04.
Is it a simple VM?

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of ben
Sent: Sunday, March 6, 2022 05:13
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port

Hi,Eliezer

Many thanks for your help. But sorry to tell you that it didn't work 
either. I'm done with it and will try a different version of 
Ubuntu. Thank you again!

On 2022/3/6 2:11, Eliezer Croitoru wrote:
> My binaries include ssl bump.
> I believe it will be good enough for your use case.
> Just remember to first install the squid package from Ubuntu to make sure all 
> dependencies are installed, and only then use my binaries.
> I still do not have an installation file, but this will come later on in the 
> form of a Makefile.
> I will probably not provide a deb file since others do that better than me.
>
> What you need is this file:
> http://www.ngtech.co.il/repo/ubuntu/20.04/x86_64/squid-4.17-64-bin-stripped-only.tar
>
> There are other files there as well but this is the latest version.
> First untar the file into a temporary directory.
> Then and only then you should install the relevant binaries and conf files to
> their respective directories ie /etc/squid and /usr/sbin /usr/bin and 
> /usr/share...
>
> If you have any trouble let me know and I will try to help you if possible.
>
> All The Bests,
> Eliezer
>
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1...@gmail.com
>
> -Original Message-
> From: squid-users  On Behalf Of ben
> Sent: Friday, March 4, 2022 16:45
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port
>
> Hi,
>
> Does it have ssl enabled? I use squid mainly as a https proxy server and
> the default version on ubuntu 20.04 doesn't have it. Thank you for being
> so kind
>
On 2022/3/4 22:41, Eliezer Croitoru wrote:
>> Do you want to try another version of squid that was compiled by me?
>>
>> All The Bests,
>> Eliezer
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-05 Thread Eliezer Croitoru
My binaries include ssl bump.
I believe it will be good enough for your use case.
Just remember to first install the squid package from Ubuntu to make sure all 
dependencies are installed, and only then use my binaries.
I still do not have an installation file, but this will come later on in the 
form of a Makefile.
I will probably not provide a deb file since others do that better than me.

What you need is this file:
http://www.ngtech.co.il/repo/ubuntu/20.04/x86_64/squid-4.17-64-bin-stripped-only.tar

There are other files there as well but this is the latest version.
First untar the file into a temporary directory.
Then and only then you should install the relevant binaries and conf files into
their respective directories, i.e. /etc/squid, /usr/sbin, /usr/bin and 
/usr/share...

If you have any trouble let me know and I will try to help you if possible.

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of ben
Sent: Friday, March 4, 2022 16:45
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port

Hi,

Does it have ssl enabled? I use squid mainly as a https proxy server and 
the default version on ubuntu 20.04 doesn't have it. Thank you for being 
so kind

在 2022/3/4 22:41, Eliezer Croitoru
> Do you want to try another version of squid that was compiled by me?
>
> All The Bests,
> Eliezer


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-04 Thread Eliezer Croitoru
Do you want to try another version of squid that was compiled by me?

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of ben
Sent: Friday, March 4, 2022 16:24
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port

Hi,

But the sad fact is that there is an empty result when running netstat 
-tunpal | grep 3128. Nor can I telnet localhost 3128 successfully. Any 
idea what is going on? Thank you!

在 2022/3/4 21:18, Amos Jeffries :
> This log shows port 3128 being opened, right at the end.
>
>
> 2022/03/04 18:12:54.537| 33,2| AsyncCall.cc(25) AsyncCall: The 
> AsyncCall clientListenerConnectionOpened constructed, 
> this=0x5579e31de710 [call3]
> ...
>
> 2022/03/04 18:12:54.537| 54,3| StartListening.cc(58) StartListening: 
> opened listen local=[::]:3128 remote=[::] FD 12 flags=9
>
> 2022/03/04 18:12:54.537| 33,2| AsyncCall.cc(92) ScheduleCall: 
> StartListening.cc(59) will call 
> clientListenerConnectionOpened(local=[::]:3128 remote=[::] FD 12 
> flags=9, err=0, HTTP Socket port=0x5579e31de770) [call3]
>
>
> Amos 


Re: [squid-users] SQUID refuses to listen on any TCP Port

2022-03-03 Thread Eliezer Croitoru
Should "squid -k parse" be of help for such a scenario?
Other than using the default squid.conf, it would be helpful to explain what 
basic squid.conf settings are required for squid to run.
I believe that a wiki/doc about this would be nice (volunteering to write a 
draft later on)
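For reference, the syntax check mentioned above is spelled "squid -k parse": it exits non-zero and reports the first offending directive when the file will not parse. A guarded sketch (the config path is the common default; adjust to your build, and note the guard only exists so the snippet is harmless on hosts without squid installed):

```shell
# Validate squid.conf before (re)starting squid.
if command -v squid >/dev/null 2>&1; then
    result=$(squid -k parse -f /etc/squid/squid.conf 2>&1 && echo "config OK")
else
    result="squid not installed; skipping parse check"
fi
echo "$result"
```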

Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Thursday, March 3, 2022 10:38
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SQUID refuses to listen on any TCP Port

On 3/03/22 14:48, ben wrote:
> Hi,Alex,
> 
> 
> Thanks for your help. I run squid with the option -d1 and its output is 
> as follows:
> 2022/03/03 09:17:39 kid1| Current Directory is /root
> 2022/03/03 09:17:39 kid1| Starting Squid Cache version 4.17 for 
> x86_64-pc-linux-gnu...
> 2022/03/03 09:17:39 kid1| Service Name: squid
> 2022/03/03 09:17:39 kid1| Process ID 239617
> 2022/03/03 09:17:39 kid1| Process Roles: worker
> 2022/03/03 09:17:39 kid1| With 1024 file descriptors available
> 2022/03/03 09:17:39 kid1| Initializing IP Cache...
> 2022/03/03 09:17:39 kid1| DNS Socket created at [::], FD 5
> 2022/03/03 09:17:39 kid1| DNS Socket created at 0.0.0.0, FD 8
> 2022/03/03 09:17:39 kid1| Adding nameserver 8.8.8.8 from /etc/resolv.conf
> 2022/03/03 09:17:39 kid1| Logfile: opening log 
> daemon:/var/log/squid/access.log
> 2022/03/03 09:17:39 kid1| Logfile Daemon: opening log 
> /var/log/squid/access.log
> 2022/03/03 09:17:39 kid1| Local cache digest enabled; rebuild/rewrite 
> every 3600/3600 sec
> 2022/03/03 09:17:39 kid1| Store logging disabled
> 2022/03/03 09:17:39 kid1| Swap maxSize 0 + 262144 KB, estimated 20164 
> objects
> 2022/03/03 09:17:39 kid1| Target number of buckets: 1008
> 2022/03/03 09:17:39 kid1| Using 8192 Store buckets
> 2022/03/03 09:17:39 kid1| Max Mem  size: 262144 KB
> 2022/03/03 09:17:39 kid1| Max Swap size: 0 KB
> 2022/03/03 09:17:39 kid1| Using Least Load store dir selection
> 2022/03/03 09:17:39 kid1| Current Directory is /root
> 2022/03/03 09:17:39 kid1| Finished loading MIME types and icons.
> 2022/03/03 09:17:39 kid1| HTCP Disabled.
> 
> the only line in squid.conf is http_port 3128


That itself could be the problem.

Please at least use the default config file shown in 
<https://wiki.squid-cache.org/Squid-4>, or better use the squid.conf 
file built with your custom binary.



Cheers
Amos


Re: [squid-users] Getting SSL Connection Errors (Eliezer Croitoru)

2022-02-26 Thread Eliezer Croitoru
Hey Usama,

 

I took the time to make sure that the script will work on amzn linux 2:

https://github.com/elico/squid-suppsave

 

it’s a Makefile and a tiny hardware data collection tool.

You can clone the git and then in the directory of the git repo you can enter 
the next command:

make amzn2-install-suppsave-deps support-save

 

And also, how did you install squid on the amazon linux machine? Using:

amazon-linux-extras install squid4

 

??

And also, just so you would know: I am compiling squid for amzn linux 2 and 
the files/repo is at:

https://www.ngtech.co.il/repo/amzn/2/x86_64/

 

It is not compiled with ecap support and it works for most use cases I have 
seen until now.

 

The support script will create a file at /etc/support….tar.gz

Please note that if you are using ssl bump, you will need to remove the ssl 
bump root CA details.

If you still wish to send the full file to me as is, just make sure you send 
it to me only and not to the public list.

(Unless it’s a testing machine..)

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
Usama Mehboob
Sent: Saturday, February 26, 2022 07:58
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Getting SSL Connection Errors (Eliezer Croitoru)

 

I think in the previous mailing-list message I pasted the whole content, so I 
am again sending my reply in a more confined way. :) 

Eliezer, I am running on amazon linux 2 ami which I suppose is based on
centos.
I ran the uname -a command and this is what I get;;
Linux ip-172-24-9-143.us-east-2.compute.internal
4.14.256-197.484.amzn2.x86_64 #1 SMP Tue Nov 30 00:17:50 UTC 2021 x86_64
x86_64 x86_64 GNU/Linux

[ec2-user@ip-172-24-9-143 ~]$ openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

thanks so much and let me know the script and I can run on this machine.
Usama


Message: 1
Date: Fri, 25 Feb 2022 07:01:12 +0200
From: "Eliezer Croitoru" <ngtech1...@gmail.com>
To: "'Usama Mehboob'" <musamamehb...@gmail.com>,
	<squid-users@lists.squid-cache.org>
Subject: Re: [squid-users] Getting SSL Connection Errors
Message-ID: <006f01d82a04$b678b770$236a2650$@gmail.com>
Content-Type: text/plain; charset="utf-8"

Hey Usama,



There are more missing details on the system.

If you provide the OS and squid details I might be able to provide a script 
that will pull most of the relevant details on the system.

I don't know about this specific issue yet; it seems like there is an SSL-related 
issue and it might not even be related to Squid.

(@Alex or @Christos might know better than me)



All The Bests,





Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com



From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Usama Mehboob
Sent: Thursday, February 24, 2022 23:45
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Getting SSL Connection Errors



Hi, I have a squid running on a linux box (about 16GB ram and 4 cpu) -- it 
runs fine for the most part, but when I am launching multiple jobs that are 
connecting with the salesforce Bulk API, sometimes connections are dropped. It's 
not predictable and happens only when there is a lot of load on squid. Can anyone 
shed some light on this? What can I do? Is it a file descriptor issue?

I see only these error messages from the cache logs
```
PeerConnector.cc(639) handleNegotiateError: Error (error:04091068:rsa 
routines:INT_RSA_VERIFY:bad signature) but, hold write on SSL connection on FD 
109
```
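Since a file-descriptor shortage is raised as a suspect above, one quick thing to rule out is the limit of the running worker. This is a Linux-only sketch; the process name and /proc paths are assumptions about a typical install, and squid's own ceiling can also be raised via its max_filedescriptors directive:

```shell
# Check the file-descriptor ceiling of the (oldest) squid process, if any.
pid=$(pgrep -o squid 2>/dev/null || true)
if [ -n "$pid" ] && [ -r "/proc/$pid/limits" ]; then
    fd_info=$(grep "Max open files" "/proc/$pid/limits")
else
    # no running squid found; fall back to the shell's own soft limit
    fd_info="shell soft limit: $(ulimit -n)"
fi
echo "$fd_info"
```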

Config file 
visible_hostname squid 

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
###acl Safe_ports port 21 # ftp testing after blocking itp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port

Re: [squid-users] Getting SSL Connection Errors

2022-02-24 Thread Eliezer Croitoru
Hey Usama,

 

There are more missing details on the system.

If you provide the OS and squid details I might be able to provide a script 
that will pull most of the relevant details on the system.

I don't know about this specific issue yet; it seems like there is an SSL-related 
issue and it might not even be related to Squid.

(@Alex or @Christos might know better than me)

 

All The Bests,

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
Usama Mehboob
Sent: Thursday, February 24, 2022 23:45
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Getting SSL Connection Errors

 

Hi, I have a squid running on a linux box (about 16GB ram and 4 cpu) -- it 
runs fine for the most part, but when I am launching multiple jobs that are 
connecting with the salesforce Bulk API, sometimes connections are dropped. It's 
not predictable and happens only when there is a lot of load on squid. Can anyone 
shed some light on this? What can I do? Is it a file descriptor issue?

I see only these error messages from the cache logs
```
PeerConnector.cc(639) handleNegotiateError: Error (error:04091068:rsa 
routines:INT_RSA_VERIFY:bad signature) but, hold write on SSL connection on FD 
109
```

Config file 
visible_hostname squid 

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
###acl Safe_ports port 21 # ftp testing after blocking itp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
#http_access allow CONNECT SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed

# And finally deny all other access to this proxy

# Squid normally listens to port 3128
#http_port 3128
http_port 3129 intercept
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept 
http_access allow SSL_ports #-- this allows every https website
acl step1 at_step SslBump1 
acl step2 at_step SslBump2 
acl step3 at_step SslBump3 
ssl_bump peek step1 all 

# Deny requests to proxy instance metadata 
acl instance_metadata dst 169.254.169.254 
http_access deny instance_metadata 

# Filter HTTP Only requests based on the whitelist 
#acl allowed_http_only dstdomain .veevasourcedev.com .google.com .pypi.org .youtube.com
#acl allowed_http_only dstdomain .amazonaws.com
#acl allowed_http_only dstdomain .veevanetwork.com .veevacrm.com .veevacrmdi.com .veeva.com .veevavault.com .vaultdev.com .veevacrmqa.com
#acl allowed_http_only dstdomain .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .s

Re: [squid-users] is there any squid 4.x version has delay_pools working?

2022-02-24 Thread Eliezer Croitoru
Hey Ahmad,

 

Can you please give more details on the specific issue or issues you have
verified in 4.17?

What exactly doesn't work in delay_pools? Plain HTTP download or upload
speed?

Is it only on HTTP or also on CONNECT or HTTPS or SSL-BUMP connections?
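For comparison while debugging, here is a minimal class-2 delay_pools sketch of the kind that is reported to work on the 3.x line; the source network and the byte values are illustrative only, not taken from the reporter's setup:

```
# one pool, class 2: aggregate bucket plus per-host buckets
delay_pools 1
delay_class 1 2
# aggregate: 1 MB/s sustained, 2 MB burst; per-host: 256 KB/s, 512 KB burst
delay_parameters 1 1048576/2097152 262144/524288
acl throttled_net src 192.168.0.0/16
delay_access 1 allow throttled_net
delay_access 1 deny all
```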

 

Eliezer

 

*   I was thinking about creating a webinar about Squid ssl(TLS) bump

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of
Ahmad Alzaeem
Sent: Friday, February 25, 2022 02:14
To: squid-users@lists.squid-cache.org
Subject: [squid-users] is there any squid 4.x version has delay_pools
working?

 

I tried many squid 4.x versions and none of them has working delay_pools.

I have it working on 3.x versions.

 

Is there any specific 4.x version that was tested with delay pools and works?

 

 

I would like to report it as a bug, at least in squid-4.17
(<http://www.squid-cache.org/Versions/v4/squid-4.17-RELEASENOTES.html>),
which I tested today.

 

Regards 

 



Re: [squid-users] Trying to set up SSL cache - solved!

2022-02-23 Thread Eliezer Croitoru
Hey Dave,

Lots of tutorials and documentation are out there, but many are out of sync
or not good as a from-zero guide.

What OS are you running squid on top of?

Eliezer

* We are trying to give good examples.


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Dave Blanchard
Sent: Thursday, February 24, 2022 05:09
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Trying to set up SSL cache - solved!

OK--I solved the problem by removing the "ssl_bump bump all" line. Works
fine now.

Damn, this proxy is a TOTAL PAIN IN THE ASS!! to configure. It seems like
90% of the tutorials out there are junk, largely because things keep
changing from version to version, obsoleting them. That having been said, it
does have a lot of features and when it's eventually configured right it
does work, so there's that. It's a lot like CUPS, in that way, or sendmail.

Please add more concrete examples to the Wiki reference pages! Thank you.

-- 
Dave Blanchard 


Re: [squid-users] Splice certain SNIs which served by the same IP

2022-02-22 Thread Eliezer Croitoru
Just to mention that once Squid is not splicing the connection, it has full
control at the URL level.
I do not know the scenario, but I have yet to see a similar case, probably
because I am bumping almost all connections.
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Tuesday, February 22, 2022 16:32
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Splice certain SNIs which served by the same IP

On 23/02/22 01:05, Ben Goz wrote:
> By the help of God.
> 
> If I'm using the self signed certificate that I created for the ssl 
> bump, then the browser considers it as the same certificate for any 
> domain I'm connecting to?
> 

Key thing to remember is that TLS server certificate validates the 
*server*, not the URL domain name.

HTTP/2 brings the feature of alternate server names. So once connected 
and talking, a server can tell the client a bunch of other domains that 
can be fetched from it.

Since you are using SSL-Bump "splice" to setup the connection Squid has 
no control or interaction over what the server and client tell each 
other within that connection.


HTH
Amos


Re: [squid-users] squid proxy really slow for web requests

2022-02-22 Thread Eliezer Croitoru
Thanks Rob,

 

A good catch.

It’s a very hard one to find.

 

All The Bests,

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: robert k Wild  
Sent: Tuesday, February 22, 2022 10:38
To: Eliezer Croitoru 
Cc: Squid Users 
Subject: Re: [squid-users] squid proxy really slow for web requests

 

Hi Eliezer,

 

Thanks for the reply, in the end I had to restart our firewall, as our squid 
server is on the dmz and squid users/clients accessing the squid server are on 
the lan, so they have to go through the firewall 

 

Once restarted I could access the webpages and I didn't get the timeout error 
any more

 

Thanks, 

Rob

 

On Tue, 22 Feb 2022, 05:37 Eliezer Croitoru <ngtech1...@gmail.com> wrote:

Hey Rob,

 

I didn't really understand the situation.

Since we are in 2022 I believe a screen capture(video/gif) for the scenario 
would be useful.

You can use the next tool to capture the scenario:

https://getsharex.com/

 

(if you are using windows)

 

Thanks,

Eliezer 

 

----

Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of robert k Wild
Sent: Monday, February 21, 2022 18:42
To: Squid Users <squid-users@lists.squid-cache.org>
Subject: [squid-users] squid proxy really slow for web requests

 

hi all,

 

today my squid responding to web requests from different clients is really slow

 

for example when i go on firefox/chrome and open multiple tabs to different 
websites, it normally shows the "error url page" as ive denied all websites 
apart from some

 

and some of the websites take way too long and i get "the connection has timed out"

 

on my squid server im running htop and pinging google and both seem fine

 

anything else what it could be

 

thanks,

rob


-- 

Regards, 

Robert K Wild.



Re: [squid-users] squid proxy really slow for web requests

2022-02-21 Thread Eliezer Croitoru
Hey Rob,

 

I didn't really understand the situation.

Since we are in 2022 I believe a screen capture(video/gif) for the scenario 
would be useful.

You can use the next tool to capture the scenario:

https://getsharex.com/

 

(if you are using windows)

 

Thanks,

Eliezer 

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
robert k Wild
Sent: Monday, February 21, 2022 18:42
To: Squid Users 
Subject: [squid-users] squid proxy really slow for web requests

 

hi all,

 

today my squid responding to web requests from different clients is really slow

 

for example when i go on firefox/chrome and open multiple tabs to different 
websites, it normally shows the "error url page" as ive denied all websites 
apart from some

 

and some of the websites take way too long and i get "the connection has timed out"

 

on my squid server im running htop and pinging google and both seem fine

 

anything else what it could be

 

thanks,

rob


-- 

Regards, 

Robert K Wild.



Re: [squid-users] Splice certain SNIs which served by the same IP

2022-02-21 Thread Eliezer Croitoru
Thanks Christos,

I was aware of such things but haven't seen such a case.
Is there any way to "reproduce" this?
I believe it should be documented in the wiki.

Thanks,

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of 
Christos Tsantilas
Sent: Monday, February 21, 2022 11:41
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Splice certain SNIs which served by the same IP

Hi Ben,

When HTTP/2 is used, requests for two different domains may be served over 
the same TLS connection if both domains are served from the same remote 
server and use the same TLS certificate.
There is a description here:
https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/

And a similar problem report here:
https://bugs.chromium.org/p/chromium/issues/detail?id=1176673

Regards,
Christos


On 14/2/22 3:49 μ.μ., Ben Goz wrote:
> By the help of God.
> 
> Hi,
> My squid version is 4.15, using it in a tproxy configuration.
> 
> I'm using ssl bump to intercept https connection, but I want to splice 
> several domains.
> I have a problem that when I'm splicing some google domains eg. 
> youtube.com then gmail.com domain is also spliced.
> 
> I know that it is very common for google servers to host multiple 
> domains on a single server.
> And I suspect that when I'm splicing for example youtube.com it'll 
> also splice google.com.
> 
>   Here are my squid configurations for the ssl bump:
> 
> https_port  ssl-bump tproxy generate-host-certificates=on 
> options=ALL dynamic_cert_mem_cache_size=4MB 
> cert=/usr/local/squid/etc/ssl_cert/myCA.pem 
> dhparams=/usr/local/squid/etc/dhparam.pem sslflags=NO_DEFAULT_CA
> 
> acl DiscoverSNIHost at_step SslBump1
> 
> acl NoSSLIntercept ssl::server_name  "/usr/local/squid/etc/url-no-bump"
> acl NoSSLInterceptRegexp ssl::server_name_regex -i 
> "/usr/local/squid/etc/url-no-bump-regexp"
> ssl_bump splice NoSSLInterceptRegexp_always
> ssl_bump splice NoSSLIntercept
> ssl_bump splice NoSSLInterceptRegexp
> ssl_bump peek DiscoverSNIHost
> ssl_bump bump all
> 
> 


Re: [squid-users] squid-5.4 blocking on ipv6 outage

2022-02-20 Thread Eliezer Croitoru
Hey,

 

Bugs to the rescue

+1

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
Jason Haar
Sent: Monday, February 21, 2022 03:44
To: Squid Users 
Subject: [squid-users] squid-5.4 blocking on ipv6 outage

 

Hi there

 

I've noticed that the Internet ipv6 is not quite as reliable as ipv4, in that 
squid reports it cannot connect to web servers with an ipv6 error when the web 
server is still available over ipv4.

 

eg right now one of our Internet-based web apps (which has 2 ipv6 and 2 ipv4 IP 
addresses mapped to its DNS name) is not responding over ipv6 for some reason 
(I dunno - not involved myself) - but is working fine over ipv4. Squid-5.4 is 
erroring out, saying that it cannot connect to the first ipv6 address with a 
"no route to host" error. But if I use good-ol' telnet to the DNS name, telnet 
shows it trying-and-failing against both ipv6 addresses and then succeeding 
against the ipv4. ie it works and squid doesn't. BTW the same squid server is 
currently fine with ipv6 clients talking to it and talking over ipv6 to 
Internet hosts like google.com - ie this is an ipv6 outage 
on one Internet host where its ipv4 is still working.

 

This doesn't seem like a negative_dns_ttl setting issue; it seems like squid 
just tries one address of a multiple-IP DNS record and stops trying? I even got 
tcpdump up and can see that when I do a "shift-reload" on the webpage, squid 
only sends a few SYN packets to the same non-working IPv6 address - it doesn't 
even try the other 3 IPs?

 

I also checked squidcachemgr.cgi and the DNS record isn't even cached in "FQDN 
Cache Stats and Contents", which I guess is consistent with its opinion that 
it's not working.


 

Any ideas what's going on there? thanks!

 

-- 

Cheers

 

Jason Haar

Information Security Manager, Trimble Navigation Ltd.

Phone: +1 408 481 8171

PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1



Re: [squid-users] Splice certain SNIs which served by the same IP

2022-02-20 Thread Eliezer Croitoru
Hey Ben,

 

I have seen your email; however, I didn't have enough time to respond.

I and others need some free time…

I am more than willing to test this issue in my local test environment.

I can test it on Oracle Enterprise Linux 8 with the latest 4.x version.

We can simplify things by creating a very specific environment without any 
unknowns.

You will need to provide the full details of the testing setup and the content 
of:

acl NoSSLIntercept ssl::server_name  "/usr/local/squid/etc/url-no-bump"
acl NoSSLInterceptRegexp ssl::server_name_regex -i 
"/usr/local/squid/etc/url-no-bump-regexp"



In my environment it works as expected without any issues, while I am not using 
ssl::server_name_regex.

The docs clearly state:

acl aclname ssl::server_name_regex [-i] \.foo\.com ...

  # regex matches server name obtained from various sources [fast]

 

 

So you should try to use:

acl aclname ssl::server_name [option] .foo.com ...
  # matches server name obtained from various sources [fast]

 

Instead as a starter point.
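A minimal sketch of that starting point: the file path matches the original config, while the two domains shown in the comment are illustrative stand-ins for the real list (one server-name pattern per line; a leading dot matches the domain and its subdomains):

```
# /usr/local/squid/etc/url-no-bump (illustrative contents):
#   .youtube.com
#   .googlevideo.com
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name "/usr/local/squid/etc/url-no-bump"
ssl_bump splice NoSSLIntercept
ssl_bump peek DiscoverSNIHost
ssl_bump bump all
```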

 

I understand you need some help, but I and others have other obligations in life, 
so it will happen from time to time that no one is free to try and help you.

 

All The Bests,

Eliezer

 

*   If someone had provided me with enough food and other living 
expenses, I might have been free enough to help you.

 

----

Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: squid-users  On Behalf Of Ben 
Goz
Sent: Thursday, February 17, 2022 14:47
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Splice certain SNIs which served by the same IP

 

By the help of God.

Any insights?

 

Thanks,

Ben

 

On Mon, Feb 14, 2022 at 15:49 Ben Goz <ben.go...@gmail.com> wrote:

By the help of God.

 

Hi,

My squid version is 4.15, using it in a tproxy configuration.

 

I'm using ssl bump to intercept https connection, but I want to splice several 
domains.

I have a problem that when I'm splicing some google domains eg. youtube.com then

gmail.com domain is also spliced.

 

I know that it is very common for google servers to host multiple domains on 
a single server.

And I suspect that when I'm splicing for example youtube.com it'll also splice google.com.

 

 Here are my squid configurations for the ssl bump:

 

https_port  ssl-bump tproxy generate-host-certificates=on options=ALL 
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/ssl_cert/myCA.pem 
dhparams=/usr/local/squid/etc/dhparam.pem sslflags=NO_DEFAULT_CA

acl DiscoverSNIHost at_step SslBump1

acl NoSSLIntercept ssl::server_name  "/usr/local/squid/etc/url-no-bump"
acl NoSSLInterceptRegexp ssl::server_name_regex -i 
"/usr/local/squid/etc/url-no-bump-regexp"
ssl_bump splice NoSSLInterceptRegexp_always
ssl_bump splice NoSSLIntercept
ssl_bump splice NoSSLInterceptRegexp
ssl_bump peek DiscoverSNIHost
ssl_bump bump all

 



Re: [squid-users] Squid plugin sponsor

2022-02-14 Thread Eliezer Croitoru
Hey David,

 

Transparent authentication using Kerberos can only be used with a directory 
service.

There are a couple of ways to authenticate…

You can use an “automatic” hotspot website that will use cookies to 
authenticate the client once in a long while.

If the client request is not recognized or the client is not recognized for any 
reason it’s reasonable to redirect him into a captive portal.

I can try to work on a demo but I need to know more details about the network 
structure and to verify what is possible and not.

Every device ie Switch and router or AP etc should be mentioned to understand 
the scenario.

While you assume it's a chimera, I still believe it's just a three-headed 
Kerberos, which… was proved to exist… in the movies and in the virtual world.

 

Eliezer 

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

 

From: David Touzeau  
Sent: Monday, February 14, 2022 03:21
To: Eliezer Croitoru 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid plugin sponsor

 


Thank you for your answer, Eliezer, and for all these details; I've done some 
research to avoid soliciting the community with simple questions.

The objective is to not ask anything to the user and not to break his 
navigation with a session request.
To summarize, An SSO identification like kerberos with the following 
constraints:

1.  unknown Mac addresses 
2.  DHCP IP with a short lease
3.  No Active Directory connection.




The network is in VLAN (Mac addr masked) and in DHCP with a short lease.
Even the notion of hotspot is complicated when you can't focus on a network 
attribute.
I try to find a way directly in the HTTP protocol. 
This is the reason why a fake could be a solution.

But I think I'm trying to catch a chimera and we'll have to redesign the 
network architecture.

regards

Le 12/02/2022 à 06:27, Eliezer Croitoru a écrit :

Hey David,

 

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal way possible
(taking into account money, coding complexity, and other human factors).

 

You will need to authenticate the client against the main AUTH service.

There is a definitive way or statistical way to implement this solution.

With AD or Kerberos it’s possible to implement the solution in such a way that 
windows will
“transparently” authenticate to the proxy service.

However you must understand that all of this requires an infrastructure that 
will provide every piece of the setup.

If your setup doesn’t contain RDP-like servers then it’s possible that you can 
authenticate a user by IP, compared
to pinning every connection to a specific user.

Also, the “cost” of non-transparent authentication is that the user will be 
required to enter (manually or automatically) 
the username and the password.

A hotspot-like setup is called a “captive portal” and it’s a very simple setup 
to implement with Active Directory.

It’s also possible to implement a transparent authentication for such a setup 
based on session tokens.
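As one concrete illustration of that session-token idea, Squid ships an ext_session_acl helper that can back a captive-portal flow; below is a minimal squid.conf sketch (the helper path, database path, timeouts, and portal URL are hypothetical, not from this thread):

```conf
# Clients with a live session record count as logged in; everyone else
# is redirected to the portal (which would then start a session).
external_acl_type session ttl=60 negative_ttl=5 children-max=5 %SRC /usr/lib/squid/ext_session_acl -t 3600 -b /var/lib/squid/session.db
acl logged_in external session
deny_info 302:http://portal.example.local/login logged_in
http_access deny !logged_in
```

Note that the helper tracks activity per %SRC key, so in this sketch a whole client IP (not a single connection) is treated as one session.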

 

You actually don’t need to create a “fake” helper for such a setup, but you can 
create one that is based on Linux.

It’s an “advanced” topic but, if you ask me, it’s possible for you to take 
this in steps.

The first step would be to use a session helper that will authenticate the user 
and will identify the user based on their IP address.

If it’s a wireless setup you can use RADIUS-based authentication (it can also 
be implemented on a wired setup).

Once you authenticate the client, transparently or in another way, you can 
limit the usage of the username to a specific client, and with that comes a 
guarantee that a username will not be used from two sources.

I don’t know about your experience, but the usage of a captive portal is very 
common in such situations.

The other option is to create an agent on the client side that will identify 
the user against the proxy/auth service, and it will create a situation in 
which an authorization is acquired based on some degree of authentication.

 

In most SSO environments it’s possible that, per request/domain/other, there 
is a transparent validation.

 

In all the above scenarios that require authentication, the right way to do it 
would be to use the proxy as a configured proxy rather than a transparent one.

I believe that one thing to consider is that once you authenticate against a 
RADIUS service you would just minimize the user interaction.

The main point, from what I understand, is to actually minimize the 
authentication steps for the client.

 

My suggestion for you is to first try and assess the complexity of a session 
helper, RADIUS, and a captive portal.

These are steps that you will need to take in order to assess the necessity of 
transparent SSO.

 

Also take your time to compare how a captive 

Re: [squid-users] https interception problem with Squid 5

2022-02-14 Thread Eliezer Croitoru
Can you share the squid.conf so I can try to reproduce the issue here locally 
and verify how it could be resolved?

Your OS and other relevant details, such as the “squid -v” output, might help.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

 

From: squid-users  On Behalf Of 
n...@fabbricapolitica.com
Sent: Monday, February 14, 2022 11:16
To: squid-users@lists.squid-cache.org
Subject: [squid-users] https interception problem with Squid 5

 

Good morning,

I have been using Squid as an http caching proxy for a long time.

It's the second time I have configured Squid for https caching and 
interception/inspection.

The first time everything was fine.

The second... not so much.

I use the ssl_bump feature.

With Squid 4.13 and Openssl v 1.1.1k-1 all works well without errors or 
warnings.

With Squid v. 5.2.1 and Openssl v. 3.0.1, I got one error and one warning.

I tried to use the same squid.conf for Squid 4 and Squid 5.

Here are the problems with Squid 5.

1) ERROR

I checked the configuration with the command "squid -k parse" and I got this 
error: ERROR: Unable to configure Ephemeral ECDH: error:0480006C:PEM 
routines::no start line

If I remove the curve name from tls-dh in the config file, the error disappears.

First question: What is the problem? How can I keep the curve name 
(prime256v1)?

2) WARNING

I checked the configuration with the command "squid -k parse" and I got this 
warning: WARNING: Failed to decode DH parameters 
'/var/lib/squid/ssl_cert/squid-self-signed_dhparam.pem'

I generated the file for the Diffie-Hellman algorithm with this command (it 
worked with Squid4): openssl dhparam -outform PEM -out 
squid-self-signed_dhparam.pem 2048
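One way to narrow the warning down (my suggestion, not something from this thread) is to check whether a freshly generated PEM file still parses under the exact OpenSSL 3 build that Squid 5 is linked against; the file path is illustrative, and 512 bits is used here only to keep the example fast (use 2048 in production, as in the original command):

```shell
# Generate DH parameters (PKCS#3 "BEGIN DH PARAMETERS" PEM), as for Squid 4:
openssl dhparam -outform PEM -out /tmp/squid_dhparam_test.pem 512

# Validate the file with the same OpenSSL build Squid links against; if this
# fails, the file/OpenSSL pairing is the problem rather than squid.conf:
openssl dhparam -in /tmp/squid_dhparam_test.pem -check -noout
openssl version
```

If the check passes but Squid still warns, the mismatch is more likely between Squid's TLS option parsing and OpenSSL 3 than in the file itself.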

Second question: Do you have an idea how to fix this?

Thank you.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid plugin sponsor

2022-02-11 Thread Eliezer Croitoru
Hey David,

 

The general name of this concept is SSO service.

It can have single or multiple backends.

The main question is how to implement the solution in the optimal way possible
(taking into account money, coding complexity, and other human factors).

 

You will need to authenticate the client against the main AUTH service.

There is a definitive way or statistical way to implement this solution.

With AD or Kerberos it’s possible to implement the solution in such a way that 
windows will
“transparently” authenticate to the proxy service.

However you must understand that all of this requires an infrastructure that 
will provide every piece of the setup.

If your setup doesn’t contain RDP-like servers then it’s possible that you can 
authenticate a user by IP, compared
to pinning every connection to a specific user.

Also, the “cost” of non-transparent authentication is that the user will be 
required to enter (manually or automatically) 
the username and the password.

A hotspot-like setup is called a “captive portal” and it’s a very simple setup 
to implement with Active Directory.

It’s also possible to implement a transparent authentication for such a setup 
based on session tokens.

 

You actually don’t need to create a “fake” helper for such a setup, but you can 
create one that is based on Linux.

It’s an “advanced” topic but, if you ask me, it’s possible for you to take 
this in steps.

The first step would be to use a session helper that will authenticate the user 
and will identify the user based on their IP address.

If it’s a wireless setup you can use RADIUS-based authentication (it can also 
be implemented on a wired setup).

Once you authenticate the client, transparently or in another way, you can 
limit the usage of the username to a specific client, and with that comes a 
guarantee that a username will not be used from two sources.

I don’t know about your experience, but the usage of a captive portal is very 
common in such situations.

The other option is to create an agent on the client side that will identify 
the user against the proxy/auth service, and it will create a situation in 
which an authorization is acquired based on some degree of authentication.

 

In most SSO environments it’s possible that, per request/domain/other, there 
is a transparent validation.

 

In all the above scenarios that require authentication, the right way to do it 
would be to use the proxy as a configured proxy rather than a transparent one.

I believe that one thing to consider is that once you authenticate against a 
RADIUS service you would just minimize the user interaction.

The main point, from what I understand, is to actually minimize the 
authentication steps for the client.

 

My suggestion for you is to first try and assess the complexity of a session 
helper, RADIUS, and a captive portal.

These are steps that you will need to take in order to assess the necessity of 
transparent SSO.

 

Also take your time to compare how a captive portal is configured in the next 
general products:

*   Palo Alto
*   FortiGate
*   Untangle
*   Others

 

From the documentation you will see the different ways and “grades” in which 
they implement the solutions.

Once you know what the market offers, and the equivalent costs, you will 
probably understand what you want and what you can afford to invest in the 
development process of each part of the setup.

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-users  On Behalf Of 
David Touzeau
Sent: Friday, February 11, 2022 17:03
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid plugin sponsor

 

Hello

Thank you, but this is not the objective, and this is the reason for needing 
the "fake".
Access to the Kerberos or NTLM ports of the AD is not possible. An LDAP server 
would be present with account replication.
The idea is to do a silent authentication without joining the AD.
We do not need the double user/password credential; only the user sent by the 
browser is required.

If the user has an Active Directory session then his account is automatically 
sent without him having to take any action.
If the user is in a workgroup then the account sent will not be in the LDAP 
database and will be rejected.
I don't need to argue about the security value of this method. It saves us from 
building an over-engineered contraption to make a kind of hotspot.

Le 11/02/2022 à 05:55, Dieter Bloms a écrit :

Hello David,
 
for me it looks like you want to use Kerberos authentication.
With Kerberos authentication the user doesn't have to authenticate against
the proxy. The authentication is done in the background.
 
Maybe this link will help:
 
https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 
On Thu, Feb 10, David Touzeau wrote:
 

Hi
 
What we are looking for is to retrieve a "

Re: [squid-users] Squid plugin sponsor

2022-02-10 Thread Eliezer Croitoru
Hey Dieter,

I have tried to use the mentioned wiki document to try and re-create a LAB
with AD 2012-2019.
I got stuck with a setup that is not usable in terms of transparent
authentication.
I have tried on the next OS:
* Debian 10/11
* Ubuntu 18.04/20.04
* CentOS 7/8
* Oracle Enterprise Linux 7/8

I would be happy to try and re-create the lab here and to make sure that
there will be a well documented configuration guide.
If there is a good tutorial or guide I would be happy to try and verify if
it works in my lab.

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Dieter Bloms
Sent: Friday, February 11, 2022 06:56
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid plugin sponsor

Hello David,

for me it looks like you want to use Kerberos authentication.
With Kerberos authentication the user doesn't have to authenticate against
the proxy. The authentication is done in the background.

Maybe this link will help:

https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

On Thu, Feb 10, David Touzeau wrote:

> Hi
> 
> What we are looking for is to retrieve a "user" token without having to ask
> anything from the user.
> That's why we're looking at Active Directory credentials.
> Once the user account is retrieved, a helper would be in charge of checking
> if the user exists in the LDAP database.
> This is to avoid any connection to an Active Directory
> Maybe this is impossible
> 
> 
> Le 10/02/2022 à 05:03, Amos Jeffries a écrit :
> > On 10/02/22 01:43, David Touzeau wrote:
> > > Hi
> > > 
> > > I would like to sponsor the improvement of ntlm_fake_auth to support
> > > new protocols
> > 
> > ntlm_* helpers are specific to NTLM authentication. All LanManager (LM)
> > protocols should already be supported as well as currently possible.
> > NTLM is formally discontinued by MS and *very* inefficient.
> > 
> > NP: NTLMv2 with encryption does not *work* because that encryption step
> > requires secret keys the proxy is not able to know.
> > 
> > > or go further produce a new negotiate_kerberos_auth_fake
> > > 
> > 
> > With current Squid this helper only needs to produce an "OK" response
> > regardless of the input. The basic_auth_fake does that.
> > 
> > Amos
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users

> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users


-- 
Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Vulnerabilities with squid 4.15

2022-02-10 Thread Eliezer Croitoru
Hey Robert,

 

Don’t rush with the move from CentOS 7 to Ubuntu yet; CentOS 7 has good support 
for at least a year from now.

I can try to help you by providing RPMs that have support for eCAP, which I 
understand you need.

Alternatively I can try to build an upgrade process for your self-compiled 
version.

 

I can recommend any of:

*   Amazon Linux 2
*   Oracle Enterprise Linux 8\7
*   openSUSE

 

These are general alternatives for which I can support the RPM builds.

I have also built binaries for Ubuntu and Debian, not as a .deb package file, 
but they will be signed by me.

 

As Amos mentioned the current issue is with WCCP based setups.

Can you please elaborate more if you are using WCCP in your setup?

Also, are you using SSL-Bump by any chance? (I really don’t know of a setup 
that doesn’t require this these days.)

 

If you could share more information on your setup, so that I might be able to 
clone it, it would help a lot.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: robert k Wild  
Sent: Thursday, February 10, 2022 21:28
To: NgTech LTD 
Cc: Squid Users 
Subject: Re: [squid-users] Vulnerabilities with squid 4.15

 

I have squid running on CentOS 7.9; I will move to Ubuntu 20.04.3 as CentOS is 
officially dead to me.

 

I have compiled from source, i.e. make && make install, as I'm running squid 
with the squidclamav and c-icap modules.

 

All instances I have compiled from source, i.e. make && make install.

 

I did a yum install clamav 

 

On Thu, 10 Feb 2022, 19:20 NgTech LTD <ngtech1...@gmail.com> wrote:

Hey Robert,

 

First: your question is not silly.

The answer will differ based on the complexity of the upgrade process.

What OS are you using, and did you compile squid from sources or 
install it from a specific package?

Also, what is your squid setup purpose?

 

Eliezer 

 

On Thu, 10 Feb 2022, 20:56, robert k Wild <robertkw...@gmail.com> wrote:

Hi all,

 

Is there any security vulnerabilities with squid 4.15, should I update to 4.17 
or is it OK to still use as my squid proxy server 

 

Sorry for silly question

 

Thanks, 

Rob


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 5.4 is available

2022-02-10 Thread Eliezer Croitoru
Thanks Fred,

 

What is this image's general purpose?

In what environment can it be used?

I have seen that the docker-compose contains three containers:

*   Squid
*   e2guardian
*   other

 

If you have a couple of minutes to elaborate on this specific use case, it 
would be helpful.

I am working on a setup that will run Squid in Linux IP namespaces.

I have seen it can work wonders to separate the interception part of the system 
from the actual OS.

Palo Alto and a couple of other products have a management interface and 
“plane”, usually named the “Control Plane”.

 

Thanks,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: FredB  
Sent: Wednesday, February 9, 2022 18:41
To: squid-users@lists.squid-cache.org; Eliezer Croitoru 
Subject: Re: [squid-users] [squid-announce] Squid 5.4 is available

 

Hello All

Here docker image builds, automatic at each official release

Amd64 and Arm (64 bits os only, tested on raspberry v3,v4)

https://hub.docker.com/r/fredbcode/squid

Fred

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 5.4 is available

2022-02-09 Thread Eliezer Croitoru
Hey All,

I have just published the latest 5.4 RPMS for:
* Oracle Linux 7+8
* CentOS Linux 7+8
* Amazon Linux 2

All the above include my latest patch, which allows intercepted connections
to be passed towards the destination host
in cases where the DNS resolution comes from another DNS server that is not
shared between the clients and the proxy (8.8.8.8, 1.1.1.1, etc.).

The next patch series has been used on 5.4-1:
https://gist.github.com/elico/eb0f4e99331af5c23a8f5999f405d37b

And the next patch was used on 4.17-8
https://gist.github.com/elico/630fa57d161b0c0b59ef68786d801589

All The Bests,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-announce  On
Behalf Of Amos Jeffries
Sent: Wednesday, February 9, 2022 10:53
To: squid-annou...@lists.squid-cache.org
Subject: [squid-announce] Squid 5.4 is available

The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-5.4 release!


This release is a bug fix release resolving several issues
found in the prior Squid-5 releases.


The major changes to be aware of:

  * Bug 5190: Preserve configured order of intermediate CA
certificate chain

  Previous Squid-5 releases inverted the CA certificate chain order
  when delivering the server handshake. Breaking clients which are
  unable to reorder the chain. This release once again conforms with
  TLS specification requirements.


  * Bug 5187: Properly track (and mark) truncated store entries

  Squid used an error-prone approach to identifying truncated responses:
  The response is treated as whole unless somebody remembers to mark
  it as truncated. This dangerous default naturally resulted in bugs
  where truncated responses are treated as complete under various
  conditions.

  This change reverses that approach: Responses not explicitly marked as
  whole are treated as truncated. This change affects all Squid-server
  FwdState-dispatched communications: HTTP, FTP, Gopher, and WHOIS. It
  also affects responses received from the adaptation services.

  Transactions that failed due to origin server or peer timeout (a common
  source of truncation) are now logged with a _TIMEOUT %Ss suffix and
  ERR_READ_TIMEOUT/WITH_SRV %err_code/%err_detail.

  Transactions prematurely canceled by Squid during client-Squid
  communication (usually due to various timeouts) now have WITH_CLT
  default %err_detail. This detail helps distinguish otherwise
  similarly-logged problems that may happen when talking to the client or
  to the origin server/peer.


  * Bug 5134: assertion failed: Transients.cc:221: "old == e"

  This bug appears when caching is enabled and a worker dies and
  is automatically restarted. The SMP cache management was missing
  some necessary cross-checks on hash collision before updating
  stored objects. The worker recovery logic detected the hash collision
  better and would abort with the given error.


  * Bug 5132: Close the tunnel if to-server conn closes after client

  This bug has been present since 5.0.4 and shows up as a growing number
  of open (aka "hung") TCP connections used by Squid regardless of client
  traffic levels.

  It can be expected to affect on all HTTPS traffic, and proxy using
  SSL-Bump features. With the problem being worse the more CONNECT
  tunnels are handled.


  * Bug 5188: Fix reconfiguration leaking tls-cert=... memory

  This bug was found investigating other issues. Installations which
  are reconfiguring often may have been seeing sub-optimal memory
  usage. It has otherwise a minimal impact.



   All users of Squid-5 are encouraged to upgrade as soon as
   possible.


See the ChangeLog for the full list of changes in this and
earlier releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v5/RELEASENOTES.html
when you are ready to make the switch to Squid-5

This new release can be downloaded from our HTTP or FTP servers

   http://www.squid-cache.org/Versions/v5/
   ftp://ftp.squid-cache.org/pub/squid/
   ftp://ftp.squid-cache.org/pub/archive/5/

or the mirrors. For a list of mirror sites see

   http://www.squid-cache.org/Download/http-mirrors.html
   http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug
report.
   https://bugs.squid-cache.org/


Amos Jeffries
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-07 Thread Eliezer Croitoru
Hey David,

 

Since handle_stdout runs in its own thread, its sole purpose is to send 
results to stdout.

If I run the next code in a simple program without the 0.5 second sleep:

 while RUNNING:
     if quit > 0:
         return
     while len(queue) > 0:
         item = queue.pop(0)
         sys.stdout.write(item)
         sys.stdout.flush()
     time.sleep(0.5)

 

 

what will happen is that the software will run at 100% CPU, looping over and 
over on the size of the queue, while occasionally spitting some data to stdout.

Adding a small delay of 0.5 seconds allows some “idle” time for the CPU in the 
loop, preventing it from consuming all the CPU time.

It’s a very old technique, and there are more efficient ones, but it’s enough 
to demonstrate that a simple threaded helper is much better than any PHP code 
that was not meant to be run as a STDIN/STDOUT daemon/helper.
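For what it's worth, the same producer/consumer hand-off can also avoid polling entirely: Python's queue.Queue.get() blocks until an item arrives, so the writer thread costs no CPU while idle. A minimal sketch of that variant (mine, not from the original gist):

```python
import queue
import sys
import threading

def writer(q):
    # get() blocks until an item is available -- no sleep/poll loop needed.
    while True:
        item = q.get()
        if item is None:      # sentinel: time to shut down
            break
        sys.stdout.write(item)
        sys.stdout.flush()

q = queue.Queue()
t = threading.Thread(target=writer, args=(q,))
t.start()

q.put("0 OK\n")   # a producer thread would enqueue helper replies like this
q.put(None)       # ask the writer to exit
t.join()
```

The sentinel-based shutdown replaces the quit/RUNNING flag polling, and the blocking get() replaces the time.sleep(0.5) back-off.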

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: David Touzeau  
Sent: Monday, February 7, 2022 02:42
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

 

Sorry  Elizer

It was a mistake... No, your code is clean..
Impressive for the first shot
Many thanks for your example, we will run our stress tool to see the 
difference...

Just a question

Why did you add 500 milliseconds of sleep in handle_stdout? Is it to let 
squid close the pipe?




Le 06/02/2022 à 11:46, Eliezer Croitoru a écrit :

Hey David,

 

Not a fully completed helper, but it seems to work pretty nicely and might be 
better than what exists already:

https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

 

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []
threads = []

RUNNING = True

quit = 0
rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return

    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()

    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")
for thread in threads:
    thread.join()
print("All threads stopped.")
## END
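To wire a helper like this into Squid as an external ACL, a hedged squid.conf sketch (the helper path, ACL name, and format codes are illustrative; note that concurrency=N requires the helper to echo back the channel ID, as this one does with arr[0]):

```conf
# Illustrative wiring only; tune children-max/concurrency for your load.
external_acl_type api_check children-max=5 concurrency=50 ttl=60 %SRC /usr/local/bin/threaded-helper-example.py
acl api_allowed external api_check
http_access allow api_allowed
```

With concurrency enabled, Squid pipelines many lookups over each helper's stdin instead of blocking one request per helper process.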

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-users  <mailto:squid-users-boun...@lists.squid-cache.org> 
 On Behalf Of David Touzeau
Sent: Friday, February 4, 2022 16:29
To: squid-users@lists.squid-cache.org 
<mailto:squid-users@lists.squid-cache.org> 
Subject: Re: [squid-users] external helper development

 

Elizer,

Thanks for all this advice; indeed your arguments are valid regarding opening 
a socket, sending data, receiving data, and closing the socket, unlike direct 
access to a regex or a memory entry, even if the calculation has already been 
done.

But what surprises me the most is that we have produced a threaded Python 
plugin, for which I provide the code below. 
The php code is like your mentioned example ( No thread, just a loop and outp

Re: [squid-users] [ext] Re: Absolute upper limit for filedescriptors in squid-6?

2022-02-06 Thread Eliezer Croitoru
It has systemd. Use it.

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Ralf Hildebrandt  
Sent: Friday, February 4, 2022 10:12
To: Eliezer Croitoru 
Cc: 'Squid Users' 
Subject: Re: [squid-users] [ext] Re: Absolute upper limit for filedescriptors 
in squid-6?

* Eliezer Croitoru :

> What OS are you using exactly?

Ubuntu 20.04 on amd64

Ralf Hildebrandt
Charité - Universitätsmedizin Berlin
Geschäftsbereich IT | Abteilung Netzwerk

Campus Benjamin Franklin (CBF)
Haus I | 1. OG | Raum 105
Hindenburgdamm 30 | D-12203 Berlin

Tel. +49 30 450 570 155
ralf.hildebra...@charite.de
https://www.charite.de

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-06 Thread Eliezer Croitoru
Hey David,

 

Not a fully completed helper, but it seems to work pretty nicely and might be 
better than what exists already:

https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/21ff1bbc0cf3d91719db27d9d027652e8bd3de4e/threaded-helper-example.py

 

#!/usr/bin/env python

import sys
import time
import urllib.request
import signal
import threading

# set debug mode to True or False
debug = False
#debug = True

queue = []
threads = []

RUNNING = True

quit = 0
rand_api_url = "https://cloud1.ngtech.co.il/api/test.php"

def sig_handler(signum, frame):
    sys.stderr.write("Signal is received:" + str(signum) + "\n")
    global quit
    quit = 1
    global RUNNING
    RUNNING = False

def handle_line(line):
    if not RUNNING:
        return
    if not line:
        return
    if quit > 0:
        return

    arr = line.split()
    response = urllib.request.urlopen(rand_api_url)
    response_text = response.read()

    queue.append(arr[0] + " " + response_text.decode("utf-8"))

def handle_stdout(n):
    while RUNNING:
        if quit > 0:
            return
        while len(queue) > 0:
            item = queue.pop(0)
            sys.stdout.write(item)
            sys.stdout.flush()
        time.sleep(0.5)

def handle_stdin(n):
    while RUNNING:
        line = sys.stdin.readline()
        if not line:
            break
        if quit > 0:
            break
        line = line.strip()
        thread = threading.Thread(target=handle_line, args=(line,))
        thread.start()
        threads.append(thread)

signal.signal(signal.SIGUSR1, sig_handler)
signal.signal(signal.SIGUSR2, sig_handler)
signal.signal(signal.SIGALRM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGQUIT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)

stdout_thread = threading.Thread(target=handle_stdout, args=(1,))
stdout_thread.start()
threads.append(stdout_thread)

stdin_thread = threading.Thread(target=handle_stdin, args=(2,))
stdin_thread.start()
threads.append(stdin_thread)

while RUNNING:
    time.sleep(3)

print("Not RUNNING")
for thread in threads:
    thread.join()
print("All threads stopped.")
## END

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-users  On Behalf Of 
David Touzeau
Sent: Friday, February 4, 2022 16:29
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

 

Elizer,

Thanks for all this advice; indeed your arguments are valid regarding opening 
a socket, sending data, receiving data, and closing the socket, unlike direct 
access to a regex or a memory entry, even if the calculation has already been 
done.

But what surprises me the most is that we have produced a threaded Python 
plugin, for which I provide the code below. 
The PHP code is like your mentioned example (no thread, just a loop that 
outputs OK). 

The results: after 6k requests squid freezes and no surfing can be done, 
whereas with the PHP code we can reach up to 10K requests and squid is happy.
Really, we do not understand why the Python version is so slow.

Here is the Python code using threads:

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

def __init__(self):
self._exiting = False
self._cache = {}

def exit(self):
self._exiting = True

def stdout(self, lineToSend):
try:
sys.stdout.write(lineToSend)
sys.stdout.flush()

except IOError as e:
if e.errno==32:
# Error Broken PIPE!"
pass
except:
# other execpt
pass

def run(self):
while not self._exiting:
if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
line = sys.stdin.readline()
LenOfline=len(line)

if LenOfline==0:
self._exiting=True
break

if line[-1] == '\n':line = line[:-1]
channel = None
options = line.split()

try:
if options[0].isdigit(): channel = options.pop(0)
except IndexError:
self.stdout("0 OK first=ERROR\n")
continue

# Processing here

try:
self.stdout("%s OK\n" % channel)
except:
self.stdout("%s ERROR first=ERROR\n" % channel)




class Main(object):
def __init__(self):
self._threads = []
self._exiting = False

Re: [squid-users] external helper development

2022-02-06 Thread Eliezer Croitoru
Hey David,

It would take me more than a couple of seconds to write an example threaded
Python helper; however, it is pretty simple to see why this helper is slow:
it uses a select statement and threading in a very wrong way.
Before anything else, try to compare equivalent helpers between PHP and Python.
The next example helper can be used in comparison to a PHP helper and is much 
faster:
https://gist.githubusercontent.com/elico/03938e3a796c53f7c925872bade78195/raw/85f46ce58db12f30ed99a46c5f300dd8be401674/helper-1.py

#!/usr/bin/env python

import sys
import time

# set debug mode to True or False
debug = False
#debug = True

logintime = 60  # minutes; placeholder for a real per-session login-time source

while True:
    line = sys.stdin.readline()
    if not line:
        break  # EOF: squid closed our stdin
    line = line.strip()
    arr = line.split()
    if not arr:
        continue  # ignore empty lines
    msg = ""

    client = ""  # stub: a real helper would fetch the client's login timestamp here

    if debug:
        sys.stderr.write("__debug info__" + str(time.time()) + ": \"" + line + "\"\n")
    if client and time.time() - time.mktime(time.gmtime(client)) < int(time.time() - logintime * 60):
        sys.stdout.write(arr[0] + " OK \n")
        if debug:
            sys.stderr.write("__debug info__ : \"" + line + "\" and time in db is: " + str(client) + "\n")
    else:
        sys.stdout.write(arr[0] + " ERR \n")
        if debug:
            sys.stderr.write("__debug info__ : " + line + '\n')
    sys.stdout.flush()
## END

If you have a specific API you want to test requests against, let me know and I
will try to give an example via:
* HTTP
* DNS
* Others

With the above example you would just need more helpers, plus concurrency
support in the squid external_acl helper configuration.
With enough helpers, the stdin buffers will be enough to compensate for the
missing threading implementation.
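For reference, a squid.conf sketch of that suggestion; the ACL name, helper path, format fields, and children counts are assumptions to adapt:

```
# hypothetical external ACL with concurrency enabled: squid prefixes each
# request line with a channel ID which the helper must echo back
external_acl_type fast_check children-max=10 children-startup=2 \
    concurrency=50 ttl=60 %SRC %URI /usr/local/bin/helper-1.py
acl allowed_dst external fast_check
http_access allow allowed_dst
```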

Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: mailto:ngtech1...@gmail.com

From: squid-users  On Behalf Of 
David Touzeau
Sent: Friday, February 4, 2022 16:29
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

Eliezer,

Thanks for all this advice. Indeed your arguments are valid: opening a socket,
sending data, receiving data and closing the socket costs more than direct
access to a regex or a memory entry, even if the calculation has already been
done.

But what surprises us most is that we have produced a threaded Python plugin,
whose code I provide below.
The PHP code is like the example you mentioned (no thread, just a loop that
outputs OK).

The results: after 6k requests squid freezes and no surfing is possible, while
with the PHP code we can reach up to 10k requests and squid is happy.
We really do not understand why Python is so slow.

Here a python code using threads

#!/usr/bin/env python
import os
import sys
import time
import signal
import locale
import traceback
import threading
import select
import traceback as tb

class ClienThread():

    def __init__(self):
        self._exiting = False
        self._cache = {}

    def exit(self):
        self._exiting = True

    def stdout(self, lineToSend):
        try:
            sys.stdout.write(lineToSend)
            sys.stdout.flush()
        except IOError as e:
            if e.errno == 32:
                # Error: broken pipe!
                pass
        except:
            # other exceptions
            pass

    def run(self):
        while not self._exiting:
            if sys.stdin in select.select([sys.stdin], [], [], 0.5)[0]:
                line = sys.stdin.readline()
                if len(line) == 0:
                    self._exiting = True
                    break

                if line[-1] == '\n':
                    line = line[:-1]
                channel = None
                options = line.split()

                try:
                    if options[0].isdigit():
                        channel = options.pop(0)
                except IndexError:
                    self.stdout("0 OK first=ERROR\n")
                    continue

                # Processing here

                try:
                    self.stdout("%s OK\n" % channel)
                except:
                    self.stdout("%s ERROR first=ERROR\n" % channel)


class Main(object):
    def __init__(self):
        self._threads = []
        self._exiting = False
        self._reload = False
        self._config = ""

        for sig, action in (
            (signal.SIGINT, self.shutdown),
            (signal.SIGQUIT, self.shutdown),
            (signal.SIGTERM, self.shutdown),
            (signal.SIGHUP, lambda s, f: setattr(self, '_reload', True)),
            (signal.SIGPIPE, signal.SIG_IGN),
        ):
            try:
                signal.signal(sig, action)
            except AttributeError:
                pass

    def shutdown(self, sig=None, frame=None):
        self._exiting = True
        self.stop_threads()

    def start_threads(self):
        sThread = ClienThread(

Re: [squid-users] [ext] Re: Absolute upper limit for filedescriptors in squid-6?

2022-02-03 Thread Eliezer Croitoru
What OS are you using exactly?

Thanks,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Ralf Hildebrandt  
Sent: Thursday, February 3, 2022 09:42
To: NgTech LTD 
Cc: Squid Users 
Subject: Re: [squid-users] [ext] Re: Absolute upper limit for filedescriptors 
in squid-6?

* NgTech LTD :
> Hey Ralph,
> 
> Did you try to configure the squid proxy systemd service and squid.conf
> with the mentioned max fd?

I'm not using systemd to start squid (using runit here)

Ralf Hildebrandt
Charité - Universitätsmedizin Berlin
Geschäftsbereich IT | Abteilung Netzwerk

Campus Benjamin Franklin (CBF)
Haus I | 1. OG | Raum 105
Hindenburgdamm 30 | D-12203 Berlin

Tel. +49 30 450 570 155
ralf.hildebra...@charite.de
https://www.charite.de

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-03 Thread Eliezer Croitoru
Hey David,

 

First, PHP is a good language; however, it doesn't handle STDIN/STDOUT
helpers well and crashes more than once without any warnings.

It's documented in the PHP docs (I don't remember exactly where).

Regarding PHP being faster than Python, it's pretty simple to test and
validate whether the speed is worth the cost of the crashes.

What Python and PHP code did you use in your tests? (I would be happy to try
and test it.)

You can see this session helper written in python:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Python

 

And about the cache in each helper: the cost of a cache in a single helper is
not much in terms of memory compared to network access.

Again, it's possible to test and verify this on a loaded system to get
results. The delay itself can be seen from squid's side in the cache manager
statistics.

 

You can also try to compare the next ruby helper:

https://wiki.squid-cache.org/EliezerCroitoru/SessionHelper

 

About a shared "base" which allows helpers to avoid computing the query: it's
a good argument; however, it depends on the cost of pulling from the cache
compared to calculating the answer.

A very simple string comparison or regex match would probably be faster than
reaching shared storage in many cases.
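As a rough illustration of that trade-off, here is a sketch comparing a local regex match against a stand-in for a networked cache round trip; the 0.2 ms per-query latency and the hostnames are assumed figures, not measurements of any real deployment:

```python
import re
import time

PATTERN = re.compile(r"^(www\.)?example\.(com|net)$")
CACHE = {"www.example.com": True}

def check_regex(host):
    # pure in-process work: a compiled regex match
    return PATTERN.match(host) is not None

def check_remote(host):
    # stand-in for a redis/memcached round trip: ~0.2 ms latency per query
    time.sleep(0.0002)
    return CACHE.get(host, False)

def bench(fn, host, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        fn(host)
    return time.perf_counter() - start

if __name__ == "__main__":
    print("regex:            %.4fs" % bench(check_regex, "www.example.com"))
    print("simulated remote: %.4fs" % bench(check_remote, "www.example.com"))
```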

 

Also take into account the "concurrency" support on the helper side.

A helper that supports parallel processing of requests/lines can do better
than many single helpers in more than one use case.

In any case I would suggest enabling request concurrency on the squid side,
since the STDIN buffer will emulate some level of concurrency by itself and
will allow squid to keep moving forward faster.
 

Just to mention that SquidGuard used a single-helper cache for a very long
time, i.e. every single SquidGuard helper has its own copy of the whole
configuration and database files in memory.

 

And again, if you have the option to implement a server/service model, where
the helpers contact a main service, you will be able to implement a much
faster internal in-memory cache compared to a redis/memcached/other external
daemon (this needs to be tested).

 

A good example of this is ufdbguard, whose helpers are clients of the main
service, which does all the heavy lifting and also holds a single copy of the
DB.

 

I have implemented SquidBlocker this way and have seen that it outperforms
any other service I have tried so far.

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-users  On Behalf Of 
David Touzeau
Sent: Thursday, February 3, 2022 14:24
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

 

Hi Eliezer

You are right in a way, but when squid loads multiple helpers, each helper
uses its own cache.
Using a shared "base" allows helpers to avoid recomputing a query that
another helper has already answered.

Concerning PHP, what we find strange is that in our tests a simple loop with
an "echo OK" makes PHP go about 1.5x faster than Python.

Re: [squid-users] How to fix the error error:transaction-end-before-headers in access log

2022-02-02 Thread Eliezer Croitoru
Hey,

In the case of health checks, it's possible to write them in a way that will
not result in this error line.
If we knew the L4 balancer vendor, we might be able to suggest the right way
to health-check squid.

A nice example in haproxy can be seen at:
https://www.serverlab.ca/tutorials/linux/network-services/how-to-configure-haproxy-health-checks/

The next line should be good enough to understand:
option httpchk HEAD /squid-internal-static/icons/SN.png 
HTTP/1.1\r\nHost:\ proxy-host-or-ip\r\n

Another option is:
option httpchk HEAD /squid-internal-mgr/menu HTTP/1.1\r\nHost:\ 
proxy-host-or-ip\r\n

This will require a couple of manager ACLs to make sure only the LB has
access to the internal mgr pages.
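A squid.conf sketch of such ACLs; the balancer's source address (192.0.2.10) is a placeholder for your LB:

```
# allow only the load balancer to reach the cache manager pages
acl loadbalancer src 192.0.2.10
http_access allow loadbalancer manager
http_access deny manager
```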

Hope This Helps,
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, January 20, 2022 19:40
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] How to fix the error 
error:transaction-end-before-headers in access log

On 1/20/22 2:42 AM, Matus UHLAR - fantomas wrote:
> On 20.01.22 07:04, Hg Mi wrote:
>> We currently using squid 4.13 on ubuntu 18.04,  the following error
>> generates really frequently in the access.log.
>>
>> error:transaction-end-before-headers
> 
> it means that either client or server closed connection before

In most cases, this means the client opened a TCP connection to a Squid
listening port and then closed it without sending the HTTP headers. To
figure out who is at fault, you need to figure out who is making these
connections to Squid and why they are closing them without sending HTTP
headers (if that is what they are actually doing).

Bugs notwithstanding, server closures should not lead to
transaction-end-before-headers records.

>> Is this a bug in squid4?  or it was misconfigured in my environment?
> 
> most likely your environment. don't you have any content filter in front of
> your proxy?

... or anything that would "probe" or "health check" Squid http_port or
https_port at TCP level.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external helper development

2022-02-02 Thread Eliezer Croitoru
Hey Andre,

Every language has a "cost" for its qualities.
For example, Golang is a very nice language that offers a relatively simple
path to concurrency support and cross-hardware compilation/compatibility.
One cost in Golang is that the binary is the size of an OS kernel.
In Python you must write everything with specific positioning and
indentation, and threading is not simple for a novice to implement.
However, when you see what has been written in Python, note that most of the
OpenStack APIs and systems are written in it, and that means something.
I like Ruby very much, but it doesn't support threading by nature; it
supports "concurrency".
Squid doesn't implement threading but implements "concurrency".

Don't touch PHP as a helper!!! (+1 to Alex)

Also take into account that Redis or Memcached is less preferable in many
cases if the library doesn't re-use the existing connection for multiple
queries.
Squid also caches helper answers, so it's possible to implement the helper
and the ACLs in such a way that squid's caching lowers the access to the
external API and/or redis/memcached/DB.
I also have good experience with some libraries that implement caching, which
I have used inside a helper with a limited size as a "level 1" cache.
If you implement both the helper and the server side of the solution, like
ufdbguard does, you should be able to optimize the system to take a very
high load.
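A bounded "level 1" cache inside a helper can be sketched with the standard library alone; `expensive_lookup` is a hypothetical stub standing in for the external API or DB call:

```python
from functools import lru_cache

def expensive_lookup(host):
    # stub standing in for a categorization API / DB query
    return "news" if host.endswith(".example.com") else "unknown"

@lru_cache(maxsize=4096)  # bounded level-1 cache, evicts least recently used
def categorize(host):
    return expensive_lookup(host)

def handle_line(line):
    # with concurrency enabled, the first field is the channel ID,
    # the second the host to categorize (format is an assumption)
    channel, host = line.split()[:2]
    return "%s OK tag=%s\n" % (channel, categorize(host))
```

Repeated queries for the same host are then served from process memory without touching the backend at all.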

I hope the above will help you.
Eliezer


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of 
André Bolinhas
Sent: Wednesday, February 2, 2022 00:09
To: 'Alex Rousskov' ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

Hi
Thanks for the reply.
I will take a look at Rust as you recommend.
Also, between Python and Go, which is the best for multithreading and concurrency?
Does Rust support multithreading and concurrency?
Best regards

-----Original Message-----
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: 1 February 2022 22:01
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] external helper development

On 2/1/22 16:47, André Bolinhas wrote:
> Hi
> 
> I'm building an external helper to get the categorization of a 
> website. I know how to build it, but I need your opinion about the best 
> language for the job in terms of performance, bottlenecks, and I/O blocking.
> 
> The helper will work like this:
> 
> 1º It will check hot memory for a faster response (memcache or redis).
> 
> 2º If the result does not exist in hot memory, it will check an external 
> API to fetch the category and save it in hot memory.
> 
> In what language do you recommend developing such a helper? PHP, Python, Go?

If this helper is for long-term production use, and you are willing to learn 
new things, then use Rust[1]. Otherwise, use whatever language you are the most 
comfortable with already (except PHP), especially if that language has good 
libraries/wrappers for the external APIs you will need to use.

Alex.
[1] https://www.rust-lang.org/
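The two-tier lookup described above (hot cache first, then an external API) can be sketched in Python; the dict stands in for redis/memcached and `fetch_from_api` is a hypothetical stand-in for the categorization service:

```python
import time

HOT = {}           # stand-in for redis/memcached: domain -> (category, stored_at)
HOT_TTL = 3600.0   # seconds to keep an entry "hot"

def fetch_from_api(domain):
    # hypothetical external categorization call; hardcoded for the sketch
    return "social" if "facebook" in domain else "uncategorized"

def categorize(domain, now=None):
    now = time.time() if now is None else now
    hit = HOT.get(domain)
    if hit is not None and now - hit[1] < HOT_TTL:
        return hit[0]                  # step 1: answer from hot memory
    category = fetch_from_api(domain)  # step 2: fall back to the external API
    HOT[domain] = (category, now)      # ...and remember the answer
    return category
```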

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid url_rewrite_program how to return a kind of TCP reset

2022-01-31 Thread Eliezer Croitoru
Hey David,

 

It works only with ICAP or ECAP but I was talking about ICAP.

I wrote an example golang service at:

https://github.com/elico/bgu-icap-example

 

It is licensed with 3-clause BSD so you can use it freely.

It's pretty simple to understand the code, and I have used it in more
than one production environment in the past years.

The above modifies the response of a page, while you could just push a 
template into the page.

Just so you would see that the production is using one that is similar to the 
next one:

https://github.com/elico/squidblocker-icap-server/blob/master/sb_icap.go

 

Eliezer



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-users  On Behalf Of 
David Touzeau
Sent: Monday, January 31, 2022 10:54
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid url_rewrite_program how to return a kind of 
TCP reset

 

Does adapted_http_access support url_rewrite_program? It seems to only 
support ecap/icap. 

Le 31/01/2022 à 03:52, Amos Jeffries a écrit :

On 31/01/22 13:20, David Touzeau wrote: 



But it makes 2 connections to the squid for just stopping queries. 
It seems not really optimized. 


The joys of using URL modification to decide security access. 





I notice that for several reasons i cannot switch to an external_acl 


:( 





Is there a way / idea ? 


<http://www.squid-cache.org/Doc/config/adapted_http_access/> 


Amos 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Tune Squid proxy to handle 90k connection

2022-01-31 Thread Eliezer Croitoru
Hey Andre,

 

I would not recommend 5.x yet, since a couple of bugs block it from being
used as stable.

I believe that your current setup is pretty good.

The only things that might affect the system are the authentication and the ACLs.

As long as these ACL rules are static they should not affect the operation
too much; however,
when adding external authentication and external helpers for other things,
it's possible to see some slowdown in specific scenarios.

As long as the credential checks and the ACLs are fast enough the setup
should be fast, but only testing will show how real-world usage
will affect the service.

I believe that 5 workers is enough; also take into account that the external
helpers will require CPU too, so don't rush into
changing the number of workers just yet.
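For reference, the SMP side of such a setup in squid.conf; the core IDs in cpu_affinity_map are placeholders and depend on this 16-thread box's core numbering:

```
# five kid workers, as in the current setup
workers 5
# optionally pin workers to distinct physical cores to limit
# hyperthread contention (core numbering is machine-specific)
cpu_affinity_map process_numbers=1,2,3,4,5 cores=1,3,5,7,9
```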

 

All The Bests,

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: André Bolinhas  
Sent: Monday, January 31, 2022 15:47
To: 'NgTech LTD' 
Cc: 'Squid Users' 
Subject: RE: [squid-users] Tune Squid proxy to handle 90k connection

 

Hi

I will not use cache in this project.

Yes, I will need

*   ACL (based on Domain, AD user, Headers, User Agent…)
*   Authentication
*   SSL bump just for one domain.
*   DNS resolution (I will use Unbound DNS service for this)

 

Also, I will divide the traffic between two Squid box instead just one.

 

So each box will handle around 50k requests.

 

Each box has:

*   CPU(s): 16
*   Threads per core: 2
*   Cores per socket: 8
*   Sockets: 1
*   Intel Xeon Silver 4208 @ 2.10GHz
*   96GB RAM
*   1TB RAID-0 SSD

 

At this time I have 5 workers on each Squid box and the Squid version is 4.17; 
do you recommend more workers, or upgrading squid to version 5?

 

Best regards

 

From: NgTech LTD mailto:ngtech1...@gmail.com> > 
Sent: 31 January 2022 04:59
To: André Bolinhas mailto:andre.bolin...@articatech.com> >
Cc: Squid Users mailto:squid-users@lists.squid-cache.org> >
Subject: Re: [squid-users] Tune Squid proxy to handle 90k connection

 

I would recommend you start with zero caching.

However, to choose the right solution you must give more details.

For example, there is IBM research showing that for about 90k connections you 
can use VMs on top of such hardware with the Apache web server.

If you have other requirements for the proxy besides the 90k requests, it 
would be wise to mention them.

 

Do you need any specific acls?

Do you need authentication?

etc..

 

For a simple forward proxy I would suggest using a simpler solution and, if 
possible, not logging anything as a starting point.

Any local disk I/O will slow down the machine.

 

About the URL categorization: I do not have experience with ufdbguard at such 
scale, but 90k rps would be pretty heavy for any software to handle...

It's doable to implement such a setup, but it will require testing.

Will you use ssl bump in this setup?

If I have all the technical specs/requirements and details I might be able 
to suggest better than I can now.

Take into account that each squid worker can handle about 3k rps tops (in my 
experience), and it's a juggling act between two sides, so... 3k is really 
3k+3k+external_acls+dns...

 

I believe that in this case an example of configuration from the squid 
developers might be usefull.

 

Eliezer

 

 

On Tue, 25 Jan 2022 at 18:42, André Bolinhas 
mailto:andre.bolin...@articatech.com> > wrote:

Any tip about my last comment?

-----Original Message-----
From: André Bolinhas mailto:andre.bolin...@articatech.com> > 
Sent: 21 January 2022 16:36
To: 'Amos Jeffries' mailto:squ...@treenet.co.nz> >; 
squid-users@lists.squid-cache.org <mailto:squid-users@lists.squid-cache.org> 
Subject: RE: [squid-users] Tune Squid proxy to handle 90k connection

Thanks Amos
Yes, you are right; I will put a second box with HAProxy in front to balance 
the traffic.
About the sockets, I can't double them because it is a physical machine. Do 
you think disabling hyperthreading in the BIOS will help, given that we have 
other multi-threaded services inside the box, like the unbound DNS?

Just a few more questions:
1º The server has 92GB of RAM; do you think adding swap will help squid 
performance?
2º Right now we are using squid 4.17; do you recommend upgrading or 
downgrading to any specific version?
3º We need categorization, for which we use an external helper; do you 
recommend keeping this approach with ACLs, or moving to some kind of 
ufdbguard service?

Best regards
-----Original Message-----
From: squid-users mailto:squid-users-boun...@lists.squid-cache.org> > On Behalf Of Amos Jeffries
Sent: 21 January 2022 16:05
To: squid-users@lists.squid-cache.org 
<mailto:squid-users@lists.squid-cache.org> 
Subject:

[squid-users] The status of AIA ie: TLS code: X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY ?

2022-01-25 Thread Eliezer Croitoru
Hey,

I have recently seen more than one site that doesn't provide the full CA
bundle chain.
Examples:
https://www.ssllabs.com/ssltest/analyze.html?d=www.cloudschool.org
https://www.ssllabs.com/ssltest/analyze.html?d=certificatechain.io

I wanted to somehow get this issue logged properly.
Currently squid sends the client a customized 503 page and the next line in
cache.log:
2022/01/25 19:01:25 kid1| ERROR: negotiating TLS on FD 26:
error:1416F086:SSL routines:tls_process_server_certificate:certificate
verify failed (1/-1/0)

Were there any improvements in this area in the 5.x or 6.x branches?
Also, the logging is very uninformative regarding the culprit of the issue;
I would have expected the remote host ip:port and the SNI to be logged as
well in the above-mentioned line.

Currently I do not know of a way to identify these specific sites from the
logs.
I was thinking about writing a daemon that will do the trick automatically
for 4.17.
Any ideas on the subject?

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 4.17 and 5.3 SSL BUMP issue: SSL_ERROR_RX_RECORD_TOO_LONG

2022-01-24 Thread Eliezer Croitoru
I sat for a while thinking about the best approach to the subject, and the
next patch seems reasonable enough to me:
https://gist.github.com/elico/630fa57d161b0c0b59ef68786d801589

Let me know if this patch violates anything that I might not have taken into
account.

Thanks,
Eliezer

* Tested to work in my specific scenario, in which I really don't care about
caching when under a DoS.


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Alex Rousskov
Sent: Monday, January 24, 2022 16:54
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] 4.17 and 5.3 SSL BUMP issue:
SSL_ERROR_RX_RECORD_TOO_LONG

On 1/24/22 2:42 AM, Eliezer Croitoru wrote:
> 2022/01/24 09:11:20 kid1| SECURITY ALERT: Host header forgery detected on
> local=142.250.179.228:443 remote=10.200.191.171:51831 FD 16 flags=33
(local
> IP does not match any domain IP)

As you know, Squid improvements related to these messages have been
discussed many times. I bet the ideas summarized in the following old
email remain valid today:

http://lists.squid-cache.org/pipermail/squid-users/2019-July/020764.html


If you would like to address browser's SSL_ERROR_RX_RECORD_TOO_LONG
specifically (the error in your email Subject line), then that is a
somewhat different matter: According to your packet capture, Squid sends
a plain text HTTP 409 response to a TLS client. That is not going to
work with popular browsers (for various technical and policy reasons).

Depending on the SslBump stage where the Host header forgery was
detected, Squid could bump the client connection to deliver that error
response; in that case, the browser may still refuse to show the
response to the user because the browser will not trust the certificate
that Squid would have to fake without sufficient origin server info.
However, the browser error will be different and arguably less confusing
to admins and even users.

https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feat
ure.2C_enhance.2C_of_fix_something.3F


HTH,

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] 4.17 and 5.3 SSL BUMP issue: SSL_ERROR_RX_RECORD_TOO_LONG

2022-01-23 Thread Eliezer Croitoru
 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.248  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.265  5 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.266  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.266  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.276  4 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.276  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.276  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.291  4 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.291  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.291  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.306  4 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.306  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.306  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.320  4 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.320  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.320  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008592.336  5 10.200.191.171 NONE/200 0 CONNECT 142.250.179.228:443
- HIER_NONE/- - www.google.com splice
1643008592.336  0 10.200.191.171 NONE/409 4077 CONNECT
www.google.com:443 - HIER_NONE/- text/html www.google.com -
1643008592.336  0 10.200.191.171 NONE/000 0 NONE
error:transaction-end-before-headers - HIER_NONE/- - - -
1643008594.154145 10.200.191.171 NONE/200 0 CONNECT 104.21.81.98:443 -
ORIGINAL_DST/104.21.81.98 - www.ruby-forum.com bump
## END

Squid returns the response:
HTTP/1.1 409 Conflict
Server: squid/4.17
Mime-Version: 1.0
Date: Mon, 24 Jan 2022 07:13:00 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3680
X-Squid-Error: ERR_CONFLICT_HOST 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from px2-043.ngtech.home
X-Cache-Lookup: NONE from px2-043.ngtech.home:3128
Via: 1.1 px2-043.ngtech.home (squid/4.17)
Connection: close
...

And squid is indeed right.
The local DNS returns the following resolution for www.google.com:
> www.google.com
Server:  [10.200.191.3]
Address:  10.200.191.3

Non-authoritative answer:
Name:www.google.com
Addresses:  2a00:1450:4009:80a::2004
  216.58.212.196

While the remote resolution is:
> www.google.com
Server:  DC..XX
Address:  192.168.X.X

Non-authoritative answer:
Name:www.google.com
Addresses:  2a00:1450:4009:81d::2004
  142.250.179.228

So yes, it's a different IP than expected; however, squid should (to my
understanding) have the option to handle such cases,
maybe by disabling caching or something else.
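One hedged mitigation is making squid resolve through the same DNS the clients use, so both sides of the host-header check see the same answers; the resolver address below is a placeholder for the clients' DC resolver:

```
# use the clients' resolver (placeholder address) instead of the proxy's
dns_nameservers 192.168.0.10
# keep positive answers briefly so rotating CDN records converge faster
positive_dns_ttl 2 minutes
```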

The whole server config ie: /etc/squid is at:
http://cloud1.ngtech.co.il/squid/support-save-2022-01-24_09:31:10.tar.gz

I have created a setup which uses mysql to store and dump specific acls
files.
It has a nice Makefile with a support-save option which dumps many details of
the machine, including the most relevant HW and OS details.
I have tried to patch squid to "fix" the issue but didn't have enough time to
resolve it.
I hope this will help to add the ability to handle this situation (in the
past I did not see the real need for a solution, and I was wrong).

If any details are missing, let me know.
I am pretty sure that there is an open bug for this issue and I am more than
welcome to receive a redirection towards it with a link.

Thanks,


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] RES: Squid 4.13 does not access Facebook

2022-01-08 Thread Eliezer Croitoru
Use Ansible to do it…

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

 

 

From: squid-users  On Behalf Of 
Graminsta
Sent: Friday, January 7, 2022 23:40
To: 'Bruno de Paula Larini' ; 
squid-users@lists.squid-cache.org
Subject: [squid-users] RES: Squid 4.13 does not access Facebook

 

I thought it was filtered by you before being made public.
Other sensitive content I sent was erased before it reached this list.

It's too bad.
Now I have to change the password of about 200 VPSs, hell.

 

Marcelo Rodrigo

 

From: Bruno de Paula Larini [mailto:bruno.lar...@riosoft.com.br]
Sent: Friday, January 7, 2022 17:13
To: Graminsta; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 4.13 does not access Facebook

 

This is a public mailing list my friend.
Your IP address and password are available on the internet now.

I recommend you disable that SSH service.

Take care next time.


Em 07/01/2022 16:11, Graminsta escreveu:

Hello :)

 

I have a specific problem with facebook access in squid 4.13 with Ubuntu 20.

It can access all other IPv6 websites, but not https://www.facebook.com or any
other Facebook URL.

 

Outside of Squid, access works via curl on the same server with the
same IPv6 addresses.

It happens with all IPv6 servers I have.

 

 

Please don’t make the following data public from here:

I am an IPv6 proxy provider

You can use the proxy access to test it. It's 181.191.73.174:4000, user hugoebah,
pw senhabesta11

I will provide you with SSH access to save you the trouble of making a
lab.

SSH 181.191.73.174 user root pw esqueci11@

I am using this server in my Instagram automation, but if you have to reboot it
or do testing, you can do it. It's just dummy accounts.

 

Tks a lot ;)

 

Marcelo Rodrigo

Whatsapp 11 9 6854-3878

 

___
squid-users mailing list
squid-users@lists.squid-cache.org <mailto:squid-users@lists.squid-cache.org> 
http://lists.squid-cache.org/listinfo/squid-users

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error 503 accessing Instagram/facebook via IPv6

2021-11-02 Thread Eliezer Croitoru
Hey,

Is this a tproxy or intercept setup?

Eliezer

-Original Message-
From: squid-users  On Behalf Of
marcelorodr...@graminsta.com.br
Sent: Saturday, October 30, 2021 09:10
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Error 503 accessing Instagram/facebook via IPv6

Hi,

I have been using squid for several years and am very grateful for the 
solution.

For the last 3-4 days my customers haven't been able to access
www.instagram.com and Facebook through IPv6 addresses that had been working as
proxies for years.

I only get a 503 error after a timeout.
The strangest thing is that I can connect on 20-30% of the attempts.

I didn't make any changes to the VPSs.

I switched IPv6s provider to a totally different subnet, changed several 
equipments, but it didn't work at all.

If I access through the same VPS, using the same IPs Squid runs on, with the
curl command but not going through the proxy, it works.

But through Squid it no longer accesses as before.
This only happens with v6 IPs. On v4, Squid runs normally.

I upgraded to Ubuntu 20.04 and Squid 5.2-10 but everything remains the 
same.

My agency depends on it and I've already lost half of my clients.
Could you please help me, even if I have to pay for support?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] About Squid 4, AD, Kerberos and AD group auth.

2021-09-27 Thread Eliezer Croitoru
Thanks,

 

I am looking for an English guide.

I am not too familiar with the language of the guide in the link.

 

Eliezer

 

From: Hernan Saltiel  
Sent: Sunday, September 26, 2021 15:16
To: Eliezer Croitoru 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] About Squid 4, AD, Kerberos and AD group auth.

 

Hi Eliézer,

 

I tried this article, sent to me by José Rodriguez, and after just a few 
adjustments it worked like a charm: 
https://www.sysadminsdecuba.com/2019/10/intergracion-de-squid-kerberos-con-samba4-como-addc/

 

Thanks a lot and best regards,

 

HeCSa.

 

 

 

On Sun, Sep 26, 2021, 03:09 Eliezer Croitoru <ngtech1...@gmail.com> wrote:

Hey,

 

Have you tried these instructions:

https://support.kaspersky.com/KWTS/6.1/en-US/166336.htm

 

Eliezer

 

From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of Hernan Saltiel
Sent: Monday, September 20, 2021 16:17
To: Amos Jeffries <squ...@treenet.co.nz>
Cc: squid-users@lists.squid-cache.org 
<mailto:squid-users@lists.squid-cache.org> 
Subject: Re: [squid-users] About Squid 4, AD, Kerberos and AD group auth.

 

Hi Amos!

Thanks a lot for your response.

I already checked this page; it talks about using negotiate_kerberos_auth with 
Squid 3.2 or newer, but there is no place in the document with an 
example of how to use it. 

Then I went to the manpage for this command (negotiate_kerberos_auth man page - 
squid | ManKier <https://www.mankier.com/8/negotiate_kerberos_auth>), where I 
can see three lines on how to add this to squid.conf. But I don't 
know whether I need to follow a procedure to configure winbind, Samba, or any 
other thing, or how to configure squid.conf to work with groups and their 
permissions. 

Is there any place with full examples on using that config?

Thanks again, and best regards,

 

HeCSa.

 

 

 

On Mon, Sep 20, 2021 at 3:10 AM Amos Jeffries <squ...@treenet.co.nz> wrote:

On 20/09/21 5:32 am, Hernan Saltiel wrote:
>  If you know about this, and can point me out to some URL I'm not 
> seeing, I'll thank you.

Please see the FAQ written by that helper's author
<https://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos>


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org <mailto:squid-users@lists.squid-cache.org> 
http://lists.squid-cache.org/listinfo/squid-users




 

-- 

HeCSa

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] About Squid 4, AD, Kerberos and AD group auth.

2021-09-26 Thread Eliezer Croitoru
Hey,

 

Have you tried these instructions:

https://support.kaspersky.com/KWTS/6.1/en-US/166336.htm

 

Eliezer

 

From: squid-users  On Behalf Of 
Hernan Saltiel
Sent: Monday, September 20, 2021 16:17
To: Amos Jeffries 
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] About Squid 4, AD, Kerberos and AD group auth.

 

Hi Amos!

Thanks a lot for your response.

I already checked this page; it talks about using negotiate_kerberos_auth with 
Squid 3.2 or newer, but there is no place in the document with an 
example of how to use it. 

Then I went to the manpage for this command (negotiate_kerberos_auth man page - 
squid | ManKier), where I 
can see three lines on how to add this to squid.conf. But I don't 
know whether I need to follow a procedure to configure winbind, Samba, or any 
other thing, or how to configure squid.conf to work with groups and their 
permissions. 

Is there any place with full examples on using that config?

Thanks again, and best regards,

 

HeCSa.

 

 

 

On Mon, Sep 20, 2021 at 3:10 AM Amos Jeffries <squ...@treenet.co.nz> wrote:

On 20/09/21 5:32 am, Hernan Saltiel wrote:
>  If you know about this, and can point me out to some URL I'm not 
> seeing, I'll thank you.

Please see the FAQ written by that helper's author



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users




 

-- 

HeCSa

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 4.16 is available

2021-09-15 Thread Eliezer Croitoru
Hey Amos and Alex,

I have tested the 4.16 version and it seems to work steadily under basic loads.

Eliezer

-Original Message-
From: squid-announce  On
Behalf Of Amos Jeffries
Sent: Thursday, July 22, 2021 7:24 AM
To: squid-annou...@lists.squid-cache.org
Subject: [squid-announce] Squid 4.16 is available

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-4.16 release!


This release is a bug fix release resolving several issues found in
the prior Squid releases.


The major changes to be aware of since 4.15:

  * Regression Fix: --with-valgrind-debug build

Squid-4.15 changes caused a build failure linking with valgrind
memory tracking tool. This release fixes that to allow memory
leak tracing again.


  * Bug 4528: ICAP transactions quit on async DNS lookups

Squid has never reliably been able to resolve hostnames configured
for ICAP services. They might work most of the time when added to
/etc/hosts, but not always - and would rarely work if relying on
remote DNS servers.

This release adds full support for DNS remote resolution of
service names in icap_service directive. Regardless of where the
hostname is resolved from it can now be expected to resolve and
also properly obey DNS TTL expiry for IP address changes.


  * Bug 5128: Translation: Fix '% i' typo in es/ERR_FORWARDING_DENIED

The Spanish translation of the ERR_FORWARDING_DENIED template had
for some time omitted the URL that was having issues being fetched.
The template published with this release and current squid-langpack
downloads will now display the URL identically to other error pages.


   All users of Squid are encouraged to upgrade as soon as possible.


See the ChangeLog for the full list of changes in this and earlier
releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v4/RELEASENOTES.html
when you are ready to make the switch to Squid-4

This new release can be downloaded from our HTTP or FTP servers

   http://www.squid-cache.org/Versions/v4/
   ftp://ftp.squid-cache.org/pub/squid/
   ftp://ftp.squid-cache.org/pub/archive/4/

or the mirrors. For a list of mirror sites see

   http://www.squid-cache.org/Download/http-mirrors.html
   http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
   http://bugs.squid-cache.org/


Amos Jeffries
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid performance issues

2021-09-05 Thread Eliezer Croitoru
From:

https://serverfault.com/a/717273/227456

 


The number of file descriptors is set in the systemd unit file. By default this 
is 16384, as you can see in /usr/lib/systemd/system/squid.service.

To override this, create a locally overriding /etc/systemd/system/squid.service 
which changes the amount of file descriptors. It should look something like 
this:

.include /usr/lib/systemd/system/squid.service

 

[Service]

LimitNOFILE=65536

Do not edit the default file /usr/lib/systemd/system/squid.service, as it will 
be restored whenever the package is updated. That is why we put the override in a local 
file.

After creating this file, tell systemd about it:

systemctl daemon-reload

and then restart squid.

systemctl restart squid
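As a side note, newer systemd releases deprecate the `.include` directive; an equivalent approach (paths assumed here, not verified against any specific distro package) is a drop-in file at /etc/systemd/system/squid.service.d/override.conf containing only the changed setting:

```
[Service]
LimitNOFILE=65536
```

followed by the same systemctl daemon-reload and systemctl restart squid. The effective limit can then be checked in /proc/<squid-pid>/limits.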

 

 

Eliezer

 

 

 

From: NgTech LTD  
Sent: Tuesday, August 31, 2021 6:11 PM
To: Marcio B. 
Cc: Squid Users 
Subject: Re: [squid-users] Squid performance issues

 

Hey Marcio,

 

You will need to add a systemd service file that extends the current one with 
more FileDescriptors.

 

I cannot write a guide now; I hope to be able to write one later.

 

If anyone is able to help faster go ahead.

 

Eliezer

 

 

On Tue, Aug 31, 2021, 18:05, Marcio B. <marcioba...@gmail.com> wrote:

Hi,

I implemented a Squid server, version 4.6, on Debian and tested it for about 
40 days. However, when I put it into production today, Internet browsing was 
extremely slow.

In /var/log/syslog I'm getting the following messages:

Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:35 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors

Aug 31 11:29:51 srvproxy squid[4041]: WARNING! Your cache is running out of 
filedescriptors


I searched the Internet, but I only found very old information, referring to 
files that don't exist on my Squid server.

The only thing I did was add the following value to the 
/etc/security/limits.conf file:

* - nofile 65535

however this did not solve it.

Does anyone have any idea how I could solve this problem?

 

Regards,

 

Márcio Bacci

___
squid-users mailing list
squid-users@lists.squid-cache.org  
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 4.16 is available

2021-08-17 Thread Eliezer Croitoru
Hey Amos,

I started testing the latest squid versions.
It will probably take more time than usual and I hope the RPMs will be
ready tomorrow.

Eliezer 

-Original Message-
From: squid-announce  On
Behalf Of Amos Jeffries
Sent: Thursday, July 22, 2021 7:24 AM
To: squid-annou...@lists.squid-cache.org
Subject: [squid-announce] Squid 4.16 is available

The Squid HTTP Proxy team is very pleased to announce the availability
of the Squid-4.16 release!


This release is a bug fix release resolving several issues found in
the prior Squid releases.


The major changes to be aware of since 4.15:

  * Regression Fix: --with-valgrind-debug build

Squid-4.15 changes caused a build failure linking with valgrind
memory tracking tool. This release fixes that to allow memory
leak tracing again.


  * Bug 4528: ICAP transactions quit on async DNS lookups

Squid has never reliably been able to resolve hostnames configured
for ICAP services. They might work most of the time when added to
/etc/hosts, but not always - and would rarely work if relying on
remote DNS servers.

This release adds full support for DNS remote resolution of
service names in icap_service directive. Regardless of where the
hostname is resolved from it can now be expected to resolve and
also properly obey DNS TTL expiry for IP address changes.


  * Bug 5128: Translation: Fix '% i' typo in es/ERR_FORWARDING_DENIED

Spanish translation of the ERR_FORWARDING_DENIED template have
for some time omitted the URL which was having issues being fetched.
The template published with this release and current squid-langpack
downloads will now display the URL identically to other error pages.


   All users of Squid are encouraged to upgrade as soon as possible.


See the ChangeLog for the full list of changes in this and earlier
releases.

Please refer to the release notes at
http://www.squid-cache.org/Versions/v4/RELEASENOTES.html
when you are ready to make the switch to Squid-4

This new release can be downloaded from our HTTP or FTP servers

   http://www.squid-cache.org/Versions/v4/
   ftp://ftp.squid-cache.org/pub/squid/
   ftp://ftp.squid-cache.org/pub/archive/4/

or the mirrors. For a list of mirror sites see

   http://www.squid-cache.org/Download/http-mirrors.html
   http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
   http://bugs.squid-cache.org/


Amos Jeffries
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-announce

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TPROXY Error

2021-07-13 Thread Eliezer Croitoru
Hey Ben,

Still waiting for the relevant output.
Once I have the relevant details I will probably be able to verify what the 
issue is.

Eliezer

-Original Message-
From: Eliezer Croitoru  
Sent: Thursday, July 8, 2021 12:04 AM
To: 'squid-users@lists.squid-cache.org' 
Cc: 'Ben Goz' 
Subject: RE: [squid-users] TPROXY Error

Hey Ben,

You are missing the critical output of the full command:
ip route show table 100

What you posted was:
> 5.  the output of 'ip route show table 100'
$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213
##

It's important to see the relevant routing table.
The Linux kernel has multiple routing tables, each of which can contain a 
different routing/forwarding table.
If you want to understand a bit more, try looking up FIB.
(take a peek at: http://linux-ip.net/html/routing-tables.html)
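For reference, the routing half of a TPROXY setup usually looks like the sketch below (table 100 and fwmark 0x1 match the iptables rules quoted in this thread; treat the commands as an assumption to compare against, not a verified config for this machine):

```
# Send packets marked by the TPROXY/DIVERT iptables rules to the local
# stack, so Squid's tproxy listener can accept connections addressed
# to foreign destination IPs.
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

The output of 'ip route show table 100' should then contain that single 'local' route rather than a copy of the main table.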

Eliezer

-Original Message-
From: Ben Goz  
Sent: Wednesday, July 7, 2021 3:36 PM
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] TPROXY Error

By the help of God.


Hi Eliezer,

Thanks for your help.

Please let me know if you need more information.


Regards,

Ben

On 07/07/2021 14:01, Eliezer Croitoru wrote:
> Hey Ben,
>
> I want to try and reset this issue because I am missing some technical
> details.
>
> 1. What Linux Distro and what version are you using?'
Ubuntu 20.04
> 2. the output of 'ip address'
$ ip address
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens1f0:  mtu 1500 qdisc mq 
master bond0 state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
3: ens1f1:  mtu 1500 qdisc mq 
master bond0 state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
4: usb0:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
 link/ether ca:13:59:65:c2:56 brd ff:ff:ff:ff:ff:ff
5: enx00e04c3600d3:  mtu 1500 qdisc 
fq_codel state UP group default qlen 1000
 link/ether 00:e0:4c:36:00:d3 brd ff:ff:ff:ff:ff:ff
 inet 8.11.39.250/30 brd 8.11.39.251 scope global enx00e04c3600d3
valid_lft forever preferred_lft forever
 inet6 fe80::2e0:4cff:fe36:d3/64 scope link
valid_lft forever preferred_lft forever
6: bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
7: bond0.212@bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet 8.13.140.1/28 brd 8.13.140.15 scope global bond0.212
valid_lft forever preferred_lft forever
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
8: bond0.213@bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet 1.21.213.1/24 brd 1.21.213.255 scope global bond0.213
valid_lft forever preferred_lft forever
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
> 3. the output of 'ip rule'
$ ip rule
0:from all lookup local
32762:from all fwmark 0x1 lookup 100
32763:from all fwmark 0x1 lookup 100
32764:from all fwmark 0x1 lookup 100
32765:from all fwmark 0x1 lookup 100
32766:from all lookup main
32767:from all lookup default

> 4.  the output of 'ip route show'

$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213

> 5.  the output of 'ip route show table 100'
$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213
> 6. the output of 'iptables-save'


$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Wed Jul  7 12:25:05 2021
*mangle
:PREROUTING ACCEPT [72898710:6084386298]

Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-08 Thread Eliezer Croitoru
Hey David,

 

I have just verified that the following guide works as expected:

https://www.serverlab.ca/tutorials/linux/administration-linux/how-to-set-the-proxy-for-apt-for-ubuntu-18-04/

 

on Ubuntu 20.04 you can create the file:

/etc/apt/apt.conf

 

And then add the following lines to it (replace the domain and port with an IP and 
port, or another domain and port, if required):

Acquire::http::proxy "http://prox.srv.world:3128/";
Acquire::https::proxy "http://prox.srv.world:3128/";

##

 

The above works as expected.

 

If for any reason whatsoever you experience issues, you can try adding a 
static record to the proxy's hosts file:

202.158.214.106 mirror.aarnet.edu.au

# verify the right host ipv4 address using host/dig/nslookup

 

And then restart the squid proxy.

 

Try again and see if it works as expected.
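A quick way to test the proxy path independently of apt is a manual request through it (the proxy host/port below are the placeholder from the example above; replace them with your own):

```
# Fetch only the headers of the Release file through the proxy.
curl -x http://prox.srv.world:3128 -I https://mirror.aarnet.edu.au/ubuntu/dists/focal/Release
```

If this returns a 200 response while apt still fails, the problem is on the apt side rather than in Squid.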

 

All The Bests,

Eliezer

 

 

From: squid-users  On Behalf Of 
David Mills
Sent: Wednesday, July 7, 2021 2:26 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid 
proxy

 

Hi,

 

We've got a collection of Ubuntu 18.04 boxes out in the field. They connect to 
an AWS OpenVPN VPN and use a Squid 3.5 AWS hosted Proxy. They work fine.

 

We have tried upgrading one to 20.04. Same setup. From the command line curl or 
wget can happily download an Ubuntu package from the Ubuntu Mirror site we use. 
But "apt update" gets lots of "IGN:" timeouts and errors.

 

The package we test curl with is 
https://mirror.aarnet.edu.au/ubuntu/pool/main/c/curl/curl_7.68.0-1ubuntu2.5_amd64.deb

 

The Squid log shows a line that doesn't occur for the successful 18.04 "apt 
updates":

1625190959.233 81 10.0.11.191 TAG_NONE/200 0 CONNECT 
mirror.aarnet.edu.au:443   - 
HIER_DIRECT/2001:388:30bc:cafe::beef -

 

The full output of an attempt to update is:

Ign:1 https://mirror.aarnet.edu.au/ubuntu focal InRelease   
   
Ign:2 https://mirror.aarnet.edu.au/ubuntu focal-updates InRelease   
   
Ign:3 https://mirror.aarnet.edu.au/ubuntu focal-backports InRelease 
   
Ign:4 https://mirror.aarnet.edu.au/ubuntu focal-security InRelease  
   
Err:5 https://mirror.aarnet.edu.au/ubuntu focal Release 
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:6 https://mirror.aarnet.edu.au/ubuntu focal-updates Release 
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:7 https://mirror.aarnet.edu.au/ubuntu focal-backports Release   
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:8 https://mirror.aarnet.edu.au/ubuntu focal-security Release
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.1.26 3128]
Reading package lists... Done   
   
N: Ignoring file 'microsoft-prod.list-keep' in directory 
'/etc/apt/sources.list.d/' as it has an invalid filename extension
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal Release' does not 
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-updates Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-backports Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-security Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.

 

While running, the line

0% [Connecting to HTTP proxy 
(http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128)]

appears often and hangs for a while.

 

I've tried upping the Squid logging and allowing all, but that didn't offer any 
additional information about the issue.

 

Any advice would be greatly appreciated.

 

Regards,

 


  

Re: [squid-users] TPROXY Error

2021-07-07 Thread Eliezer Croitoru
Hey Ben,

You are missing the critical output of the full command:
ip route show table 100

What you posted was:
> 5.  the output of 'ip route show table 100'
$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213
##

It's important to see the relevant routing table.
The Linux kernel has multiple routing tables, each of which can contain a 
different routing/forwarding table.
If you want to understand a bit more, try looking up FIB.
(take a peek at: http://linux-ip.net/html/routing-tables.html)

Eliezer

-Original Message-
From: Ben Goz  
Sent: Wednesday, July 7, 2021 3:36 PM
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] TPROXY Error

By the help of God.


Hi Eliezer,

Thanks for your help.

Please let me know if you need more information.


Regards,

Ben

On 07/07/2021 14:01, Eliezer Croitoru wrote:
> Hey Ben,
>
> I want to try and reset this issue because I am missing some technical
> details.
>
> 1. What Linux Distro and what version are you using?'
Ubuntu 20.04
> 2. the output of 'ip address'
$ ip address
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens1f0:  mtu 1500 qdisc mq 
master bond0 state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
3: ens1f1:  mtu 1500 qdisc mq 
master bond0 state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
4: usb0:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
 link/ether ca:13:59:65:c2:56 brd ff:ff:ff:ff:ff:ff
5: enx00e04c3600d3:  mtu 1500 qdisc 
fq_codel state UP group default qlen 1000
 link/ether 00:e0:4c:36:00:d3 brd ff:ff:ff:ff:ff:ff
 inet 8.11.39.250/30 brd 8.11.39.251 scope global enx00e04c3600d3
valid_lft forever preferred_lft forever
 inet6 fe80::2e0:4cff:fe36:d3/64 scope link
valid_lft forever preferred_lft forever
6: bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
7: bond0.212@bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet 8.13.140.1/28 brd 8.13.140.15 scope global bond0.212
valid_lft forever preferred_lft forever
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
8: bond0.213@bond0:  mtu 1500 qdisc 
noqueue state UP group default qlen 1000
 link/ether ba:59:58:58:23:2b brd ff:ff:ff:ff:ff:ff
 inet 1.21.213.1/24 brd 1.21.213.255 scope global bond0.213
valid_lft forever preferred_lft forever
 inet6 fe80::b859:58ff:fe58:232b/64 scope link
valid_lft forever preferred_lft forever
> 3. the output of 'ip rule'
$ ip rule
0:from all lookup local
32762:from all fwmark 0x1 lookup 100
32763:from all fwmark 0x1 lookup 100
32764:from all fwmark 0x1 lookup 100
32765:from all fwmark 0x1 lookup 100
32766:from all lookup main
32767:from all lookup default

> 4.  the output of 'ip route show'

$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213

> 5.  the output of 'ip route show table 100'
$ ip route show
default via 8.13.140.14 dev bond0.212 proto static
1.21.213.0/24 dev bond0.213 proto kernel scope link src 1.21.213.1
8.11.39.248/30 dev enx00e04c3600d3 proto kernel scope link src 8.11.39.250
8.13.140.0/28 dev bond0.212 proto kernel scope link src 8.13.140.1
8.13.144.0/20 via 1.21.213.254 dev bond0.213
8.13.148.1 via 1.21.213.254 dev bond0.213
> 6. the output of 'iptables-save'


$ sudo iptables-save
# Generated by iptables-save v1.8.4 on Wed Jul  7 12:25:05 2021
*mangle
:PREROUTING ACCEPT [72898710:6084386298]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DIVERT - [0:0]
-A PREROUTING -p tcp -m socket -j DIVERT
-A PREROUTING -i bond0.213 -p tcp -m tcp --dport 80 -j TPROXY --on-port 
15644 --on-ip 0.0.0.0 --tproxy-mark 0x1/0x1
-A PREROUTING -i bond0.213 -p tcp -m tcp --dport 443 -j TPROXY --on-port 
15645 -

Re: [squid-users] TPROXY Error

2021-07-07 Thread Eliezer Croitoru
Hey Ben,

I want to try and reset this issue because I am missing some technical
details.

1. What Linux Distro and what version are you using?
2. the output of 'ip address'
3. the output of 'ip rule'
4.  the output of 'ip route show'
5.  the output of 'ip route show table 100'
6. the output of 'iptables-save'
7. the output of 'nft -nn list ruleset' (if exists on the OS)
8. the output of your squid.conf
9. the output of 'squid -v'
10. the output of 'uname -a'

Once we have all the above details (redacting/modifying any private
details), we can try to help you.

Eliezer

-Original Message-
From: squid-users  On Behalf Of
Ben Goz
Sent: Wednesday, June 30, 2021 3:16 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] TPROXY Error

 By the help of God.

Hi All,
I'm trying to configure squid as a transparent proxy using TPROXY.
The machine I'm using has 2 NICs, one for input and the other one for
output traffic.
The TPROXY iptables rules are configured on the input NIC.
It looks like iptables TPROXY redirect works but squid prints out the
following error:

ERROR: NAT/TPROXY lookup failed to locate original IPs on
local=xxx:443 remote=xxx:49471 FD 14 flags=17

I think I loaded all TPROXY required kernel modules.

The ip forwarding works fine without the iptables rules. and I don't
see any squid ERROR on getsockopt

Please let me know what I'm missing?

Thanks,
Ben
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid proxy

2021-07-07 Thread Eliezer Croitoru
Hey David,

Just wondering if you have seen the apt related docs at:
https://help.ubuntu.com/community/AptGet/Howto/#Setting_up_apt-get_to_use_a_http-proxy

Eliezer
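For reference (my summary of that page, not something from David's post), the apt-side proxy setting boils down to a file under /etc/apt/apt.conf.d/ along these lines, using the proxy endpoint from David's log:

```
// e.g. /etc/apt/apt.conf.d/95proxies (the filename is arbitrary)
Acquire::http::Proxy  "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128";
Acquire::https::Proxy "http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128";
```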

From: squid-users  On Behalf Of 
David Mills
Sent: Wednesday, July 7, 2021 2:26 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Ubuntu 20.04 "apt update" issues behind a VPN and Squid 
proxy

Hi,

We've got a collection of Ubuntu 18.04 boxes out in the field. They connect to 
an AWS OpenVPN VPN and use a Squid 3.5 AWS hosted Proxy. They work fine.

We have tried upgrading one to 20.04. Same setup. From the command line curl or 
wget can happily download an Ubuntu package from the Ubuntu Mirror site we use. 
But "apt update" gets lots of "IGN:" timeouts and errors.

The package we test curl with is 
https://mirror.aarnet.edu.au/ubuntu/pool/main/c/curl/curl_7.68.0-1ubuntu2.5_amd64.deb

The Squid log shows a line that doesn't occur for the successful 18.04 "apt 
updates":
1625190959.233 81 10.0.11.191 TAG_NONE/200 0 CONNECT 
http://mirror.aarnet.edu.au:443 - HIER_DIRECT/2001:388:30bc:cafe::beef -

The full output of an attempt to update is:
Ign:1 https://mirror.aarnet.edu.au/ubuntu focal InRelease   
   
Ign:2 https://mirror.aarnet.edu.au/ubuntu focal-updates InRelease   
   
Ign:3 https://mirror.aarnet.edu.au/ubuntu focal-backports InRelease 
   
Ign:4 https://mirror.aarnet.edu.au/ubuntu focal-security InRelease  
   
Err:5 https://mirror.aarnet.edu.au/ubuntu focal Release 
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:6 https://mirror.aarnet.edu.au/ubuntu focal-updates Release 
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:7 https://mirror.aarnet.edu.au/ubuntu focal-backports Release   
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.11.82 3128]
Err:8 https://mirror.aarnet.edu.au/ubuntu focal-security Release
   
  Could not wait for server fd - select (11: Resource temporarily unavailable) 
[IP: 10.0.1.26 3128]
Reading package lists... Done   
   
N: Ignoring file 'microsoft-prod.list-keep' in directory 
'/etc/apt/sources.list.d/' as it has an invalid filename extension
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal Release' does not 
have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-updates Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-backports Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.
E: The repository 'https://mirror.aarnet.edu.au/ubuntu focal-security Release' 
does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.

While running, the line
0% [Connecting to HTTP proxy 
(http://vpn-proxy-d68aca8a8f7f81d6.elb.ap-southeast-2.amazonaws.com:3128)]
appears often and hangs for a while.

I've tried upping the squid logging and allowing all, but that didn't offer any 
additional information about the issue.

Any advice would be greatly appreciated.

Regards,


David Mills
Senior DevOps Engineer

 E: mailto:david.mi...@acusensus.com
 M: +61 411 513 404
 W:http://acusensus.com/



DISCLAIMER: Acusensus puts the privacy and security of its clients, its data 
and information at the core of everything we do. The information contained in 
this email (including attachments) is intended only for the use of the 
person(s) to whom it is addressed to, as it may be confidential and contain 
legally privileged information. If you have received this email in error, 
please delete all copies and notify the sender immediately. Any views or 
opinions presented are solely those of the author and do not necessarily 
represent the views of Acusensus Pty Ltd. Please consider the environment 
before printing this email.

___
squid-users mailing list
squid-users@lists.squid-cache.org

Re: [squid-users] allow request to cloudfront after 302 redirection.

2021-05-27 Thread Eliezer Croitoru
Thanks Alex,

I assume you do remember that 301 is not the same as 302.
Depending on the status code (i.e. 301/302, 307/308, etc.) the TTL of the cached 
Location would be decided.
The basic idea is to use either Redis or memcached or another key-value DB to 
persist the Location response.
(This key-value DB would only be a "cache" with a TTL.)

For a 301 the basic assumption is that the Location header contains a 
static/permanent redirection.
For a 302 it's not static, so I think it's important to have the status code 
sent to the helper.

About the http_reply_access, I actually forgot it exists and indeed it makes 
more sense...

I am working on an example.
Eliezer

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Monday, May 24, 2021 9:26 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] allow request to cloudfront after 302 redirection.

On 5/24/21 5:52 AM, Eliezer Croitoru wrote:

> Following up this thread I was wondering about an example how to do
> that with an external_acl helper. With ICAP I can do that easily to
> some degree. With an external_acl helper I am not sure what values to
> send.

AFAICT, the external ACL helper(s) should be sent the Location URI
values the helper(s) should cache (at response time) and the request URI
values the helper(s) should examine (at request time). More information
may be sent/cached, of course, depending on the exact decision logic.


> I would guess that the response code and response Location header
> might be the ones which should be passed to the helper?

I do not see the value in sending the response status code (because
Squid can check that itself), but the correct answer depends on the
exact decision logic.


> What do you think about the next acls, should do the trick?

> acl redirect http_status 301-308
> acl gitlab_package dstdomain package.gitlab.com

> external_acl_type openlocation children=15 %DST %SRC %<{Location} 
> /usr/local/bin/location-openner.rb
> acl location_openner external openlocation
> http_access deny gitlab_package redirect location_openner
> http_access allow location_openner

The above sketch does not make sense to me because it uses response
information (e.g., %<{Location}).

> -Original Message-
> From: squid-users  On Behalf Of 
> Alex Rousskov
> Sent: Wednesday, April 21, 2021 8:49 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] allow request to cloudfront after 302 redirection.
> 
> On 4/21/21 12:48 PM, Miroslaw Malinowski wrote:
>> Is it possible to create a whitelist that allows cloudfront 302
>> redirections, e.g. gitlab is using cloudfront as CDN and when we
>> whitelist package.gitlab.com the URL is redirected (302) to
>> https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c15d
> 
> 
> Yes, it is possible to allow future requests to Location-listed URLs,
> but since we are talking about two (or more) independent HTTP
> transactions, on two (or more) TCP connections, you will need to store
> the allowed Location values (at least) somewhere, maintain that storage
> (e.g., remove stale entries), and (optionally) determine whether the
> request for an allowed cloudfront URL came from the same user agent as
> the gitlab request that was redirected to that URL.
> 
> Storing, maintenance, and checking of allowed Locations/etc. can be done
> using external ACLs and/or eCAP/ICAP adaptation services. It cannot be
> reliably done using built-in ACLs alone AFAICT.
> 
> 
> HTH,
> 
> Alex.
> 
> 
>> I could whitelist a whole .cloudfront.net <http://cloudfront.net> domain
>> or url_regex, but what I would like to achieve, I don't know if
>> possible, is a chain of events like:
>> If packages.gitlab.com <http://packages.gitlab.com> return 302 Location
>> .cloudfront, then allow
>> https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c
>> <https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c>
>> request.
>> I've been playing around with http_reply_access and rep_headers, but I
>> can only go as far as allow replay of the first request to
>> package.gitlab.com <http://package.gitlab.com>, but then a GET to
>> cloudfront is blocked anyway as it's not on our whitelist.
>> e.g.
>> 1619022938.916   423 172.16.230.237 NONE/200 0 CONNECT 54.153.54.194:443
>> <http://54.153.54.194:443> - ORIGINAL_DST/54.153.54.194
>> <http://54.153.54.194> -
>> 1619022939.074   153 172.16.230.237 TCP_MISS/302 758 GET
>> https://packages.gitlab.com/gitlab/gitlab-ee/packages/ubuntu/bionic/gitlab

Re: [squid-users] allow request to cloudfront after 302 redirection.

2021-05-24 Thread Eliezer Croitoru
Hey Alex,

Following up this thread I was wondering about an example how to do that with 
an external_acl helper.
With ICAP I can do that easily to some degree.
With an external_acl helper I am not sure what values to send.
I would guess that the response code and response Location header might be the 
ones which should be passed to the helper?

What do you think about the following ACLs; should they do the trick? (code to follow)

acl redirect http_status 301-308
acl gitlab_package dstdomain package.gitlab.com

external_acl_type openlocation children=15 %DST %SRC %<{Location} 
/usr/local/bin/location-openner.rb
acl location_openner external openlocation
http_access deny gitlab_package redirect location_openner
http_access allow location_openner


Thanks,
----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Wednesday, April 21, 2021 8:49 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] allow request to cloudfront after 302 redirection.

On 4/21/21 12:48 PM, Miroslaw Malinowski wrote:
> Is it possible to create a whitelist that allows cloudfront 302
> redirections, e.g. gitlab is using cloudfront as CDN and when we
> whitelist package.gitlab.com the URL is redirected (302) to
> https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c15d


Yes, it is possible to allow future requests to Location-listed URLs,
but since we are talking about two (or more) independent HTTP
transactions, on two (or more) TCP connections, you will need to store
the allowed Location values (at least) somewhere, maintain that storage
(e.g., remove stale entries), and (optionally) determine whether the
request for an allowed cloudfront URL came from the same user agent as
the gitlab request that was redirected to that URL.

Storing, maintenance, and checking of allowed Locations/etc. can be done
using external ACLs and/or eCAP/ICAP adaptation services. It cannot be
reliably done using built-in ACLs alone AFAICT.
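To make the external-ACL half of this concrete, here is a minimal sketch of my own (not from the thread): a helper that answers OK when the requested URL was previously recorded as an allowed Location. The response-time half (recording Location values, e.g. from an ICAP service) is not shown, and the store file's path and one-URL-per-line format are my inventions.

```shell
#!/bin/sh
# Request-time helper sketch: reads URLs on stdin, answers OK/ERR per line,
# following the external ACL helper protocol.
STORE=/tmp/allowed-locations.txt

check_urls() {
    while read -r url; do
        # Exact whole-line match against the recorded Location values
        if [ -f "$STORE" ] && grep -qxF "$url" "$STORE"; then
            echo OK
        else
            echo ERR
        fi
    done
}

# Demo: record one Location, then query it plus an unknown URL
echo 'https://d20rj4el6vkp4c.cloudfront.net/pkg.deb' > "$STORE"
printf '%s\n%s\n' \
    'https://d20rj4el6vkp4c.cloudfront.net/pkg.deb' \
    'https://other.example/' | check_urls > /tmp/helper-demo.out
cat /tmp/helper-demo.out
```

Squid would launch something like this via external_acl_type with %URI as the format (the helper script name and store path are made up), and a real deployment would also need entry expiry, which a file store does not give you for free.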


HTH,

Alex.


> I could whitelist a whole .cloudfront.net <http://cloudfront.net> domain
> or url_regex, but what I would like to achieve, I don't know if
> possible, is a chain of events like:
> If packages.gitlab.com <http://packages.gitlab.com> return 302 Location
> .cloudfront, then allow
> https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c
> <https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb?t=1619023239_a63698472b6bebeaee980e7c030632d97a29c>
> request.
> I've been playing around with http_reply_access and rep_headers, but I
> can only go as far as allow replay of the first request to
> package.gitlab.com <http://package.gitlab.com>, but then a GET to
> cloudfront is blocked anyway as it's not on our whitelist.
> e.g.
> 1619022938.916   423 172.16.230.237 NONE/200 0 CONNECT 54.153.54.194:443
> <http://54.153.54.194:443> - ORIGINAL_DST/54.153.54.194
> <http://54.153.54.194> -
> 1619022939.074   153 172.16.230.237 TCP_MISS/302 758 GET
> https://packages.gitlab.com/gitlab/gitlab-ee/packages/ubuntu/bionic/gitlab-ee_11.0.1-ee.0_amd64.deb/download.deb
> <https://packages.gitlab.com/gitlab/gitlab-ee/packages/ubuntu/bionic/gitlab-ee_11.0.1-ee.0_amd64.deb/download.deb>
> - ORIGINAL_DST/54.153.54.194 <http://54.153.54.194> text/html
> 1619022939.10820 172.16.230.237 NONE/200 0 CONNECT 52.84.90.34:443
> <http://52.84.90.34:443> - ORIGINAL_DST/52.84.90.34 <http://52.84.90.34> -
> 1619022939.114 2 172.16.230.237 TCP_DENIED/403 19053 GET
> https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb 
> <https://d20rj4el6vkp4c.cloudfront.net/7/11/ubuntu/package_files/35938.deb>?
> - HIER_NONE/- text/html

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Is there a way to bind squid's outbound traffice to a specific network interface

2021-04-19 Thread Eliezer Croitoru
It might be possible to use tcp_outgoing_address for this purpose, but it's not 
clear what your setup technically looks like or what is preventing the browser 
from doing what you want.


Eliezer
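For reference, a minimal sketch of what tcp_outgoing_address usage could look like (both the ACL and the address below are made up for illustration):

```
# Route traffic from browser clients out via the address bound to the
# non-VPN interface (hypothetical values)
acl lan src 192.168.1.0/24
tcp_outgoing_address 203.0.113.10 lan
```

Note that tcp_outgoing_address selects a source address, so it only controls the outbound interface indirectly, via the host's routing table.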

 

From: squid-users  On Behalf Of Cary 
Lewis
Sent: Monday, April 12, 2021 12:58 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Is there a way to bind squid's outbound traffice to a 
specific network interface

 

I want to be able to bypass a vpn while using a web browser, so I need to be 
able to configure squid to always use a specific outbound interface. 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] All Adaptation ICAPs go down at the same time

2021-04-19 Thread Eliezer Croitoru
Hey Roie,

 

From the output I assume it's a DNS resolution issue.

In the past I remember that Docker was updating the hosts file with the 
relevant names, but it's not working the same way now.

Currently Docker is using a local network DNS service which is accessed 
via 127.0.0.53.

From what I remember, Squid resolves the ICAP service name only at startup or 
reload.

Lately Alex published a testable patch that might fix specific issues with ICAP 
services which are resolved by DNS (sorry, I don't remember the bug report).

I assume you can try to test this patch first.

If these services are static to some degree, you might be able to create a 
script that updates the hosts file and reloads Squid on each change.

When using the hosts file it’s possible that some issues will disappear.
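A sketch of the detect-and-reload part of that idea (the service names are the ones from Roie's log; the state-file path and the reload command are my assumptions, and the actual /etc/hosts pinning is left out):

```shell
#!/bin/sh
# Record the current addresses of the ICAP service names and reload Squid
# only when one of them changes.
STATE=/tmp/icap-addrs.txt
NEW=$(mktemp)
for name in icap1.proxy icap2.proxy icap3.proxy; do
    # getent uses the same resolver order the host does (incl. Docker's DNS)
    getent hosts "$name" | awk -v n="$name" '{print n, $1; exit}'
done > "$NEW"
if ! cmp -s "$STATE" "$NEW" 2>/dev/null; then
    mv "$NEW" "$STATE"
    # Assumed reload method; adjust for your init system / container setup
    command -v squid >/dev/null && squid -k reconfigure
    echo "addresses changed; state updated"
else
    rm -f "$NEW"
fi
```

Run it from cron or a loop; on the first run (or any change) it reloads Squid, otherwise it does nothing.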


There is also another possibility: a malformed ICAP response or wrong session 
handling could cause this issue.

You might be able to use tcpdump from either the host or the container side to 
capture traffic when these go down.

Depending on your preferred debug level, you can enable specific debug_options 
(e.g. for ICAP services and/or requests), to the degree that you can see what 
happens at the basic level of the ICAP encapsulation.

If you really need help with a diagnosis and a solution, you might be able to 
engage Alex and The Measurement Factory.



All The Bests,

Eliezer

 

From: squid-users  On Behalf Of roie 
rachamim
Sent: Monday, April 12, 2021 12:54 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] All Adaptation ICAPs go down at the same time

 

Hi,

 

Our setup includes squid that runs in docker container with several ICAP 
servers in additional containers.

From time to time we see in cache.log the following messages:
2021/04/12 00:22:39| optional ICAP service is down after an options fetch 
failure: icap://icap1.proxy:14590/censor [down,!opt]
2021/04/12 00:22:39| optional ICAP service is down after an options fetch 
failure: icap://icap2.proxy:1344/request [down,!opt]
2021/04/12 00:22:39| optional ICAP service is down after an options fetch 
failure: icap://icap3.proxy:14590/response [down,!opt]

2021/04/12 06:10:45| optional ICAP service is down after an options fetch 
failure: icap://icap1.proxy:14590/censor [down,!opt]
2021/04/12 06:10:45| optional ICAP service is down after an options fetch 
failure: icap://icap2.proxy:1344/request [down,!opt]
2021/04/12 06:10:45| optional ICAP service is down after an options fetch 
failure: icap://icap3.proxy:14590/response [down,!opt]

 

We're trying to understand why it happens to all ICAPs at once. This happens in 
4.14 and in 5.0.4

Any thoughts about what might cause this ?

Many Thanks,

Roie

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Cache Peers and traffic handling

2021-04-15 Thread Eliezer Croitoru
I don’t know your use case that well but maybe another proxy can do that for 
you.
I wrote a haproxy routing config by username sometime ago:
https://gist.github.com/elico/405f0608e60910fc9ea119e22e1ffd07

It's very simple and worth a shot.
Let me know if it might be good for you.

All The Bests,
Eliezer


From: squid-users  On Behalf Of 
koshik moshik
Sent: Sunday, April 11, 2021 12:04 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Cache Peers and traffic handling

Hello, 

I am trying to run a Squid proxy server with about 5000 cache peers. I am 
running a dedicated server with 6 cores and 32GB RAM on Ubuntu 16. 

Could you tell me what else is needed / not needed in my squid.conf? I am 
encountering high CPU usage and would like to create a very efficient proxy 
server. 

Down below you can find my squid.conf (I deleted the other cache_peer lines):
---
http_port 3128
dns_v4_first on
acl SSL_ports port 1-65535
acl Safe_ports port 1-65535
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/.htpasswd
auth_param basic children 5
auth_param basic realm Squid Basic Authentication
auth_param basic credentialsttl 5 hours
acl password proxy_auth REQUIRED
http_access allow password
#http_access deny all
cache allow all
never_direct allow all
ident_access deny all




cache_mem 1 GB
maximum_object_size_in_memory 16 MB




# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

#Rules to anonymize http headers
forwarded_for off
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all




#
# Add any of your own refresh_pattern entries above these.
#
#refresh_pattern ^ftp:          1440    20%     10080
#refresh_pattern ^gopher:       1440    0%      1440
#refresh_pattern -i (/cgi-bin/|\?)      0       0%      0
#refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
#refresh_pattern .              0       20%     4320


acl me proxy_auth ye-1
cache_peer http://my.proxy.com/ parent 3128 0 login=user1:password1 no-query 
name=a1
cache_peer_access a1 allow me
cache_peer_access a1 deny all
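As a side note (my observation, not something raised in the thread): cache_peer expects a hostname rather than a URL, and the HTTP and ICP ports are two separate fields. A corrected sketch of one peer entry, reusing the values from the posted config:

```
# cache_peer <hostname> <type> <http-port> <icp-port> [options]
cache_peer my.proxy.com parent 3128 0 no-query login=user1:password1 name=a1
cache_peer_access a1 allow me
cache_peer_access a1 deny all
```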

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can't get squid with whitelist text file to work TCP_DENIED/403

2021-04-14 Thread Eliezer Croitoru
Did you get it working eventually?

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

Zoom: Coming soon

 

 

From: squid-users  On Behalf Of
Elliott Blake, Lisa Marie
Sent: Thursday, April 8, 2021 10:11 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Can't get squid with whitelist text file to work
TCP_DENIED/403

 

I am trying to get squid to work with a text file for a whitelist.  I get
TCP_DENIED/403 on every url I try.  I am using curl to test.

acl whitelist dstdomain "/etc/squid/whitelist.txt"

curl -x https://libaux-prod.lib.uic.edu:3128 -I https://arl.org 

HTTP/1.1 403 Forbidden

Server: squid/3.5.20

Mime-Version: 1.0

Date: Wed, 07 Apr 2021 17:38:58 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 3521

X-Squid-Error: ERR_ACCESS_DENIED 0

Vary: Accept-Language

Content-Language: en

X-Cache: MISS from libaux-prod.lib.uic.edu

X-Cache-Lookup: NONE from libaux-prod.lib.uic.edu:3128

Via: 1.1 libaux-prod.lib.uic.edu (squid/3.5.20)

Connection: keep-alive

curl: (56) Received HTTP code 403 from proxy after CONNECT

 

However, if I change my squid.conf to just the url it works.

acl whitelist dstdomain .arl.org

curl -x https://libaux-prod.lib.uic.edu:3128 -I https://arl.org 

HTTP/1.1 200 Connection established

HTTP/1.1 301 Moved Permanently

Server: nginx

Date: Wed, 07 Apr 2021 17:40:31 GMT

Content-Type: text/html

Content-Length: 178

Connection: keep-alive

Keep-Alive: timeout=20

Location: https://www.arl.org/

Expires: Wed, 07 Apr 2021 18:40:31 GMT

Cache-Control: max-age=3600

 

I am running a centos 7 os with squid version 3.5.20, which is the most
recent yum version.

This is driving me crazy.  I have tried debugging in squid and cannot find
the answer.  I have tried changing the squid.conf file.  I always restart
squid after I change the squid.conf file.  

Any help would be appreciated.

 

My Squid.conf file:

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

acl localnet src 172.16.0.0/12  # RFC1918 possible internal network

acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

acl localnet src fc00::/7   # RFC 4193 local private network range

acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

 

acl SSL_ports port 443

acl Safe_ports port 80  # http

acl Safe_ports port 443 # https

acl Safe_ports port 591 # filemaker

acl CONNECT method CONNECT

 

http_access deny !Safe_ports

 

http_access deny CONNECT !SSL_ports

 

http_access allow localhost manager

http_access deny manager

 

acl whitelist dstdomain "/etc/squid/whitelist.txt"

#acl whitelist dstdomain .arl.org

http_access allow whitelist

#http_access allow CONNECT whitelist

 

http_access deny !whitelist

 

http_access allow localnet

http_access allow localhost

 

http_access deny all

 

# Squid normally listens to port 3128

http_port 3128

 

# port 1338 is for Front Desk Machines

http_port 1338

 

coredump_dir /var/spool/squid

 

refresh_pattern ^ftp:           1440    20%     10080

refresh_pattern ^gopher:        1440    0%      1440

refresh_pattern -i (/cgi-bin/|\?) 0 0%  0

refresh_pattern .   0   20% 4320

 

Beginning of whitelist.txt

#A Page

.aacrjournals.org

.aai.org

.aaiddjournals.org

.aap.org

.aappublications.orga

.accessanesthesiology.com

.anthropology.org.uk

.archivegrid.org

.arl.org

.arlstatistics.org

.artstor.org
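One thing worth checking when a dstdomain file denies everything (my own suggestion, not something established in the thread) is invisible characters: a whitelist edited on Windows carries CRLF line endings, so Squid compares against ".arl.org\r" and nothing ever matches. A demonstration on a throwaway copy:

```shell
#!/bin/sh
# Create a demo file with one CRLF-terminated line, detect the stray CRs,
# then strip them the way you would for the real /etc/squid/whitelist.txt.
f=/tmp/whitelist-demo.txt
printf '.arl.org\r\n.aai.org\n' > "$f"
cr=$(printf '\r')
echo "lines with CR before: $(grep -c "$cr" "$f")"
sed "s/$cr\$//" "$f" > "$f.clean" && mv "$f.clean" "$f"
echo "lines with CR after:  $(grep -c "$cr" "$f")"
```

If the real file shows CRs (or a UTF-8 BOM on the first line), clean it the same way and restart Squid.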

 

Thank you,

Lisa Blake

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] compile squid with tumbleweed

2021-04-01 Thread Eliezer Croitoru
Hey,

First try to use the next example:
https://github.com/elico/yt-classification-service-example/blob/master/redwood/init-local-rootca.sh

To create a rootCA key and certificate, which doesn't require you to use a 
password.
I have also seen the article you used, and it shows two ways to create 
the root CA:
one with the CA.pl script and the other with the openssl tool.
As long as you don't need CA.pl specifically, I would recommend using 
openssl.
It's plain simple to just create a root CA certificate.
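For illustration, a minimal password-less root CA with plain openssl (my example; adjust the subject, key size, and lifetime to your own policy). The -nodes flag is what avoids the "Enter PEM pass phrase" prompts shown in Majed's log:

```shell
#!/bin/sh
# Generate a self-signed root CA without a passphrase on the key
openssl req -x509 -newkey rsa:2048 -sha256 -days 1825 -nodes \
    -keyout /tmp/squidCA.key -out /tmp/squidCA.crt \
    -subj "/O=Example/CN=Squid Bump Root CA"
# Squid's ssl-bump port usually takes key+certificate in one PEM file
cat /tmp/squidCA.key /tmp/squidCA.crt > /tmp/squidCA.pem
# DER copy for importing into client trust stores
openssl x509 -in /tmp/squidCA.crt -outform DER -out /tmp/squidCA.der
```

Move the resulting files into /etc/squid/certs (or wherever your squid.conf points) and tighten their permissions.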

All The Bests,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of 
Majed Zouhairy
Sent: Thursday, April 1, 2021 1:42 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] compile squid with tumbleweed

Peace,
As part of self-development, we decided that turning on ssl-bump + splice 
is a good idea. So how do we install Squid with SSL support on Tumbleweed?

answer: it is already compiled with ssl support

But now I followed:

https://medium.com/@steensply/installing-and-configuring-squid-proxy-for-ssl-bumping-or-peek-n-splice-34afd3f69522

to enable ssl bumping.

specifically those commands:

/usr/share/ssl/misc/CA.pl -newca
/usr/share/ssl/misc/CA.pl -newreq
/usr/share/ssl/misc/CA.pl -sign
openssl x509 -in newcert.pem -outform DER -out squidTrusted.der
copied the 3 files to /etc/squid/certs
sudo chown squid:squid -R /etc/squid/certs
sudo /usr/libexec/squid/security_file_certgen -c -s 
/var/lib/squid/ssl_db -M 4MB
sudo chown squid:squid -R /var/lib/squid
sudo chmod 700 /etc/squid/certs/... (newcrt.pem newkey.pem squidTrusted.der)

sudo squid -z

asks for certificate password
then


2021/04/01 13:16:57| WARNING: BCP 177 violation. Detected non-functional 
IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03| Created PID file (/run/squid.pid)
zouhairy@proxy:~> 2021/04/01 13:17:03 kid1| WARNING: BCP 177 violation. 
Detected non-functional IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03 kid1| FATAL: No valid signing certificate configured 
for HTTP_port 0.0.0.0:8080
2021/04/01 13:17:03 kid1| Squid Cache (Version 4.14): Terminated abnormally.
CPU Usage: 0.047 seconds = 0.031 user + 0.016 sys
Maximum Resident Size: 62352 KB
Page faults with physical i/o: 0
2021/04/01 13:17:03 kid1| WARNING: BCP 177 violation. Detected 
non-functional IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03 kid1| FATAL: No valid signing certificate configured 
for HTTP_port 0.0.0.0:8080
2021/04/01 13:17:03 kid1| Squid Cache (Version 4.14): Terminated abnormally.
CPU Usage: 0.040 seconds = 0.032 user + 0.008 sys
Maximum Resident Size: 62272 KB
Page faults with physical i/o: 0
2021/04/01 13:17:03 kid1| WARNING: BCP 177 violation. Detected 
non-functional IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03 kid1| FATAL: No valid signing certificate configured 
for HTTP_port 0.0.0.0:8080
2021/04/01 13:17:03 kid1| Squid Cache (Version 4.14): Terminated abnormally.
CPU Usage: 0.042 seconds = 0.008 user + 0.034 sys
Maximum Resident Size: 63360 KB
Page faults with physical i/o: 0
2021/04/01 13:17:03 kid1| WARNING: BCP 177 violation. Detected 
non-functional IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03 kid1| FATAL: No valid signing certificate configured 
for HTTP_port 0.0.0.0:8080
2021/04/01 13:17:03 kid1| Squid Cache (Version 4.14): Terminated abnormally.
CPU Usage: 0.047 seconds = 0.032 user + 0.016 sys
Maximum Resident Size: 62992 KB
Page faults with physical i/o: 0
2021/04/01 13:17:03 kid1| WARNING: BCP 177 violation. Detected 
non-functional IPv6 loopback.
Enter PEM pass phrase:
2021/04/01 13:17:03 kid1| FATAL: No valid signing certificate configured 
for HTTP_port 0.0.0.0:8080
2021/04/01 13:17:03 kid1| Squid Cache (Version 4.14): Terminated abnormally.
CPU Usage: 0.045 seconds = 0.030 user + 0.015 sys
Maximum Resident Size: 62640 KB
Page faults with physical i/o: 0
2021/04/01 13:17:03| Removing PID file (/run/squid.pid)


squid conf:

acl localnet (network/24)

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 8080# http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl blockfiles urlpath_regex -i "/etc/squid/blocks.files.acl"

http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web appl
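For completeness (a sketch under my own assumptions, not taken from the thread): the "No valid signing certificate" FATAL refers to the http_port line, which in a bump setup needs tls-cert pointing at a password-free combined key+certificate, roughly:

```
# Squid 4.x syntax; the PEM path is an assumption matching the
# /etc/squid/certs layout described above
http_port 0.0.0.0:8080 ssl-bump \
    tls-cert=/etc/squid/certs/squidCA.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/libexec/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
ssl_bump peek step1
ssl_bump bump all
```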

[squid-users] Looking for subscription plan plain text Blacklists for a spin

2021-03-27 Thread Eliezer Croitoru
Hey List,

 

I wanted to test my SquidBlocker DB newest version with a paid categories
list.

I am looking for a paid list of categorized sites which I can load into the
DB and test performance.

 

For now I have found the next list:

https://github.com/blocklistproject/Lists

 

But since there are many vendors for blacklists I would be happy to get any
recommendations.

 

Thanks,

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

Zoom: Coming soon

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid won't return cached even with refresh_pattern extra options override-lastmod override-expire ignore-reload ignore-no-store ignore-private store-stale

2021-03-27 Thread Eliezer Croitoru
Hey Mirek,

This is not the first time this issue has come up.
There are risks in implementing any solution for this *issue*.

I have implemented YouTube caching in the past using a couple of twisted 
techniques while leaving Squid untouched.
The desire to cache can sometimes overcome a couple of very big risks to the 
integrity of the data/content.
It is possible to use an ICAP service with a 206 response instead of 204 or 200; 
however, I believe that you wouldn't need to cache any POST requests, so a 
simple ICAP service would be sufficient.
I believe it is preferable to leave the Squid sources untouched for such a 
purpose.
An example for such a twist is at:
* https://github.com/elico/squid-helpers/tree/master/squid_helpers/youtubetwist
* 
https://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator?highlight=%28cache_peer%29#Implementing_ICAP_solution
* https://ieeexplore.ieee.org/abstract/document/9072556

I wrote a public example of an ICAP server that was used to prove 
vulnerabilities in HTTP and which is now used in the proof of HTTPS vulnerabilities.
Take a peek at:
* https://github.com/elico/bgu-icap-example

It's written in GoLang and works under pretty heavy loads.

Let me know if you need more help,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Friday, March 26, 2021 10:36 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid won't return cached even with refresh_pattern 
extra options override-lastmod override-expire ignore-reload ignore-no-store 
ignore-private store-stale

On 3/24/21 3:34 PM, Miroslaw Malinowski wrote:
> I thought about upper service but as is not required at the moment,
> introducing extra hop just to remove the header looks a bit like a
> hammer approach. I'll look into how easily I can amend the code as the
> other option is to introduce a proxy like a feature to the application,
> so either way, it is a code change. The only problem here is that it's
> an OPNSense squid service so I have to compile from source on BSD and
> then keep adding in manually each time they do the update.

At the risk of stating the obvious: If your feature is officially
accepted into Squid sources, then you would not have to keep adding it
manually (once the changes reach your Squid packaging source).

Alex.


> On Wed, Mar 24, 2021 at 7:11 PM Alex Rousskov wrote:
> 
> On 3/24/21 2:49 PM, Miroslaw Malinowski wrote:
> 
> > looking at the code and reading carefully your response, you're saying
> > there is no way you can do it with squid.
> 
> With Squid, your options include:
> 
> 1. Squid source code changes. Should not be too difficult and, IMO, a
> high-quality implementation would deserve official acceptance because it
> is a generally useful feature in line with existing control knobs.
> 
> https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
> 
> 2. An adaptation service that removes Cache-Control:no-cache from the
> response before Squid processes it:
> https://wiki.squid-cache.org/SquidFaq/ContentAdaptation
> 
> 
> HTH,
> 
> Alex.
> 
> > On Wed, Mar 24, 2021 at 6:28 PM Miroslaw Malinowski wrote:
> >
> > Hi,
> >
> > You've right yes it's revalidating as API server I'm
> requesting data
> > is setting Cache-Control: no-cache. My question is how I can force
> > squid to cache and not validate as I know it's safe to do so. As
> > I've explained earlier we are making the same request and
> receiving
> > the same response from 100+ server so as to reduce number of
> > requests to the external server we would like squid to cache the
> > response and issue a cached version.
> >
> > 2021/03/24 18:00:54.867 kid1| 22,3| refresh.cc(351) refreshCheck:
> > YES: Must revalidate stale object (origin set no-cache or private)
> >
> > Mirek
> >
> > On Wed, Mar 24, 2021 at 6:15 PM Alex Rousskov
> > <rouss...@measurement-factory.com> wrote:
> >
> > On 3/24/21 12:48 PM, Miroslaw Malinowski wrote:
> >
> > > Probably, me missing on something silly or it can't be done
> > but I don't
> > > know why but squid won't return the cached version even
> when I
> > turn all
> > > override options ON in refresh_pat
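Option 2 in Alex's list (an adaptation service) comes down to deleting the response's Cache-Control/Pragma headers before Squid's freshness logic sees them. A hedged sketch of just that rewriting step, with the ICAP framing around it omitted:

```python
def strip_no_cache(raw_headers: bytes) -> bytes:
    """Remove Cache-Control and Pragma headers from a raw HTTP header
    block, so a downstream cache is free to apply its own freshness
    rules. Operates on the bytes up to and including the blank line."""
    kept = []
    for line in raw_headers.split(b"\r\n"):
        # Header name is everything before the first colon; the status
        # line has no colon and therefore always survives.
        name = line.split(b":", 1)[0].strip().lower()
        if name in (b"cache-control", b"pragma"):
            continue  # drop the directives that force revalidation
        kept.append(line)
    return b"\r\n".join(kept)
```

An adaptation service would apply this to the encapsulated response head and return the modified message to Squid; whether that is safe is exactly the judgment call discussed in this thread.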

Re: [squid-users] How to automatically Restart Squid on Ubuntu?

2021-03-22 Thread Eliezer Croitoru
It can crash if memory is low relative to the number of connections allowed 
by the ulimit.

I don't know the proxy and the setup, but there are a couple of ways to limit 
connections per IP if the
proxy is indeed sometimes overloaded by specific users.

 

Angelo, you should really try to verify why the proxy is crashing.

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

 

 

From: squid-users  On Behalf Of 
Francesco Chemolli
Sent: Monday, March 22, 2021 5:20 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] How to automatically Restart Squid on Ubuntu?

 

Hi Angelo,

   Squid shouldn't crash with any number of connections. 

Anything in the logs?

 

On Mon, Mar 22, 2021 at 2:59 PM Angelo Wang <wangang...@hotmail.com> wrote:

Hi,

 

I have a /22 subnet on a server and sometimes Squid crashes when there are too 
many connections. Can someone help me create a script/command to automatically 
restart squid if this happens?

 

Best,

 

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




 

-- 

Francesco

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
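For the restart question itself, on a systemd-based Ubuntu the usual answer is a unit drop-in rather than a hand-rolled watchdog script. A sketch (the drop-in path and timing values below are illustrative):

```ini
# /etc/systemd/system/squid.service.d/override.conf (hypothetical drop-in)
[Service]
Restart=on-failure
RestartSec=5s
```

After `systemctl daemon-reload`, systemd restarts squid automatically when it crashes. That only treats the symptom; as Francesco notes, the logs still deserve a look to find the actual cause.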


Re: [squid-users] Protecting squid

2021-03-15 Thread Eliezer Croitoru
Hey Ben,

Since you probably don't have 100k users, and therefore passwords, it wouldn't 
cost you a thing.
Nobody will notice you dropping the TTL.
The content of the credentials file will be in RAM, so you should give it a try 
first and ask later.

All The Bests,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Ben 
Goz
Sent: Sunday, March 14, 2021 3:26 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Protecting squid


On 12/03/2021 7:13, Amos Jeffries wrote:
> On 12/03/21 3:56 am, Ben Goz wrote:
>>
>> On 11/03/2021 16:44, Amos Jeffries wrote:
>>> On 12/03/21 3:37 am, Ben Goz wrote:
>>>>
>>>> On 11/03/2021 15:50, Antony Stone wrote:
>>>>> On Thursday 11 March 2021 at 14:41:11, Ben Goz wrote:
>>>>>
>>>>> Tell about your network setup and what you are trying to achieve - 
>>>>> we might be
>>>>> able to suggest solutions.
>>>>
>>>> End users machine using some client application while their system 
>>>> proxy points to the above squid proxy server.
>>>>
>>>
>>> Please also provide your squid.conf settings so we can check they 
>>> achieve your described need(s) properly. At least any lines starting 
>>> with the http_access, auth_param, acl, or external_acl_type 
>>> directives would be most useful.
>>>
>>> Do not forget to anonymize sensitive details before posting. PLEASE 
>>> do so in a way that we can tell whether a hidden value was correct 
>>> for its usage, and whether any two hidden values are the same or 
>>> different.
>>
>>
>> It's fork of default configuration with some changes.
>>
>> # Recommended minimum Access Permission configuration:
>> #
>> # Deny requests to certain unsafe ports
>> #http_access deny !Safe_ports
>>
>
>
> Please restore this security protection. It prevents malware abusing 
> HTTP's similarity to certain other protocols to perform attacks 
> *through* your proxy.
>
> The default Safe_ports list allows all ports not known to be 
> dangerous, and all ports above 1024. So it should not have any 
> noticeable effect on to any legitimate HTTP proxy clients - unless 
> there is something really dodgy happening on your network. If you 
> actually want something like that happening, then add the appropriate 
> port for that activity to the Safe_ports list. Do not drop the 
> protection completely.
>
>
>> # Deny CONNECT to other than secure SSL ports
>> #http_access deny CONNECT !SSL_ports
>>
>
> The same can be said about this. Except this line is arguably even 
> more important. CONNECT tunnels can literally contain anything. Let 
> clients do things by adding ports to SSL_Ports list as-needed.
>
> Please do some due-diligence checks before that to verify you are okay 
> with all the uses of that port. Even ones you think the client 
> themselves is unlikely to be doing. Once you open a port here *anyone* 
> with access to the proxy can do whatever they like on that port.
>
>
>
>> # Only allow cachemgr access from localhost
>> http_access allow localhost manager
>> http_access deny manager
>>
>> http_access allow localnet
>> http_access allow localhost
>>
>> auth_param basic program /usr/local/squid/libexec/basic_ncsa_auth 
>> /usr/local/squid/etc/passwd
>> auth_param basic realm proxy
>
> I notice you are missing a line setting the login TTL value.
>
> There is currently a potential problem in the default which means 
> Squid encounters situations where the credentials are seen as still 
> going to be valid for hours so do not get refreshed. But garbage 
> collection decides to throw them away.
>
> This may not be related to the complaints you reported getting. But 
> should be fixed to ensure the side effect of having to re-authenticate 
> users does not complicate your actual problem.
>
> "auth_param basic credentialsttl ..." sets how often Squid will 
> re-check your auth system to confirm the users is still allowed. 
> Default: 2 hr.
>
> "authenticate_ttl ..." sets how often Squid will try to throw away all 
> info about old clients being logged in. Default: 1 hr.
>
>
>> acl authenticated proxy_auth REQUIRED
>> http_access allow authenticated
>>
>
> I recommend a slightly different form of check for logins. It prevents 
> the situation where a user trying the wrong credentials gets a loop of 
> popups.
>
> Like so:
>  http_access deny !authenticated
>
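Amos's points above, gathered into one squid.conf fragment as a sketch (the Safe_ports list is abbreviated from the stock defaults, and the TTL values simply restate the defaults he quotes):

```
# Restore the stock port protections
acl SSL_ports port 443
acl Safe_ports port 80 21 443 1025-65535
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

# Authentication with explicit TTLs (defaults shown)
auth_param basic program /usr/local/squid/libexec/basic_ncsa_auth /usr/local/squid/etc/passwd
auth_param basic realm proxy
auth_param basic credentialsttl 2 hours
authenticate_ttl 1 hour

acl authenticated proxy_auth REQUIRED
http_access deny !authenticated
http_access allow authenticated
```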

Re: [squid-users] Squid 5 does not send ICAP request

2021-03-15 Thread Eliezer Croitoru
Hey Alex and Amos.

These bugs have been open forever.
There is a simple way to reproduce it.
There is also a simple workaround for the issue, but still:
would it be possible to fix it for 6?
It doesn't even require a single HTTP request to test and verify.
The ICAP host doesn't resolve for at least 3 minutes.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Friday, March 12, 2021 8:43 PM
To: 橋本紘希 ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 5 does not send ICAP request

I suspect you are suffering from Bug 4528:
https://bugs.squid-cache.org/show_bug.cgi?id=4528

Which has also been discussed earlier as Bug 3621:
https://bugs.squid-cache.org/show_bug.cgi?id=3621

Does adding icap5 to /etc/hosts (or whatever your hosts_file points to)
help?

Unfortunately, I currently do not have enough free time to study your
logs to explain why Squid v5 delay is longer than that of v4, but I hope
that you can work around the problem by adjusting your hosts file.


HTH,

Alex.
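The workaround Alex suggests is a one-line hosts entry pinning the ICAP service's name. The address below is an assumption; use the icap5 container's real address (e.g. from `docker inspect`):

```
# /etc/hosts inside the squid container (address is illustrative)
172.18.0.3   icap5
```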


On 3/12/21 2:44 AM, 橋本紘希 wrote:
> I made squid and ICAP system using docker-compose.
> 
> Squid 4 started sending ICAP requests 1 minute after boot.
> 
> However, squid 5 sends no ICAP request even 10 minutes after boot.
> Squid continued to mark the ICAP service down.
> 
> How can I make squid 5 to start ICAP conversation?
> 
> * squid version
> 5.0.5-20210223-r4af19cc24
> 
> * squid.conf
> 
> ```
> http_port 3128
> http_access allow all
> icap_enable on
> icap_service icapsvc reqmod_precache icap://icap5:1344 bypass=off
> adaptation_access icapsvc allow all
> icap_persistent_connections off
> icap_service_revival_delay 60
> debug_options ALL,9
> ```
> 
> * This is my environment.
> https://github.com/hsmtkk/squidicap
> 
> * I uploaded access.log and cache.log to the GitHub issue.
> https://github.com/hsmtkk/squidicap/issues/1
> 
> Best regards,
> Kouki Hashimoto
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] a specific host generates a 503 ...

2021-03-11 Thread Eliezer Croitoru
Hey Walter,

It's sitting behind Cloudflare DDoS protection,
so it makes sense that you would not be able to download it using wget.
The only option is probably to use a web browser.
I would suggest contacting the clamav.net web/system admins to find out what 
the options are.

All The Bests,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of 
Walter H.
Sent: Wednesday, March 10, 2021 7:55 AM
To: Squid Users 
Subject: [squid-users] a specific host generates a 503 ...

Hello,

can someone test the following URL

http://db.local.clamav.net/daily-26102.cdiff

e.g.   wget http://db.local.clamav.net/daily-26102.cdiff

I have an older squid (v3.1) there this works,
but with the newer ones (v3.4 and v3.5) this doesn't;

is there an explanation why?

the log shows this:

client-ip - - [10/Mar/2021:06:43:50 +0100] "GET 
http://db.local.clamav.net/daily-26102.cdiff HTTP/1.0" 503 8645 "-" 
"Wget/1.12 (linux-gnu)" TCP_MISS:HIER_DIRECT

the suspicious thing: when using a browser this works with any squid, 
but that doesn't help, because the clamav signature updates are loaded
by freshclam, which shows the same failure as e.g. wget

client-ip - - [09/Mar/2021:06:00:03 +0100] "GET 
http://db.local.clamav.net/daily-26102.cdiff HTTP/1.0" 503 8642 "-" 
"ClamAV/0.103.1 (OS: linux-gnu, ARCH: x86_64, CPU: x86_64)" 
TCP_MISS:HIER_DIRECT

I noticed this two days after the nightly freshclam (signature update) 
failure,
and changed the freshclam config to use the squid v3.1;

Thanks,
Walter

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] websocket with sslbump

2021-03-11 Thread Eliezer Croitoru
Hey Niels,

 

I can help you with this if you need.
I have a pre-compiled version, and while it's not a packaged Debian .deb file, 
it's just a matter of unpacking the files into the filesystem.

Also take a peek at the docker build:

https://github.com/elico/squid-docker-build-nodes

 


Let me know if you need these binaries; I can put them at:

https://ngtech.co.il/repo/bin/debian/10.4/amd64/

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com

Zoom: Coming soon

 

 

From: squid-users  On Behalf Of 
Niels Hofmans
Sent: Wednesday, March 10, 2021 9:42 AM
To: Alex Rousskov 
Cc: Squid Users 
Subject: Re: [squid-users] websocket with sslbump

 

Hi Alex,

 

Thank you for your response. I’ll be opening up a Bugzilla ticket for opaque 
messages through ICAP if it doesn’t exist already.

Related to the squid 5.x, I’ve reached out to the debian package maintainer 
last week for a binary install in the repos but no response as of yet.

 

Best regards,
Niels Hofmans

SITE   https://ironpeak.be
BTW   BE0694785660
BANK BE76068909740795

 

On 9 Mar 2021, at 16:58, Alex Rousskov <rouss...@measurement-factory.com> wrote:

 

On 3/8/21 10:10 AM, Niels Hofmans wrote:




During testing sslbump + icap I noticed that websockets (ws + was) are
not supported by squid. (Even if using on_unsupported_protocol)
Are there any plans for supporting this with sslbump?


Your question can be misinterpreted in many different ways. I will
answer the following related question instead:

Q: Are there any plans for Squid to send tunneled traffic through
adaptation services?

The ICAP and eCAP protocols cannot support opaque/messageless traffic
natively. Squid can be enhanced to wrap tunneled traffic into something
resembling HTTP messages so that it can be analyzed using adaptation
services (e.g., Squid applies similar wrapping to FTP traffic already).

I recall occasional requests for such a feature. I am not aware of
anybody working on that right now.

https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F


HTH,

Alex.
P.S. Latest Squids support forwarding websocket tunnels that use HTTP
Upgrade mechanism (see http_upgrade_request_protocols in v5
squid.conf.documented).
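The v5 directive Alex mentions in the P.S. is configured per protocol; a hedged one-line example permitting WebSocket upgrades for all clients:

```
# squid.conf, Squid 5+: allow HTTP Upgrade to WebSocket
http_upgrade_request_protocols WebSocket allow all
```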

 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

2021-02-15 Thread Eliezer Croitoru
By "google host" I mean the host that squid couldn't connect to, i.e.:
>>> connection on conn2195 local=216.58.198.67:443
>>> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
>>>

216.58.198.67:443

The issue can be tested against this host (the above).
There is an issue with SSL bump, and this specific host is a reproducible 
case.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Alex Rousskov  
Sent: Monday, February 15, 2021 9:03 PM
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

On 2/15/21 6:32 AM, Eliezer Croitoru wrote:

> Where exactly do you see Host Header Forgery in my last email?

Your last email says "google hosts". The previous email from you (in the
same thread) said "Most of the issues I see are related to Host header
forgery detection" and then named "google host related issue" to be "the
main issue". I naturally assumed that you are talking about a set of
Host forgery related issues with one specific Host forgery detection
issue being the prevalent/major one.

If my assumption was wrong, then you have not addressed the problem I
stated in my very first response -- I still do not know what "google
host related issue" is. The cache.log lines you have posted do not
answer that question for me. You seem to know what the problem actually
is, so, if you want answers, perhaps you can detail/explain the problem
you are asking about.

Alex.


> -Original Message-
> From: Alex Rousskov  
> Sent: Thursday, February 11, 2021 7:02 PM
> To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac
> 
> On 2/11/21 10:41 AM, Eliezer Croitoru wrote:
> 
>> The issue that makes it impossible to surf, never mind cache:
>>> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
>>> connection on conn2195 local=216.58.198.67:443
>>> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
>>>
>>> current master transaction: master78
>>>
>>> which is a google host related issue.
>>
>> The access to google hosts seems to be the main issue here.
> 
> How is this different from the host forgery related discussions we
> recently had? I consider the general "What can we do about host forgery
> errors?"  question answered already. If you disagree with those answers,
> we can discuss further, but, to make progress, you need to say
> explicitly which answer you disagree with and why.
> 
> Alex.
> 
> 
>> -Original Message-
>> From: Alex Rousskov  
>> Sent: Tuesday, February 9, 2021 11:03 PM
>> To: Eliezer Croitoru ;
>> squid-users@lists.squid-cache.org
>> Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac
>>
>> On 2/7/21 12:47 PM, Eliezer Croitoru wrote:
>>> I move on to testing squid-6.0.0-20210204-r5f37a71ac
>>>
>>> Most of the issues I see are related to Host header forgery detection.
>>>
>>> I do see that the main issue with TLS is similar to:
>>>
>>> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
>>> connection on conn2195 local=216.58.198.67:443
>>> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
>>>
>>> current master transaction: master78
>>>
>>> which is a google host related issue.
>>
>>
>>> Alex and Amos,
>>>
>>> Can the project do something about this?
>>  FWIW, I do not understand what you are asking about -- it is not clear
>> to me what "this" is in the context of your question. As you know, there
>> have been several recent discussions about host header forgery detection
>> problems. It is not clear to me whether you are asking about some
>> specific new case or want to revisit some specific aspects of those
>> discussions.
>>
>> Alex.
>>

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

2021-02-15 Thread Eliezer Croitoru
Hey Alex,

Where exactly do you see Host Header Forgery in my last email?

Eliezer

* I wrote my own proxy for now.


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Alex Rousskov  
Sent: Thursday, February 11, 2021 7:02 PM
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

On 2/11/21 10:41 AM, Eliezer Croitoru wrote:

> The issue that makes it impossible to surf, never mind cache:
>> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
>> connection on conn2195 local=216.58.198.67:443
>> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
>>
>> current master transaction: master78
>>
>> which is a google host related issue.
> 
> The access to google hosts seems to be the main issue here.

How is this different from the host forgery related discussions we
recently had? I consider the general "What can we do about host forgery
errors?"  question answered already. If you disagree with those answers,
we can discuss further, but, to make progress, you need to say
explicitly which answer you disagree with and why.

Alex.


> -Original Message-
> From: Alex Rousskov  
> Sent: Tuesday, February 9, 2021 11:03 PM
> To: Eliezer Croitoru ;
> squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac
> 
> On 2/7/21 12:47 PM, Eliezer Croitoru wrote:
>> I move on to testing squid-6.0.0-20210204-r5f37a71ac
>>
>> Most of the issues I see are related to Host header forgery detection.
>>
>> I do see that the main issue with TLS is similar to:
>>
>> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
>> connection on conn2195 local=216.58.198.67:443
>> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
>>
>> current master transaction: master78
>>
>> which is a google host related issue.
> 
> 
>> Alex and Amos,
>>
>> Can the project do something about this?
>  FWIW, I do not understand what you are asking about -- it is not clear
> to me what "this" is in the context of your question. As you know, there
> have been several recent discussions about host header forgery detection
> problems. It is not clear to me whether you are asking about some
> specific new case or want to revisit some specific aspects of those
> discussions.
> 
> Alex.
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

2021-02-11 Thread Eliezer Croitoru
Hey Alex,

I am talking/writing about the actual issue,
the issue that makes it impossible to surf, never mind cache:
> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
> connection on conn2195 local=216.58.198.67:443
> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
> 
> current master transaction: master78
> 
> which is a google host related issue.

The access to google hosts seems to be the main issue here.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Alex Rousskov  
Sent: Tuesday, February 9, 2021 11:03 PM
To: Eliezer Croitoru ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Started testing squid-6.0.0-20210204-r5f37a71ac

On 2/7/21 12:47 PM, Eliezer Croitoru wrote:
> I move on to testing squid-6.0.0-20210204-r5f37a71ac
> 
> Most of the issues I see are related to Host header forgery detection.
> 
> I do see that the main issue with TLS is similar to:
> 
> 2021/02/07 19:46:07 kid1| ERROR: failure while accepting a TLS
> connection on conn2195 local=216.58.198.67:443
> remote=192.168.189.94:41724 FD 104 flags=33: 0x55cf6a6debe0*1
> 
>     current master transaction: master78
> 
> which is a google host related issue.


> Alex and Amos,
> 
> Can the project do something about this?
 FWIW, I do not understand what you are asking about -- it is not clear
to me what "this" is in the context of your question. As you know, there
have been several recent discussions about host header forgery detection
problems. It is not clear to me whether you are asking about some
specific new case or want to revisit some specific aspects of those
discussions.

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Originserver load balancing and health checks in Squid reverse proxy mode

2021-02-09 Thread Eliezer Croitoru
This is more of Amos and Alex area.
In general I think that haproxy does load balancing much more efficiently then 
squid.
It is being used in production for years so I'm not sure why you should use 
Squid for LB.
If you want to resolve this issue then be my guest I can only offer so QA and 
advice here and there.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Chris
Sent: Tuesday, February 9, 2021 6:36 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Originserver load balancing and health checks in 
Squid reverse proxy mode

This is what I'm seeing in peer_select in cache_log with 44,3 debug options:

2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(258) 
peerSelectDnsPaths: Find IP destination for: '[the_request]' via 
[ip_cache_peer_srv1]
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(280) 
peerSelectDnsPaths: Found sources for '[the_request]'
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(281) 
peerSelectDnsPaths:   always_direct = DENIED
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(282) 
peerSelectDnsPaths:never_direct = DENIED
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(292) 
peerSelectDnsPaths:  cache_peer = local=0.0.0.0 
remote=[ip_cache_peer_srv1]:[port] flags=1
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(292) 
peerSelectDnsPaths:  cache_peer = local=0.0.0.0 
remote=[ip_cache_peer_srv2]:[port] flags=1
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(292) 
peerSelectDnsPaths:  cache_peer = local=0.0.0.0 
remote=[ip_cache_peer_srv3]:[port] flags=1
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(292) 
peerSelectDnsPaths:  cache_peer = local=0.0.0.0 
remote=[ip_cache_peer_srv1]:[port] flags=1
2021/02/09 16:25:11.588 kid1| 44,2| peer_select.cc(295) 
peerSelectDnsPaths:timedout = 0
2021/02/09 16:25:11.588 kid1| 44,3| peer_select.cc(79) ~ps_state: 
[the_request]

and than in access.log I have:

TCP_MISS/200 [the_request] ROUND_ROBIN_PARENT/[ip_cache_peer_srv1]

TCP_MISS/200 [the_request] ROUND_ROBIN_PARENT/[ip_cache_peer_srv2]

TCP_MISS/200 [the_request] ROUND_ROBIN_PARENT/[ip_cache_peer_srv3]

evenly distributed.

So it's not using the weighted-round-robin that should have srv1 at 
11ms, while srv2 and srv3 are at about 150ms in regard to pinger.

What did I miss in configuring weighted-round-robin?

Best Regards,

Chris







On 09.02.21 17:09, Chris wrote:
> Hi Eliezer, this helped, it seems as if I got the pinger working.
>
> It's now owned by root in the same group as the squid user and the 
> setuid set.
>
> So I used chown root:squidusergroup and chmod u+s on the pinger (and 
> in ubuntu it is actually found under /usr/lib/squid/pinger ).
>
> Now with debug 42,3 I get some values as:
>
> Icmp.cc(95) Log: pingerLog: [timestamp] [ip_srv2] 
> 0 Echo Reply  155ms 7 hops
>
> and
>
> Icmp.cc(95) Log: pingerLog: [timestamp] [ip_srv1] 
> 0 Echo Reply  11ms 9 hops
>
> but squid is still allocating the requests evenly and not using those 
> ping times in weighted-round-robin.
>
> Does the weighted-round-robin need some time to use those rtt values?
>
> Best Regards,
>
> Chris
>
>
> On 09.02.21 16:19, NgTech LTD wrote:
>> Maybe it's apparmor.
>> Pinger needs to have setuid permission as root.
>> It's a pinger and needs root privileges as far as I remember.
>>
>> Eliezer
>>
>>
>> On Tue, Feb 9, 2021, 17:03 Chris  wrote:
>>
>>> Hi,
>>>
>>> thank you Amos, this is bringing me into the right direction.
>>>
>>> Now I know what I'll have to debug: the pinger.
>>>
>>> Cache.log shows:
>>>
>>> 2021/02/09 14:49:27| pinger: Initialising ICMP pinger ...
>>> 2021/02/09 14:49:27| pinger: ICMP socket opened.
>>> 2021/02/09 14:49:27| pinger: ICMPv6 socket opened
>>> 2021/02/09 14:49:27| Pinger exiting.
>>>
>>> and that last line "pinger exiting" looks like a problem here.
>>>
>>> Squid is used as a package from ubuntu bionic, it's configured with
>>> "--enable-icmp" as stated by squid -v.
>>>
>>> Now I explicitly wrote a "pinger_enable on" and the pinger_program path
>>> (in this case: "/usr/lib/squid/pinger" ) into the squid.conf (as well
>>> as icmp_query on) and reconfigured but the cache.log still shows:
>>>
>>> "Pinger exiting"
>>>
>>> So I don't understand why the pinger is exiting. The pinger_program is
>>> owned by root and has 0755 execution rights. Normal ping commands do
>>> work and show th

Re: [squid-users] Port or switch level authorization

2021-02-09 Thread Eliezer Croitoru
Thanks Amos,

OK this seems to answer my question.
A session helper with ttl=3 should be enough if it will return the username 
associated by the helper.

The next thing is to block traffic if there is no username.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, February 9, 2021 5:30 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Port or switch level authorization

On 8/02/21 10:48 pm, Eliezer Croitoru wrote:
> I have a Mikrotik PPPOE server and I would like to register the logged in
> user on PPPOE Tunnel creation.
> In the Mikrotik device I have code which can run a curl/fetch request with
> the login details ie IP and username towards any server.
> I was thinking about creating a PHP api that will be allowed access only
> from the Mikrotik devices.
> On every login the user+IP pairs will be written to a small DB.
> Squid, in its turn, will use an external helper to run queries against the DB
> per request with small cache of 3-10 seconds.

Do you mean the ext_session_sql_acl helper?

> 
> What's the best way to pass a username so with the ip it will be logged.
> 

The helper needs to return user= kv-pair to Squid for this to be an 
"authentication" rather than just authorization. That username will be 
logged without anything special having to be done.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
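The contract Amos describes is line-oriented: Squid writes one key per line (here the client IP, via %SRC) to the helper's stdin and expects `OK` (optionally with a `user=` kv-pair) or `ERR` back. A minimal sketch, with a hard-coded table standing in for the DB that the Mikrotik login hook would populate; the addresses and usernames are invented for illustration:

```python
import sys

# Stand-in for the PPPOE session DB; in the real deployment this would
# be a (cached) query against the database the login API writes to.
SESSIONS = {"192.0.2.10": "alice", "192.0.2.11": "bob"}

def lookup(ip, table=SESSIONS):
    """Return one external_acl_type reply line for a client IP."""
    user = table.get(ip)
    if user:
        return "OK user=%s" % user   # squid logs this username
    return "ERR"                     # no session -> request denied

def main():
    # Squid keeps the helper running and feeds one key per line;
    # replies must be flushed immediately, one line per lookup.
    for line in sys.stdin:
        key = line.split()[0] if line.split() else ""
        print(lookup(key), flush=True)

# When deployed as the helper script, call main() at the entry point.
```

In squid.conf it would be wired up with something like `external_acl_type session_db ttl=3 %SRC /usr/local/bin/session_helper.py`, then `acl logged_in external session_db` and `http_access deny !logged_in` (a sketch; the helper path and acl names are assumptions).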


Re: [squid-users] Originserver load balancing and health checks in Squid reverse proxy mode

2021-02-08 Thread Eliezer Croitoru
Hey Chris,

The main question is "what do you need squid for?"
If you need squid for caching, that's one thing;
RFC compliance is another thing.
Anyway, haproxy is better at load balancing and traffic control/management.
If you need a load balancer, use haproxy.
If you need caching for very specific, known use cases, then use squid.
For general purposes these days it might not work as you expect.

Take into account that browsers cache lots of things, even things they 
shouldn't, so the gain/profit should be tested first.

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-users  On Behalf Of Chris
Sent: Monday, February 8, 2021 4:41 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Originserver load balancing and health checks in Squid 
reverse proxy mode

Hi all,

I'm trying to figure out the best way to use squid (version 3.5.27) in 
reverse proxy mode in regard to originserver health checks and load 
balancing.

So far I had been using the round-robin originserver cache peer 
selection algorithm while using weight to favor originservers with 
closer proximity/lower latency.

The problem: if one cache_peer is dead it takes ages for squid to choose 
the second originserver. It does look as if (e.g. if one originserver 
has a weight of 32, the other of 2) squid tries the dead server several 
times before accessing the other one.

Now instead of using round-robin plus weight it would be best to use 
weighted-round-robin. But as I understand it, this wouldn't work with 
originserver if (as it's normally the case) the originserver won't 
handle icp or htcp requests. Did I miss sth. here? Would background-ping 
work?

I tried weighted-round-robin and background-ping on originservers but 
got only an evenly distributed request handling even if ones 
originservers rtt would be less than half of the others. But then again, 
those originservers won't handle icp requests.

So what's the best solution to a) choose the originserver with the 
lowest rtt and b) still have a fast switch if one of the originservers 
switches into dead state?

Would I have to span another proxy (like e.g. HAProxy) between Squid and 
originserver or better install Squid on those originservers as well 
(only for serving icp requests from the squid fellows)?

Is there a better way to update the dead state of an originserver?

How do you handle this?

Thanks a lot,

Chris

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
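For reference, both selection modes discussed in this thread are chosen on the cache_peer line; a sketch with invented host names and ports:

```
# Static bias: plain round-robin with manual weights
cache_peer srv1.example.net parent 80 0 no-query originserver round-robin weight=32
cache_peer srv2.example.net parent 80 0 no-query originserver round-robin weight=2

# RTT-driven selection instead: needs a build with --enable-icmp and a
# working setuid-root pinger, otherwise requests are distributed evenly
#cache_peer srv1.example.net parent 80 0 no-query originserver weighted-round-robin
```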

