Re: [squid-dev] [squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

2022-05-03 Thread Eliezer Croitoru
Tested all of the RPM builds and they seem to build fine.
I am releasing now the 6.0.1 RPMS at:
https://www.ngtech.co.il/repo/fedora/33/beta/

https://www.ngtech.co.il/repo/fedora/35/beta/

https://www.ngtech.co.il/repo/centos/7/beta/

https://www.ngtech.co.il/repo/centos/8/beta/

https://www.ngtech.co.il/repo/oracle/7/beta/

https://www.ngtech.co.il/repo/oracle/8/beta/


It will take some time to deploy, but it's based on the latest daily
autogenerated sources package from yesterday (02/05/2022).

I hope to start testing this version next week.

Thanks,
Eliezer

* 5.5 works great!


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-dev  On Behalf Of Alex
Rousskov
Sent: Monday, May 2, 2022 21:54
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [squid-users] squid-6.0.0-20220412-rb706999c1
cannot be built

On 5/1/22 18:27, Eliezer Croitoru wrote:

> From my tests, this issue with the latest daily autogenerated sources
> package is the same:
> http://master.squid-cache.org/Versions/v6/squid-6.0.0-20220501-re899e0c27.tar.bz2
> AclRegs.cc:165:50: error: unused parameter 'name' [-Werror=unused-parameter]
> RegisterMaker("clientside_mark", [](TypeName name)->ACL* { return new
> Acl::ConnMark; });

Please test https://github.com/squid-cache/squid/pull/1042
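
(For reference, the usual fix for this class of warning is to stop naming the
unused lambda parameter; a minimal sketch, not necessarily what that PR does:

    // an unnamed parameter cannot trigger -Wunused-parameter
    RegisterMaker("clientside_mark", [](TypeName)->ACL* { return new Acl::ConnMark; });
    // alternatively, the C++17 attribute keeps the name for documentation
    RegisterMaker("client_connection_mark", []([[maybe_unused]] TypeName name)->ACL* { return new Acl::ConnMark; });
)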


Thank you,

Alex.


>   ~^~~~
> AclRegs.cc: In lambda function:
> AclRegs.cc:166:57: error: unused parameter 'name' [-Werror=unused-parameter]
>   RegisterMaker("client_connection_mark", [](TypeName name)->ACL* {
> return new Acl::ConnMark; });
> ~^~~~
> g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
> -DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
> -DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
> -I../src -I../include
> -I../libltdl -I../src -I../libltdl  -I/usr/include/libxml2  -Wextra
> -Wno-unused-private-field -Wimplicit-fallthrough=2 -Wpointer-arith
> -Wwrite-strings -Wcomments -Wshadow -Wmissing-declarations
> -Woverloaded-virtual -Werror -pipe -D_REENTRANT -I/usr/include/libxml2
> -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall -Werror=format-security
> -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
> -fstack-protector-strong -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
> -c -o DelayConfig.o DelayConfig.cc
> g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
> -DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
> -DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
> -I../src -I../include   -I../libltdl -I../src -I../libltdl
> -I/usr/include/libxml2  -Wextra -Wno-unused-private-field
> -Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
> -Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
> -I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
> -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
> -fexceptions -fstack-protector-strong -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
> -c -o DelayPool.o DelayPool.cc
> g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
> -DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
> -DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
> -I../src -I../include   -I../libltdl -I../src -I../libltdl
> -I/usr/include/libxml2  -Wextra -Wno-unused-private-field
> -Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
> -Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
> -I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
> -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
> -fexceptions -fstack-protector-strong -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
> -c -o DelaySpec.o DelaySpec.cc
> At global scope:
> cc1plus: error: unrecognized command line option '-Wno-unused-private-field' [-Werror]
> cc1plus: all warnings being treated as errors
> 
> ## END
> 
> I will try to publish my podman build later on.
> 
> 
> Eliezer Croitoru
> NgTech, Tech Support
> Mobile: +972-5-28

Re: [squid-dev] [squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

2022-05-01 Thread Eliezer Croitoru
Moving to Squid-Dev.

From my tests, this issue with the latest daily autogenerated sources package
is the same:
http://master.squid-cache.org/Versions/v6/squid-6.0.0-20220501-re899e0c27.tar.bz2
## START
n -fPIC -c -o DelayBucket.o DelayBucket.cc
AclRegs.cc: In lambda function:
AclRegs.cc:165:50: error: unused parameter 'name' [-Werror=unused-parameter]
RegisterMaker("clientside_mark", [](TypeName name)->ACL* { return new
Acl::ConnMark; });
 ~^~~~
AclRegs.cc: In lambda function:
AclRegs.cc:166:57: error: unused parameter 'name' [-Werror=unused-parameter]
 RegisterMaker("client_connection_mark", [](TypeName name)->ACL* {
return new Acl::ConnMark; });
~^~~~
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include
-I../libltdl -I../src -I../libltdl  -I/usr/include/libxml2  -Wextra
-Wno-unused-private-field -Wimplicit-fallthrough=2 -Wpointer-arith
-Wwrite-strings -Wcomments -Wshadow -Wmissing-declarations
-Woverloaded-virtual -Werror -pipe -D_REENTRANT -I/usr/include/libxml2
-I/usr/include/p11-kit-1   -O2 -g -pipe -Wall -Werror=format-security
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
-fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelayConfig.o DelayConfig.cc
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include   -I../libltdl -I../src -I../libltdl
-I/usr/include/libxml2  -Wextra -Wno-unused-private-field
-Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
-I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelayPool.o DelayPool.cc
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\"
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\"
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\"   -I.. -I../include -I../lib
-I../src -I../include   -I../libltdl -I../src -I../libltdl
-I/usr/include/libxml2  -Wextra -Wno-unused-private-field
-Wimplicit-fallthrough=2 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT
-I/usr/include/libxml2  -I/usr/include/p11-kit-1   -O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC
-c -o DelaySpec.o DelaySpec.cc
At global scope:
cc1plus: error: unrecognized command line option '-Wno-unused-private-field'
[-Werror]
cc1plus: all warnings being treated as errors

## END

I will try to publish my podman build later on.


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-users  On Behalf Of
Amos Jeffries
Sent: Monday, May 2, 2022 00:32
To: squid-us...@lists.squid-cache.org
Subject: Re: [squid-users] squid-6.0.0-20220412-rb706999c1 cannot be built

On 2/05/22 07:55, Eliezer Croitoru wrote:
> I have tried to build a couple of RPMs for the V6 beta but found that the
> current daily autogenerated releases cannot be built.
> 
> Is there any specific git commit I should try to use?
> 

There is a new daily tarball out now. Can you try with that one please?


Also, please use squid-dev for beta and experimental code issues.


Cheers
Amos
___
squid-users mailing list
squid-us...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Squid-Cache statistics reporting project

2022-04-21 Thread Eliezer Croitoru
Hey Alex,

Thanks for the kind words and recommendations.
Since I already have some of the code ready, and most of the clients do not
have a version with YAML cache mgr output available, I would say that this is
the best choice.
I offer this as a Squid-Cache statistics project and not a personal one.

Any directions on this, @Amos @Kinkie?

Thanks,
Eliezer 


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-dev  On Behalf Of Alex
Rousskov
Sent: Thursday, April 21, 2022 05:03
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Squid-Cache statistics reporting project

On 4/20/22 18:34, Eliezer Croitoru wrote:

> In the past I wrote about a project that will include Squid statistics 
> reporting.
> 
> The main goal is to gather from the project's users, using a script, a set 
> of cache-mgr pages at specific intervals.
> 
> The simplest way to do so is to run a script that will either use a 
> token and upload the files to an api/webdav/sftp, or send them via email 
> with a whitelist of emails.
> 
> I would like to RFC this specific idea.

Just to avoid misunderstanding: If "this idea" refers to offering your 
script to Squid users that want to participate in your project, then you 
do not need a squid-dev blessing for doing that because that idea does 
not require any Squid modifications.

If you are proposing Squid modifications, then please detail those 
modifications. I hope it does not come to Squid duplicating crontab and 
sendmail functionality :-).


> I can offer to write a cache-mgr to yaml/json converter script that will 
> create a single json/yaml file that will contain all the details of the 
> instance.

As a personal project, that converter sounds good to me! FWIW, I have 
heard of 3rd party tools[1] that parse Squid cache manager output, but I 
do not know how suitable they are for what you want to achieve. The best 
output format would depend on how you plan to post-process data, but 
once you get something that follows strict grammar, it should be fairly 
easy to convert to other formats as needed. I would just keep the 
converter script output as simple and strict as possible to facilitate 
further conversions and input in various post-processing tools.

[1] https://github.com/boynux/squid-exporter
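
For illustration only (a hypothetical mapping, not a format anyone has agreed
on), mgr:info lines such as "Service Name: squid" could translate to:

    { "info": { "Service Name": "squid", "Version": "4.1" } }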

As an official Squid project, I think it would be much better to finish 
converting cache manager code to produce YAML output natively than to 
develop and maintain an official converter script (while still working 
on that native YAML conversion).


> This option will significantly help the project to grasp a little bit 
> about the usage of squid around the world and to get a glimpse into the 
> unknown.

Personally, I am worried that glimpses based on a few 
volunteer-submitted samples are more likely to mislead than to correctly 
guide Squid development, but that speculation cannot be proven.


Cheers,

Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Squid-Cache statistics reporting project

2022-04-20 Thread Eliezer Croitoru
Hey All,

 

In the past I wrote about a project that will include Squid statistics
reporting.

The main goal is to gather from the project's users, using a script, a set of
cache-mgr pages at specific intervals.

The simplest way to do so is to run a script that will either use a token and
upload the files to an api/webdav/sftp, or send them via email with a
whitelist of emails.

I would like to RFC this specific idea.

I can offer to write a cache-mgr to yaml/json converter script that will
create a single json/yaml file that will contain all the details of the
instance.

 

First,

What do you all think?

 

This option will significantly help the project to grasp a little bit about
the usage of squid around the world and to get a glimpse into the unknown.

The basic constraint is a user UUID and an instance UUID.

The basic way for it to work is to run a cron job every 1 or 12 hours; for example:
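
(A purely illustrative crontab sketch; the script name and flags are
hypothetical, nothing is implemented yet:)

    # upload cache-mgr snapshots every 12 hours
    0 */12 * * * /usr/local/bin/squid-stats-collect --user-uuid <user-uuid> --instance-uuid <instance-uuid>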

 

Eliezer

 

*   Please comment.

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] CVE-2019-12522

2022-03-04 Thread Eliezer Croitoru
Thanks!!


Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-dev  On Behalf Of Amos
Jeffries
Sent: Friday, March 4, 2022 06:43
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] CVE-2019-12522

On 4/03/22 00:39, Eliezer Croitoru wrote:
> I'm still trying to understand why it's described as "exploitable".
> It's like saying: the Linux kernel should not be a kernel, and init (or
> equivalent) should not run with uid 0 or 1.
> Why does nobody complain about Cockpit being a root process?
> 

This explains the _type_ of problem:
<https://secureteam.co.uk/articles/how-return-oriented-programming-exploits-work/>


Most Squid installations are automatically protected against it by at least
one of the OS or compiler hardening systems. But some can still be
vulnerable, as shown by jeriko.one.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] ERR_CONFLICT_HOST for HTTP CONNECT request on port 80

2022-03-03 Thread Eliezer Croitoru
I am not sure if it's one for Squid-dev, but anyway, to clear out the doubts I
would suggest attaching the squid.conf, and remember to remove any sensitive
data.

 

Eliezer

 



Eliezer Croitoru

NgTech, Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com <mailto:ngtech1...@gmail.com> 

 

From: squid-dev  On Behalf Of YFone 
Ling
Sent: Thursday, March 3, 2022 22:55
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] ERR_CONFLICT_HOST for HTTP CONNECT request on port 80

 

My application sends HTTP CONNECT requests to an HTTP proxy on port 80, but
gets a Squid ERR_CONFLICT_HOST error page.

 

Is the following code really working as the comments point out ("ignore
them"), given that the if condition includes "http->request->method !=
Http::METHOD_CONNECT", and the rest is blocked by the error page via
"repContext->setReplyToError(ERR_CONFLICT_HOST, Http::scConflict,"?

 

Does "ignore them" mean block them? 



void
ClientRequestContext::hostHeaderVerifyFailed(const char *A, const char *B)
{
    // IP address validation for Host: failed. Admin wants to ignore them.
    // NP: we do not yet handle CONNECT tunnels well, so ignore for them
    if (!Config.onoff.hostStrictVerify && http->request->method != Http::METHOD_CONNECT) {
        debugs(85, 3, "SECURITY ALERT: Host header forgery detected on " << http->getConn()->clientConnection <<
               " (" << A << " does not match " << B << ") on URL: " << http->request->effectiveRequestUri());
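
(For context, the squid.conf directive behind Config.onoff.hostStrictVerify
appears to be host_verify_strict, with the default:

    host_verify_strict off

So with the default setting, the code above only "ignores" the mismatch for
non-CONNECT requests; CONNECT requests fall through to the error reply, which
matches the 409 I am seeing.)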



 

 

How does Squid get "hostHeaderVerifyFailed" for a normal HTTP CONNECT request
to an HTTP proxy, as simple as the one below?

 

CONNECT www.zscaler.com:80 HTTP/1.1

Host: www.zscaler.com:80

User-Agent: Windows Microsoft Windows 10 Enterprise ZTunnel/1.0

Proxy-Connection: keep-alive

Connection: keep-alive

 

HTTP/1.1 409 Conflict

Server: squid

Mime-Version: 1.0

Date: Tue, 22 Feb 2022 20:59:42 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 2072

X-Squid-Error: ERR_CONFLICT_HOST 0

Vary: Accept-Language

Content-Language: en

X-Cache: MISS from 3

Via: 1.1 3 (squid)

Connection: keep-alive

 





ERROR

The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL:
www.zscaler.com:80

...

 

 

 

Thank you for any help with understanding this!

 

Paul Ling

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] CVE-2019-12522

2022-03-03 Thread Eliezer Croitoru
I'm still trying to understand why it's described as "exploitable".
It's like saying: the Linux kernel should not be a kernel, and init (or
equivalent) should not run with uid 0 or 1.
Why does nobody complain about Cockpit being a root process?

Thanks,
Eliezer

----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: squid-dev  On Behalf Of Amos
Jeffries
Sent: Thursday, March 3, 2022 09:17
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] CVE-2019-12522

On 2/03/22 05:35, Adam Majer wrote:
> Hi all,
> 
> There apparently was a CVE assigned some time ago but I cannot seem to 
> find it being addressed.
> 
>
https://gitlab.com/jeriko.one/security/-/blob/master/squid/CVEs/CVE-2019-125
22.txt 
> 
> 
> The crux of the problem is that privileges are not dropped and could be 
> re-acquired. There is even a warning against running squid as root but 
> if root is one function call away, it seems it's the same.
> 
> Any thoughts on this?
> 


To quote myself:

"
We do not have an ETA on this issue. Risk is relatively low and several
features of Squid require the capability this allows in order to
reconfigure. So we will not be implementing the quick fix of fully
dropping root.
"

If anyone wants to work on it you can seek out any/all calls to 
enter_suid and see if they can be removed yet. Some may be able to go 
immediately, and some may need replacing with modern libcap capabilities.
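
For example, a rough sketch of the libcap direction (illustrative only, not
actual Squid code; link with -lcap):

    #include <sys/capability.h>

    /* Keep only CAP_NET_BIND_SERVICE instead of retaining the ability
     * to re-enter uid 0 via enter_suid(). */
    static void dropToNetBindOnly()
    {
        cap_t caps = cap_init();    /* starts from an empty capability set */
        cap_value_t keep[] = { CAP_NET_BIND_SERVICE };
        cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET);
        cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET);
        if (cap_set_proc(caps) != 0) {
            /* handle the error: could not shrink the capability set */
        }
        cap_free(caps);
    }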


HTH
Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [squid-users] 4.17 and 5.3 SSL BUMP issue: SSL_ERROR_RX_RECORD_TOO_LONG

2022-01-24 Thread Eliezer Croitoru
Hey Alex, 

@Squid-dev

Thanks for the response!
It will take me time to answer your questions and doubts about the solution to 
a more "scientific" degree rather than a basic understanding of the way the 
code works.

The main reason I posted it on the Squid-Users list first was that this issue 
has not been resolved by anyone to an acceptable degree for such a long time.
This patch demonstrates that it's possible to prevent a basic DoS from Squid's 
side, based on the assumption that the client can damage the cache.

There are a couple of options for what to do when such a request happens, e.g.:
There is a difference between what Squid knows about the requested domain name 
and what the client is asking for. (Split-brain scenario?)

What has been done in HTTP/1.x was mostly to force the proxy-resolved domain 
IP rather than considering the client's side of the picture.
In TLS we can assume that the client knows pretty well what IP it wants to 
reach.
The only thing that is left is to verify the remote host against the proxy's 
local CA bundle(s), from an admin point of view.
We can naturally assume that any TLS connection that is trusted by the proxy's 
CA bundle(s) is trusted, and there is no need for any special test of the 
basic trust for caching from the destination address. (Revocations are an 
exception to this.)

Indeed there are tests which are required, but when the service is denying the 
basic nature of the proxy (which in my case is content filtering and not 
caching), I and many others would prefer to have close to zero cache but an 
operational service.
(Which, by the way, is what happens with TLS connections in most cases anyway.)

I have traced the issue back at least 6 years.
I have tried to find a bug report which contains NONE/409, but none was found 
by the Bugzilla search, despite the fact that many encountered this issue in 
production with live clients, as opposed to API or AI systems.
(Why has no one responded or filed a bug report, even a duplicate one?)

Since v3.5 something has been broken. :\

So answering:
* http://lists.squid-cache.org/pipermail/squid-users/2020-November/022913.html
* http://www.squid-cache.org/mail-archive/squid-dev/201112/0035.html
* https://forum.netgate.com/topic/159364/squid-squidguard-none-409-and-dns-issue
* https://docs.netgate.com/pfsense/en/latest/troubleshooting/squid.html
* https://www.linuxquestions.org/questions/linux-server-73/tag_none-409-connect-squid-3-5-20-a-4175620518/
* https://community.spiceworks.com/topic/351106-pfsense-and-squidguard-error-page


The use cases that I have seen until now are (please add more cases to the 
list if you have any):
* remote-work VPN which forces remote (geographically and at the network 
level) DNS, but splits the tunnel traffic between the office and the local WAN 
connection
* enforcement of a specific IP for testing using the hosts file by CDN and 
networking services providers 
* software that uses another way to acquire the destination IP of the service 
(DNS over HTTPS/others) (AV, Software Updates, others)
* Malware/Spyware that forces a specific DNS service which also installed a 
RootCA on the device (Squid CA's bundle(s) blocks these easily)

Bugzilla related bugs I have found using other keywords:
* https://bugs.squid-cache.org/show_bug.cgi?id=4940
* https://bugs.squid-cache.org/show_bug.cgi?id=4514

I do not have any sponsorship for this patch, and if someone is willing to pay 
for the work as it is, I would be happy to accept some donation for my time 
on it.

Whether you have noticed or not, it also removes the flooding of
"Host header forgery detected on" and a couple of other log messages.

I hope it will help to move a couple of steps forward.

One of the things that pushed me to write this patch is the fact that Squid is 
very good software, despite being an old beast; I have tried to use commercial 
products and have been really disappointed, more than once.

I do believe that the right solution is not an ON/OFF switch, but rather 
something that can be matched against http_access and other matchers/ACLs.
The right choice in my use case is an ON/OFF switch, but if the admin were able 
to configure this, it would be very easy to enforce a policy.
Currently in the security industry the investments are at the TLS level rather 
than on the policy itself. (Feel free to correct me if I'm wrong.)

I will add this patch to the next RPMs release, which has just finished 
building, so those who have been having these issues will be able to use squid 
in production.

Thanks again,
Eliezer

* Currently I am building RPMs for: CentOS 7-8, Oracle Linux 7-8, Amazon Linux 
2.
* Peek at: Slamming-Your-Head-Into-Keyboard-HOWTO: Packaging Applications - 
Jared Morrow: https://vimeo.com/70019064

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

-Original Message-
From: Alex Rousskov  
Sent: Monday, January 24, 2022 21:54
To: Eliezer Croitoru ; squid-us...@list

Re: [squid-dev] squid-5.0.5-20210223-r4af19cc24 difference in behaviors between openbsd and linux

2021-03-28 Thread Eliezer Croitoru
Hey Robert,

I am not sure I understand the meaning of the description:
openbsd: Requiring client certificates.
linux: Not requiring any client certificates

In what sense?
Let's say you have the next config directives:
http_port 3128 ssl-bump \
  cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
https_port 3129 intercept ssl-bump \
  cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem \
  generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
sslcrtd_program /opt/osec/libexec/security_file_certgen -s /opt/osec/etc/ssl_db 
-M 128MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
ssl_bump splice all

Which implies you do want ssl bump to work.
To clear things up: what are the desired results, and where?
How do you see that the actual result does not match the expectation?
It would help if you showed the expectation using the relevant access.log 
output when you try to access, let's say, https://www.google.com/404.
Try using the next commands, to make it clear to me and probably others:
https_proxy=http://127.0.0.1:3128/ curl https://www.google.com/404 -v
https_proxy=http://127.0.0.1:3128/ curl https://www.google.com/404 -v -k

I hope this would make more sense into the scenario you are having.


Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-dev  On Behalf Of Robert 
Smith
Sent: Sunday, March 28, 2021 7:27 PM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] squid-5.0.5-20210223-r4af19cc24 difference in behaviors 
between openbsd and linux

Dear Squid-Dev list:

I could use some help on this one:


I have a build environment that is identical on linux, openbsd, and macosx

In this scenario, I am developing under:

Ubuntu 18.04 - All patches and updates applied as of 3/24
OpenBSD 6.8 - All patches and updates applied as of 3/24


I will note that I am really only using the libc from each system, whereas all 
other component dependencies (which are not many! Good job squid team!) are a 
part of my build system.

When building squid with the exact same tool chain and library stack, with the 
same configure options, I am seeing a difference in behavior on the two 
platforms:

The difference is that after parsing the configuration file, the two systems 
differ in whether or not they will require client certificates:


openbsd: Requiring client certificates.

linux: Not requiring any client certificates



One would think this was a run-time configuration difference. It is not; they 
are identical. Please see below:


- all configuration, certificates, certificate databases under /opt/osec/etc on 
both systems are identical
- the configuration file on both system is identical



I have some suspicions about what the actual issue is. Using the configuration 
options below without any of the --enable-auth or --enable-auth* options (AUTH 
OPTIONS), both systems worked just fine and parse the configuration file 
identically. Of course, without auth. No good. After trying a number of 
different configure options and combinations, I discovered that on the linux 
platform, I could add the AUTH OPTIONS and remove the --enable-security-cert* 
(CERT OPTIONS):

#   --enable-security-cert-validators \
#   --enable-security-cert-generators \

and then it would parse and run the way I was used to using peek & slice.

Excited, thinking I'd found the issue, I ran the build on openbsd only to find 
the differences in functionality.



BUILD & RUNTIME INFORMATION



I will interleave these to make viewing easier. Please see below:


#
## md5 sum of config file:
#



# openbsd

root@openbsd:~# md5 /opt/osec/etc/squid.conf-bump
MD5 (/opt/osec/etc/squid.conf-bump) = a0bf93867aaff1f35eb1af23dd5eb49b



# linux

root@linux:~# md5sum /opt/osec/etc/squid.conf-bump
a0bf93867aaff1f35eb1af23dd5eb49b  /opt/osec/etc/squid.conf-bump



#
## Actual configuration (sanitized)
#


acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128 ssl-bump \
  cert=/opt/osec/etc/ssl_cert/

Re: [squid-dev] About SQUID sizing

2021-01-27 Thread Eliezer Croitoru
Hey,

 

This post is more of a Squid-Users question than a squid-dev one, to my 
understanding.

 

The technical way to look at it is not the sum of “global” users but rather 
their load on the system.

For 2k users you are better off with more CPU, i.e. 4+ vCPUs, and the 8GB RAM 
can be enough for many use cases.

The best way to start with this is capturing some data/stats on the network/fw 
level.

You can start with the exact system, i.e. 4 vCPUs / 8GB RAM, only as a router/firewall.

Then collect some details on the actual connections that are being used in the 
network.

I can recommend Prometheus, which I have used to get some nice graphs on 
systems.

There are ways to use Prometheus to graph Linux iptables/nftables/kernel 
conntrack data, which is what I believe you should start from.

After you have at least a week of these stats, you should be able to start 
calculating more things.

 

Amos or Alex might know by heart the calculation that was mentioned here many 
times regarding the amount of RAM allocated per connection/request/session.

 

Eliezer

 



Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email:  <mailto:ngtech1...@gmail.com> ngtech1...@gmail.com

Zoom: Coming soon

 

 

From: squid-dev  On Behalf Of Hyukin 
Kwon
Sent: Wednesday, January 27, 2021 8:54 AM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] About SQUID sizing

 

Hi Squid development team,

 

I have one quick question about sizing.

Actually, I am trying to install a SQUID proxy for 2,000 users. So, I am 
finding out the h/w requirements for that and I am thinking of,

2vCPU and 8GB Mem, 50GB HDD(without Caching function)

 

Is it reasonable for that?

Any sizing calculation method?

 

I appreciate your help in advance,

 

Cheers,

Hugh

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] effective acl for tcp_outgoing_address

2021-01-21 Thread Eliezer Croitoru
Hey,

Alex gave you the technical details.

At Squid runtime there is a sequence of events and ACL validations.
http_access is validated first (and it accepts slow ACLs), long before 
tcp_outgoing_address is evaluated.
If you apply a "dummy" rule in http_access, like what Alex has suggested,
you can make sure that, when the tcp_outgoing_address validation happens,
a "pre-cooked" (this is how I call it) or pre-determined session note will 
already be attached to the session details.

This is a simplified squid.conf which includes the usage of a note from a 
helper that always matches (the way "all" should always be true, which is what 
Alex's example uses):
https://github.com/elico/vagrant-squid-outgoing-addresses/blob/master/shared/squid.conf#L14

A minimal sketch of the same pattern follows.
Let me know if it still doesn't make sense.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Hideyuki Kawai  
Sent: Thursday, January 14, 2021 2:22 PM
To: Eliezer Croitoru 
Cc: squid-dev@lists.squid-cache.org
Subject: RE: [squid-dev] effective acl for tcp_outgoing_address

Dear Eliezer

Thank you for your reply.
Could you let me ask you about your comment.

"slow acl" can use in tcp_outgoing_address?

Best regards,
Kawai

-
h.ka...@ntt.com
-----
-Original Message-
From: Eliezer Croitoru  
Sent: Thursday, January 14, 2021 8:36 PM
To: Hideyuki Kawai(川井秀行) 
Cc: squid-dev@lists.squid-cache.org
Subject: RE: [squid-dev] effective acl for tcp_outgoing_address

It's more of a users question.

Just to clear it up: tcp_outgoing_address takes fast ACLs, just when the 
decision is "required".
You can "pre-cook" the value of a specific note while the connection is still 
at the first http_access level.
An example of a setup which probably does what you want, based on htaccess 
passwords, is here:
https://github.com/elico/vagrant-squid-outgoing-addresses

It's a vagrant lab which demonstrates this.

Let me know if it helps you or you need clarification.

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-dev  On Behalf Of Hideyuki 
Kawai
Sent: Thursday, January 14, 2021 2:48 AM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] effective acl for tcp_outgoing_address

Hi, this is Kawai.

Please let me send an inquiry, as follows.

### Requirement ###
1. Kerberos auth with Active Directory  : auth_param .  <- Success
2. "Security group" check which is gotten from AD : external_acl_type ...(using 
ext_kerberos_ldap_group_acl)   <- success
3. Different outgoing IP based on "Security group" : tcp_outgoing_address + 
external_acl  <- fail

### Inquiry ###
1. "external_acl" can not use on tcp_outgoing_address. Because the external_acl 
type is slow.
   My understanding is correct?
2. If yes, how to solve my requirement?

Please let me inform your comment and knowledge.
Thanks in advance.

-
h.ka...@ntt.com
-
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] effective acl for tcp_outgoing_address

2021-01-14 Thread Eliezer Croitoru
Sorry, there was a typo.
There are a couple of places in the code that check ACLs:
IN -> PROXY PARSERS -> OUT

Fast ACLs are for the places where we cannot, or won't, delay the request.
The places which can take slow ACLs are before the OUT stage (simplified 
example above).
You can apply slow ACLs at the http_access layer, and the notes stay within 
the request/session.
But at the OUT stage Squid will not "stop" or "hold" the request until the 
helper responds.

The IP address choice is at the "kernel" level, so we must have the resolution 
for this be "fast" and not "s-l-o-w".

I hope this answers your question. If not... ask again.

Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Hideyuki Kawai  
Sent: Thursday, January 14, 2021 2:22 PM
To: Eliezer Croitoru 
Cc: squid-dev@lists.squid-cache.org
Subject: RE: [squid-dev] effective acl for tcp_outgoing_address

Dear Eliezer

Thank you for your reply.
Could you let me ask you about your comment.

"slow acl" can use in tcp_outgoing_address?

Best regards,
Kawai

-
h.ka...@ntt.com
-----
-Original Message-
From: Eliezer Croitoru  
Sent: Thursday, January 14, 2021 8:36 PM
To: Hideyuki Kawai(川井秀行) 
Cc: squid-dev@lists.squid-cache.org
Subject: RE: [squid-dev] effective acl for tcp_outgoing_address

It's more of a users question.

Just to clear it up: tcp_outgoing_address takes fast ACLs, just when the 
decision is "required".
You can "pre-cook" the value of a specific note while the connection is still 
at the first http_access level.
An example of a setup which probably does what you want, based on htaccess 
passwords, is here:
https://github.com/elico/vagrant-squid-outgoing-addresses

It's a vagrant lab which demonstrates this.

Let me know if it helps you or you need clarification.

Eliezer

Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-dev  On Behalf Of Hideyuki 
Kawai
Sent: Thursday, January 14, 2021 2:48 AM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] effective acl for tcp_outgoing_address

Hi, this is Kawai.

Please let me send an inquiry, as follows.

### Requirement ###
1. Kerberos auth with Active Directory  : auth_param .  <- Success
2. "Security group" check which is gotten from AD : external_acl_type ...(using 
ext_kerberos_ldap_group_acl)   <- success
3. Different outgoing IP based on "Security group" : tcp_outgoing_address + 
external_acl  <- fail

### Inquiry ###
1. "external_acl" can not use on tcp_outgoing_address. Because the external_acl 
type is slow.
   My understanding is correct?
2. If yes, how to solve my requirement?

Please let me inform your comment and knowledge.
Thanks in advance.

-
h.ka...@ntt.com
-
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] effective acl for tcp_outgoing_address

2021-01-14 Thread Eliezer Croitoru
It's more of a users question.

Just to clear it up: tcp_outgoing_address takes fast ACLs, just when the 
decision is "required".
You can "pre-cook" the value of a specific note while the connection is still 
at the first http_access level.
An example of a setup which probably does what you want, based on htaccess 
passwords, is here:
https://github.com/elico/vagrant-squid-outgoing-addresses

It's a vagrant lab which demonstrates this.

Let me know if it helps you or you need clarification.

Eliezer
----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: squid-dev  On Behalf Of Hideyuki 
Kawai
Sent: Thursday, January 14, 2021 2:48 AM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] effective acl for tcp_outgoing_address

Hi, this is Kawai.

Please let me send an inquiry, as follows.

### Requirement ###
1. Kerberos auth with Active Directory  : auth_param .  <- Success
2. "Security group" check which is gotten from AD : external_acl_type ...(using 
ext_kerberos_ldap_group_acl)   <- success
3. Different outgoing IP based on "Security group" : tcp_outgoing_address + 
external_acl  <- fail

### Inquiry ###
1. "external_acl" can not use on tcp_outgoing_address. Because the external_acl 
type is slow.
   My understanding is correct?
2. If yes, how to solve my requirement?

Please let me inform your comment and knowledge.
Thanks in advance.

-
h.ka...@ntt.com
-
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [squid-users] Host header forgery detected on domain: mobile.pipe.aria.microsoft.com

2021-01-06 Thread Eliezer Croitoru
Hey Alex,

The main issue now is the extensive logging.
For a tiny server with a single desktop client, the cache.log is expanding a 
*lot*.
I have a problem with discarding these logs, but for this specific case, where 
the TTL is very low, i.e. lower than 30/20/10 seconds, we can expect this to 
happen, so we can disable the logs, since the service continues to work with 
this low TTL.
The main and only issue is the extensive logging, which is wrong.

Should we continue this on Squid-dev?

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon


-Original Message-
From: Alex Rousskov  
Sent: Wednesday, January 6, 2021 10:42 PM
To: squid-us...@lists.squid-cache.org
Cc: Eliezer Croitoru 
Subject: Re: [squid-users] Host header forgery detected on domain: 
mobile.pipe.aria.microsoft.com

On 1/6/21 2:49 PM, Eliezer Croitoru wrote:

> I am trying to think about the right solution for the next issue:
> SECURITY ALERT: Host header forgery detected on conn18767
> local=52.114.32.24:443 remote=192.168.189.52:65107 FD 15 flags=33 (local IP
> does not match any domain IP)

As you know, this has been discussed many times on this list before,
including recently[1]. I doubt anything has changed since then.

[1]
http://lists.squid-cache.org/pipermail/squid-users/2020-November/022912.html


> All of the hosts use the same DNS service in the LAN however for some reason
> both squid and the client are resolving different addresses
> in a period of  10  Seconds.

The "however for some reason" part feels misleading to me -- what you
observe is the direct, expected consequence of the low-TTL environment
you have described. There is no contradiction or uncertainty here AFAICT.


> The solution I am thinking is to force a minimum of 60 seconds caching using
> dnsmasq or another caching service.

FTR: Increasing DNS response TTL will reduce the number/probability of
false positives in forged Host header detection. No more. No less.


> Can we teach (theoretically) squid a way to look at these short TTLs as
> something to decide by an ACL?

Yes, it is possible. There is positive_dns_ttl already which specifies
an upper bound. One could add a similar positive_dns_ttl_min option that
would specify the lower bound. Like virtually any Squid directive, it
can be made conditional on ACLs.
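
A sketch of what that could look like in squid.conf (positive_dns_ttl is an
existing directive; positive_dns_ttl_min is hypothetical, per the proposal
above):

  positive_dns_ttl 6 hours
  # hypothetical: never trust positive DNS answers for less than a minute
  positive_dns_ttl_min 60 seconds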

IMO, violating DNS TTLs is not the right solution for this problem
though. See my response on the [1] thread for a better medium-term solution.


HTH,

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] I have seen this patch for Host Header forgery, I need translation.

2021-01-06 Thread Eliezer Croitoru
Hey,

I know a bit about host header forgery.
However I have seen this patch and was wondering about the effect it would
have on a proxy:
https://github.com/NethServer/dev/issues/5348

The best solution would be for the DNS world to be "perfected"; however, in 
the real world there are other considerations.
For example, I have seen that specific domain names are generated on-demand:
after a basic confirmation at the HTTP/HTTPS level, the client can try to
access a set of dynamic domains.

I am still not sure what the right approach is regarding the current logs.
It's pretty annoying if the admin knows that it happens; however, if he
disables it by "default", there are other side effects.

Logs, yes? no? ... not sure.

Thanks,
Eliezer


Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Zoom: Coming soon



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Squid command

2020-06-01 Thread Eliezer Croitoru
Hey Pic,

Leaving aside RHEL and their support, the setup is not well understood.
My assumption is that RHEL support is not enough for all clients.
To reproduce this issue there are many details missing:
- OS installed packages
- Full squid -v output
- squid.conf

RHEL support is pretty expensive... I can try to understand the scenario.

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com

From: pic rat rat
Sent: Wednesday, May 27, 2020 1:01 PM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] Squid command

Dear sir,

We've found a problem with the squid program: after configuring
"ssl-bump generate-host-certificates=on" in squid.conf, the service does not
run; however, if I remove "generate-host-certificate=on", the service starts
normally. Could you please advise?

squid -v
Squid Cache: Version 3.5.20

OS
cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.8 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.8"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.8:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

Best Regards,
Pichet R.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Fwd: squid-5.0.0-20190331-rf5e179474 cannot be built on CentOS 7

2019-04-10 Thread Eliezer Croitoru

  
  
I have not gotten into great depth or details, but the CentOS 7 build
node I am using cannot build squid-5.0.0-20190331-rf5e179474 and the
past 5.0 series tar.bz2 files.

I have tested building these on top of Debian 9.x but have not yet
tested whether they work properly as a production system.

Eliezer

-- 
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

  

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Debian buster testing 4.x builds

2019-02-05 Thread Eliezer Croitoru
I have started Debian buster 4.x testing.

Currently what I came up with is :

/usr/sbin/squid -v

Squid Cache: Version 4.5

Service Name: squid

 

This binary uses OpenSSL 1.1.1a  20 Nov 2018. For legal restrictions on
distribution see https://www.openssl.org/source/license.html

 

configure options:  '--prefix=/usr' '--sysconfdir=/etc'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB
' '--enable-auth-digest=file,LDAP'
'--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,se
ssion,SQL_session,unix_group,wbinfo_group'
'--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
'--enable-icmp' '--enable-zph-qos' '--with-swapdir=/var/spool/squid'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-filedescriptors=65536' '--with-default-user=proxy' '--enable-snmp'
'--with-openssl' '--enable-ssl-crtd' '--disable-arch-native'
'--enable-linux-netfilter'

 

Which seems nice.

 

Should I also check it with clang, besides gcc?

 

Eliezer

 



 <http://ngtech.co.il/main-en/> Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Security_file_certgen in a server format development

2018-12-29 Thread Eliezer Croitoru
Hey Dev list,

 

For quite some time I have been wondering what a Squid cluster could use as a
certificates backend.

From what I understand so far, it seems that the current ssl_db directory
structure is simple enough that it might be possible to share it across an
NFS store.

 

Another thing in mind is certificate cleanup and history.

Since squid is being used in a couple of places as security software, it
would be good for security admins to be able to have some history logs.

 

I can try to write a couple of helpers and external services in my free time
that can help with some of these points.

 

Please let me know if my assumptions are on the right track.

 

Thanks,

Eliezer

 

----

Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Allowing the admin to decide if a specific DNS+ip is ok for caching.

2018-07-20 Thread Eliezer Croitoru
OK.

So we will try to make the whole environment more secure rather than more
"profitable".
I think that in general it's a good concept and doesn't sound like some
"OCD"-like issue.

I will try to write an article about the 4.1 release with this point in mind.

Thanks,
Eliezer

* I do not really care if someone thinks that my articles are profiting me in 
any way.

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Thursday, July 19, 2018 10:46 AM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Allowing the admin to decide if a specific DNS+ip is 
ok for caching.

On 19/07/18 04:56, Eliezer Croitoru wrote:
> Hey Squid-Dev’s,
> 
>  
> 
> Currently Squid-Cache forces Host Header Forgery on http and https requests.
> 
> -  https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
> 

Forces? no. Prevents.

> Squid is working properly or “the best” when the client and the proxy
> use the same DNS service.
> 
> In the past I have asked about defining a bumped connection as secured
> and to disable host header forgery checks on some of these.
> 

Having a connection be bumped does not mean the requests decrypted from
that connection are meant for that server. DONT_VERIFY_PEER and such
false "workarounds" are still very common things for admins to do.

A client or intermediary can as easily forge the SNI value on TLS setup
as a Host header in plain-text HTTP. The resulting problems in both
cases are the same.



> The conditions are:
> 
> -  Squid validates that the server certificate is valid against
> the local CA bundles (an admin can add or remove a certificate manually
> or automatically)
> 
> -  The admin defines an external tool that verifies and/or
> allows host header forgery to be disabled per request.
> 
>  
> 
> I am in the middle of testing 4.1 and wondering what is expected from
> 4.1 regarding host header forgery.
> 
> Was there any change of policy?
> 

No changes from Squid-3 are expected in terms of these checks. There may
be changes in TLS handling which decrypt more (or less) requests.

Any requests which *are* decrypted, and the initial CONNECT (from SNI), are
expected to be verified.
 TPROXY / NAT intercepted traffic is verified against the dst-IP of the
intercepted client TCP connection.
 Bumped and non-intercepted traffic (in strict verify mode) against the
server-IP from the initial client CONNECT tunnel.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Allowing the admin to decide if a specific DNS+ip is ok for caching.

2018-07-18 Thread Eliezer Croitoru
Hey Squid-Dev's,

 

Currently Squid-Cache forces Host Header Forgery on http and https requests.

-  https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery

Squid is working properly or "the best" when the client and the proxy use
the same DNS service.

In the past I have asked about defining a bumped connection as secured and
to disable host header forgery checks on some of these.

The conditions are:

-  Squid validates that the server certificate is valid against the
local CA bundles (an admin can add or remove a certificate manually or
automatically)

-  The admin defines an external tool that verifies and/or allows
host header forgery to be disabled per request.

 

I am in the middle of testing 4.1 and wondering what is expected from 4.1
regarding host header forgery.

Was there any change of policy?

 

Thanks,

Eliezer

 

----

Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] TCP_MISS_ABORTED/000 when accessing squid-internal-mgr page

2018-07-18 Thread Eliezer Croitoru
I have tried to access the squid manager pages using curl and squidclient, and
got the next weird results in the access.log:

 

TCP_MISS_ABORTED/000

 

The weird thing is that I am receiving 200 as a response:

 

 

 

Commands and logs:

### START

[root@squid4-testing check-systemd-squid]# curl
127.0.0.1:3128/squid-internal-mgr/info

Squid Object Cache: Version 4.1

Build Info:

Service Name: squid

Start Time: Wed, 18 Jul 2018 11:45:21 GMT

Current Time:   Wed, 18 Jul 2018 12:19:18 GMT



 28037 on-disk objects

[root@squid4-testing check-systemd-squid]# curl
127.0.0.1:3128/squid-internal-mgr/menu

index  Cache Manager Interface public

menu   Cache Manager Menu  public



server_listPeer Cache Statistics   public

[root@squid4-testing check-systemd-squid]# fg

tail /var/log/squid/access.log -f

1531916358.717 00 127.0.0.1 TCP_MISS_ABORTED/000 0 GET
http://squid4-testing:3128/squid-internal-mgr/info - HIER_NONE/- - Q-CC: "-"
"-" Q-P: "-" "-" Q-RANGE: "-" REP-CC: "-" REP-EXP: "-" VARY: "-"
00:00:00:00:00:00 REP-X-CACHE: "-" Adapted-X-Store-Id: "-"

1531916361.505 00 127.0.0.1 TCP_MISS_ABORTED/000 0 GET
http://squid4-testing:3128/squid-internal-mgr/menu - HIER_NONE/- - Q-CC: "-"
"-" Q-P: "-" "-" Q-RANGE: "-" REP-CC: "-" REP-EXP: "-" VARY: "-"
00:00:00:00:00:00 REP-X-CACHE: "-" Adapted-X-Store-Id: "-"

.

1531916504.216 00 ::1 TCP_MISS_ABORTED/000 0 GET
cache_object://localhost/menu - HIER_NONE/- - Q-CC: "-" "-" Q-P: "-" "-"
Q-RANGE: "-" REP-CC: "-" REP-EXP: "-" VARY: "-" 00-00-00-00-00-00-00-00
REP-X-CACHE: "-" Adapted-X-Store-Id: "-"

### END

 

Also:

[root@squid4-testing check-systemd-squid]#  curl
127.0.0.1:3128/squid-internal-mgr/menu -I

HTTP/1.1 200 OK

Server: squid/4.1

Mime-Version: 1.0

Date: Wed, 18 Jul 2018 12:22:50 GMT

Content-Type: text/plain

Expires: Wed, 18 Jul 2018 12:22:50 GMT

Last-Modified: Wed, 18 Jul 2018 12:22:50 GMT

Connection: close

### END

 

So... a bug in 4.1?

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] SSL-BUMP, Encryption and coding licensing.

2018-07-12 Thread Eliezer Croitoru
rrent YouTube and google video caching and
filtering service, but I will not publish the sources yet.

 

I would like to say thank you to the great development team of this project,
which since v3.2 changed attitude from "cache as much as we want" to "cache
what's worthy of it".

I believe that many admins now understand more about IT security thanks to
this amazing project.

 

I would like to hear from you what you think is not worth publishing.

Let's say I have code that can take one of the google nodes; what should I do?

I can write code that meets a SPEC/blueprint but is licensed; would it be
OK to write software that does the same, but with my original code and
license?

What is the right way to tell a sysadmin or a webmaster that his site can be
compromised?
Would hacking the site "symbolically", i.e. adding some text into the site, be
OK? ... not talking about tearing the site apart, just adding a tiny JS popup
like the cookie warning that many sites show?

 

Thanks,

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] StoreID and ICAP services callouts, when it happens?

2018-07-12 Thread Eliezer Croitoru
I tried it both on access logs and store_id_extras.
In the access log I am receiving this:
ICAP-Header-Test: "Connection: close\r\nDate: Thu, 12 Jul 2018 20:42:04 
GMT\r\nEncapsulated: req-hdr=0, null-body=1189\r\nIstag: 
YTGV-Predictor\r\nService: YouTube GoogleVideo Predictor ICAP serivce\r\n "

But on store_id_extras I am receiving "-" for the same logformat, ie
store_id_extras "%adapt::<last_h"

-----Original Message-----
From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
Sent: Thursday, July 12, 2018 11:10 PM
To: squid-dev@lists.squid-cache.org
Cc: Eliezer Croitoru 
Subject: Re: [squid-dev] StoeiD and ICAP services callouts, when it happens?

On 07/12/2018 11:21 AM, Eliezer Croitoru wrote:

> With regular logformat I am using:
> %{X-Store-Id}>ha
> 
> And it works perfect.

%>ha does not log an ICAP response header. It logs an HTTP request
header. You asked about an ICAP response header.
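
Compare, as a minimal illustration (the logformat name is arbitrary):

  # %>ha = HTTP request headers; %adapt::<last_h = last ICAP response headers
  logformat icap_test %{X-Store-Id}>ha %adapt::<last_h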


> So I tried using the next config line for store id:
> store_id_extras "%>a/%>A %un %>rm myip=%la myport=%lp %{X-Store-Id}>ha"
> 
> which didn't work as expected since it always sends "-".

Bugs notwithstanding, the above should work if your adaptation service
adds an X-Store-ID request header field to the adapted HTTP request. You
can check what headers Squid has for %>ha by specifying %>ha without any
parameters.
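
For anyone trying Alex's debugging suggestion, a minimal throwaway sketch
(the format name and log path here are illustrative, not from this thread):

logformat hdrdebug %>a %rm %ru reqhdrs=[%>ha]
access_log /var/log/squid/hdrdebug.log hdrdebug

If the adaptation service really adds X-Store-ID to the adapted request, it
should show up inside the reqhdrs=[...] dump.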


> I have now tried to understand what and how I should use the:
> %adapt::
> 
> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
> Sent: Thursday, July 12, 2018 7:01 PM
> To: Eliezer Croitoru ; squid-dev@lists.squid-cache.org
> Subject: Re: [squid-dev] StoreID and ICAP services callouts, when it happens?
> 
> On 07/12/2018 03:16 AM, Eliezer Croitoru wrote:
>> is it possible to pass an ICAP response header into the store_id_extras?
> 
> Yes, %adapt:: 
> https://wiki.squid-cache.org/SquidFaq/OrderIsImportant#Callout_Sequence
> 
> Alex.
> 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Squid 4.1 "- TCP_DENIED/403' and IPv6 while "dns_v4_first on"

2018-07-12 Thread Eliezer Croitoru
" "-" Q-P:
"-" "-" Q-RANGE: "-" REP-CC: "-" REP-EXP: "-" VARY: "-" 00:00:00:00:00:00
REP-X-CACHE: "-" Adapted-X-Store-Id: "-"

1531425990.307 000368 10.0.0.28 NONE/200 0 CONNECT bugs.squid-cache.org:443
- HIER_DIRECT/2001:4801:7827:102:ad34:6f78:b6dc:fbed - Q-CC: "-" "-" Q-P:
"-" "-" Q-RANGE: "-" REP-CC: "-" REP-EXP: "-" VARY: "-" 00:00:00:00:00:00
REP-X-CACHE: "-" Adapted-X-Store-Id: "-"

1531425990.339 00 10.0.0.28 NONE/503 4117 GET
http://squid4-testing:3128/squid-internal-static/icons/SN.png - HIER_NONE/-
text/html Q-CC: "no-cache" "no-cache" Q-P: "no-cache" "no-cache" Q-RANGE:
"-" REP-CC: "-" REP-EXP: "-" VARY: "Accept-Language" 00:00:00:00:00:00
REP-X-CACHE: "MISS from squid4-testing" Adapted-X-Store-Id: "-"

1531425990.374 00 10.0.0.28 NONE/503 4117 GET
https://bugs.squid-cache.org/favicon.ico - HIER_NONE/- text/html Q-CC:
"no-cache" "no-cache" Q-P: "no-cache" "no-cache" Q-RANGE: "-" REP-CC: "-"
REP-EXP: "-" VARY: "Accept-Language" 00:00:00:00:00:00 REP-X-CACHE: "MISS
from squid4-testing" Adapted-X-Store-Id: "-"

 

So the issue is a bit strange: is the remote IP the issue, or something else?

I looked at the archives and also the docs, and from what I managed to
verify, the following resolves both issues, which are tangled together:

## START squid.conf addition

acl internal transaction_initiator internal

 

# Deny requests to certain unsafe ports

http_access deny !Safe_ports

 

# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports

http_access allow internal

## END squid.conf addition

 

http://www.squid-cache.org/Versions/v4/cfgman/acl.html

 

To clarify: there is a new type of ACL named "transaction_initiator" which
does a couple of good things.
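
For completeness, a minimal sketch of the same ACL type with another
initiator class from that cfgman page (the ACL name is illustrative, and
whether you want to allow it depends on your policy):

# let Squid's own fetches of missing intermediate TLS certificates through
acl certFetch transaction_initiator certificate-fetching
http_access allow certFetch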

 

I am not sure, but it seems to me that a wiki page is missing regarding
this issue.
I can try to write one if no one else takes it on in the next month.

 

All The Bests,

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] StoreID and ICAP services callouts, when it happens?

2018-07-12 Thread Eliezer Croitoru
I don't really understand how I am supposed to use it.
With regular logformat I am using:
%{X-Store-Id}>ha

And it works perfectly.

So I tried using the next config line for store id:
store_id_extras "%>a/%>A %un %>rm myip=%la myport=%lp %{X-Store-Id}>ha"

which didn't work as expected since it always sends "-".
I have now tried to understand what and how I should use the:
%adapt::

-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
Sent: Thursday, July 12, 2018 7:01 PM
To: Eliezer Croitoru ; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] StoreID and ICAP services callouts, when it happens?

On 07/12/2018 03:16 AM, Eliezer Croitoru wrote:
> is it possible to pass an ICAP response header into the store_id_extras?

Yes, %adapt::
https://wiki.squid-cache.org/SquidFaq/OrderIsImportant#Callout_Sequence

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] StoreID and ICAP services callouts, when it happens?

2018-07-12 Thread Eliezer Croitoru
I have been working from time to time on StoreID, which I believe is useful
for targeted scenarios.

I can test and write different code but wanted to first ask, since my memory
is a bit vague.

 

Squid, during its client-side callouts, runs a series of tests at:

https://github.com/squid-cache/squid/blob/d2a6dcba707c15484c255e7a569b90f7f1186383/src/client_side_request.cc#L1723

 

https://github.com/squid-cache/squid/blob/d2a6dcba707c15484c255e7a569b90f7f1186383/src/client_side_request.cc#L1750

 

From what I understood, this section and others run asynchronously.

So from my point of view I now understand that if I use a request-mod
ICAP service to inject a request header into the request,
it would be available as a StoreID extra, ie I can use:

 

store_id_extras "%>a/%>A %un %>rm myip=%la myport=%lp %{X-Store-Id}>h"

 

and I will be able to "send" the StoreID helper an extra header, which can be
considered when returning the right StoreID to Squid.
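
If that holds, the StoreID helper would then receive request lines shaped
roughly like this (one per lookup; all values here are hypothetical) and
answer with the usual OK/ERR reply:

http://example.com/video.mp4 10.0.0.5/- user1 GET myip=192.168.1.40 myport=3129 injected-id-value
OK store-id=http://video-store.internal/video.mp4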

If that's indeed true, is it possible to pass an ICAP response header into the
store_id_extras?

 

Thanks,

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] max_url : Pls, into squid.conf

2018-07-09 Thread Eliezer Croitoru

Hey Amos,

I have just started observing 6K+ URLs from google ads.

I know that web servers are ready for 16K URLs, so I believe this should
be added as one of the items on the road map.


Eliezer

On 2018-06-09 15:42, Amos Jeffries wrote:

On 09/06/18 22:35, babajaga wrote:

I get quite a few error msgs like
urlParse: URL too large (9182 bytes)
in cache.log
I suspect they are most likely from some ads; however, they make the web
page appear slow to complete rendering.

Hesitating to patch defines.h directly, I kindly ask for
an additional parameter in squid.conf



This is not going to happen sorry. Unfortunately there are too many
points in the code allocating buffers for URLs which do not use the
MAX_URL definition.
For example; I am this very minute testing a patch to remove one more of
those bits of code I just found trying to stuff an 8KB URL into a 4KB
buffer :-(.

We have an ongoing project to remove the URL length limits from within
Squid. Hopefully that will resolve the issue as it progresses without
the need for a config option. If you are using Squid-3.5 you may see a
reduction of these messages from Squid-4.


PS. Squid is known to be one of the most lenient web agents. Most origin
servers and other proxies have smaller URL restrictions. So whatever
application is trying to use >9KB URLs is unlikely to work generally
over the Internet even if we resolve the Squid complaints.

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


--
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Support lower case http/ spn format for realmd/adcli join support.

2018-07-09 Thread Eliezer Croitoru

Mike,

It's better to have some noise around rather than slow and deadly silence.


Eliezer

On 2018-06-28 18:02, Mike Surcouf wrote:

Adding to this: after testing more I realised that adcli does not lowercase
the SPN.
I am not sure how I came to that conclusion; sorry for the noise.

-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On
Behalf Of Mike Surcouf
Sent: 28 June 2018 09:28
To: 'Amos Jeffries'
Cc: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Support lower case http/ spn format for
realmd/adcli join support.

Hi Amos, thanks for that. I need to correct you on the REALM bit though.

The bit before the slash in an SPN (service principal name) is the SERVICE,
not the REALM.


So for a computer that has:
service = service
fqdn = SomeComputer.example.com
REALM = EXAMPLE.COM

the SPN is: service/somecomputer.example.com@EXAMPLE.COM

So for a computer that has:
service = HTTP
fqdn = SomeComputer.example.com
REALM = EXAMPLE.COM

the SPN is: HTTP/somecomputer.example.com@EXAMPLE.COM
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] squid to assign dedicated ip to clients behind same network/router

2018-06-22 Thread Eliezer Croitoru

Hey Desis,

What exactly does the proxy need to do? Only access the network?
Also caching?
Are you using SSL-BUMP?

There are a couple of ways to achieve what you want, but if you don't need
Squid Cache's special features then maybe you wouldn't need it
specifically.
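
For reference, the per-address pinning described in the quoted message below
is usually built from myip ACLs; a minimal sketch (addresses illustrative):

acl via_ip1 myip 1.1.1.0
acl via_ip2 myip 2.2.2.0
tcp_outgoing_address 1.1.1.0 via_ip1
tcp_outgoing_address 2.2.2.0 via_ip2

Note this keys on the server address each connection arrived at, not on the
client, which is why clients sharing one NAT address cannot be told apart by
source IP alone.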


Eliezer

On 2018-06-11 22:53, desis wrote:
I have successfully installed a squid server (on CentOS). My server has five
IP addresses. I have configured all 5 IP addresses for squid... so clients
can connect with any IP address, and with tcp_outgoing_address a client will
get the same IP address from which he is connecting.

But the problem is that all my clients are behind the same router and have
the same public IP address.

Now the problem is... Let's say client one uses server IP address 1.1.1.0 to
connect to squid first; he is getting server IP address 1.1.1.0 for his
public IP.

Now client two uses server IP address 2.2.2.0 to connect to squid; he is
getting server IP address 2.2.2.0 for his public IP...

But at this moment client one's public IP address changes to 2.2.2.0.

How can I configure squid so each client will get a dedicated public IP
address from the server at the same IP?




--
Sent from:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Development-f1042840.html
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Block users dynamically

2018-06-22 Thread Eliezer Croitoru

I am pretty sure it was pointed towards Dean and the top\tip of the thread.

But also for ufdbGuard: have you ever considered using an external
ACL helper API?

Or maybe an ICAP one?

..I can try to look at the ufdbGuard helper code and compile an ICAP
service based on it, but... only if someone needs or wants it.
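
For the record, a rough sketch of that external ACL route (the helper path,
file name and TTLs are illustrative; the helper speaks the standard OK/ERR
protocol):

external_acl_type blockcheck ttl=10 negative_ttl=10 %LOGIN /usr/local/bin/blocked-user.sh
acl blocked_users external blockcheck
http_access deny blocked_users

#!/bin/sh
# blocked-user.sh: print OK when the username is in the blocklist, ERR otherwise
while read user; do
    if grep -qxF "$user" /etc/squid/blocked_users.txt; then
        echo OK
    else
        echo ERR
    fi
done

Low ttl= values trade helper load against how quickly a newly listed user is
actually blocked, which is the same tradeoff as the ufdbGuard reread
interval discussed below.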


Eliezer

On 2018-05-30 15:01, Marcus Kool wrote:

On 30/05/18 08:55, Daniel Berredo wrote:

Have you ever considered using an external ACL helper?


eh, for what ?

On Tue, May 29, 2018, 9:57 PM Marcus Kool <mailto:marcus.k...@urlfilterdb.com>> wrote:



On 28/05/18 15:10, dean wrote:
 > I am implementing modifications to Squid 3.5.27 for a thesis
 > job. At some point in the code, I need to block a user. What I'm doing
 > is writing to an external file that is used in the configuration,
 > like Squish does. But it does not block the user; however, when
 > I reconfigure Squid, it does block it. Is there something I do not know?
 > When I change the file, should I reconfigure Squid? Is there
 > another way to block users dynamically from the Squid code?

You can use ufdbGuard for this purpose.  ufdbGuard is a free URL
redirector for Squid which can be configured to read lists of
usernames or lists of IP addresses every X minutes (default for X is 15).
So if you control a blacklist with usernames and write the name of
the user to a defined file, ufdbguardd will block these users.
If the user must be blocked immediately you need to reload
ufdbguardd; otherwise you wait until the configured time interval to
reread the userlist expires, and so after a few minutes the user gets
blocked.

Note that reloading ufdbguardd does not interfere with Squid and 
all activity by browsers and squid continues normally.


Marcus

___
squid-dev mailing list
squid-dev@lists.squid-cache.org 
<mailto:squid-dev@lists.squid-cache.org>

http://lists.squid-cache.org/listinfo/squid-dev


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] PROXY protocol and TPROXY, can they go together?

2018-06-22 Thread Eliezer Croitoru

Hey Amos,

The custom LB is there only to try to filter http and non-http traffic
before the connections reach squid.
Currently I have a prototype which intercepts using TPROXY by itself and
identifies a couple of protocols.


The reason for this LB is that I get a more flexible way around the
connection.
My code can enforce specific ACLs based on specific characteristics of
the client and\or the server.


iptables does its work fine but lacks the ability to dynamically handle
and identify specific traffic.

For example the nDPI iptables module:
- https://github.com/vel21ripn/nDPI

which is being used in a couple of products, and a similar module also
exists in many commercial products, but it still lacks some degree of
flexibility.
The kernel land is indeed fast and maybe efficient, but it binds
programmers to C and its libraries and compilers, let alone licenses.


Currently, on a 40+ core machine with 128GB RAM, I can run a full-blown
layer-7 proxy for a big network (/16+) and the CPU is almost always
loaded below 10%.


I do not intend to develop my proxy too much since others have done this
already, but it's nice to see that more products can enter the market
easily.


Thanks,
Eliezer


On 2018-05-15 22:36, Amos Jeffries wrote:

On 16/05/18 02:09, Eliezer Croitoru wrote:

Hey Squid-Dev,

I am in the middle of writing a load balancer \ router (almost done) for
squid with TPROXY in it.

The load balancer sits on the Squid machine and intercepts the 
connections.


I want to send Squid instances a new connection on a PROXY protocol
enabled http_port but that squid will use TPROXY on the outgoing
connection based on the PROXY protocol details.

 

Would it be possible? I think it should but not sure.



Maybe. Since both software are on the same machine it should get past
the kernel protections against arbitrary spoofing.

You will have to check that BOTH dst-IP:port and src-IP:port pairs are
correctly relayed by the PROXY protocol. If not the TPROXY will end up
with mangled socket state and undefined behaviour (probably breakage).
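
Untested sketch of the receiving side under those assumptions (port numbers
illustrative; whether the tproxy flag is even accepted together with
require-proxy-header on one port is exactly the kind of thing to verify
first):

http_port 127.0.0.1:3129 require-proxy-header tproxy
proxy_protocol_access allow localhost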



 

My plan is to try and load balance connections between multiple squid
instances\workers for filtering purposes and PIN each of the instances
to a CPU (20+ cores Physical host).

How reasonable is this idea?


You don't need a custom LB. iptables is sufficient, or other firewalls
if you have a non-Linux machine.


<https://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend#Frontend_Balancer_Alternative_1:_iptables>

You should be able to fit those LB lines into a normal TPROXY config.
Just replace the "-j REDIRECT" with the "-j TPROXY --tproxy-mark ...".
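
The substitution described above would come out roughly like this (the mark,
port and statistic-based split are illustrative, following the wiki
example's pattern):

iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -m statistic --mode nth --every 2 --packet 0 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129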

Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


--

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] PROXY protocol and TPROXY, can they go together?

2018-05-15 Thread Eliezer Croitoru
Hey Squid-Dev,

 

I am in the middle of writing a load balancer \ router (almost done) for
squid with TPROXY in it.

The load balancer sits on the Squid machine and intercepts the connections.

I want to send Squid instances a new connection on a PROXY protocol enabled
http_port but that squid will use TPROXY on the outgoing connection based on
the PROXY protocol details.

 

Would it be possible? I think it should but not sure.

 

My plan is to try and load balance connections between multiple squid
instances\workers for filtering purposes and PIN each of the instances to a
CPU (20+ cores Physical host).

How reasonable is this idea?

 

Thanks,

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Introduction

2018-03-28 Thread Eliezer Croitoru
Sorry for the late reply; I somehow missed your response.

I will try to review the squid.conf and see if I can help with something.

 

If I'm not responding in a few days send me a PM to bump it up.

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/>
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: Khushal Jain Shripal <khushaljai...@excelacom.in> 
Sent: Friday, February 23, 2018 05:55
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Cc: SkoolLive_Offshore_Team <skoollive_offshore_t...@excelacom.in>; Gowtham
Anandaraj <gowtha...@excelacom.in>
Subject: RE: [squid-dev] Introduction

 

Hi Eliezer,

 

We have installed Squid Cache.

1.   We configured http port number.

2.   We gave hostname as localhost and port number in Proxy Settings of
Windows.

3.   Cache folder was created and cache data exists when we tried to
access http sites.

a.   For instance, we tried to access http://excelacom.in we were able
to cache and store the data in cache folder.

 

But when we tried to access https sites, we were not able to cache https
sites.

 

Attached is our Squid.conf for your reference.

 

Can you please provide us a Squid.conf file to access and cache https sites.

It would be more helpful to us.

 

 

Regards, 

 




Khushal Jain Shripal

Business Analyst


Excelacom Technologies Pvt Ltd

p: +91.9003028627

Skype ID: khushalshripal

From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf
Of Eliezer Croitoru
Sent: Friday, February 23, 2018 12:14 AM
To: Gowtham Anandaraj <gowtha...@excelacom.in
<mailto:gowtha...@excelacom.in> >; squid-dev@lists.squid-cache.org
<mailto:squid-dev@lists.squid-cache.org> 
Subject: Re: [squid-dev] Introduction

 

Hey,

 

Can you be more specific?

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il <mailto:elie...@ngtech.co.il> 



 

From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf
Of Gowtham Anandaraj
Sent: Monday, February 12, 2018 14:29
To: squid-dev@lists.squid-cache.org <mailto:squid-dev@lists.squid-cache.org>

Subject: [squid-dev] Introduction

 

Hello Squid Dev,

 

It's a great opportunity to learn Squid.

My Name is Gowtham , working as Programmer Analyst with over 3 years of
experience.

 

Currently I'm using squid for my project for caching, but I'm not able to
cache https sites.

Any help would be appreciated. 

 

Thanks,




Gowtham Anandaraj

Program Analyst


Excelacom Technologies Pvt Ltd

5/D5-IT Park, SIPCOT, Navallur Post, Siruseri, Chennai 603103   

T +91 44 4743 3000 | F +91 44 3068 3111 | E  <mailto:gowtha...@excelacom.in>
gowtha...@excelacom.in  | S gowthammiley |M 9524031314

[NOTICE: This e-mail is confidential and may also be privileged. If you are
not the intended recipient, please notify us immediately by replying to this
message and then delete it from your system. You should not copy or use it
for any purpose, nor disclose its contents to any other person. Thank you.]

 

 

 

  _  


DISCLAIMER INFORMATION

The information contained in this email is confidential and may contain
proprietary information. It is meant solely for the intended recipient.
Access to this email by anyone else is unauthorized. If you are not the
intended recipient, any disclosure, copying, distribution or any action
taken or omitted in reliance on this, is prohibited and may be unlawful. No
liability or responsibility is accepted if information or data is, for
whatever reason corrupted or does not reach its intended recipient.
Excelacom Technologies Private Ltd reserves the right to take any action in
accordance with its email policy. If you have received this communication in
error, please delete this mail & notify us immediately at
webad...@excelacom.in <mailto:webad...@excelacom.in> 

WARNING:
Computer viruses can be transmitted via email. The recipient should check
this email and any attachments for the presence of viruses. The company will
not accept any liability or any damage caused by any virus transmitted by
this email

 


Re: [squid-dev] SSL-BUMP distinguish between mobile devices such as IOS or ANDROID vs PC request

2018-02-22 Thread Eliezer Croitoru
OK then.
If it's doable then it's only waiting for the client who will want to fund this 
feature.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, February 22, 2018 23:19
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] SSL-BUMP distinguish between mobile devices such as 
IOS or ANDROID vs PC request

On 02/22/2018 11:56 AM, Eliezer Croitoru wrote:

> I was wondering about the options to distinguish mobile devices TLS\SSL
> requests compared to PC one's.

You need ACLs that can match various TLS Client Hello fields (mostly
message version, protocol version, and ciphers) and a knowledgebase of
typical Hellos for the devices/clients you are interested in. A
Hello-based solution cannot be 100% reliable, but I bet you can identify
many popular OS versions (and, as a consequence, even some physical
devices) with a good-enough probability (for most applications).

IIRC, Squid does not have ACLs that interrogate TLS Client Hello with
the exception of SNI (i.e., ssl::server_name --client-requested).
However, it should not be very difficult to add such ACLs and they would
be generally useful IMO.


Forward proxies can also examine CONNECT headers. That is already
supported AFAIK.
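
As an untested illustration of that CONNECT-header approach (the regex is
illustrative, clients do not always send a User-Agent on CONNECT, and
whether the browser ACL is usable at every ssl_bump step is worth
verifying):

acl mobile_ua browser -i (iphone|ipad|android)
ssl_bump splice mobile_ua
ssl_bump bump all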


Examining TCP/IP packet headers would also be useful in many cases, but
that is harder to do directly in Squid.


HTH,

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Introduction

2018-02-22 Thread Eliezer Croitoru
Hey,

 

Can you be more specific?

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/>
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf
Of Gowtham Anandaraj
Sent: Monday, February 12, 2018 14:29
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] Introduction

 

Hello Squid Dev,

 

It's a great opportunity to learn Squid.

My Name is Gowtham , working as Programmer Analyst with over 3 years of
experience.

 

Currently I'm using squid for my project for caching, but I'm not able to
cache https sites.

Any help would be appreciated. 

 

Thanks,




Gowtham Anandaraj

Program Analyst


Excelacom Technologies Pvt Ltd

5/D5-IT Park, SIPCOT, Navallur Post, Siruseri, Chennai 603103   

T +91 44 4743 3000 | F +91 44 3068 3111 | E  <mailto:gowtha...@excelacom.in>
gowtha...@excelacom.in  | S gowthammiley |M 9524031314

[NOTICE: This e-mail is confidential and may also be privileged. If you are
not the intended recipient, please notify us immediately by replying to this
message and then delete it from your system. You should not copy or use it
for any purpose, nor disclose its contents to any other person. Thank you.]

 

 

 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] http(s)_port TLS/SSL config redesign

2018-02-02 Thread Eliezer Croitoru
Great to hear about this.

The RPMs for the latest 4.0.x are already built, and I'm starting to work on
the next step, which I hope is 4.1.0.
I hope that by 4.1 I will have packages for Debian 9.
Are we expecting 4.1 to be released sooner than Ubuntu 18.04 LTS (April 2018)?

* https://wiki.ubuntu.com/Releases

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Thursday, February 1, 2018 15:11
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [RFC] http(s)_port TLS/SSL config redesign

The final part of the proposal quoted below are now PR'd for audit.

There are several differences from the initial proposal;

* the ServerOptions::signingCa class changes and new default for
generate-host-certificates have allowed a far simpler back-compat check
using the non-existence of a filename by generate-host-certificates=
instead of relying on cert CA checking.
 That CA checking should still be done, but is no longer a required part
of this project.

* generation of reverse-proxy certificates is happening in a separate
parallel project.


Amos


On 20/07/17 13:27, Amos Jeffries wrote:
> Hi all, Christos and Alex particularly,
> 
> I have been mulling over several ideas for how to improve the config
> parameters on the http(s)_port to make them a bit easier for newbies to
> get right, and pros to do powerfully cool stuff.
> 
> 
> So, the most major change I would like to propose is to move the proxy's
> CA for cert generation from the cert= parameter to the
> generate-host-certificates= parameter. Having it configured with a file
> would be the same as generate=on, and not configuring it would default to =off.
> 
> 
> The matching key= and any CA chain would need to be all bundled in the
> same PEM file. That should be fine since the clients get a separate DER
> file installed, not sharing the PEM file loaded into Squid.
> 
> That will stop the confusion newbies have over what should go in cert= for
> what Squid use-case. And it will allow pros to configure exactly which
> static cert Squid uses as a fallback when auto-generating others -
> simply by using cert= in the normal non-bumping way.
> 
> Also, we can then easily use the two sets of parameters in identical
> fashion for non-SSL-Bump proxies to auto-generate reverse-proxy certs
> based on SNI, or use a fallback static cert of admins choice.
> 
> Bringing these two different use-cases config into line should vastly
> simplify the complexity of working with Squid TLS certs for everybody,
> including us in the code as we no longer have multiple (8! at least)
> code paths for different cert= possibilities and config error handling
> permutations.
> 
> 
> For backward compatibility concerns with existing SSL-Bump configs I
> think we can use the certificate CA vs non-CA property to auto-detect
> old SSL-Bump configs being loaded and handle the compatibility
> auto-upgrade and warnings. The warning will be useful long-term and not
> just for the transitional period.
> 
> 
> Now would also be a marginally better than usual time to make the change
> since the parameters are migrating to tls-* prefix in v4 and have extra
> admin attention already.
> 
> 
> Amos
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] The final phase of 3.5 testing/

2017-09-20 Thread Eliezer Croitoru
I forgot to mention that I have seen a couple of products which use 3.1 with
many patches, and every week they fix at least one bug (we are far ahead of
them).
I do not know if it's good or bad, since most of the code I have seen from
developers I know is not easily broken.

All The Bests,
Eliezer

* here it's new year, so have a happy year.


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Tuesday, September 19, 2017 21:28
To: 'Amos Jeffries' <squ...@treenet.co.nz>; 'squid-dev@lists.squid-cache.org' 
<squid-dev@lists.squid-cache.org>
Subject: RE: [squid-dev] [RFC] The final phase of 3.5 testing/

Amos,

Thanks for the details.
Since I have now seen that the list of open bugs is no more than 600, I will
try to review some of them again and comment there if required.
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Tuesday, September 19, 2017 19:14
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [RFC] The final phase of 3.5 testing/

On 19/09/17 12:45, Eliezer Croitoru wrote:
> Hey All,
> 
> In the last 2 years I have tested Squid-Cache 3.5, and for general purposes
> it's stable.
> The basic challenge is simple internal traffic, but there are a couple of
> other challenges which I would like to verify.
> The first thing is a test of squid against a very hostile environment like
> the public Internet.
> By testing on the public Internet I mean that it will sit in intercept or
> tproxy mode on top of a "VPN" service.
> I believe that such a test would benefit us with more details about the
> stability of the 3.5 branch, and later maybe test the 4.x branch.
> There are some legal issues in running such a setup, and I would not be able
> to run such a setup without some valid usage agreement.
> 
> First what do you all think about the idea in general?

We have various people actively using the latest code of both the v5 and 
v4 series. So Squid code of all branches is already being run in various 
production situations. Its just that since they are commercially 
sensitive we do not get feedback unless things actually go wrong.


> Second, would we be able to somehow list the known 3.5 bugs?

Each Squid series wiki page links to the bugzilla report listing all 
bugs which are still open and affecting that release.
eg <https://wiki.squid-cache.org/Squid-3.5#Open_Bugs>


> I believe that such a list will help the admins to decide if Squid-Cache is
> for them or not.
> Currently there are 574 open bugs on the Bugzilla and maybe some of these
> are not even relevant to 3.5.
> -
> http://bugs.squid-cache.org/buglist.cgi?bug_status=__open__&limit=0&no_redirect=1&order=priority%2Cbug_severity&product=Squid&query_format=specific
> 
> Amos, you have suggested walking me through some of the bugs to make our
> lives much easier.
> And I was thinking, for example, of this bug: 3.2.1 SEGFAULT while freeing
> ipcache_entry
> http://bugs.squid-cache.org/show_bug.cgi?id=3644
> 
> It's a Solaris-based squid bug which appeared in 3.2 and was confirmed by the
> reporter as "works for me" but was never marked this way.
> If I'm not wrong, Yuri also works or worked with Solaris and can verify
> whether this kind of bug exists or not on 3.5, and then we can close it as
> "works for me".

More correctly, it disappeared without anyone making changes. So it may
recur at any time.

There have been some major changes to the memory management and safety 
since that version, but not so much in terms of ipcache improvements 
except indirectly through libmem / MEMPROXY pool changes.

The uncertainty there means leaving it open until more certainty happens 
just in case it does recur.

> 
> 
> Also this bug: PURGE not purging hot objects (TCP_MEM_HIT ) with
> memory_cache_shared on
> http://bugs.squid-cache.org/show_bug.cgi?id=3888
> 
> I have seen that some work is being done on github to resolve this
> bug (3888).
> It's been there for at least 3 years but nobody ever changed the milestone
> to any version.
> I upped the milestone to 3.5 since any patch will not be created by the
> squid project itself for versions older than 3.5, ie 3.4 and down.
> If you think that the milestone is 5 or 4 please change accordingly.

The milestone needs to be set to the oldest version where the bug is
known to be *fixed*, so the report in the wiki page can accurately list,
for example, that this bug is still affecting Squid-4.

If we do not have a) a patch applied to the code master branch and 
working its way down to whatever old version, or b) a con

[squid-dev] [RFC] The final phase of 3.5 testing/

2017-09-18 Thread Eliezer Croitoru
Hey All,

In the last 2 years I have tested Squid-Cache 3.5, and for general purposes
it's stable.
The basic challenge is simple internal traffic, but there are a couple of
other challenges which I would like to verify.
The first thing is a test of squid against a very hostile environment like
the public Internet.
By testing on the public Internet I mean that it will sit in intercept or
tproxy mode on top of a "VPN" service.
I believe that such a test would benefit us with more details about the
stability of the 3.5 branch, and later maybe test the 4.x branch.
There are some legal issues in running such a setup, and I would not be able
to run such a setup without some valid usage agreement.

First what do you all think about the idea in general?
Second, would we be able to somehow list the known 3.5 bugs?
I believe that such a list will help the admins to decide if Squid-Cache is
for them or not.
Currently there are 574 open bugs on the Bugzilla and maybe some of these
are not even relevant to 3.5.
-
http://bugs.squid-cache.org/buglist.cgi?bug_status=__open__&limit=0&no_redirect=1&order=priority%2Cbug_severity&product=Squid&query_format=specific

Amos, you have suggested walking me through some of the bugs to make our
lives much easier.
And I was thinking, for example, of this bug: 3.2.1 SEGFAULT while freeing
ipcache_entry
http://bugs.squid-cache.org/show_bug.cgi?id=3644

It's a Solaris-based squid bug which appeared in 3.2 and was confirmed by the
reporter as "works for me" but was never marked this way.
If I'm not wrong, Yuri also works or worked with Solaris and can verify
whether this kind of bug exists or not on 3.5, and then we can close it as
"works for me".


Also this bug: PURGE not purging hot objects (TCP_MEM_HIT ) with
memory_cache_shared on
http://bugs.squid-cache.org/show_bug.cgi?id=3888

I have seen that some work is being done on github to resolve this
bug (3888).
It's been there for at least 3 years but nobody ever changed the milestone
to any version.
I upped the milestone to 3.5 since any patch will not be created by the
squid project itself for versions older than 3.5, ie 3.4 and down.
If you think that the milestone is 5 or 4 please change accordingly.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il




___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] What do you think about a response or continuation for this article?

2017-08-30 Thread Eliezer Croitoru
Hey Dev's,

I was looking at some old tutorial at:
https://www.linux.com/news/speed-your-internet-access-using-squids-refresh-patterns

From the search:
https://www.linux.com/search?keyword=squid

And I was wondering if it might be a good idea to write something that will
be up to date, ie 2017-2008=9, so with almost 10 years of progress.
I think it's time to offer an up-to-date article.
What do you all think?

Eliezer

* I think that github gave the development team a good shoulder to lean on
in the development cycle, and all the PRs are looking much better than a
"dev list broadcast".

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il




___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] http(s)_port TLS/SSL config redesign

2017-08-09 Thread Eliezer Croitoru
I have not used v4 yet but the arguments speak for themselves.
+1

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Thursday, July 20, 2017 04:27
To: Squid Developers <squid-dev@lists.squid-cache.org>
Subject: [squid-dev] [RFC] http(s)_port TLS/SSL config redesign

Hi all, Christos and Alex particularly,

I have been mulling over several ideas for how to improve the config 
parameters on the http(s)_port to make them a bit easier for newbies to 
get right, and pros to do powerfully cool stuff.


So, the most major change I would like to propose is to move the proxy's
CA for cert generation from the cert= parameter to the
generate-host-certificates= parameter. Having it configured with a file
would be the same as generate=on, and not configuring it would default to =off.
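
To make that concrete, a rough before/after of the idea (not final syntax):

# today: the signing CA rides on cert=
https_port 3129 intercept ssl-bump cert=/etc/squid/bumpCA.pem generate-host-certificates=on
# proposed: cert= keeps its normal meaning and the CA moves here
https_port 3129 intercept ssl-bump generate-host-certificates=/etc/squid/bumpCA.pem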


The matching key= and any CA chain would need to be all bundled in the 
same PEM file. That should be fine since the clients get a separate DER 
file installed, not sharing the PEM file loaded into Squid.

That will stop the confusion newbies have over what should go in cert= for
what Squid use-case. And it will allow pros to configure exactly which
static cert Squid uses as a fallback when auto-generating others - 
simply by using cert= in the normal non-bumping way.

Also, we can then easily use the two sets of parameters in identical 
fashion for non-SSL-Bump proxies to auto-generate reverse-proxy certs 
based on SNI, or use a fallback static cert of admins choice.

Bringing these two different use-cases config into line should vastly 
simplify the complexity of working with Squid TLS certs for everybody, 
including us in the code as we no longer have multiple (8! at least) 
code paths for different cert= possibilities and config error handling 
permutations.


For backward compatibility concerns with existing SSL-Bump configs I 
think we can use the certificate CA vs non-CA property to auto-detect 
old SSL-Bump configs being loaded and handle the compatibility 
auto-upgrade and warnings. The warning will be useful long-term and not 
just for the transitional period.


Now would also be a marginally better than usual time to make the change 
since the parameters are migrating to tls-* prefix in v4 and have extra 
admin attention already.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Cache poisoning vulnerability 3.5.23

2017-07-26 Thread Eliezer Croitoru
Hey Omid,

It's not clear what you mean by cache poisoning.
There are a couple of options, but technical pieces are missing on how to
reproduce the issue and what squid setup you are using, ie squid.conf.
How can I test it here in my test lab?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Omid Kosari
Sent: Wednesday, July 26, 2017 13:19
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] Cache poisoning vulnerability 3.5.23

Hello,

Recently I have seen some cache poisoning, especially on Android captive
portal detection sites.
My squid was 3.5.19 (from https://packages.debian.org/stretch/squid) on
Ubuntu Linux 16.04. Then I upgraded to the latest version, 3.5.23 (from
https://packages.debian.org/stretch/squid), and purged specific pages, but
again I can see cache poisoning on the same pages.

http://connectivitycheck.gstatic.com/generate_204
http://clients3.google.com/generate_204
http://172.217.20.206/generate_204
http://clients1.google.com/generate_204
http://google.com/generate_204




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-poisoning-vulnerability-3-5-23-tp4683214.html
Sent from the Squid - Development mailing list archive at Nabble.com.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] What should we do about these *wrong* wiki articles?

2017-07-23 Thread Eliezer Croitoru
Well, I need to re-read these pages...

And I do understand that there are times it's needed, and the examples give
good reasons for why and when to use it.

After I re-read these wiki sections I will try to think about it again and
reply.
Then, if we think it's needed to write a wiki page or to rewrite the
wiki page order or structure, I will try to offer a better version.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, July 23, 2017 02:21
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] What should we do about these *wrong* wiki articles?

On 23/07/17 09:22, Eliezer Croitoru wrote:
> As I understood the article, the DNAT is from another box, ie "the router",
> to the squid box.
> If I understood it wrong and didn't read properly, I will re-read them and
> see where I am wrong.

see the Details section notes.


You are right about the cross-machine DNAT use-case no longer existing. 
We keep them both in the wiki because they still meet other use-cases:


* REDIRECT copes best for machines and black-box situations where one 
never knows in advance what network it will be plugged into. Such as 
products that will be sold as plug-and-play proxy caches, or to minimize 
config delays on VM images that get run up by the dozen and 
automatically assigned IPs.

However it always NATs the dst-IP to the machine's primary-IP. So it is
limited to the ~64K receiving socket numbers that IP can provide. It
also spends some CPU cycles looking that IP up on each new TCP connection.


* DNAT copes best for high performance and security installations where 
explicit speed or control of the packets outweighs the amount of effort 
needed to configure it properly.

It is not doing any primary-IP stuff so it is slightly faster than
REDIRECT, and multiple DNAT rules can be added for each IP the machine
has - avoiding the ~64K limit. BUT it requires the admin to know in advance
exactly what the IPs of the proxy will be. And the IP assignment, 
iptables rules and squid.conf settings are locked together - if any 
change they all need to. Lots of work to reconfigure any of it, even if 
automated. But, also lots of certainty about what the packets are doing 
for the security paranoid.


Those properties are generic, not just in relation to Squid.
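
For concreteness, the two rules being compared (addresses and ports
illustrative):

# REDIRECT: the kernel picks the box's primary IP for you
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129
# DNAT: you pin the exact IP:port, and squid.conf must match it
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.40:3129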

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] Disable Github issue tracker

2017-07-22 Thread Eliezer Croitoru
Then it's also +1 from me to close it for now, but promote the README.md
which will redirect users to the bugzilla.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Kinkie
Sent: Saturday, July 22, 2017 18:16
To: Alex Rousskov <rouss...@measurement-factory.com>
Cc: Squid Developers <squid-dev@lists.squid-cache.org>
Subject: Re: [squid-dev] [RFC] Disable Github issue tracker

I also agree to close Issues for the time being. We can always reopen them 
later on if it'll be deemed useful.

On Fri, Jul 21, 2017 at 5:37 PM, Alex Rousskov 
<rouss...@measurement-factory.com> wrote:
> On 07/21/2017 10:15 AM, Amos Jeffries wrote:
>> Alex would you like to draw up a formal announcement email to go out 
>> to people not on squid-dev about the change having been done?
>>  I'm thinking squid-announce/squid-users and the blog.
>
> I can, but I do not recommend announcing anything and posting 
> README.md until we decide on the Github Issues status. Both of those 
> documents (and user actions) would be affected by this important 
> migration-related decision.
>
> So far, only three people voiced their opinions:
>
>   + For immediate Issues closure: Amos, Alex.
>   - Against immediate Issues closure: Eliezer.
>
> While the current tally is technically enough for me to put my foot 
> down and disable Github Issues, it would be nice to have at least one 
> more supporting opinion for a more meaningful "consensus" declaration.
>
> Alex.
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev



-- 
Francesco
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] What should we do about these *wrong* wiki articles?

2017-07-22 Thread Eliezer Croitoru
As I understood the article, the DNAT is from another box, ie "the router",
to the squid box.
If I understood it wrong and didn't read properly, I will re-read them and
see where I am wrong.
Squid doesn't like to act as an intercept proxy and have the destination IP
and port be itself, ie:
Client IP is 192.168.0.30
Squid IP is 192.168.1.40
Router sits at 192.168.0.254
Router does DNAT from 192.168.0.0/24 dst port 80 to squid ip:port, ie
192.168.1.40:3129

Am I missing something about this wrong picture?

Thanks,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, July 21, 2017 17:15
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] What should we do about these *wrong* wiki articles?

On 22/07/17 01:54, Eliezer Croitoru wrote:
> It's not the MASQARADE that is bad
> It's the DNAT rule which removes the original destination ip and port.
> 

I fail to see how NAT behaving as NAT always has done makes those articles 
*about NAT features* "aren't up-to-date and are misleading admins"


Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] What should we do about these *wrong* wiki articles?

2017-07-21 Thread Eliezer Croitoru
It's not the MASQUERADE that is bad.
It's the DNAT rule which removes the original destination IP and port.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, July 21, 2017 15:42
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] What should we do about these *wrong* wiki articles?

On 21/07/17 21:17, Eliezer Croitoru wrote:
> Hey List,
> 
> I have seen that these articles aren't up-to-date and are misleading admins.
> The first step, in my opinion, would be to add a warning at the top of the
> articles that these are obsolete and should not be used.
> Then fix the article content, redirect toward a PBR\FBF\other routing to
> the squid box example, and eventually remove these examples from the wiki.
> 
> http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat?highlight=%28masquerade%29
> http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect?highlight=%28masquerade%29
> 
> What do you think?

Whats wrong with MASQUERADE ?

AFAIK it is still the best way to have the OS automatically assign 
outgoing IPs in the presence of NAT - an operation which the default 
configuration of Squid assumes to be happening.
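
For reference, the line in question is the usual one from the wiki examples
(interface name illustrative):

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE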

If the admin knows sufficiently about iptables/netfilter to specifically 
setup something other than MASQUERADE properly they already know not to 
enter that line.


NP: the mention of IPv6 not being supported is wrong nowadays. That could
be replaced by a note specifically for old kernel versions.

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] What should we do about these *wrong* wiki articles?

2017-07-21 Thread Eliezer Croitoru
Hey List,

I have seen that these articles aren't up-to-date and are misleading admins.
The first step, in my opinion, would be to add a warning at the top of the
articles that these are obsolete and should not be used.
Then fix the article content, redirect toward a PBR\FBF\other routing to
the squid box example, and eventually remove these examples from the wiki.

http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat?highlight=%28masquerade%29
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect?highlight=%28masquerade%29

What do you think?

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il




___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Bzr to git migration schedule

2017-07-21 Thread Eliezer Croitoru
NOC NOC, anybody there?

Please take a minute to look at the subject.

Thanks,
Eliezer

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Friday, July 21, 2017 08:02
To: Eliezer Croitoru <elie...@ngtech.co.il>; 'Kinkie' <gkin...@gmail.com>; 
squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Bzr to git migration schedule

On 07/20/2017 02:52 PM, Eliezer Croitoru wrote:
> And... I must say that I am experiencing something weird with the wiki in the 
> last month or more.
> I mean that for example I received the message in the top left side of the 
> wiki:
> "Thank you for your changes. Your attention to detail is appreciated.
> Only minutes after I already seen the page exists in another tab.
> Should I open a new bug in the bugzilla?

I have already whined to NOC about this problem. They were going to look at it 
soon. If you do not see improvements for another week or so, it may be a good 
idea to file a bug report with Bugzilla.

Alex.


> From: Kinkie [mailto:gkin...@gmail.com]
> Sent: Thursday, July 20, 2017 10:59
> To: Amos Jeffries <squ...@treenet.co.nz>; Eliezer Croitoru 
> <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
> Subject: Re: [squid-dev] Bzr to git migration schedule
> 
> I believe we are talking about updating the pages that reference bzr 
> with pages that reference the current repositories and best practices
> 
> On Thu, 20 Jul 2017 at 08:27, Eliezer Croitoru <mailto:elie...@ngtech.co.il> 
> wrote:
> I didn't know we were trying to "re-write" the wiki.
> To what scale are we talking?
> Move everything from http://wiki.squid-cache.org to github, or just
> specific articles?
> 
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: mailto:elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-dev 
> [mailto:mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
> Amos Jeffries
> Sent: Tuesday, July 18, 2017 11:59
> To: mailto:squid-dev@lists.squid-cache.org
> Subject: Re: [squid-dev] Bzr to git migration schedule
> 
> On 18/07/17 16:40, Alex Rousskov wrote:
>> On 07/15/2017 10:43 PM, Alex Rousskov wrote:
>>> On 07/11/2017 10:20 PM, Alex Rousskov wrote:
>>>
>>>>2017-07-11: No more new tags in the official bzr repo.
>>>>2017-07-13: No more new commits(*) in the official bzr repo.
>>>>2017-07-14: Migration starts.
>>>>2017-07-15: Anticipated optimistic migration end.
>>>>2017-07-18: Anticipated pessimistic migration end.
>>>>
>>>> All times are noon UTC.
>>
>>
>>> The migration steps are done. According to the automated tests, all 
>>> bzr and git commits match (except for bzr commits containing empty 
>>> directories, as expected). However, I suggest _not_ declaring the 
>>> migration over yet and keeping both the old bzr and the new git 
>>> repository intact for 24 hours in case somebody notices problems 
>>> with the new official git repository at
>>>
>>>  https://github.com/squid-cache/squid
>>
>> I think we should declare the migration over, and the official 
>> repository to be at the above Github URL.
> 
> +1.
> 
>>
>> I saw no problem reports about the new repository. We found and fixed 
>> one bug in the migration tool when migrating some of the Factory 
>> branches, but that bug did not affect the official repository which 
>> has a simpler structure. We could still go back to bzr in the face of 
>> disaster, but I hope that would not be necessary.
>>
>> Needless to say, this git migration is just the first step towards 
>> decent Project QA. The next steps are updating various maintenance 
>> scripts (thank you, Amos, for working on that) and pre-commit build 
>> tests of pull requests (Eduard is leading that effort). The basic 
>> build test enabled on Github should detect many serious problems 
>> today, but I hope to see a lot more powerful Jenkins-driven tests in the 
>> coming weeks.
>>
> 
> We also need someone to lead the wiki re-writing, most of the 
> DeveloperResources mentioning the repository and how-to for certain 
> development activities.
> 
> 
> Amos
> ___
> squid-dev mailing list
> mailto:squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
> 
> ___
> squid-dev mailing list
> mailto:squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev
> 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] Disable Github issue tracker

2017-07-21 Thread Eliezer Croitoru
Then, if nobody is sending these issues or using them, I believe that this
section should stay open as a "Chat" and "Connection initiation" channel
for a month or so.
Also, if it's doable to integrate github accounts with the bugzilla, it
would be a pretty good thing to do.

And last but not least:
Please take your time to add a README.md to the github repo, so anyone who 
lands in the github repository will be aware that the project is moving\has 
moved from bzr to github, and maybe add a couple of links for the project and 
a couple of other important things.
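
For what it's worth, a minimal sketch of what such a README.md could say (the 
contents below are only a suggestion, not an agreed text):

# Squid Web Cache

This is the official source repository of the Squid project, which has
moved here from bzr.

- Project site: http://www.squid-cache.org/
- Bug reports: http://bugs.squid-cache.org/
- To report security vulnerabilities, please contact the Squid Security
  team privately instead of opening a public issue.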

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Friday, July 21, 2017 07:57
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [RFC] Disable Github issue tracker

On 07/20/2017 02:48 PM, Eliezer Croitoru wrote:

> github can mainly be related to coding

Since no automation can enforce that kind of separation, we would have to do it 
manually, which will both annoy posters (confused by an unusual
split) and drain our resources.


> If I(a programmer) already have an account at github, why should I 
> open a new account just to start interacting with the Squid-Cache 
> project?

You are probably assuming that Bugzilla cannot accept github logins.
This is not my area of expertise, but I believe that Bugzilla can be configured 
to do so (natively or via extensions).


> What do you think about the idea of leaving the issues section open?

FWIW, Github issues were open for more than a year, without attracting any 
significant contributions. I would be surprised if they suddenly start doing so 
now.


> Or maybe we are not looking for such audience?

IMHO, we are.


Cheers,

Alex.



> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Thursday, July 20, 2017 20:35
> To: Eliezer Croitoru <elie...@ngtech.co.il>; 
> squid-dev@lists.squid-cache.org
> Subject: Re: [squid-dev] [RFC] Disable Github issue tracker
> 
> On 07/20/2017 01:25 AM, Eliezer Croitoru wrote:
>> Can we allow issues access to specific users?
> 
> AFAIK no. We can restrict certain issue updates (e.g., comment editing) but 
> not issue reading and issue creation.
> 
> 
>> I believe that having a "TODO" or similar notes as github issues 
>> might be a good thing.
>> I think that the Bugzilla has more to offer than github issues, so +1 for 
>> staying with the Bugzilla, but maybe try to utilize issues for code-specific 
>> things and to allow only specific users access to it.
>>
> 
> What is the essential difference between a code-specific TODO/note and a 
> feature request that makes only the former category benefit from using Github 
> Issues?
> 
> Alex.
> 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Bzr to git migration schedule

2017-07-20 Thread Eliezer Croitoru
Thanks,

And... I must say that I have been experiencing something weird with the wiki 
in the last month or more.
When I post a new article or update an existing one, the connection ends very 
far in the "future", despite the fact that the wiki has already finished 
handling the post.
I mean that for example I posted:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/MikroTik-Route-To-Intercept-Squid

and I received the message in the top left side of the wiki:
"Thank you for your changes. Your attention to detail is appreciated.
Notifications sent to: AlexRousskov, AmosJeffries, AdrianChadd, NathanHoad"

That was only minutes after I had already seen the page exist in another tab.
Should I open a new bug in the bugzilla?

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/>
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Thursday, July 20, 2017 10:59
To: Amos Jeffries <squ...@treenet.co.nz>; Eliezer Croitoru 
<elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Bzr to git migration schedule

I believe we are talking about updating the pages that reference bzr with pages 
that reference the current repositories and best practices 

On Thu, 20 Jul 2017 at 08:27, Eliezer Croitoru <mailto:elie...@ngtech.co.il> 
wrote:
I didn't know we were trying to "re-write" the wiki.
To what scale are we talking?
Move everything from the http://wiki.squid-cache.org to github or just 
specific articles?

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: mailto:elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:mailto:squid-dev-boun...@lists.squid-cache.org] On 
Behalf Of Amos Jeffries
Sent: Tuesday, July 18, 2017 11:59
To: mailto:squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Bzr to git migration schedule

On 18/07/17 16:40, Alex Rousskov wrote:
> On 07/15/2017 10:43 PM, Alex Rousskov wrote:
>> On 07/11/2017 10:20 PM, Alex Rousskov wrote:
>>
>>>2017-07-11: No more new tags in the official bzr repo.
>>>2017-07-13: No more new commits(*) in the official bzr repo.
>>>2017-07-14: Migration starts.
>>>2017-07-15: Anticipated optimistic migration end.
>>>2017-07-18: Anticipated pessimistic migration end.
>>>
>>> All times are noon UTC.
>
>
>> The migration steps are done. According to the automated tests, all bzr
>> and git commits match (except for bzr commits containing empty
>> directories, as expected). However, I suggest _not_ declaring the
>> migration over yet and keeping both the old bzr and the new git
>> repository intact for 24 hours in case somebody notices problems with
>> the new official git repository at
>>
>>  https://github.com/squid-cache/squid
>
> I think we should declare the migration over, and the official
> repository to be at the above Github URL.

+1.

>
> I saw no problem reports about the new repository. We found and fixed
> one bug in the migration tool when migrating some of the Factory
> branches, but that bug did not affect the official repository which has
> a simpler structure. We could still go back to bzr in the face of
> disaster, but I hope that would not be necessary.
>
> Needless to say, this git migration is just the first step towards
> decent Project QA. The next steps are updating various maintenance
> scripts (thank you, Amos, for working on that) and pre-commit build
> tests of pull requests (Eduard is leading that effort). The basic build
> test enabled on Github should detect many serious problems today, but I
> hope to see a lot more powerful Jenkins-driven tests in the coming weeks.
>

We also need someone to lead the wiki re-writing, most of the
DeveloperResources mentioning the repository and how-to for certain
development activities.


Amos
___
squid-dev mailing list
mailto:squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
mailto:squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev
-- 
@mobile

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] Disable Github issue tracker

2017-07-20 Thread Eliezer Croitoru
Well, what I was thinking is that issues in github can mainly be related to 
coding and development, like what the squid-dev list is for.
I understand that technically the Bugzilla and the issues section are not 
too different, but we can logically set a rule that issues in the Squid-Cache 
project would be for development-related conversations only.

I believe that the much younger programmers are already registered on github, 
so it would make sense to use the github issues for them.
I mean: if I (a programmer) already have an account at github, why should I 
open a new account just to start interacting with the Squid-Cache project?
So disabling it completely is good for certain purposes, but the project 
should have a nice README.md with a couple of guidelines, such as:
- To report security vulnerabilities use XYZ
- To make first contact with the development team use the issues section.
- 

Once we decide on the right approach I believe we can start and see what pops 
into the "issues net"; maybe it's worth leaving it open just for the sake of 
"solicitation".
(Solicitation these days reminds me of a bug in Golang which prints
"Unsolicited response received on idle HTTP channel..."
https://github.com/golang/go/issues/12855 
One of the biggest bugs I have seen that was not handled for a very long 
time, and it is relevant specifically to http proxies which use the native 
http libs.)

What do you think about the idea of leaving the issues section open? Or maybe 
we are not looking for such audience?

Eliezer 


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, July 20, 2017 20:35
To: Eliezer Croitoru <elie...@ngtech.co.il>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [RFC] Disable Github issue tracker

On 07/20/2017 01:25 AM, Eliezer Croitoru wrote:
> Can we allow issues access to specific users?

AFAIK no. We can restrict certain issue updates (e.g., comment editing) but not 
issue reading and issue creation.


> I believe that having a "TODO" or similar notes as github issues 
> might be a good thing.
> I think that the Bugzilla has more to offer than github issues, so +1 for 
> staying with the Bugzilla, but maybe try to utilize issues for code-specific 
> things and to allow only specific users access to it.
> 

What is the essential difference between a code-specific TODO/note and a 
feature request that makes only the former category benefit from using Github 
Issues?

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Bzr to git migration schedule

2017-07-20 Thread Eliezer Croitoru
I didn't know we were trying to "re-write" the wiki.
To what scale are we talking?
Move everything from wiki.squid-cache.org to github or just specific 
articles?

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Tuesday, July 18, 2017 11:59
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Bzr to git migration schedule

On 18/07/17 16:40, Alex Rousskov wrote:
> On 07/15/2017 10:43 PM, Alex Rousskov wrote:
>> On 07/11/2017 10:20 PM, Alex Rousskov wrote:
>>
>>>2017-07-11: No more new tags in the official bzr repo.
>>>2017-07-13: No more new commits(*) in the official bzr repo.
>>>2017-07-14: Migration starts.
>>>2017-07-15: Anticipated optimistic migration end.
>>>2017-07-18: Anticipated pessimistic migration end.
>>>
>>> All times are noon UTC.
> 
> 
>> The migration steps are done. According to the automated tests, all bzr
>> and git commits match (except for bzr commits containing empty
>> directories, as expected). However, I suggest _not_ declaring the
>> migration over yet and keeping both the old bzr and the new git
>> repository intact for 24 hours in case somebody notices problems with
>> the new official git repository at
>>
>>  https://github.com/squid-cache/squid
> 
> I think we should declare the migration over, and the official
> repository to be at the above Github URL.

+1.

> 
> I saw no problem reports about the new repository. We found and fixed
> one bug in the migration tool when migrating some of the Factory
> branches, but that bug did not affect the official repository which has
> a simpler structure. We could still go back to bzr in the face of
> disaster, but I hope that would not be necessary.
> 
> Needless to say, this git migration is just the first step towards
> decent Project QA. The next steps are updating various maintenance
> scripts (thank you, Amos, for working on that) and pre-commit build
> tests of pull requests (Eduard is leading that effort). The basic build
> test enabled on Github should detect many serious problems today, but I
> hope to see a lot more powerful Jenkins-driven tests in the coming weeks.
> 

We also need someone to lead the wiki re-writing, most of the 
DeveloperResources mentioning the repository and how-to for certain 
development activities.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] Disable Github issue tracker

2017-07-20 Thread Eliezer Croitoru
Can we allow issues access to specific users?
I believe that having a "TODO" or similar notes as github issues 
might be a good thing.
I think that the Bugzilla has more to offer than github issues, so +1 for 
staying with the Bugzilla, but maybe try to utilize issues for code-specific 
things and to allow only specific users access to it.

And about the questions about branching: git has a very nice branching system.
If needed I will review the git course I took to see if what we need (please 
specify this) can be done\implemented.
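
For what it's worth, a minimal sketch of the branch workflow git offers out 
of the box (the branch name here is made up for illustration):

  git checkout -b fix-websockets master   # start a topic branch
  # ... edit, commit ...
  git checkout master
  git merge --no-ff fix-websockets        # merge back, preserving history
  git branch -d fix-websockets            # delete the merged branch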

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Wednesday, July 19, 2017 06:27
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [RFC] Disable Github issue tracker

On 19/07/17 09:22, Alex Rousskov wrote:
> Hello,
> 
>  With Squid official repository now at Github, a lot more folks will
> be tempted to report bugs and file feature requests there. I propose to
> remove that functionality from the Github interface (for now). Any
> objections or better ideas?
> 

Sounds good to me.

  I suspect the issues from Nov/Dec last year have duplicate discussions 
either in squid-users or bugzilla - but that would be worth checking up 
with the submitters before the github issues are lost.

Some other migration things;

a) I have now completed the updates for code maintenance scripts and 
re-enabled the cron jobs.

There are maintainer workflow scripts still to do, but before going to 
the trouble I'm considering whether that workflow still makes any sense 
- it was designed for CVS, and adapted to bzr in the absence of good bzr 
tooling. If anyone knows of some good branch management tools for git 
I'm interested.
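
For illustration only, the kind of release-branch maintenance that plain git 
supports (the branch name "v4" is hypothetical here):

  git branch -r                    # list branches published on the remote
  git checkout -b v4 origin/v4     # track a maintenance branch locally
  git cherry-pick <commit-id>      # port a fix from master onto it
  git push origin v4               # publish the maintained branch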


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Fix SSL certificate cache refresh and collision handling.

2017-07-16 Thread Eliezer Croitoru
Seems like a much-needed patch.
I was wondering about another semi-related issue from the past:
the certificate DB directory becoming unusable. Was it resolved in 3.5 or 4?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf
Of Christos Tsantilas
Sent: Friday, July 14, 2017 18:19
To: Squid Developers <squid-...@squid-cache.org>
Subject: [squid-dev] [PATCH] Fix SSL certificate cache refresh and collision
handling.


SslBump was ignoring origin server certificate changes and using the
previously cached fake certificate (mimicking now-stale properties).
Also, Squid was not detecting key collisions inside certificate caches.

On-disk certificate cache fixes:

   - Use the original certificate signature instead of the certificate
 subject as part of the key. Using signatures reduces certificate key
 collisions to deliberate attacks and woefully misconfigured origins,
 and makes any mishandled attacks a lot less dangerous because the
  attacking origin server certificate cannot be trusted by a properly
 configured Squid and cannot be used for encryption by an attacker.

 We have considered using certificate digests instead of signatures.
 Digests would further reduce the attack surface to copies of public
 certificates (as if the origin server was woefully misconfigured).
 However, unlike the origin-supplied signatures, digests require
 (expensive) computation in Squid, and implemented collision handling
 should make any signature-based attacks unappealing. Signatures won
 on performance grounds.

 Other key components remain the same: NotValidAfter, NotValidBefore,
 forced common name, non-default signing algorithm, and signing hash.

   - Store the original server certificate in the cache (together with
 the generated certificate) for reliable key collision detection.

   - Upon detecting key collisions, ignore and replace the existing cache
 entry with a freshly computed one. This change is required to
 prevent an attacker from tricking Squid into hitting a cached
 impersonating certificate when talking to a legitimate origin.

In-memory SSL context cache fixes:

   - Use the original server certificate (in ASN.1 form) as a part of the
 cache key, to completely eliminate cache key collisions.

Other related improvements:

   - Make the LruMap keys template parameters.
   - Polish Ssl::CertificateDb class member names to match Squid coding
 style. Rename some functions parameters to better match their
 meaning.
   - Replace Ssl::CertificateProperties::dbKey() with:
  * Ssl::TxtKeyForCertificateProperties() in ssl/gadgets.cc for
on-disk key generation by the ssl_crtd helper;
  * Ssl::UniqueKeyForCertificateProperties() in ssl/support.cc for
in-memory binary keys generation by the SSL context memory cache.
   - Optimization: Added Ssl::BIO_new_SBuf(SBuf*) for OpenSSL to write
 directly into SBuf objects.

This is a Measurement Factory project.
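
For illustration, a minimal sketch (not the actual patch code; the helper 
name and key layout are invented) of deriving such a signature-based key 
with the OpenSSL 1.1 accessors:

#include <openssl/x509.h>
#include <string>

// Hypothetical helper, for illustration only: combine the origin-supplied
// certificate signature with one mimicked property to form a cache key.
static std::string
illustrativeCertKey(X509 *serverCert, const std::string &forcedCommonName)
{
    const ASN1_BIT_STRING *sig = nullptr;
    const X509_ALGOR *alg = nullptr;
    X509_get0_signature(&sig, &alg, serverCert); // no digest computation needed
    std::string key(reinterpret_cast<const char *>(ASN1_STRING_get0_data(sig)),
                    ASN1_STRING_length(sig));
    key += '+';
    key += forcedCommonName; // validity times, algorithms, etc. omitted here
    return key;
}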

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Bzr to git migration schedule

2017-07-16 Thread Eliezer Croitoru
+1 on Amos' direction.
I hope that this migration will give us better visibility to the public and 
will help attract developers and patches.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Wednesday, July 12, 2017 09:53
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] Bzr to git migration schedule

On 12/07/17 16:20, Alex Rousskov wrote:
> 
> The transition schedule below is meant to minimize inconveniences for 
> Squid committers. If you are a committer, and the schedule below does 
> not work well enough for you, please email me.

The commit of rev.15240 appears to be having build issues in Jenkins. 
You may want to delay starting the process until those are resolved. 
Then we should have all versions in a known buildable state before and after 
for testing.

Otherwise, all fine for me.


Amos
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] OpenSSL 1.1 regression

2017-05-20 Thread Eliezer Croitoru
I am missing a couple of things about the subject and I want to verify them 
with you all.
From my point of view, when maintaining the RPMs for a couple of 
distributions, I am looking at:
would a specific OpenSSL library hit the distributions I maintain or not, or 
just in a couple of years?
But I am not sure about the concern of the developers, since I read something 
about gcc 6, which is the cutting-edge version of gcc to my knowledge.

And I want to understand:
What is the aim of the Squid-Cache software development team for versions 3.5, 
4.X, 5.X?
Also, do we expect the mainline linux distributions to use the cutting-edge 
gcc or OpenSSL, or 
are we just at the stage where we try to patch things up before the actual 
integration of these tools is done? (That can take even a couple of years..)

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Christos Tsantilas
Sent: Friday, May 19, 2017 7:20 PM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] OpenSSL 1.1 regression

The t4 patch

On 19/05/2017 12:27 πμ, Amos Jeffries wrote:
> On 19/05/17 04:04, Christos Tsantilas wrote:
>> On 18/05/2017 03:40 μμ, Amos Jeffries wrote:
>>> On 18/05/17 23:12, Christos Tsantilas wrote:
>>>> +# check for API functions
>>>> +AC_CHECK_LIB(ssl, SSL_CTX_get0_certificate,
>>>> [AC_DEFINE(HAVE_SSL_CTX_GET0_CERTIFICATE, 1, [SSL_CTX_get0_certificate
>>>> is available])], [])
>>>> +
>>>
>>> This bit seems to be correct.
>>>
>>> Given the .cc file sequence of macro tests I think we can speed up
>>> ./configure a bit by moving the use of
>>> SQUID_CHECK_OPENSSL_GETCERTIFICATE_WORKS into the if-not-found [] path.
>>>
>>> eg.
>>>
>>> AC_CHECK_LIB(ssl, SSL_CTX_get0_certificate, [
>>>   AC_DEFINE(HAVE_SSL_CTX_GET0_CERTIFICATE, 1, [SSL_CTX_get0_certificate
>>> is available])
>>>   ],[
>>>   # check for bugs and hacks in the old OpenSSL API
>>>   SQUID_CHECK_OPENSSL_GETCERTIFICATE_WORKS
>>>   ])
>>
>> I am attaching a new patch.
>> In this patch I moved the SQUID_CHECK_OPENSSL_GETCERTIFICATE_WORKS  as
>> you suggested.
>>
>> But also my last patch was buggy, the AC_CHECK_LIB did not search at
>> the correct directories for libssl library.
>>
>> In this patch I moved the "SQUID_STATE_ROLLBACK(squid_openssl_state)"
>> line some lines down to have the correct libraries search path.
>> Is it ok, or it is better to open a new SQUID_STATE_SAVE/ROLLBACK just
>> for AC_CHECK_LIB?
>
> Ah. Either moving the check which alters compiler environment above the
> existign ROLLBACK, or a new one. It is important the CXXFLAGS and SSLLIB
> lines directly above where your patch placed it do not get rolled back.
>
>
>>
>>
>> PS. Finally, this easy-to-fix issue is one more proof that it is
>> better to not start fixing files involved with this satanic tool
>> called autoconf!
>>
>
> :-P
>
> Amos
>
> ___
> squid-dev mailing list
> squid-dev@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-dev



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC] WCCP alternatives implentations.

2017-04-02 Thread Eliezer Croitoru
Hey All,

I have a project I have been planning, and it might take some time, but I do
want to implement this one.
WCCP is a Cisco-only protocol, so they are the only ones that benefit from
this protocol.
I am planning to write a daemon for Linux routers that will be the
alternative to WCCP on Linux or other routers.
I am not going with a 100% binary format like WCCP but with a more
HTTP\RPC-like protocol.
I have been working and testing with:
- Json
- Yaml
- Msgpack
And a couple of others, each of which has its own advantages and
disadvantages.
I am planning to make it a TCP-based API that will work on some level like
BGP, which is based on the connection being alive plus probes to verify that
the other peer is still there.
There are a couple of daemons that do something like this; one of them is
related to ETCD and a couple of other service-discovery protocols.

I want this "project" to be synchronized with the squid-cache project so we
would be able to have the solution be of helpful to linux based routers.
I do not remember the exact details but one of the things which wccp lacks
of is "who is included" in the service from the server side.
So for example the server cannot state "I can handle 3000 clients" and then
the admin need to tweak the router manually.
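
To make the idea concrete, a purely hypothetical heartbeat message (all field
names here are invented for illustration) could look like:

{
  "type": "heartbeat",
  "proxy": "cache1.example.com:3128",
  "state": "up",
  "max_clients": 3000,
  "cpu_utilization": 0.42
}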

So first, would the squid project like to cooperate on such a feature?
And in any case I want to know if there are some recommendations and
guidelines before implementing such a project.

Thanks,
Eliezer



Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il




___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] configure.ac cleanup of BerkleyDB related checks

2017-03-30 Thread Eliezer Croitoru
+1..


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Monday, March 27, 2017 4:58 PM
To: Squid Developers <squid-dev@lists.squid-cache.org>
Subject: [squid-dev] [PATCH] configure.ac cleanup of BerkleyDB related checks

While looking into the last remaining bits of bug 4610 I have found that most 
of what we were doing for libdb / -ldb is not necessary any longer.

Most of the logic seems to be hangovers from when session helper was using the 
BerkleyDB v1.85 compatibility interface. Some of it is possibly still necessary 
for the time_quota helper, but that helper has not been using it so far and 
needs an upgrade to match what happened to session helper.

Changes:

* The helpers needing -ldb will not be built unless the library and headers are 
available. So we can drop the Makefile LIB_DB substitutions and always just 
link -ldb explicitly to these helpers.

NP: Anyone who needs small minimal binaries, can build with the --as-needed 
linker flag, or without these helpers. This change has no effect on other 
helpers or the main squid binary.

* Since we no longer need to check if -ldb is necessary, we can drop the 
configure.ac and acinclude logic detecting that.

* Remove unused AC_CHECK_DECL(dbopen, ...)
 - resolves one "FIXME"

* Fix the time_quota helper check to only scan db.h header file contents if 
that file is existing, and if the db_185.h file is not being used instead.

* Fix the session helper check to only try compiling with the db.h header if 
that header actually exists.

* De-duplicate the library header file detection shared by configure.ac and the 
helpers required.m4 files (after the above two changes).

* Remove unused DBLIB variable from configure.ac.

Amos


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] To make squid works in snap world.

2017-03-15 Thread Eliezer Croitoru
+1

How can I reproduce the error?
Is there a bug report open for this issue?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Wednesday, March 15, 2017 6:43 AM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] To make squid works in snap world.

On 15/03/2017 3:44 a.m., Gary Wang wrote:
> Hi guys
> I'm sorry that I'm here so late. :(
> Generally, regarding the purpose of this MP.
> 
> https://code.launchpad.net/~gary-wzl77/squid/ipc_prefix/+merge/318714
> 
> I'd like to make the squid snap work as a confined snap
> <https://snapcraft.io/docs/reference/confinement>,
> so that we can ship this snap in ubuntu-core.
> The reason why I need to add a compile option to customize the
> IPC prefix at compile time is that, in order to use shared memory in
> an app which is released as a snap package, the only allowed file path
> will be like this <https://bugs.launchpad.net/snappy/+bug/1653955> (in
> the following namespace):
>  /dev/shm/sem.snap.@{SNAP_NAME}.*
> 
> Hence in our case, the shared memory file path should be
> /dev/shm/sem.snap.squid-snap.{random-string}
> Otherwise, you will get the following error when running the squid 
> in snap world
> http://paste.ubuntu.com/24175840/
> 

Having looked at this a lot more now I think the patch is based on an incorrect 
assumption.

You see Squid complaining of a /dev/shm permissions error. Other people getting 
that error in snap world were using semaphores and fixed it by using snap 
/dev/shm/sem.* names. So you fixed the /dev/shm naming to match snap semaphore 
naming.

... but Squid does *not* use semaphores.

Simply making Squid pretend to be doing semaphores to bypass the security is 
not the right way forward.
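
For illustration (a sketch, not Squid code; the names are made up): on
Linux/glibc it is sem_open() that produces the sem.* file names, while
shm_open() adds no such prefix, so matching the sem.* pattern amounts to
pretending to use semaphores:

#include <fcntl.h>      // O_CREAT, O_RDWR
#include <semaphore.h>  // sem_open
#include <sys/mman.h>   // shm_open

int main()
{
    // creates /dev/shm/sem.snap.squid-snap.example ("sem." added by sem_open)
    sem_t *sem = sem_open("/snap.squid-snap.example", O_CREAT, 0600, 1);

    // creates /dev/shm/squid-example (no "sem." prefix)
    int fd = shm_open("/squid-example", O_CREAT | O_RDWR, 0600);

    (void)sem; (void)fd;
    return 0;
}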

The real question is: why is the permissions error occurring?

What in snap world is refusing permission?

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Valid document was not found in the cache and only-if-cached directive was specified. what does it mean?

2017-03-02 Thread Eliezer Croitoru
Thanks.


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Wednesday, March 1, 2017 6:47 AM
To: 'Squid Developers' <squid-dev@lists.squid-cache.org>
Cc: Eliezer Croitoru <elie...@ngtech.co.il>
Subject: Re: [squid-dev] Valid document was not found in the cache and 
only-if-cached directive was specified. what does it mean?

On 02/28/2017 06:10 PM, Eliezer  Croitoru wrote:
> I am receiving this error page when a cache_peer is defined as a 
> sibling on each of the 2 proxies:

> The requested URL could not be retrieved Valid document was not found 
> in the cache and only-if-cached directive was specified.

> When I am removing the cache_peer line the file is being fetched.
> I am not sure what should have happen and what went wrong.

This is probably a bug we are trying to fix. You can track our progress at 
http://bugs.squid-cache.org/show_bug.cgi?id=4223

Alex.


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Valid document was not found in the cache and only-if-cached directive was specified. what does it mean?

2017-02-28 Thread Eliezer Croitoru
I am receiving this error page when a cache_peer is defined as a sibling on
each of the 2 proxies:
ERROR
The requested URL could not be retrieved

The following error was encountered while trying to retrieve the
URL: http://a7.org/Resize/pictures/187x113/757412.jpg
Valid document was not found in the cache and only-if-cached directive was
specified.
You have issued a request with a only-if-cached cache control directive. The
document was not found in the cache, or it required revalidation prohibited
by the only-if-cached directive.
Your cache administrator
is mailto:webmaster?subject=CacheErrorInfo%20-%20ERR_ONLY_IF_CACHED_MISS
y=CacheHost%3A%20filter%0D%0AErrPage%3A%20ERR_ONLY_IF_CACHED_MISS%0D%0AErr%3
A%20%5Bnone%5D%0D%0ATimeStamp%3A%20Wed,%2001%20Mar%202017%2000%3A57%3A25%20G
MT%0D%0A%0D%0AClientIP%3A%2010.0.0.28%0D%0A%0D%0AHTTP%20Request%3A%0D%0AGET%
20%2FResize%2Fpictures%2F187x113%2F757412.jpg%20HTTP%2F1.1%0AUpgrade-Insecur
e-Requests%3A%201%0D%0AUser-Agent%3A%20Mozilla%2F5.0%20(Windows%20NT%2010.0%
3B%20WOW64)%20AppleWebKit%2F537.36%20(KHTML,%20like%20Gecko)%20Chrome%2F55.0
.2883.95%20YaBrowser%2F17.1.1.1004%20Yowser%2F2.5%20Safari%2F537.36%0D%0AAcc
ept%3A%20text%2Fhtml,application%2Fxhtml+xml,application%2Fxml%3Bq%3D0.9,ima
ge%2Fwebp,*%2F*%3Bq%3D0.8%0D%0AAccept-Encoding%3A%20gzip,%20deflate,%20sdch%
0D%0AAccept-Language%3A%20en,he%3Bq%3D0.8%0D%0ACache-Control%3A%20max-age%3D
259200,%20only-if-cached%0D%0AConnection%3A%20keep-alive%0D%0AHost%3A%20a7.o
rg%0D%0A%0D%0A%0D%0A.


Generated Wed, 01 Mar 2017 00:57:25 GMT by filter (squid/3.5.24)
##END OF PAGE

When I am removing the cache_peer line the file is being fetched.
I am not sure what should have happened and what went wrong.
This is part of a test I am running to verify how a clustered set
of squids would work with a load balancer in front of them, and also to verify
how the proxies would communicate via ICP and HTCP.
The cache proxies are using a glusterfs nfs mount and a rock cache_dir.
In theory, and by the glusterfs benchmarks I ran, the proxy should be able to
read faster over the glusterfs, at speeds above 10Gbps.
There is a split-second theoretical slowdown, but in practice I have
yet to feel it at small scale.

I need your help to understand if this is a bug and if so what should be
done with it.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] FW: [squid-users] [bug 4674] squid 4.0.18 delay_parameters for class 3 assertion failed

2017-02-27 Thread Eliezer Croitoru
Forwarding to Squid-Dev


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Vitaly Lavrov
Sent: Monday, February 27, 2017 9:39 PM
To: squid-us...@squid-cache.org
Subject: [squid-users] [bug 4674] squid 4.0.18 delay_parameters for class 3 
assertion failed

[bug 4674] Regression in squid 4.0.18 (4.0.17 does not have this error)

OS: Slackware linux 14.2 / gcc 4.8.2

Simple config:

delay_pools 1
delay_class 1 3
delay_parameters 1 64000/64000 32000/32000 3000/3000

squid -k parse

2017/02/20 12:27:48| Processing: delay_pools 1
2017/02/20 12:27:48| Processing: delay_class 1 3
2017/02/20 12:27:48| assertion failed: CompositePoolNode.h:27: "byteCount == 
sizeof(CompositePoolNode)"
Aborted

___
squid-users mailing list
squid-us...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] FW: [squid-users] Reverse proxy for HTTPS cloudfront server

2017-02-14 Thread Eliezer Croitoru
Forwarding the subject to the squid development list.


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Craig Gowing
Sent: Tuesday, February 14, 2017 12:52 PM
To: squid-us...@lists.squid-cache.org
Subject: Re: [squid-users] Reverse proxy for HTTPS cloudfront server

From what I can tell the SNI is not added for cache peers. In
Ssl::PeerConnector::initializeSsl if "peer" is set then the call to
Ssl::setClientSNI is skipped. Also the SSL context doesn't have the hostname
or a callback set, and sslCreateClientContext doesn't appear to be able to
set it either.

I've tested with a quick patch which appears to fix the issue (however,
I feel it should take the forcedomain option into account as well):

diff --git a/src/ssl/PeerConnector.cc b/src/ssl/PeerConnector.cc
index f5d4c81..178c685 100644
--- a/src/ssl/PeerConnector.cc
+++ b/src/ssl/PeerConnector.cc
@@ -133,6 +133,7 @@ Ssl::PeerConnector::initializeSsl()
 if (peer) {
 SBuf *host = new SBuf(peer->ssldomain ? peer->ssldomain :
peer->host);
 SSL_set_ex_data(ssl, ssl_ex_index_server, host);
+Ssl::setClientSNI(ssl, host->c_str());
 
 if (peer->sslSession)
 SSL_set_session(ssl, peer->sslSession);





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Reverse-proxy-for-HTTPS-cloudfront-server-tp4681533p4681542.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-us...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Bug 4662 adding --with-libressl build option

2017-02-01 Thread Eliezer Croitoru
I want to add my opinion on the matter, as a system administrator as opposed 
to a developer, about LibreSSL.
The systems I work on are mostly stable, or attempt to be stable.
I mentioned in the past that we should consider, as an approach, sticking to 
releases of something stable rather than non-stable.
Squid 4.0.X is trying to be Stable "Compatible", and what I consider stable 
among the Linux systems I work with are:
- Debian
- CentOS
- Ubuntu 14.04 (16.04 is exactly on the point between stable enough for use 
and stable for enterprise).
- FreeBSD 10.X
- Solaris

Debian is far behind in libreSSL support and CentOS is not trying to be 
compatible in the near future.
So it leaves me\us with something that tries to be stable, such as Ubuntu 
16.04.
They do not offer LibreSSL officially (as far as I know) and it's the same 
for FreeBSD 10.X and Solaris (as far as I know).

I believe that 4.0.X should not try to be compatible with LibreSSL, and this 
is due to the fact that the developers of the SSL-related code need stable 
ground.
BUG 4662 [http://bugs.squid-cache.org/show_bug.cgi?id=4662] is the result of 
a testing experiment with software that is not production-grade enough, or 
not tested enough by much other software.
I believe that at this stage we should define the exact point in the future 
at which we would try to support LibreSSL, instead of supporting it now.
The developers of the SSL-related code should define whether the goal is 5.0 
or another, more advanced version.

Also, I do believe that on the latest hardware with a beefy CPU, code 
repetition in C++ might not be much of a regression, but not everybody can 
replace their systems' hardware every year.
(If my assumption about code repetition affecting older systems is wrong, 
let me know.)

I hope that my words helped and contributed to the subject at hand.

Regards,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Alex Rousskov
Sent: Tuesday, January 31, 2017 6:50 PM
To: Squid Developers <squid-dev@lists.squid-cache.org>
Subject: Re: [squid-dev] [PATCH] Bug 4662 adding --with-libressl build option

On 01/31/2017 08:20 AM, Amos Jeffries wrote:
> On 31/01/2017 7:04 a.m., Alex Rousskov wrote:
>> On 01/29/2017 04:26 AM, Amos Jeffries wrote:
>>> This is I think all we need to do code-wise to resolve the Bug 4662 
>>> issues with LibreSSL being incompatible with OpenSSL 1.1.

>> I do not think these changes should be committed. As you probably 
>> know from earlier communication, I think we should avoid using both 
>> USE_OPENSSL and USE_LIBRESSL in the code if LibreSSL is [treated as] 
>> a replacement for OpenSSL. I have suggested several ways to avoid the 
>> dangerous and needless repetition of (USE_OPENSSL || USE_LIBRESSL) 
>> conditions, and we even seemed to agree on one of those solutions.


> IMO we only agreed on the HAVE_* macro usage for determining whether 
> the
> 1.1 API was in use. Which is included in this patch.

I do not limit "feature tests" to "1.1 API". Any feature is eligible. In fact, 
it may be a good idea to test individual features rather than bundle many of 
them into a single API version test (that will become obsolete as parts of that 
API are going to change). However, those details are probably not very 
important when resolving the core disagreement here.


> I don't think the repetition of (USE_OPENSSL || USE_LIBRESSL) is 
> either needless or dangerous.

The needless part is not a matter of opinion. It is a fact -- you do not need 
to repeat the (USE_OPENSSL || USE_LIBRESSL) test. You may use a single 
USE_FOOBAR macro to avoid this code repetition. The exact spelling of FOOBAR is 
a separate question.

Whether code duplication is bad, dangerous, unwanted, etc. is certainly a 
matter of opinion and subject to context, but I doubt you can convince me that 
it is not in this case, especially as we have started using USE_OPENSSL macro 
in complex expressions involving GnuTLS.


> No more than USE_OPENSSL was in the same spot.

Repeating (a || b) condition many times is bad. Using a single (c) condition 
many times is not. We should not be arguing about that basic programming 
principle!


> Fixing the sheer number of uses is not in scope.
> 
> Keep it simple. 

I believe we can simply continue to use USE_OPENSSL almost everywhere it is 
used today. No "fixing" is needed in this context except changing what you 
think that macro should stand for.


> Only one macro per library. --with-foo sets USE_FOO.

For you, the libraries provided by OpenSSL and LibreSSL project are two 
different "libraries". For me, they are two slightly different flavors of the 
same "library". IMO, the developers should not be forced to think which flavor 
is in use now 

[squid-dev] [RFC]WCCP alternative for linux routers.

2016-12-22 Thread Eliezer Croitoru
Hey List,

I had a talk about squid and WCCP and PBR with the leader of the VYOS project.
Since WCCP is used only on specific systems, and adding WCCP support to VYOS
would probably not be a simple task, I had another idea.
Since VYOS systems in most cases have spare CPU, I can write a monitoring
daemon which will communicate with the squid service via TCP and will flag
the proxy as up or down etc. in real time, in a way similar to how WCCP
handled heartbeats.

First, what does the development team think about the idea?
Are there any recommendations about anything regarding implementing such an
alternative?
I had in mind the option to use some format other than binary, and there are
a couple, like json and others, which can be combined with server status.
For example, how many clients a proxy can handle, or current CPU or memory
utilization.
Some of this is already implemented in the manager interface, and I believe
that it would be simple enough, as a starter, to monitor the service
based on the manager interface.
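
For example (assuming a proxy listening on the default port), a starter
could poll the existing manager interface like this:

  squidclient -h 127.0.0.1 -p 3128 mgr:info
  squidclient -h 127.0.0.1 -p 3128 mgr:5min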

Ideas? Thoughts? Arguments? Other angles and directions?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Compilation fix for v5 r14973

2016-12-10 Thread Eliezer Croitoru
Thanks!

5 already? It seems like a century has passed and we have moved from one 
level to another.

Just to update, as a side note: the automation of squid builds on my side is 
moving forward each release.
The last time I released binaries it took me 10% of the time compared to the 
first time I started automating the build.

Currently I am targeting CentOS, but Debian and Ubuntu have also been on the 
table for a very long time.
Gentoo, Arch and Alpine are almost always on the latest, so there is not much 
work there on our side.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Saturday, December 10, 2016 10:55 PM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [PATCH] Compilation fix for v5 r14973

On 11/12/2016 6:12 a.m., Christos Tsantilas wrote:
> I applied the patch, however a problem still exists. The icmp pinger does 
> not build correctly.
> We should add the libsbuf library to the pinger libraries, but still there are 
> references to HistStat.cc file (maybe add HistStat stub files for pinger?).

pinger does not use the Auth:: things, so it really should not pull them
+ dependencies in.

The correct fix I think is to refactor the Auth::Config so that the various 
global auth* directives can all be stored there. I'm working on that right now.

Amos

___
squid-dev mailing list
mailto:squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Learning from others on "bumping ssl'

2016-12-10 Thread Eliezer Croitoru
In the past Francesco mentioned that we need to learn from others, and in his
example it was about memory management, from Varnish.
This time I want to bring RedWood to the table:
https://github.com/andybalholm/redwood/

It is a tiny proxy which started as an ICAP service but turned into
a full-fledged proxy written in GoLang.
I compared it to squid, and in terms of speed in some cases squid is faster,
but considering stability and reliability I have seen that RedWood
handles websockets pretty nicely, and we might be able to learn a thing or
two from it.
And since it's a live mailing list, I want to add something from my table to
others:
http://ngtech.co.il/music-en/2016/09/14/follow-me-progressive-trance-mix-2015/

RedWood was working for a whole hour with ssl-bump for everything, and it
worked great with these clients:
-   Windows: IE, FireFox, Chrome, Yandex
-   Android: Firefox, Chrome, Yandex, other apps

Enjoy the music (if you like it..)

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Automake bug workaround

2016-12-08 Thread Eliezer Croitoru
What happened with this?

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Alex Rousskov
Sent: Sunday, December 4, 2016 12:17 AM
To: Squid Developers <squid-dev@lists.squid-cache.org>
Subject: [squid-dev] Automake bug workaround

Hello,

Squid build produces lots of warnings in modern build environments such as 
Ubuntu 16.04:

> make > /dev/null
> /usr/bin/ar: `u' modifier ignored since `D' is the default (see `U')
> /usr/bin/ar: `u' modifier ignored since `D' is the default (see `U')
...
> /usr/bin/ar: `u' modifier ignored since `D' is the default (see `U')
> ar: `u' modifier ignored since `D' is the default (see `U')
> ar: `u' modifier ignored since `D' is the default (see `U')
> /usr/bin/ar: `u' modifier ignored since `D' is the default (see `U')
> /usr/bin/ar: `u' modifier ignored since `D' is the default (see `U')
...

These benign warnings are most likely an ancient Automake bug awakened by recent 
environmental changes. It is becoming a well-known issue among many projects, 
AFAICT. The attached Web Polygraph patch works around this problem. The patch 
preamble has more technical references.

If you think Squid should do something like this, please consider adopting and 
adjusting this patch as needed for Squid.


Thank you,

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] FW: [squid-users] IPv6 support for PF interception

2016-12-05 Thread Eliezer Croitoru
Proposal for PF ipv6 compatibility from squid-users.


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Egerv?ry Gergely
Sent: Monday, December 5, 2016 8:34 PM
To: squid-us...@lists.squid-cache.org
Subject: [squid-users] IPv6 support for PF interception

Hi,

So, do you want IPv4/IPv6 dual-stacked transparent interception on your NetBSD 
box? Unfortunately, you are out of luck.

On NetBSD, we have three choices for packet filtering:

- Darren Reed's "IPFilter". It has known bugs for years, and looks abandoned.

- OpenBSD's "PF". It's NetBSD port is very outdated, and porting newer version 
of PF is abandoned by NetBSD developers. Squid has support for PF interception 
for IPv4 only. (Newer OpenBSD PF supports IPv6 with TPROXY, but TPROXY is not 
supported by NetBSD version of PF)

- NetBSD's "NPF". It's quite new, and missing features like TPROXY / divert 
sockets support, and Squid does not have interception code for it.

We have started working on NPF intercept support, but there's no working code yet. 
Until then, I have prepared a very simple patch for Squid - enabling IPv6 for 
PF interception. It works for me on my NetBSD 7-STABLE box.

Please review and test it, especially on OpenBSD and newer PF versions.
If it's appropriate, please commit it.

Thank you.

--- Intercept.cc.orig   2016-10-09 21:58:01.0 +0200
+++ Intercept.cc2016-12-02 22:57:24.0 +0100
@@ -336,13 +336,20 @@
  }

  memset(&nl, 0, sizeof(struct pfioc_natlook));
-newConn->remote.getInAddr(nl.saddr.v4);
-nl.sport = htons(newConn->remote.port());

-newConn->local.getInAddr(nl.daddr.v4);
+if (newConn->remote.isIPv6()) {
+newConn->remote.getInAddr(nl.saddr.v6);
+newConn->local.getInAddr(nl.daddr.v6);
+nl.af = AF_INET6;
+} else {
+newConn->remote.getInAddr(nl.saddr.v4);
+newConn->local.getInAddr(nl.daddr.v4);
+nl.af = AF_INET;
+}
+
+nl.sport = htons(newConn->remote.port());
  nl.dport = htons(newConn->local.port());

-nl.af = AF_INET;
  nl.proto = IPPROTO_TCP;
  nl.direction = PF_OUT;

@@ -358,7 +365,11 @@
  debugs(89, 9, HERE << "address: " << newConn);
  return false;
  } else {
-newConn->local = nl.rdaddr.v4;
+if (newConn->remote.isIPv6()) {
+newConn->local = nl.rdaddr.v6;
+} else {
+newConn->local = nl.rdaddr.v4;
+}
  newConn->local.port(ntohs(nl.rdport));
  debugs(89, 5, HERE << "address NAT: " << newConn);
  return true;


--
Gergely EGERVARY
___
squid-users mailing list
squid-us...@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Update refresh_pattern description regarding 'max' option

2016-12-05 Thread Eliezer Croitoru
To reproduce, just open the page:
http://unix.stackexchange.com/questions/28851/iptables-to-block-https-websites

Using Firefox.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Eliezer Croitoru
Sent: Monday, December 5, 2016 5:52 PM
To: 'Amos Jeffries' <squ...@treenet.co.nz>; squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [PATCH] Update refresh_pattern description regarding 
'max' option

I was just reading at stackexchange and I got this nasty error:
2016/12/05 17:47:05 kid1| Accepting HTTP Socket connections at local=[::]:13129 
remote=[::] FD 24 flags=9
2016/12/05 17:47:05 kid1| Accepting SSL bumped HTTP Socket connections at 
local=[::]:3128 remote=[::] FD 25 flags=9
2016/12/05 17:47:05| pinger: Initialising ICMP pinger ...
2016/12/05 17:47:05| pinger: ICMP socket opened.
2016/12/05 17:47:05| pinger: ICMPv6 socket opened
2016/12/05 17:47:05 kid1| WARNING: Cannot retrieve 
'qa.sockets.stackexchange.com:80'.
2016/12/05 17:47:05 kid1| assertion failed: 
../../squid-4.0.16/src/FwdState.cc:268: "!EBIT_TEST(entry->flags, 
ENTRY_FWD_HDR_WAIT)"
2016/12/05 17:47:05| removing PID file: /var/run/squid.pid
2016/12/05 17:47:05| Pinger exiting.
2016/12/05 17:47:15| Pinger exiting.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Wednesday, November 30, 2016 1:56 PM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [PATCH] Update refresh_pattern description regarding 
'max' option

On 21/11/2016 9:58 a.m., Garri Djavadyan wrote:
> The patch adds description for undocumented behavior related to 'max'
> option of 'refresh_pattern' directive. I mean, 'max' option sets value 
> for 'Cache-Control: max-age' header sent by Squid upstream. Feel free 
> to modify the description. Thanks.
> 

IIRC it is not supposed to affect just any request. Only;
 a) the revalidation updates to HIT content need max-age, and that is to ensure 
newer data comes back.
 b) sibling cache requests, because the content retrieved from a sibling cache 
is supposed to be cacheable (thus within max storable age).

That (b) situation is one we probably should remove. But it needs someone to 
analyse the use cases and figure out what should be going on now in the 
HTTP/1.1 era.

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC] VISIO Diagrams assistant for the wiki.

2016-09-27 Thread Eliezer Croitoru
I got funds from my steady workplace for Visio, and I am thinking about
illustrating a couple of diagrams for the Wiki.
I am looking for some guidance in finding the right diagrams.
Maybe I will find someone to help me with a couple of common symbols which can
be donated to the project.

Please comment on the initiative, about how and in what way it can help.

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile+WhatsApp: +972-5-28704261
Email: elie...@ngtech.co.il
 


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] "Splicing" bumped requests to resolve\workaround WebSockets issues.

2016-07-23 Thread Eliezer Croitoru
Hey Alex,

By saying that I didn't find it, I only mean that *I* couldn't find it with
my rusty email search, while looking for something.
It doesn't state at all that it was not documented or that someone didn't
present enough details.
A search and lookup can end up more than once in a state of lost mind.
This is one of the reasons I do not like google and a couple of other search
backends.
For some minds "the search ends in google" literally means that if it was
not found there then there is no other way to find something.
For me the next step would be to ask somebody else with a fresh mind and a
grasp of the concept\idea\subject, but in many cases I start with my
surroundings rather than emailing someone.
In this specific case I cannot ask anyone near me, since hardly anyone even
knows HTTP, not to mention Squid.

So I can only say:
Thanks for bearing with me :D

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Monday, July 18, 2016 9:07 PM
To: squid-dev@lists.squid-cache.org
Cc: Eliezer Croitoru
Subject: Re: [RFC] "Splicing" bumped requests to resolve\workaround
WebSockets issues.

On 07/17/2016 02:34 PM, Eliezer Croitoru wrote:
> I remember some things vaguely and this is why I didn't quote anything.
> I tried searching for something in the squid-dev list or irc but I 
> couldn't find it.

For the future, I hope you will document your vague memories without saying
that somebody else did not present enough details.


On 07/18/2016 12:13 AM, Amos Jeffries wrote:
> that would be nice to have. But is not one of the things holding
> Squid-4 in beta.

Agreed on both counts.

Alex.


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] "Splicing" bumped requests to resolve\workaround WebSockets issues.

2016-07-17 Thread Eliezer Croitoru
Alex, thanks for clearing things up.
I remember some things vaguely and this is why I didn't quote anything.
I tried searching for something in the squid-dev list or irc but I couldn't
find it.

"tunnel after bump" is indeed the right term and despite to what some think
in many cases the issue is not certificate pinning but...
A specially crafted binary protocol that cannot be intercepted by an HTTP
proxy.

About on_unsupported_protocol, I am assuming it's part of:
http://wiki.squid-cache.org/Squid-4?highlight=%28on_unsupported_protocol%29

The test cases I can think of are these:
- CONNECT of a pinned-certificate-based connection (MS, SKYPE)
- CONNECT of a non-TLS-based connection (SKYPE)
- CONNECT of an http websocket connection (WHATSAPP?)
- CONNECT of an HTTPS-based connection, non-websocket (a simple banking site)
- CONNECT of an HTTPS-based websocket connection (the CentOS\Fedora cockpit
has these; other suggestions are welcome)
- an intercepted connection for each of the cases above

I think that when we can test each and every one of these
cases (successfully), we can move forward from beta to the next release
(only for the bump, splice, tunnel, on_unsupported_protocol aspect of squid).
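
(If I read it right, the squid.conf usage would be along these lines - a
sketch to be verified, not a tested configuration:

on_unsupported_protocol tunnel all

which should tunnel anything squid fails to parse on a bumped or intercepted
connection.)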

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Sunday, July 17, 2016 10:39 PM
To: Eliezer Croitoru; squid-dev@lists.squid-cache.org
Subject: Re: [RFC] "Splicing" bumped requests to resolve\workaround
WebSockets issues.

On 07/15/2016 04:29 AM, Eliezer Croitoru wrote:
> The issue:
> 
> Clients are issuing secured connections which contains WebSockets
> internally and squid HTTP parsing breaks these connections.

> Another related issue which deserves attention:
> 
> Certificate pinning and connection breakage.
> 
> Currently we cannot determine for many connections what the "issue" is:
> the bumping itself, or the breakage of a WebSocket HTTP connection.



> An acceptable solution:
> 
> Alex mentioned the option to splice a bumped connection.  
> 
> I do not know exactly what Alex meant since not many details were
> presented.

I do not know exactly what Alex meant either, since you provided no
source for that alleged opinion of Alex's.


> As I understand, it would not be possible to do this kind of splice
> without bumping first.

I recommend avoiding "splice after bump" terminology because, in SslBump
context implied by the word "bump", that combination makes no sense: It
is not possible to splice bumped connections.

I suggest using "tunnel after bump" instead. Please note that "tunnel"
(not "splice") is one of the on_unsupported_protocol actions.


HTH,

Alex.


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC] "Splicing" bumped requests to resolve\workaround WebSockets issues.

2016-07-15 Thread Eliezer Croitoru
I want to understand the way a WebSocket Splice would work.

The issue:

Clients are issuing secured connections which contain WebSockets internally,
and squid's HTTP parsing breaks these connections.

From a security standpoint, many companies would not like the option to
"smuggle" data through a proxy using HTTP.

 

Another related issue which deserves attention:

Certificate pinning and connection breakage.

Currently we cannot determine for many connections what the "issue" is:
the bumping itself, or the breakage of a WebSocket HTTP connection.

 

An acceptable solution:

Alex mentioned the option to splice a bumped connection.

 

I do not know exactly what Alex meant, since not many details were presented.

How complex would it be to add an option to "splice" (maybe already done) a
bumped HTTP connection?
For WebSockets to be supported, we just need to dump the request headers onto
the wire and "splice" everything back.

I was thinking about adding (if not already there) a "Connection: close"
header, to try to verify that at some level the connection would be closed
properly by a well-behaved server.

It's not "secure" for many places, but I think it could be a pretty
straightforward workaround for this administrative issue.

I assume that the same solution can be applied to both regular and secured
sockets\connections.

 

As I understand, it would not be possible  to do this kind of splice without
bumping first.

 

Another related subject is CONNECT-based TCP connection smuggling.

The scenario is that a client issues a TCP connection using the CONNECT
method, while it may actually be a wrapped HTTP connection.

 

I would only like to get feedback, to make sure that my understanding of the
complexity of the subject is headed in the right direction.

 

Thanks,

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] HTTP2 push related question.

2016-06-22 Thread Eliezer Croitoru
I am having trouble understanding the benefits of HTTP/2 push messages,
and I am looking for a starting point on how to approach the subject.
I am sure that there are applicable uses for it, and I remember that XMPP
and many other protocols already use this kind of feature.
But I still do not understand in what way it will extend the protocol.
The situation for a PUSH, as I understand it, would be when you kind of "trust"
the origin server, and for specific applications.
Today normal web pages already push data to web browsers, but with PUSH,
as I understand it, a 1GB file, for example, can be pushed to
the client.

Any pointers are welcome,
Eliezer

----
Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] ICAP external acl services

2016-06-22 Thread Eliezer Croitoru
Thanks for the comments.

It's much clearer now than before.

The following is the answer to all of my questions:

Simplicity. Speed. Flexibility.

And not having to teach admins a major complicated protocol just to script
a yes/no decision. The basic helper I/O protocol we have already seems
to be blowing some people's minds.

For me it didn't blow my mind at the start, but later things got a bit
complicated as the complexity of things grew.
But when someone answers something, it helps many more people than plain docs alone.

Eliezer 
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Do not hide important/critical messages

2016-06-15 Thread Eliezer Croitoru
Thanks!

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Thursday, June 16, 2016 1:24 AM
To: squid-dev@lists.squid-cache.org
Subject: Re: [squid-dev] [PATCH] Do not hide important/critical messages

On 12/04/2016 2:59 a.m., Alex Rousskov wrote:
> On 04/09/2016 10:42 PM, Amos Jeffries wrote:
>> On 29/03/2016 12:44 p.m., Alex Rousskov wrote:
>>> unpatched Squid console only says:
>>>
>>>   2016/03/27 14:19:48.297| SECURITY ALERT: By user agent:
>>>   2016/03/27 14:19:48.297| SECURITY ALERT: on URL: dut70.test:443
>>>
>>> A patched Squid produces the expected three console lines:
>>>
>>>   2016/03/27 15:25:42| SECURITY ALERT: Host header forgery detected...
>>>   2016/03/27 15:25:42| SECURITY ALERT: By user agent:
>>>   2016/03/27 15:25:42| SECURITY ALERT: on URL: dut70.test:443
> 
>>> If this v3.5 patch is accepted in principle, I hope somebody 
>>> volunteers a trunk port (which should not be difficult).
> 
> 
>> This appears to be just a polished implementation of the intended
>> debugs() original design. So in principle it's already acceptable.
> 
>> To get into v3.5 it does need to go in through v4 first though.
> 
> 

Now that the v4 port is done and working, I've applied this to 3.5 as
rev.14058.

Amos


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



[squid-dev] Testing secure ICAP service.

2016-06-14 Thread Eliezer Croitoru
I lost the thread which mentioned that ICAPS needs to be tested, so I am
starting a new one.

 

I have upgraded the GoLang ICAP library that I am using to support icaps://
and I am starting to test 4.0.X with a dummy ICAP service.

Anything special that should be tested\verified?

 

Currently I am dry-running it with a DONT_VERIFY flag (squid side).
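
For anyone who wants to reproduce the dry run, a minimal squid.conf sketch
would be along these lines (the service name, host, port, and the exact
flag spelling here are my assumptions):

  icap_enable on
  icap_service testIcaps reqmod_precache tls-flags=DONT_VERIFY_PEER icaps://icap.example.com:11344/reqmod
  adaptation_access testIcaps allow all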

I am planning to upgrade my ICAP service to support TLS and test it with
real service behind it.

The binaries of the service are at:

http://ngtech.co.il/squid/icap-testing-service/

 

The service by default tries to read the "key.pem" and "cert.pem" from the
startup directory.

The library sources are at: https://github.com/elico/icap

I will later publish the source code for the dummy service.

 

Eliezer

 

----

Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] client header mangling

2016-06-09 Thread Eliezer Croitoru
I am trying to understand, so bear with me for a couple of seconds.
I have seen that there are pages\servers which do not list User-Agent in the
Vary response header while still taking it into account.

On the caching side of the picture, we store an object which will never be served.
The HIT ratio is a whole other part of the picture.

Since I am not inside the code, but I do try to understand: currently, what
happens?
How many lookups are done per request? Do we run an object lookup after
the response headers are received from the server?
Can we predict a Vary object based on the request only? (I assume it would be
an estimate rather than an absolute certainty, if possible at all.)
Also, let's say we have a 1k page ahead: would we want it to be fetched from the
disk\ram store rather than from the origin server, after we told the server we
want the object?

I am almost sure that reducing the number of disk- and RAM-stored objects should
be a goal in itself if we cannot "dig" them up from RAM or disk later for any use.

A request_header_replace can work only for "generic" cases, such as requests
without a language preference like "br" added by browser add-ons (see the sketch below).
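
As an illustration only, this is how the existing (post-cache) directives pair
up today; note that request_header_replace applies only to headers denied by
request_header_access, and the header values here are just examples:

  # normalize the outgoing Accept-Language header
  request_header_access Accept-Language deny all
  request_header_replace Accept-Language en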

Now a step further: I can write a tiny ICAP service that will "handle" common
Vary headers from Firefox and other browsers, to test how this affects caches in
general.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Amos Jeffries
Sent: Tuesday, June 7, 2016 2:10 PM
To: Squid Developers
Subject: [squid-dev] [RFC] client header mangling

I've been looking at ways to resolve the long Vary discussion going on in 
squid-users with a patch that we can accept into mainline. What they (joe and 
Yuri) have at present works, but only with extra request_header_replace config 
preventing integrity problems.

One way to make useful progress would be to finally add the recurring request 
for request_header_access/replace to work on client messages in a pre-cache 
doCallouts hook rather than only a post-cache hook.

I am imagining this being done on the adapted request headers after ICAP, eCAP 
and URL-rewrite have all done their things. And using the same request_header_* 
directive ACL lists as for outbound traffic.

Any alternative ideas or objections?

Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Fast SNI peek

2016-05-17 Thread Eliezer Croitoru
I will try to somehow schedule this test in some of my spare time, after
I finish with a couple of other things.


Thanks,
Eliezer


On 16/05/2016 22:03, Alex Rousskov wrote:

On 05/16/2016 01:51 AM, Eliezer Croitoru wrote:


I have a question about this specific file and SNI peek and splice in general.

Your question is not specific to this patch/thread: AFAIK, the patch
does not change whether/how Squid validates SNI.



For the scenario in which the SNI declares www.google.com but the
destination IP address does not belong to that domain (e.g., a default
Apache vhost or any other domain), what happens? And specifically, what
about the request splicing? In more detail, my concern is that if some
software fakes the SNI, knowing that the destination will never be the
requested one but some default of another domain, will the request be
spliced anyway?

Sorry, I do not know the exact answers to your questions. Please note
that the answers may depend on whether Squid intercepts or forwards
connections and on the SslBump step during which Squid splices the
connection. Researching this (and documenting any non-trivial answers on
Squid wiki) would be useful.


Cheers,

Alex.



___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Fast SNI peek

2016-05-16 Thread Eliezer Croitoru
Hey Alex,

I have a question about this specific file and SNI peek and splice in general.
For the scenario in which the SNI declares www.google.com but the destination IP
address does not belong to that domain (e.g., a default Apache vhost or any
other domain), what happens?
And specifically, what about the request splicing?
In more detail, my concern is that if some software fakes the SNI,
knowing that the destination will never be the requested one but some default
of another domain, will the request be spliced anyway?
I am not sure what squid does now, and whether something should be changed or not.
I do believe that more than one admin will want to enforce a DNS resolution
policy on such requests, and to decide what to do with "rogue" requests.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-dev [mailto:squid-dev-boun...@lists.squid-cache.org] On Behalf Of 
Alex Rousskov
Sent: Friday, May 13, 2016 11:38 PM
To: Squid Developers
Subject: Re: [squid-dev] [PATCH] Fast SNI peek

On 05/13/2016 11:07 AM, Christos Tsantilas wrote:

> mode | trunk | fast-sni
> SS1  | 100%  | 100%
> SS2  |  22%  |  69%
> SS3  |  16%  |  26%

The above [slightly adjusted by me] table needs an explanation and a few 
disclaimers.

SSN in the first column means "splicing at SslBump stepN". That is, we have 
tested recent trunk and fast-sni branch code using these three SslBump 
configurations:

  # SS1 (baseline):
  ssl_bump splice all

  # SS2 (peek at SNI -- the focus of this project):
  ssl_bump peek step1
  ssl_bump splice all

  # SS3 (peek at SNI then peek at the server certificate):
  ssl_bump peek step1
  ssl_bump peek step2
  ssl_bump splice all

The percentage shown is "Squid performance" compared to SS1/baseline where no 
peeking is performed. Higher numbers are better.

For example, the SS2 test shows that trunk SNI peeking performance is only 
about 20% of the "immediately splice everything without peeking"
baseline. In other words, trunk wastes ~80% of performance on SNI peeking. The 
optimized version loses only about 30%, approaching 70% of "ideal" performance.

Please note that SS3 includes certificate validation so it is not an estimate 
of step1+2 peeking performance alone; there were even DNS requests during step3 
in our rough tests. This could be perfected further, but since our focus is on 
SNI peeking (SS2), we did not polish.
The significant overheads of those additional activities probably explain why
SS3 trunk and fast-sni performance do not differ as much as performance
during SS2.

Absolute performance of trunk and fast SNI branch during SS1 tests was about 
the same, so 100% baseline is the same for both code flavors and all the 
reported percentages can be compared with each other.

These results are based on a micro-test using a best-effort Polygraph workload 
(i.e., each Polygraph robot is configured to send the next requests only after 
getting a response to the previous one). There were
1000 robots and 1000 origin servers during these tests. We minimized irrelevant 
activity by sending only HTTPS HEAD requests and intercepting SSL traffic 
(i.e., no CONNECTs). We are publishing percentages and not absolute request 
rates or response times that do not estimate true sustained performance in a 
realistic environment due to these simplifications. Suffice to say that Squid 
was doing more than 1000 HTTPS requests per second in all these tests.

The results Christos shared did not come from the very last version of trunk 
(currently untestable anyway) and fast SNI branches, but they should be within 
a few percentage points of the latest code.


Cheers,

Alex.

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev



[squid-dev] [RFC] Dynamic Hostnames and urls and StoreID, what do you think?

2016-03-19 Thread Eliezer Croitoru
Currently the Internet is in more of a "static" state, and there are a
couple of moving parts in this whole big system.

Most of it is "bound" by IPv4 and the domain name system.
With developments in encryption, including Diffie–Hellman and a couple of
other ideas, I have seen that it is possible that in the future (distant
or not) things may change in how they work.
Currently Google implements a couple of "moving" targets in their systems
that give them the option to redirect from one point to another at a
couple of layers\levels. It's nice, but it means that StoreID is now
built on the assumption, or the idea, of semi-static targets.


From the point of view of the admin or the script, the target needs to be
known in advance of the actual fetch. In the not-so-long past,
Google\YouTube "cachers" used a nice trick that was described by Amos as a
"redirection attack" in order to prepare for such an attack. Sometimes it
was on specific hosts, and in other cases on specific urls\objects.
I tried to track this issue for a very long time, and it seems that these
attacks were mitigated by Google\YouTube by adding the HTTPS layer.


Now that we have ssl-bump in very good shape, I was wondering to myself:
what would be the next move of the Google\YouTube service?

Moving targets around the globe 24/7?
What do Google\YouTube actually care about when some local ISP or an
internal proxy caches their content services?


I am looking for a couple of new angles on the subject. Please share
your opinion about it, and if you think I have a wrong one, please add
comments.


Eliezer

* Saying to someone as a joke in the middle of work, "Somebody from
Google was just looking for you," was one of the most devious things I have
heard in my life!!!

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] assertion failed: Write.cc:41: "!ccb->active()"

2016-03-13 Thread Eliezer Croitoru

I would like to respond to some of the things in this post related to V4.

First, 3.5 has been very stable for a very long time now.
It has a couple of bugs in it, and it would be good to somehow make them go
away, but I must highlight a couple of things which might not be clear.


Atomic patches are great, and they solve things in a very good way.
But every bug has its own "life", and we can understand that it would
not be possible to write a perfect product without a couple of bugs\errors\typos.
I, like many, believe in perfection, but from the last test (RAM ONLY) I
ran, it seems that the hardware is not really ready for certain heat levels.
The DDR2 ECC cards started popping out of their sockets in the middle of
a simple test, even though it was pretty cold in the room...


+1 for trunk, though I still have a couple of doubts about what I
understand of the take1 and take2 steps.
I will try to see how the service works after the patches are
applied to V4.


I have a couple more build nodes, and learning the automation keeps me busy.

Eliezer

On 10/03/2016 23:35, Alex Rousskov wrote:

On 03/10/2016 12:14 PM, Christos Tsantilas wrote:


  I am attaching two patches for this bug.


I will re-summarize the problem we are dealing with using higher-level
concepts so that it is easier to grok what Christos is talking about:

1. Ftp::Client cannot deal with more than one FTP command at a time.

2. Ftp::Server must _delay_ reading the next FTP command due to #1.

3. Delaying is complicated because Ftp::Server receives responses from
"Squid" (ICAP, errors, caching, etc.), and not Ftp::Client itself:
Ftp:Server may receive a response for the current FTP command
long _before_ Ftp::Client is done dealing with that same command!
Christos has highlighted a few specific cases where that happens.

[ In HTTP, this problem is much smaller because the two Squid HTTP
  sides do not share much state: New Http::Client objects are created
  as needed for the same ConnStateData object while the old ones
  are finishing their Http::Client jobs. ]

4. The current Squid code (v3.5 and trunk) contains a mechanism for #2,
but that mechanism does not handle a now-known race condition. The
bug leads to the Write.cc assertion. Our take1 patch fixes that bug.

5. While working on #4, we realized that the existing mechanism for #2
relies on Ftp::Client existence and other conditions that may not
materialize. If things go wrong, Ftp::Server may get stuck waiting
for Ftp::Client that either did not exist or did not end the wait.
Our take2 patch revises the mechanism for #2 to be much more robust.
We do not know of any specific way to hit the bugs fixed by take2,
but that does not mean it is (and will remain) impossible.

Take2 also relays FTP origin errors to the FTP user in more cases.



One simple patch for squid-3.5 (the t1
patch) and one more complex (the t2 patch). The simple patch solves the bug
for now, but may leave other similar bugs in squid.


Amos, do you want us to port take2 to v3.5? The take1 patch for v3.5 is
enough to fix the known assertion. Take2 fixes that assertion as well,
but it is bigger because it also fixes design problems that may lead to
other bugs in v3.5. Which one do you want in v3.5?

For trunk/v4, there is no doubt in my mind that we should use take2 (or
its polished variant).


The rest is my take2 review.



+bool clientSideWaitingForUs; ///< whether the client is waiting for us


Let's avoid confusing "client side" and "client" terms in new code,
especially when it is Ftp::Server that is waiting:

   /// whether we are between Ftp::Server::startWaitingForOrigin() and
   /// Ftp::Server::stopWaitingForOrigin() calls
   bool originWaitInProgress;



+void
+Ftp::Relay::swanSong()
+{
+    if (clientSideWaitingForUs) {
+        CbcPointer<ConnStateData> mgr = fwd->request->clientConnectionManager;
+        if (mgr.valid()) {
+            if (Ftp::Server *srv = dynamic_cast<Ftp::Server*>(mgr.get())) {
+                typedef UnaryMemFunT<Ftp::Server, int> CbDialer;
+                AsyncCall::Pointer call = asyncCall(11, 3, "Ftp::Server::stopWaitingForOrigin",
+                    CbDialer(srv, &Ftp::Server::stopWaitingForOrigin, 0));
+                ScheduleCallHere(call);


and


 void
 Ftp::Relay::serverComplete()
 {
     CbcPointer<ConnStateData> mgr = fwd->request->clientConnectionManager;
     if (mgr.valid()) {
+        if (clientSideWaitingForUs) {
+            if (Ftp::Server *srv = dynamic_cast<Ftp::Server*>(mgr.get())) {
+                typedef UnaryMemFunT<Ftp::Server, int> CbDialer;
+                AsyncCall::Pointer call = asyncCall(11, 3, "Ftp::Server::stopWaitingForOrigin",
+                    CbDialer(srv, &Ftp::Server::stopWaitingForOrigin, ctrl.replycode));
+                ScheduleCallHere(call);
+                clientSideWaitingForUs = false;
+            }
+        }



Let's move this nearly duplicated code into a new

Re: [squid-dev] [PATCH] Bug 7: Headers are not updated on disk after 304s

2016-03-12 Thread Eliezer Croitoru

I will try to follow up at:
http://bugs.squid-cache.org/show_bug.cgi?id=7

Eliezer

On 11/03/2016 20:16, Alex Rousskov wrote:

On 03/11/2016 02:17 AM, Amos Jeffries wrote:

On 11/03/2016 2:59 p.m., Alex Rousskov wrote:

 The attached compressed patch fixes a 15+ years old Bug #7 [1] for
the shared memory cache and rock cache_dirs. I am not aware of anybody
working on ufs-based cache_dirs, but this patch provides a Store API and
a cache_dir example on how to fix those as well.

   [1] http://bugs.squid-cache.org/show_bug.cgi?id=7




Ah. I'm getting deja-vu on this. Thought those two cache types were
fixed long ago and recent talk was you were working on the UFS side of it.


There was some noise about this bug and related issues some months ago.
It was easy to get confused by all the mis[leading]information being
posted on bugzilla, including reports that "the bug is fixed" for some
ufs-based cache_dirs. I tried to correct those reports but failed to
convince people that they do not see what they think they see.

After this patch, the following cache stores (and only them) should
support header updates:

   * non-shared memory cache (in non-SMP Squids only)
   * shared memory cache
   * rock cache_dir

Needless to say, the posted patch does not fix all the problems with
header updates, even for the above stores. For example, the code that
decides which headers to update may still violate HTTP in some ways (I
have not checked). The patch "just" writes the headers computed by Squid
to shared memory cache and to rock cache_dirs.

Moreover, given the [necessary] complexity of the efficient update code
combined with the [unnecessary] complexity of some old Store APIs, I
would be surprised if there are no new bugs or problems introduced by
our changes. I am not aware of any, but we continue to test and plan to
fix the ones we find.



Besides unavoidable increase in rock-based caching code complexity, the
[known] costs of this fix are:

1. 8 additional bytes per cache entry for shared memory cache and rock
cache_dirs. Much bigger but short-lived RAM _savings_ for rock
cache_dirs (due to less RAM-hungry index rebuild code) somewhat mitigate
this RAM usage increase.

2. Increased slot fragmentation when updated headers are slightly larger
than old ones. This can probably be optimized away later if needed by
padding HTTP headers or StoreEntry metadata.

3. Somewhat slower rock cache_dir index rebuild time. IMO, this should
eventually be dealt with by not rebuilding the index on most startups at
all (rather than focusing on the index rebuild optimization).


Hmm. Nod, agreed on the long-term approach.



The patch preamble (also quoted below) contains more technical details,
including a list of side changes that, ideally, should go in as separate
commits. The posted patch is based on our bug7 branch on lp[2] which has
many intermediate commits. I am not yet sure whether it makes sense to
_merge_ that branch into trunk or simply commit it as a single/atomic
change (except for those side changes). Opinions welcomed.




Do you know how to do a merge like that with bzr properly?
  My experience has been that it only likes atomic-like merges.


I sense a terminology conflict. By "merge", I meant "bzr merge". Trunk
already has many merged branches, of course:

   revno: 14574 [merge]
   revno: 14573 [merge]
   revno: 14564 [merge]
   ...

By single/atomic change, I meant "patch < bug7.patch". Merges preserve
individual branch commits which is good when those commits are valuable
and bad when those commits are noise. In case of our bug7 branch, it is
a mixture of valuable stuff and noise. I decided to do a single/atomic
change to avoid increasing the noise level.



in src/StoreIOState.h:

* if the XXX about file_callback can the removal TODO be enacted ?
  - at least as one of the side-change patches


Yes, of course, but out of this project scope. We already did the
difficult part -- detected and verified that the API is unused.
Hopefully, somebody will volunteer to do the rest (and to take the
responsibility for it).



* the docs on touchingStoreEntry() seem to contradict your description
of how the entry chains work. Now. You said readers could read whatever
chain they were attached to after the update switch. The doc says they
only ever read the primary.


Done: Clarified that the primary chain (which the readers always start
with) may become secondary later:


 // Tests whether we are working with the primary/public StoreEntry chain.
 // Reads start reading the primary chain, but it may become secondary.
 // There are two store write kinds:
 // * regular writes that change (usually append) the entry visible to all 
and
 // * header updates that create a fresh chain (while keeping the stale one 
usable).
 bool touchingStoreEntry() const;


The readers do not matter in the current code because reading code does
not use this method, but that may change in the future, of course.




Re: [squid-dev] [PATCH] shared_memory_locking

2016-03-10 Thread Eliezer Croitoru

Well, I'm with you on this.
I haven't reviewed this code yet, and the words make sense, so...
debugs() this or debugs() that, or abort() this or that, relates to
crashes which I have not seen for a very long time.


Eliezer

On 11/03/2016 06:39, Alex Rousskov wrote:

On 03/10/2016 08:32 PM, Eliezer Croitoru wrote:


>Can this be verified in any way?

Verify that I am not imagining things? Sure! If looking at fatal()
itself is not enough to realize that it does not log the FATAL message
until _after_ calling a few "heavy" functions (which may fail on their
own), then just call "abort()" from releaseServerSockets() to emulate
such a failure, and you will not see any FATAL messages from Squid, no
matter how many times it calls fatal() or fatalf().

Alex.


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] shared_memory_locking

2016-03-10 Thread Eliezer Croitoru

Just wondering,

Can this be verified in any way?
I have seen a couple of times in the past that things were not as expected,
but I do not have enough experience reading debug output.


Eliezer

On 11/03/2016 05:28, Amos Jeffries wrote:

> To partially address your concern, I set debugs() level to 5 and removed
> the ERROR prefix. This still helps with triage when higher debugging
> levels are configured and makes the new code consistent with most other
> fatal() calls in Segment.cc.

If there is a bug in fatalf(), that's something else that needs to be
fixed. But there has been no sign that I'm aware of of any such issue
in any of the hundreds of other calls to it. Please don't make bad
code here just for that.

Amos


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC] --> Redirect attack

2016-02-19 Thread Eliezer Croitoru

Thanks for the detailed response!

From what I understand so far, it seems to me that the main reason
that nobody has created a "standard" for MITM proxy authentication is
that the current HTTP implementations and UAs are somehow aware of the
wildness of networks and the Internet world.


While reviewing OAuth and SSO technologies, I noticed that it is very
"simple" to attack them using some kind of MITM, and I was kind of amazed
by what issues might arise if the UA in some way contacts some public
Internet system.
It gets even more complicated with NAT, and until now the solutions for
SSO and OAuth have been to somehow use an internal authentication and
authorization system at the UA\session level.

Maybe I missed something, but I am unsure. Authorization and
authentication can work together in a way, or in separate ways (leaving
squid and other acls aside).

What authorization helpers have I missed?
The ones I know about that a transparent (MITM) proxy can use are:
- Radius (IP)
- Time_quota (IP)
- session (IP)

These are good for specific usages and will probably not be good enough
for full authentication and authorization in VNC\RDP\NAT environments.


I do not dispute the risks, which do exist, and organizations do
consider them over and over again. Some use VPN connections to somehow
evade every HTTP\HTTPS MITM proxy in the wild, but many do not require
that, and I really do not know how they make sure that the connections
are secure other than by TLS\SSL.


One of the really hard things I have seen lately in the security
industry was antivirus products doing machine-local HTTPS MITM. And
the harder thing was that these programs implement some kind of "sensor"
net which sends data to their main systems. The weirdest thing I have
seen is that most of them didn't even use HTTPS connections to these
"sensor" networks' main API systems.


There is a whole other aspect to CDN security: many offer free
domain-level SSL.


The dilemma that remains is:
if these security firms use plain HTTP for so many delicate things, why
should I trust them, and why do I use HTTPS?


As a result of this dilemma and the thread-opening subject, I started
to think about writing a wiki article. In some relation to squid,
without providing any code, would it be right to describe this kind of
attack in some technical detail?

The pros:
- It will give another aspect from which to look at the attack.
- It will raise awareness of the subject a bit.
- The time to research the subject was already invested.

The cons:
- Many wrote about the subject in the past.
- It's not a caching subject but rather a security one.

What do you and maybe others think about security-related articles in
the wiki?


Eliezer

On 18/02/2016 19:19, Amos Jeffries wrote:

On 19/02/2016 5:28 a.m., Eliezer Croitoru wrote:

After playing with a couple of SSO ideas in general, I noticed that something
similar can be done with squid, though it will not be as secure as
Kerberos and a couple of other nice implementations.

Details:
A transparent proxy cannot authenticate users at the session level. The
currently known available options are using some session db (by IP),
Radius, or LDAP (by IP), and a couple of others.
So for a couple of specific environments which use either NAT or multiple
VNC\RDP sessions, there is no option to authenticate a specific user, only
a whole IP\machine.

The real solution would be to somehow make it possible for a browser to
know about the option of a "transparent proxy" and to also trust it.
Also, this specific network must be secured enough that no rogue dhcp
services pop up, and a couple of other network-level restrictions must be
applied to ensure that the browser can authenticate to the proxy in the
case where it is there. (There might be some other attacks which I didn't
mention.)



"transparent proxy" is a badly overloaded phrase.
  * Taken literally it means the same as "forward-proxy" in HTTP terminology.
  * Taken as a colloquial way of saying "interception proxy", aka. MITM.
It is by definition not possible to have trust since the identity of the
proxy is unverifiable.



Another approach to the issue:
Currently browsers use cookies to authenticate banks, police, health and
many other systems. They are not highly secured but they are being used
in many places.


Let me stop you right there.

In order to set a Cookie that is in any way equivalent to the
authentication/authorization sessions, the proxy first has to be able to
determine whether any two random connections came from the same client UA.
   If it is capable of doing that already, then that is how the
authorization session helper should be operating. No need for Cookie to
be involved.

The ability to set Cookies for generic UA->interceptor authorization is
an *outcome* of having a session. Not the reverse.

They could be used for the much more limited scope o

[squid-dev] [RFC] How would you call this "authentication" way?(a bit long)

2016-02-18 Thread Eliezer Croitoru
After playing with a couple of SSO ideas in general, I noticed that something
similar can be done with squid, though it will not be as secure as
Kerberos and a couple of other nice implementations.


Details:
A transparent proxy cannot authenticate users at the session level. The
currently known available options are using some session db (by IP),
Radius, or LDAP (by IP), and a couple of others.
So for a couple of specific environments which use either NAT or multiple
VNC\RDP sessions, there is no option to authenticate a specific user, only
a whole IP\machine.


The real solution would be to somehow make it possible for a browser to
know about the option of a "transparent proxy" and to also trust it.
Also, this specific network must be secured enough that no rogue dhcp
services pop up, and a couple of other network-level restrictions must be
applied to ensure that the browser can authenticate to the proxy in the
case where it is there. (There might be some other attacks which I didn't
mention.)


Another approach to the issue:
Currently browsers use cookies to authenticate to banks, police, health and
many other systems. They are not highly secure, but they are being used
in many places.


Maybe we can use them for something?
Since the first days of cookies, security restrictions have been applied to
them to prevent phishing, impersonation, etc.
Many vendors "secure" the cookies based on user-agent string, timestamps,
origin IP, and many other properties. These, in very specific cases, can
make it somewhat harder for the cookie finder to use them, but it still
might not be enough for many systems.


OK, so cookies are not 100% secure, but as long as my network is under my
"total" control, nobody can use tcpdump\wireshark on the whole client
machine without privileges, and no attack can force a network flood, it
would be kind of safe to use some cookies on some machines\clients.


What I was testing is a setup like this (which I don't know what to call,
if anything):

- Linux router with squid in intercept mode (ssl-bump)
- Linux machine with the ICAP service
- Linux machine with the HTTPS\AUTH service
- Linux machine with a shared RAM DB service
- A client with a browser (Firefox\IE\Chrome\Opera\Safari etc)

The client is expected to be authenticated to the AUTH service.
On every http request (to the WAN) the ICAP service checks for a specific
TOKEN-COOKIE.
If the TOKEN-COOKIE exists, it validates the TOKEN against the DB, and if it
is cleared for this specific IP (and maybe other properties), the TOKEN-COOKIE
is stripped from the request, the client request is passed to the origin
server, and the ICAP service returns the authenticated user to squid.
In the other case, in which the TOKEN-COOKIE is either not present or
invalid, the ICAP service redirects the client to the AUTH service page
with a special REDIRECTION-TOKEN.
If the user was not authenticated to the AUTH service, he will be asked
to enter credentials.
When the user presses LOGIN, the user is sent an authentication cookie
and receives a redirection, with a TOKEN (which can be stored in the DB),
to the originally requested page.
If the browser has a valid AUTH cookie, the user will be identified and
redirected automatically, without the need to enter any credentials.
Then the ICAP service can verify who the user is and whether the TOKEN is
valid, and the user is redirected to the originally requested page with a
Set-Cookie which contains the TOKEN-COOKIE for this specific domain.
From then on, requests to this domain and all its sub-domains will contain
the TOKEN-COOKIE, which the ICAP service will clear from any request to
them, until some pre-defined cookie expiration property triggers or cookie
abuse is auto-detected.
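
For clarity, the squid side of this setup would be wired roughly like this
(ports, addresses, and service names are placeholders; the ICAP, AUTH, and DB
services themselves live outside squid):

  # interception ports on the Linux router
  http_port 3128 intercept
  https_port 3129 intercept ssl-bump cert=/etc/squid/ca.pem generate-host-certificates=on
  icap_enable on
  icap_service tokenCheck reqmod_precache icap://192.168.0.2:1344/token
  adaptation_access tokenCheck allow all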


The above "idea" is being implemented in many SSO systems but in this 
specific use case it's being done on the whole Internet HTTP and HTTPS 
traffic.
It has couple attacks vectoring points and also lots of "holes" such as 
non bumpable HTTPS traffic.
It's not an alternative to simple forward proxy settings in browsers and 
gives really bad time to POST requests but it works for many simple use 
cases.


But what would you call this kind of proxy authentication? "Cookie"-based?
"Token"-based? SSO? "Cookie bearer"? Ideas?


Thanks,
Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Patches proposal

2016-02-18 Thread Eliezer Croitoru

I want to add a couple of cycles of mine to the subject.

On 18/02/2016 01:11, Alex Rousskov wrote:



>>>The OpenSSL local filesystem one is now called "security_file_certgen".
>>>A Redis DB helper would be "security_redis_certgen".

>>Why create a whole new binary?



>Because the "file" in its name means the data is stored to the local
>filesystem in some "flat-file" DB format. My understanding is that redis
>uses its own binary DB format.

The current helper name does not answer the question: If the current
helper was badly renamed to wrongly limit its scope, then we should
surely rename it again to avoid that limitation.


+1


>The redis version requires a whole new library linkage, its header files
>and code for its API use.

Yes, but why is that important here? Many Squid features require "whole
library linkage and header files". Those dependencies do not result in
ten Squid binary names as far as we are concerned.


+1, but... adding something.
Portability for some systems might be an issue. We can distinguish
full-fledged servers from some embedded systems (routers, switches, etc.).



>As a result the two binaries will act very
>differently when passed the same data.

They will act exactly the same from the caller point of view. The
database is one of the many internal optimizations.


OK, so we are talking about "functionality" in general.
However, I do see one specific issue with a disk vs. a Redis DB.
If for any reason the site headers contain HSTS rules and the Redis DB
(mem-only..) is restarted, then the certificate would be different, and
the client will probably (to my understanding) get some nice error page
from the browser.
So it's exactly the same only as long as the Redis DB is persistent and\or
was not restarted recently.



>With a lot of policy and security implications around those differences.

Same with configuring Squid to use different cache_dirs, but we do not
create many Squid executables because of the many cache_dir types.

..

>A generic distro builder who builds two helpers leaves the
>installers with the option to actually install only one binary if the
>other does not meet local policy/security requirements.

That sounds like a valid argument, although you seem to be invalidating
it below. Not sure our cycles are best spent accommodating such cases,
but you know better.


I am sorry, but if, for example, RedHat or any other distribution has some
issue with one helper having the option to use more than one back-end, they
are in more trouble than the builder\maintainer.

And what I mean is that this is not the first binary to do so.

>In the generic builders this is best supported by providing different
>packages with different binaries inside.

Which means the "generic builder" is no longer "generic". It is a person
that understands the difference between various Squid pieces. Such a
person can create multiple binaries using ./configure options and/or VMs.


Which should be the default for any builder. A generic builder can never
decide what to do with anything more than minimal-complexity packages
(i.e., one or a couple of binaries).



>Which gets very annoying and/or
>tricky if the difference is ./configure parameters
>(--with/--without-hiredis) to produce different binaries.

You may be right, but it is not obvious to me why different ./configure
options is such a big deal to a person who already understands the
difference between two certificate generation helpers that Squid builds
by default and who have installed enough libraries for both helpers to
be built by default.


I think that the point would be choosing the defaults rather than the
additions. For some builders, adding "hiredis" support by default means
they need to kind of "audit" or "validate" a whole other side of the
package. It might not only be adding "hiredis" to the build but also
testing a couple of other things.


I am not in a position to decide what to implement and what to do, but
"--without-hiredis" is fine for my RPMs (not saying William should
implement it). It will allow me to take into account that in the future I
may want to add this dependency to the main package, or even to package
the specific binary on its own so as not to affect the core package
dependencies, giving the system admin the option to choose the package
installation policy.


Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC] What tests would somehow clear 4.1 logically\scientifically\other as *very* stable for production?

2016-02-09 Thread Eliezer Croitoru
o 
find some re-direction or "re-think" but again beware.


With RedHat, Oracle, and many others taking care of some specific aspects
of the kernel and support, the basic testing is in our (as humans and not
some Mighty Kings) hands.


For now I have built RPMs for: CentOS, Oracle Linux, and SLES.
I have also tried to build for Amazon Linux, but I have not yet found a
way to get hold of a VM image that will work on an Ubuntu KVM hypervisor.


All these RPMs of versions 3.5.X and 3.4.X were tested manually in forward
proxy mode with some basic tests; reverse proxy mode was partially tested.
Basic TPROXY and interception tests were conducted on a 3.4.X version,
which was found stable enough to re-test only if changes to the core code
are made. I am planning to test 4.0.X in TPROXY and intercept mode, but
it is an optional test which I will only conduct in my spare time.


Thanks to all the 4.0.X beta testers so far!!!

List of practical tests:
- Forward proxy for HTTP (static objects with and without a size
declaration, dynamic content from various normal use cases such as
social networks, academic sources, search engines)
- Forward proxy for "fake HTTP" requests (I am looking for such
applications)
- Forward proxy for basic CONNECT requests with HTTPS, IRC, Skype, mail,
and a couple of other basic desktop applications
- Proxy basic cache manager HTTP(-only) functionality (no reconf or
shutdown etc)

- Forward proxy with ssl_bump and basic splice-all settings
- Forward proxy with ssl_bump and basic peek-and-splice-all settings
- Forward proxy with ssl_bump and basic peek-and-splice-most settings
(see the sketch after this list)
- Forward proxy with ssl_bump and peek-and-splice with specially
crafted SSL requests
- Forward proxy with ssl_bump and peek-and-splice with specific
applications such as Skype and Dropbox
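
A minimal sketch of the "peek and splice most" case could be the following
(the ACL name and domain are placeholders; "most" here means splicing
everything except a short bump list):

  acl bumpedSites ssl::server_name .example.com
  ssl_bump peek step1
  ssl_bump bump bumpedSites
  ssl_bump splice all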


The above list is not complete and needs a couple more pairs of eyes to
highlight specific points and also practical methods.
Please: if you have tested something and can share results which would
help me or the squid developers grasp the status of the BETA, then send
me, anyone from the squid development team, or the measurement-factory
staff an email.


This is the place to say thanks to:
- Duane Wessels
- Henrik Nordstrom
- Amos Jeffries
- Alex Rousskov

And to those who work and help at every step of the process of squid-cache
pumping bits around the globe 24x7, making the WWW better for everybody.


Eliezer Croitoru

* This is a one-pass text which is salted with many words from the world
of old and fantasy literature.


##QUOTE
On 01/02/2016 16:55, Eliezer Croitoru wrote:
> On 01/02/2016 16:23, Amos Jeffries wrote:
>> The next beta (4.0.5) should be out in the next few days.
>>
>> 4.1 (stable) will be out as soon as we have a 10 day period with no
>> major bugs existing and no new bugs being found. No certain timeline on
>> when that will occur.
>>
>> Amos
>
> ( kind of hijacking the thread due to the context... we can open a new
> thread for the responses.)
> Can we construct a list of tests that 4.1 should pass?
> A 10 days period is OK from one aspect of the picture but it doesn't
> mean that specific test cases were verified to work or not.
> Compared to older releases we would have couple very good check-marks.
> For now I still have my small testing environment which can test lots of
> basic things but I am working on a high performance hardware testing
> environment.
>
> Eliezer
##END OF QUOTE
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC] V6 connectivity testing and dns_v4_first auto setter bash script.

2016-01-28 Thread Eliezer Croitoru
I wrote a tiny bash script that I will later push into the CentOS RPMs 
and was wondering about pushing it into the squid sources.

The source at: http://paste.ngtech.co.il/pxizenek2

It was written for Linux but can be modified for FreeBSD and other *NIX
OSes.
It checks for IPv6 resolution and connectivity (ping6) and changes the
squid.conf dns_v4_first directive to on or off.
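
(For reference, the directive the script toggles is this single line in
squid.conf:)

  dns_v4_first on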

My idea was to run it every time before squid starts\restarts.

What do you think about the idea of testing IPv6 connectivity in general
before starting squid, and about automatically changing squid.conf?


Thanks,
Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Planning for experiments with Digest, Link and metalink files.

2016-01-28 Thread Eliezer Croitoru
I have been working for some time on experiments related to metalink
files, and I have a couple of things in mind about it.
For more than 4 months now I have been running my local proxy service
with live SHA256 digesting of all traffic using an eCAP module.
It's a small service and cannot take too much load, but it seems that
with digesting enabled or disabled, squid doesn't lose enough speed for
me to care.

Now I want to move to another side of my experiment and implement a web
service that uses the metalink files' data.
Since metalinks are for files or static objects, the first thing is a
simple file server.

I do not know which header to use for the digest hashes. I have seen that
the Fedora mirror server uses "Digest: TYPE=hash", but from what I
remember, Henrik was talking about Content-MD5 and similar.
I can implement both Content-MD5-style headers and Digest, but which
standards are up to date?
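
For illustration, the RFC 3230 style of digest negotiation would look
roughly like this (the URL and the hash value are made-up examples):

  GET /releases/image.iso HTTP/1.1
  Host: mirror.example.com
  Want-Digest: SHA-256

  HTTP/1.1 200 OK
  Digest: SHA-256=WZDPaVn/7XgHaAy8pmojAkGWoRx2UFChF41A2svX+T0=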


Also, the other question: how should If-None-Match or another conditional
be used (since it was meant for ETag)?


For now I am only working on full-hash matching, leaving aside any
partial-content piece matching.


The options for validating a file hash against a static web service are:
- via some POST request (which can return a 304\302\200)
- via a special If-None-Match header

Also, I had in mind a situation where the client has a couple of hashes
and wants to verify a full match against the whole set, or against at
least one from the set. What do you think should be done?


The current implementations rely on the fact that everybody uses the same
default hash, but I am almost sure that somewhere in the future people
will want to run a Digest match against some kind of salted algorithm, so
I am considering what would be the right way to allow\implement support
for such cases.

Am I dreaming too much?

Another issue is PUT requests: does it make sense for the user to attach
some Digest or other headers that will be stored with the file metadata or
metalink file, compared to uploading two files, one for the object and
another for the metalink file?


Thanks,
Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [RFC\PREVIEW] ICAP service nodown option addition.

2016-01-09 Thread Eliezer Croitoru

Thanks Alex,

Inline comments..

On 08/01/2016 17:27, Alex Rousskov wrote:

On 01/07/2016 07:03 PM, Eliezer Croitoru wrote:


I added a configurable option to the ICAP services named "nodown", but
maybe another name would be a better fit.


Please do not use negative option names. If this feature is committed
despite my objections, please consider using "always-up" or another
positive name.

Both nodown and always-up fail to describe the exact meaning of the flag.
Do you (and others) think that a longer name would be better? Something
like "options-only-suspend" or "suspend-only-by-options-fail"?




The idea of the patch is to allow the admin to rely on the periodic
OPTIONS request alone to identify the current state of the ICAP service.


If an ICAP OPTIONS request fails, will the service be marked as down? If
yes, the option name is misleading. If not, your description above is
misleading. If this feature is committed, please adjust the
documentation to clearly define whether Squid may mark a nodown=1
service as down for any reason.


I suspected that my words might not be descriptive enough, and I will try
to clear things up in the next comments.





I was thinking about using icap_service_failure_limit with a very high
limit, but that is a global ICAP configuration and not a service-specific
one.


IMO, you should add a service-specific failure-limit option instead of
adding a brand new option with an overlapping but very narrow scope.
Squid already implements the failure-limit logic on a per-service basis.
You just need to add a per-service option (that will default to the
global one).


I was thinking about it, but this feature eliminates every suspension
between the periodic OPTIONS fetches\checks.


I know it doesn't fit all setups, but I want to eliminate any test other
than the OPTIONS fetch, and the reason is that only a very high failure
limit would answer the specific attack I have seen. A simple JS just sent
about 1k requests pretty fast, and with all of them failing pretty fast,
the ICAP service was simply suspended; so just eliminating the issue
solved my problem.
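
(For context, the global workaround mentioned above would look something
like this; the numbers are arbitrary examples:)

  # tolerate up to 2000 failures per 10 minutes before suspending a service
  icap_service_failure_limit 2000 in 10 minutes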


(I cannot do a thing about a webui programmer who doesn't like to do his
job properly...)



+   [...] It allows the
+   service admin to implement his own heart-beat script, which
+   will work as a replacement for the default internal request
+   success\failure probes while the ICAP service state is UP.
+   An external heart-beat script can run under external_acl
+   and can be checked in http_access and a couple of other ACL
+   vectoring points.


IMO, this text should not be added to the new option because it talks
about controlling access to an ICAP service rather than about [not]
changing the service state which the new option controls. Referring to
adaptation_access from the new option documentation and adding a similar
hint to the adaptation_access documentation instead may be a good idea.


Not really, but I do understand what you are writing about.
The intention is very specific and not related to adaptation_access.
The idea is that the adaptation service is essential, and the proxy cannot
allow any traffic without it due to some health facility's confidentiality
policy. So in the case that the service is indeed down, the admin needs to
deny access to the whole proxy, since the proxy is useless without the
adaptation service. The external_acl is per squid port and will deny
access to the whole proxy based on the real status of the ICAP service.
If I use adaptation_access, it will lead to a situation in which the
clients' traffic does not pass inspection, which is worse than having ICAP
failures for specific wrongly-formed requests.


However, I did not test the situation with adaptation service sets, and
whether their "chained" use is based only on full suspension, or whether
it will also pass the request to the next host per request failure.

I will conduct a couple of tests this week to make sure how things work
with sets.


Do you think a wiki article would be good enough to explain the use cases
of this option, instead of the documentation?
I was thinking about writing one.


HTH,

Alex.


Again Thanks!
Eliezer

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [RFC\PREVIEW] ICAP service nodown option addition.

2016-01-07 Thread Eliezer Croitoru
While working on some ICAP issues I found a very annoying situation, for
which I wrote a small patch that *I think* can be applied without any
known risks to existing functionality.


I added a configurable option to the ICAP services named "nodown", but
maybe another name would be a better fit.
The idea of the patch is to allow the admin to rely on the periodic
OPTIONS request alone to identify the current state of the ICAP service.
Currently, the only ways to force a failing service back into the UP state
after a failure are a reconfiguration of squid or the periodic OPTIONS
fetch. Because of this, the patch doesn't make things much worse than they
are now, but it allows the admin to rely on some external_acl helper that
will use a deny_info to reflect a solid DOWN state between the periodic
OPTIONS fetches.
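
A minimal sketch of the intended usage in squid.conf (the option name
comes from my patch and may change; the service details are placeholders):

  icap_enable on
  icap_service reqSvc reqmod_precache nodown=1 icap://127.0.0.1:1344/request
  adaptation_access reqSvc allow all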


I was thinking about using icap_service_failure_limit with a very high
limit, but that is a global ICAP configuration and not a service-specific
one.


The patch link: http://paste.ngtech.co.il/phq5lqtf5

Some description in the patch at: 
http://paste.ngtech.co.il/phq5lqtf5#line-233


Thanks,
Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] c++-refactor basic_getpwnam_auth

2015-12-31 Thread Eliezer Croitoru

On 28/12/2015 17:35, Kinkie wrote:

Hi all,
   this patch (which requires the recently-posted rfc3986 patch)
refactors the basic_getpwnam_auth helper to c++.
It's been farm-build-tested and run-tested on Ubuntu Linux.

-- Francesco

Seems pretty straightforward. +1.

Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] Why squid would not allow non encrypted "https://" in a request?

2015-12-22 Thread Eliezer Croitoru

I have been wondering to myself about this for a while now.
A client can fetch http://x/y through squid using a regular netcat, or, in
case it wants to use squid for a raw TCP connection, it will use a
CONNECT request.
But squid doesn't allow clients to use it as a fully trusted HTTPS proxy,
i.e., to send the following request to squid:

GET https://www.secured.example.com/ HTTP/1.1
Host: www.secured.example.com
Other-Headers: ...

...and possibly a body
##END OF Request
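
For contrast, the only way squid lets a client reach an https:// URL today
is a CONNECT tunnel, roughly:

CONNECT www.secured.example.com:443 HTTP/1.1
Host: www.secured.example.com:443

...followed by the raw TLS bytes inside the tunnel
##END OF Request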

I do have a proxy program that supports this feature, and one use case I 
have in mind is a trusted/secured automated closed environment which uses 
the proxy to access the external world, where the proxy is the 
admin-delegated SSL enforcement authority.


I know that browsers do not implement this kind of feature, but I think 
it should be one.


I am looking for pros and cons of enabling such a feature.
Pros:
- Allows full SSL delegation without any additional implications for the 
client-side SSL implementation.


Cons:
- The request is transmitted over a non-secured channel (i.e. plain text).

Thanks,
Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] [PATCH] Log SSL Cryptography Parameters

2015-12-22 Thread Eliezer Croitoru

On 22/12/2015 22:33, Christos Tsantilas wrote:

If no objections I will apply the last patch to trunk.

+1 from me on long names rather than short and cryptic ones.

Eliezer
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] Why squid would not allow non-encrypted "https://" in a request?

2015-12-22 Thread Eliezer Croitoru

Answering myself...

There was probably a network connectivity issue while I was testing, 
since it works now.


Eliezer

On 22/12/2015 21:16, Eliezer Croitoru wrote:

[...]


___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: [squid-dev] "TCP_MISS/304" can we describe it better?

2015-11-25 Thread Eliezer Croitoru

OK then, sounds good enough for now.
Do we have a bugzilla report on this one?

Eliezer

On 25/11/2015 02:59, Amos Jeffries wrote:

On 25/11/2015 1:19 p.m., Eliezer Croitoru wrote:

I have been wondering about this for a very long time.
We have changed some of the access.log syntax and TCP_REFRESH_X was added.
The TCP_MISS/304 is a bit misleading when using the current squid
access.log analytical tools.
I think that it can be changed to something else, since that is the right
thing to do.
It is a MISS since the origin was contacted and the response from
the server was relayed to the client, and this is what the log should
basically show.
But admins just count "TCP_MISS" as the loss of a HIT.
I know they are not right, and it is their way of misreading the logs,
but what should happen? Should it stay like this?
Maybe the statistics tools are not up-to-date and this is not a squid
issue.

Would it be a good idea to change it a bit, or is "TCP_MISS/304"
good as it is?


I suspect that should be TCP_IMS_MISS/304 or
TCP_CLIENT_REFRESH_MISS/304. But de-tangling it could be tricky, and it
still will not solve the misunderstanding about "MISS" being in the name.


The problem is that the old code used a single enum value for each tag
name (as a whole), so each point updating the tag has to individually
calculate/estimate the entirety of the Squid processing that might have
been applied previously. Mistakes get made, of course; the data needed to
decide properly may be unknown; or two code paths can reach the same set
point. That is where the confusing output comes from.

I want to make LogTags a set of flags, of which the different code
paths in Squid set only the ones relevant to them. The log then
shows the final set, which more accurately reflects the code path that
acted on each request.


I got as far as moving the enum value into the LogTags class and
generating the log entry from that class. But quick experiments with
making "TCP_" a flag set by httpAccept() did not go well, and it's
currently on hold.


Amos

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] "TCP_MISS/304" can we describe it better?

2015-11-24 Thread Eliezer Croitoru

I have been wondering about this for a very long time.
We have changed some of the access.log syntax and TCP_REFRESH_X was added.
The TCP_MISS/304 is a bit misleading when using the current squid 
access.log analytical tools.
I think that it can be changed to something else, since that is the right 
thing to do.
It is a MISS since the origin was contacted and the response from 
the server was relayed to the client, and this is what the log should 
basically show.
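
For context, the TCP_MISS/304 pair is just the %Ss/%>Hs fields of the
built-in squid logformat, which (if I recall its definition correctly) is:

logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt

so renaming the tag means changing what %Ss expands to, not the log layout
itself.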

But admins just count "TCP_MISS" as the loss of a HIT.
I know they are not right, and it is their way of misreading the logs, 
but what should happen? Should it stay like this?

Maybe the statistics tools are not up-to-date and this is not a squid issue.

Would it be a good idea to change it a bit, or is "TCP_MISS/304" good as 
it is?


Thanks,
Eliezer

* I came to the conclusion that Google+YouTube are not necessarily the 
bad guys!

___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

