Re: [squid-users] squid 3.1 ldap authentication

2016-01-28 Thread Eliezer Croitoru

  
  
Hey Nando,
  
  Can you test something?
  On 25/01/2016 17:52, nando mendonca wrote:


external_acl_type ldap_group %LOGIN /usr/local/squid1/libexec/ext_ldap_group_acl -R -b "ou=groups,dc=gcsldap,dc=corp,dc=domain,dc=com" -D "cn=cost,ou=admin,dc=gcsldap,dc=corp,dc=domain,dc=com" -f "(&(memberuid=%u) (cn=%a))" -w password -h ldap.corp.domain.com
  
  


In the above replace the "%LOGIN" with "%un"  and see what
  happens.
The differences are mentioned at:
  http://www.squid-cache.org/Doc/config/external_acl_type/
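
In other words, keep everything exactly as you have it and only swap the format code, roughly:

external_acl_type ldap_group %un /usr/local/squid1/libexec/ext_ldap_group_acl -R -b "ou=groups,dc=gcsldap,dc=corp,dc=domain,dc=com" -D "cn=cost,ou=admin,dc=gcsldap,dc=corp,dc=domain,dc=com" -f "(&(memberuid=%u) (cn=%a))" -w password -h ldap.corp.domain.com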
  
Also, comparing your command to the one I have tested with, I see something different.
My test command can be seen in this ML thread:
http://lists.squid-cache.org/pipermail/squid-users/2015-July/004874.html
I do not have the helper executable at hand, so I do not know what the "-R" flag means; it is not something the command I used had.
  
Try the above and we will see the results,
Eliezer

  



Re: [squid-users] Is jesred still compatible with squid 4.x?

2016-02-28 Thread Eliezer Croitoru

Hey,

So it seems pretty simple.
Your helper is good and simple for the task, and at least we know that if your helper works, the problem is 100% in jesred.
I do not know German, but it's not really a big deal to understand when you know how things work in general.
I do not know exactly what settings you need, but you can take a look at the Squid sources:

http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/src/store/id_rewriters/file/storeid_file_rewrite.pl.in

This is a very simple Perl script that works with a "config" file.
It was designed to work with Store-ID, and some examples are at:
http://wiki.squid-cache.org/Features/StoreID/DB
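
As a very small sketch of what that Store-ID approach looks like (the paths and the example pattern here are assumptions, not something from your setup; the rules file holds tab-separated regex/Store-ID pairs):

# squid.conf
store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_db
store_id_children 5 startup=1

# /etc/squid/storeid_db  (regex <TAB> Store-ID)
^http:\/\/[^\/]+\/ubuntu\/(dists|pool)\/(.*)	http://ubuntu-archive.squid.internal/$1/$2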

But I really do not think that, for a simple task such as this, a specially compiled version of jesred will be much more efficient than the Python or Perl scripts.

This is also since jesred doesn't implement any form of concurrency support.
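
For comparison, a concurrency-capable helper lets Squid multiplex many lookups over a few processes. A rough sketch of the relevant squid.conf knobs and of the 3.4+/4.x helper line format (the helper path, numbers and sample URL are only placeholders, and the exact request fields depend on url_rewrite_extras):

url_rewrite_program /usr/local/bin/my_rewriter
url_rewrite_children 10 startup=2 idle=1 concurrency=50

# one line per request on the helper's stdin/stdout, channel-ID first:
# squid -> helper:  0 http://de.archive.ubuntu.com/ubuntu/dists/... 192.168.179.2/- - GET ...
# helper -> squid:  0 OK rewrite-url=http://192.168.178.20:3142/de.archive.ubuntu.com/ubuntu/dists/...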

I have been working on an improved compiled (Golang) version of a helper like the Perl one above, but I have yet to see a really good use case that would benefit much from it.


It is not 100% clear to me how apt-cacher-ng decides which host to fetch the requests from, but if it works, it works.


All The Bests,
Eliezer

On 29/02/2016 02:34, Karl-Philipp Richter wrote:

Hi,

Am 29.02.2016 um 01:13 schrieb Eliezer Croitoru:


I do not remember if I have tried to work with such a setup in the past
but, can you give some technical details on the desired setup?

I want to manipulate the URLs of Ubuntu and Debian .deb package download requests (usually fixed (ftp.debian.org) or predictable URLs ([country].ubuntu.archive.com)) which pass through Squid, used as an intercepting HTTP cache, so that they are served by an `apt-cacher-ng` instance.
Configuration on clients, and on VMs running on clients, should be zero.


Is there any written documentation about such a setup already? If so, can you point me to it?

That depends on your German skills:
https://forum.ubuntuusers.de/topic/squid-transparent-und-apt-cacher-ng/


Basically, as far as I remember, the jesred program was not developed for a very long time, but I have found these sources:
https://github.com/sawcache/jesred/blob/master/main.c#L166

which indicate that someone has done something with it to make it work with Squid 3.5+, including 4.x.
If I knew more about the setup, I would probably be able to answer the question.

This still seems quite fragile, e.g. there's no automatic installation
routine (which I suggested as an autoconf setup in pull request
https://github.com/sawcache/jesred/pull/2). I'd like this fixed or
solved somehow, to get some activity in the project as well. Otherwise
I'm not too eager to contribute, since my script at
https://github.com/krichter722/apt-cacher-ng-rewriter works for now, even
though it isn't overly sophisticated and isn't configurable.


Also what changes in 4.X communications are you talking about?
The changes I know about are from 3.4-3.5

I was referring to probable communication changes which "might" have
been implemented - I've already had bad experiences with (and filed bugs
about) outdated `squid` documentation, so I prefer to ask.

It might just be the case that `apt-cacher-ng` doesn't accept the output
of `jesred` (the failure described above), but I'm wondering what could
be wrong, given that my script works and this communication protocol is
no rocket science. That was the original reason for my question/request
for feedback.

-Kalle


Re: [squid-users] Is jesred still compatible with squid 4.x?

2016-02-28 Thread Eliezer Croitoru

Hey Kalle,

I do not remember whether I have tried to work with such a setup in the past,
but can you give some technical details on the desired setup?
Is there any written documentation about such a setup already? If so,
can you point me to it?


Basically, as far as I remember, the jesred program was not developed
for a very long time, but I have found these sources:

https://github.com/sawcache/jesred/blob/master/main.c#L166

which indicate that someone has done something with it to make it work with
Squid 3.5+, including 4.x.
If I knew more about the setup, I would probably be able to answer
the question.


Also what changes in 4.X communications are you talking about?
The changes I know about are from 3.4-3.5

Eliezer

On 26/02/2016 17:20, Karl-Philipp Richter wrote:

Hi,
I noticed that `jesred` when used as `url_rewrite_program` program of
`squid` 4.0.4 with `jesred.rules`

 regex ^http://(de.archive.ubuntu.com/ubuntu/(dists|pool)/.*)$ http://192.168.178.20:3142/\1
 regex ^http://(security.ubuntu.com/ubuntu/(dists|pool)/.*)$ http://192.168.178.20:3142/\1
 regex ^http://(extras.ubuntu.com/ubuntu/(dists|pool)/.*)$ http://192.168.178.20:3142/\1
 regex ^http://(archive.canonical.com/ubuntu/(dists|pool)/.*)$ http://192.168.178.20:3142/\1

 regex ^http://(packages.medibuntu.org/(dists|pool)/.*)$ http://192.168.178.20:3142/\1
 regex ^http://(ppa.launchpad.net/chromium-daily/stable/ubuntu/(dists|pool)/.*)$ http://192.168.178.20:3142/\1
 regex ^http://(http://deb.opera.com/opera/(dists|pool)/.*)$ http://192.168.178.20:3142/\1

and an instance of `apt-cacher-ng` running on `192.168.178.20:3142`
(according to `netstat`) causes a lot of entries like

 1456494043|E|481|192.168.178.20|403 Forbidden file type or location:
/security.ubuntu.com/ubuntu/dists/wily-proposed/main/binary-i386/Packages.gz192.168.179.2/192.168.179.2-GET

I'd like to get some feedback whether this might be due to a change in
4.x communication with `url_rewrite_program` and which is the
recommended program to use for `url_rewrite_program`.

-Kalle





Re: [squid-users] [squid 3.5.5] security Update Advisory SQUID-2016:2

2016-02-25 Thread Eliezer Croitoru
I have a testing package ready for CentOS 7 and will try to see if it 
affects my local installation just out of the box.


Eliezer

On 25/02/2016 17:16, Amos Jeffries wrote:

Maybe yes, maybe no. It seems to be one of those things that passes all
testing, then hits in production.

A few people seem to encounter it immediately, though I don't have a
clear picture yet about whether it affects everybody or just some installs.



Amos




Re: [squid-users] Squid 3.5.15 for Microsoft Windows 64-bit is available

2016-02-25 Thread Eliezer Croitoru

Great to hear Rafael!
Debian and Ubuntu squid debs will help many to upgrade their systems easily.

Eliezer

On 25/02/2016 12:02, Rafael Akchurin wrote:

NOTE1: we also plan to backport recompilation of 3.5.15 version of Squid
to Ubuntu 14.04 LTS. The repo will be made available on
ubuntu.diladele.com next week. The recompilation is done using Squid DEB
source from Debian Testing with some changes required to support SSL
bump / libecap3 on Ubuntu 14.04 LTS.

NOTE2: our efforts to recompile Squid 4.0 on Microsoft Windows for now
are not successful. We hope to be able to announce MSI for it in the
near future though.

Best regards,

Rafael Akchurin

Diladele B.V.

http://www.quintolabs.com

http://www.diladele.com





Re: [squid-users] IIS error with one website

2016-02-29 Thread Eliezer Croitoru

Can you send me or the list your squid.conf?
Also, are you using SSL-Bump? Is this an HTTPS site?

Eliezer

On 01/03/2016 00:36, Ryan Slick wrote:

Hi Guys,

So here is an issue I am having,

there is an external website some of our users need to access. When
accessing it via the Squid proxy, the site throws this error on the page:

iisnode encountered an error when processing the request.
HRESULT: 0xb
HTTP status: 500
HTTP reason: Internal Server Error
You are receiving this HTTP 200 response because
system.webServer/iisnode/@devErrorsEnabled
 configuration
setting is 'true'.

On a PC configured to go directly to the internet the page loads fine, and
when going via a Bluecoat proxy on a different network it also loads fine.
When I put in a direct access rule on Squid, the error is still thrown.

I am convinced the issue is on the external web server; however, it would
appear Squid is not playing nice with it. Is there anything I can do to
attempt to fix it? The users have now tested on their remote devices and
from home, and they are convinced the issue lies with the proxy.

regards







Re: [squid-users] IIS error with one website

2016-02-29 Thread Eliezer Croitoru
 situations) increase
# latency, which makes your cache seem slower for interactive
# browsing. By default, it is off.
# The FQDN will be prepended with a backslash and converted to lower
case since
# ClientNet only accepts custom user name with backslash. If log_fqdn is
# also enabled, the FQDN will be logged in access.log.
# For example, an FQDN of www.XYz.com in access.log will require specifying
# a custom user "\www.xyz.com" (no quotes) in ClientNet.
#
# fqdn_xsaucer off


# TAG: hash_username_xsaucer
#Turn this on if you wish to apply hex representative of hashed(SHA-1)
#to domain name\user name (before encryption) in X-Saucer instead.
#
# hash_username_xsaucer off


# ACCESS CONTROLS
# -----------------------------------------------------------------------------

#  TAG: acl
# TAG: disable password on conf file
#cachemgr_passwd none config
acl SSL_ports port 443 563 5443
acl Safe_ports port 80              # http
acl Safe_ports port 21              # ftp
acl Safe_ports port 443 563 5443    # https, snews, medicare
acl Safe_ports port 70              # gopher
acl Safe_ports port 210             # wais
acl Safe_ports port 1025-65535      # unregistered ports
acl Safe_ports port 280             # http-mgmt
acl Safe_ports port 488             # gss-http
acl Safe_ports port 591             # filemaker
acl Safe_ports port 777             # multiling http

acl_uses_indirect_client on
acl CONNECT method CONNECT
acl authproxy proxy_auth REQUIRED
# The IP list of "acl our_networks src" may potentially be long, while the
# maximum number of characters squid supports per line is around 500.
# Therefore, you should split a long IP list across multiple lines for
# readability and maintainability; see the following lines as an example:
# acl our_networks src x.x.x.x/z x.x.x.x/x x.x.x.x/z
# acl our_networks src y.y.y.y/z y.y.y.y/y y.y.y.y/z
acl our_networks src 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16



# __
acl HEAD method HEAD
follow_x_forwarded_for allow f5lb_prxy
#  TAG: http_access

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
# __
#http_access allow CONNECT SSL_ports
# __
http_access deny CONNECT !SSL_ports
#Allow the header as IE does not process the Head authentication
http_access allow HEAD
http_access deny !our_networks
http_access allow Smartconnect
# __



# __
# NTLM bypasses and specific domain bypass come after this comment block.
# http_access = NTLM bypass. always_direct = bypasses the MessageLabs proxy
# and sends the connection directly. The first sample below creates a bypass
# named 'uniqueBypass1' which bypasses NTLM and sends the connection directly
# for sample.com. The second sample will bypass NTLM authentication for
# connections to sample.com.
# Begin Sample 1:
#acl uniqueBypass1 dstdomain sample.com
# http_access allow uniqueBypass1
# always_direct allow uniqueBypass1
# Begin Sample 2:
#acl NTLMBypass dstdomain sample.com
#http_access allow NTLMBypass

http_access allow authproxy
http_access deny all


#  TAG: icp_access
icp_access allow all

#  TAG: httpd_suppress_version_stringon|off
#Suppress Squid version string info in HTTP headers and HTML error pages.
#
httpd_suppress_version_string on


# ADMINISTRATIVE PARAMETERS
# -----------------------------------------------------------------------------

#  TAG: visible_hostname
visible_hostname ClientSiteProxy

# OPTIONS FOR THE CACHE REGISTRATION SERVICE
# -----------------------------------------------------------------------------


# HTTPD-ACCELERATOR OPTIONS
# -----------------------------------------------------------------------------


# MISCELLANEOUS
# -----------------------------------------------------------------------------

# Forwarding proxy client IP addresses in X-Forwarded-For header.
# Disabled to prevent leakage of internal network configuration details.
forwarded_for truncate

# Do not reveal CSP version in "Via" HTTP header
header_access Via deny all

#  TAG: never_direct
never_direct allow all

# DELAY POOL PARAMETERS (all require DELAY_POOLS compilation option)
# -----------------------------------------------------------------------------

#  TAG: coredump_dir
#  completely disable checks for cache consistency (and/or garbage collection)
#  and there will be no need to initialize cache dirs, which would amount to
#  over 2000 directories.
cache_dir null c:/ClientSiteProxy
coredump_dir c:/clientsiteproxy/var/cache

http_port 80
http_port 8080



On Tuesday, 1 March 2016 11:49 AM, Eliezer Croitoru
<elie...@ngtech.co.il> wrote:


Can you send me or the list your squid.conf?
Also, are you using SSL-Bump? Is this an HTTPS site?

Eliezer

On 01/03/2016 00:36, Ryan Slick wrote:
 > Hi Guys

Re: [squid-users] squidclient can't connect to localhost

2016-01-19 Thread Eliezer Croitoru

On 19/01/2016 14:38, Henri Wahl wrote:

So what is Squid logging during startup/reconfigure about that IPv6 port ?


What kernel and OS are you using? Also, did you try to start squid with
default settings?

Also what is the output of "squid -v"?

Eliezer


Re: [squid-users] Capitive portal with squid just to put small info then have internet

2016-01-20 Thread Eliezer Croitoru

Hey,

It depends on how you identify your clients/users.

If you do have a way to distinguish them, then it should be possible.
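
For example, the splash-page approach from that wiki page boils down to something like this (only a rough sketch, assuming clients are identified by source IP; paths, TTLs and the portal URL are placeholders):

external_acl_type session ttl=60 negative_ttl=0 %SRC /usr/lib/squid/ext_session_acl -t 86400 -b /var/lib/squid/session.db
acl seen_splash external session
http_access deny !seen_splash
deny_info http://portal.example.local/splash.html seen_splash

With the helper in this passive mode the splash page is shown roughly once per client per session timeout; if you need the user to actually submit an email address first, the helper would run in active mode (-a) and the form's target would mark the session as logged in.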

Eliezer

On 20/01/2016 11:58, Drvirus wrote:

Hi ,

I'm wondering whether what I need is possible or not.

I need to have my customers connect over ip:port to my squid machine,

and I want, once they start working, a splash page that asks them to enter
their email address; after they click OK they get full internet access.

The question being asked here is:

is that something a portal page can do,

or is it beyond the portal page?

Again, I just want it at the beginning of the work session, so once an
employee has entered it he will have access and will not be asked again.

I did read the FAQ wiki here,
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
but I'm not sure it will satisfy my needs.

My kind regards





[squid-users] Squid 4.0.5 beta RPMs for: Oracle Linux EL7, CentOS EL7, SLES 12SP1 are available.

2016-02-14 Thread Eliezer Croitoru

SLES 12sp1 repositories at:
http://ngtech.co.il/repo/sles/12sp1/beta/SRPMS/
http://ngtech.co.il/repo/sles/12sp1/beta/x86_64/

Oracle Linux EL7 repositories at:
http://ngtech.co.il/repo/oracle/7/beta/SRPMS/
http://ngtech.co.il/repo/oracle/7/beta/x86_64/

CentOS EL7 repositories at:
http://ngtech.co.il/repo/centos/7/beta/SRPMS/
http://ngtech.co.il/repo/centos/7/beta/x86_64/

To find a single rpm or src.rpm file just browse the directories index.
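
For yum-based systems, using the repo boils down to a small .repo file; a sketch for the CentOS 7 repository above (gpgcheck is disabled here only because these are unsigned test packages - an assumption on my side):

# /etc/yum.repos.d/squid-beta.repo
[squid-beta]
name=Squid 4.0.5 beta (NgTech)
baseurl=http://ngtech.co.il/repo/centos/7/beta/x86_64/
enabled=1
gpgcheck=0

# then
yum install squid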

Eliezer

* These are experimental packages that have not yet been tested and were 
built for testing purposes! If you have any trouble with them, 
notify me so I can fix it.



Re: [squid-users] Rock datastore, CFLAGS and a crash that (may be) known

2016-02-16 Thread Eliezer Croitoru

Before digging into the details of the issue, can you supply the OS details?
What OS are you using? What distribution?
32 or 64 bit?
can you also add the output of "squid -v" for both 3.5.14 and 3.5.13 ?

Thanks,
Eliezer

On 16/02/2016 16:32, Jester Purtteman wrote:

Greetings Squid users,

With 3.5.14 out and activating CFLAGS, I am getting into trouble.  Funny
too, I spent a lot of time wondering why it wasn’t adding CFLAGS in
earlier builds.  In any event, I have a 3.5.13 instance configured as
follows:

./configure --prefix=/usr   --localstatedir=/var
--libexecdir=/usr/lib/squid --srcdir=.   --datadir=/usr/share/squid
--sysconfdir=/etc/squid   --with-default-user=proxy
--with-logdir=/var/log   --with-pidfile=/var/run/squid.pid
--enable-linux-netfilter  --enable-cache-digests
--enable-storeio=ufs,aufs,diskd,rock  --enable-async-io=30
--enable-http-violations --enable-zph-qos --with-netfilter-conntrack
--with-filedescriptors=65536 --with-large-files

It has a quartet of cache-dirs (I’m still testing and monkeying) as follows:

cache_dir rock /var/spool/squid/rock/1 64000 swap-timeout=600
max-swap-rate=600 min-size=0 max-size=128KB

cache_dir rock /var/spool/squid/rock/2 102400 swap-timeout=600
max-swap-rate=600 min-size=128KB max-size=256KB

cache_dir aufs /var/spool/squid/aufs/1 20 16 128 min-size=256KB
max-size=4096KB

cache_dir aufs /var/spool/squid/aufs/2 150 16 128 min-size=4096KB
max-size=8196000KB

Permissions are all proxy.proxy for the cache dirs and everything is
happily running.  When I read that the CFLAGS bug was solved, I thought
“hey, didn’t I do some terrible thing to determine what cflags are
correct on a vmware virtual instance?” and dug up the cflags that I came
up with.  I then compiled 3.5.14 as follows:

./configure CFLAGS="-march=core2 -mcx16 -msahf -mno-movbe -mno-aes
-mno-pclmul -mno-popcnt -mno-sse4 -msse4.1" CXXFLAGS="${CFLAGS}"
--with-pthreads --prefix=/usr   --localstatedir=/var
--libexecdir=/usr/lib/squid --srcdir=.   --datadir=/usr/share/squid
--sysconfdir=/etc/squid   --with-default-user=proxy
--with-logdir=/var/log   --with-pidfile=/var/run/squid.pid
--enable-linux-netfilter  --enable-cache-digests
--enable-storeio=ufs,aufs,diskd,rock  --enable-async-io=30
--enable-http-violations --enable-zph-qos --with-netfilter-conntrack
--with-filedescriptors=65536 --with-large-files

This leads to the following in the cache log, and a crash.

<<>>SNIP

This looks similar to a bug
http://bugs.squid-cache.org/show_bug.cgi?id=3880#c1 that was already
reported, but I don’t know enough to say with certainty.  It does look
like these compile options are allowing squid to launch with multiple
processes and do other things that I think I might want, but I can’t
tell for sure.  So, it does lead me to a few questions:

(1) Do these flags make sense?  I only half know what half of them do,
but they appear to basically just be supported flags on a ESXi virtual
machine given my hardware.  I have googled, just not a lot of light shed
on this instance, thoughts and insights are appreciated.

(2) Are my rock stores lagging out, and how would you recommend tuning
them if so?

(3) Does the strategy above make sense?  My thinking is to segregate the
small cache items into a rock datastore, and the big items into an aufs
datastore.

(4) Do you have any pointers on calculating the size of rocks and aufs
stores based on disk performance etc?  I’m guessing that there is sort
of a logical size to make a specific rock and aufs based on how big of
items you store in it and so on.  Is there some way I can apply some
math and find bottlenecks?

Finally, 3.5.14 does run fine when compiled with the first set (even
with --with-pthreads added) so I think this is probably cflags related.
I would like to get multiple disker processes running, I think it would
probably help in my environment, but it’s not supremely critical.
Anyway, there is a note at the end of the bug saying that this wasn’t
seen for a while, and I thought I’d say “I’ve seen it! Maybe!”  let me
know if I am creating this bug through a creative mistake, or if you
have other ideas here.  Thanks!

Jester Purtteman, P.E.

OptimERA Inc





Re: [squid-users] Delay Pools and HTTPS on Squid 3.x

2016-02-16 Thread Eliezer Croitoru

Hey Martin,

I was wondering whether you have had a chance to try enforcing a QOS 
policy at the OS level?

Also what OS and distribution are you using?

Eliezer

On 17/02/2016 03:37, Hery Martin wrote:

Hello everybody:

Since a few months ago I'm using squid to provide a solution as small
business proxy in the network of my work place.

I'm from Cuba, in our country the Internet is a very limited resource. I
have only one link of 2Mbps to share with 20 ~ 25 users (even with my
network have more than 60) this is the normal concurrent number.

When I start the squid deployment in my network I started using
2.7stable9 version, I made all arrangements to put it work with my AD to
match ACLs using AD Groups and everything works perfect.

I defined one class-2 delay pool to limit traffic to approximately
12 KBytes/s per user:

delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 12228/12228

The delay pool worked perfectly; I was checking with the real-time tool sqstat
and with squidclient mgr:delay.

NOW.

I recently upgraded Squid to 3.3.8 and I noticed that the delay pool started
going wrong when users surf or download using the HTTPS protocol.

I checked in real time: when users browse HTTPS, the pool goes into
negative numbers and starts to grow and grow. It's very easy to check;
just define a delay pool with 5KB and start a download from an HTTPS
source, then check it with squidclient mgr:delay. The IP takes a
negative pool value and keeps growing until the download finishes.

Frustrated with this behavior, I set up different Squid versions on a
virtualization server and confirmed that the problem occurs with
Squid 3.x versions. Today I made a final test and I think that the
implementation of HTTP/1.1 may be related to the problem (I'm not
sure, but tomorrow I will run a few tests with Squid 3.1, where HTTP/1.1
was not yet implemented).

Please, if you have the opportunity, test this in a lab
environment. I decided to write to this email list because I asked
many people who have already deployed Squid as a proxy in their
networks, and they didn't believe me until I demonstrated the issue.

Does anyone have information about this bug? Is there any hope of fixing
this problem at the code level?

Anyway, I'm a computer systems engineer and I write a lot of C++
every week... I'm not involved with Squid development (never saw the
code in my life), but if somebody has any idea how to fix this and wants
help, just count on me.

Greetings from Cuba and sorry about my English :)




Re: [squid-users] crash with squid 3.5.5

2016-02-17 Thread Eliezer Croitoru

Hey Paul,

First, there are missing parts to the picture, such as squid.conf, OS 
details and "squid -v" output.
Second, you are using Squid 3.5.5, which is at least half a year old; 
since I am using 3.5.14 and it works fine for me, I would assume it should 
work the same for you.


Eliezer

On 17/02/2016 13:57, Paul Martin wrote:

Hello,

I got a problem with squid 3.5.5. It crashes on kid3 after visiting
"www.oggi.it " site.

Here the cache.log:
2016/02/16 10:12:40 kid3| ctx: enter level  0:
'http://www.oggi.it/global_assets/js/plugins.js?v=1.6'
2016/02/16 10:12:40 kid3| assertion failed: String.cc:174: "len_ + len < 65536"

Can you explain how to fix the problem?

Regards,
Paul





Re: [squid-users] Rock datastore, CFLAGS and a crash that (may be) known

2016-02-17 Thread Eliezer Croitoru

So, after reading the whole thread from top to bottom:
Since it's ESXi version 6.0 with an Ubuntu 14.04 guest, I would choose 
another approach!
You do not use any SSL-related settings and it's a simple proxy (judging from 
the squid -v output), so I would ask: did you try to rebuild a deb package?
There might be some benefit from a special flag or two, but from my 
experience with such VMs it would be very little in most cases (and I do not 
have tons of experience...).
I had experience with a bunch of Gentoo machines running all sorts of web 
and Internet services for years. The claim for self-compiling 
was that it is much more efficient than pre-built binaries. The fact is that 
real hardware was faster/better than VMs (ESXi), either with pre-built or 
self-compiled binaries. So when we moved from hardware to VMs there were lots 
of things to test and confirm: things like high CPU spikes or high disk I/O 
activity spikes, which were pretty weird compared to the real hardware.
Currently I have seen that VMs, given enough vCPU and RAM and some 
fine tuning to balance between VMs and hosts (i.e. not throwing 24 vCPUs per 
guest on a 24-core host), gave *these specific hosts* better 
performance than custom compiling and flagging.


I have not built a Debian\Ubuntu deb package for a very long time but I 
had a plan to do so.

Maybe I will do it one day.
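
For reference, rebuilding the distribution package usually boils down to something like this (a sketch; on Ubuntu 14.04 the source package is squid3, and DEB_CFLAGS_APPEND is an optional dpkg-buildflags hook for extra compiler flags):

sudo apt-get build-dep squid3
apt-get source squid3
cd squid3-*
# optionally: export DEB_CFLAGS_APPEND="-march=core2 ..." before building
debuild -us -uc -b
sudo dpkg -i ../squid3_*.deb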

All The Bests,
Eliezer Croitoru

On 17/02/2016 15:36, Jester Purtteman wrote:

Dear Eliezer, Amos and Marcus,

Thank you, and sorry for the late reply, day jobs are a menace to productivity:)

So, in order of responses:  Eliezer:

>Before digging into the details of the issue, can you supply the OS details?
>What OS are you using? What distribution?
>32 or 64 bit?
>can you also add the output of "squid -v" for both 3.5.14 and 3.5.13 ?

I am running Ubuntu 14.04.2 updated to the latest apt-get binaries, 64-bit version, 4 
processors, 24 GB of "RAM" allocated under the VM.  This is all on a VMware 
ESXi 6.0 host, so I recognize that compiler flags are probably a bit like throwing water 
balloons at Jaws from a performance standpoint.  The counter point is, with performance 
as bad as a VM's, you need all the help you can get.  As much as anything, it was a 
curiosity.

Squid -v for a working configuration is as follows:

Squid Cache: Version 3.5.14
Service Name: squid
configure options:  '--with-pthreads' '--prefix=/usr' '--localstatedir=/var' 
'--libexecdir=/usr/lib/squid' '--srcdir=.' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-default-user=proxy' '--with-logdir=/var/log' 
'--with-pidfile=/var/run/squid.pid' '--enable-linux-netfilter' 
'--enable-cache-digests' '--enable-storeio=ufs,aufs,diskd,rock' 
'--enable-async-io=30' '--enable-http-violations' '--enable-zph-qos' 
'--with-netfilter-conntrack' '--with-filedescriptors=65536' 
'--with-large-files' --enable-ltdl-convenience

I cut and pasted the configuration string I'd used with 3.5.13, added 
"--with-pthreads", and had no problems.  Here is the working 3.5.13 -v output:

Squid Cache: Version 3.5.13
Service Name: squid
configure options:  '--prefix=/usr' '--localstatedir=/var' 
'--libexecdir=/usr/lib/squid' '--srcdir=.' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-default-user=proxy' '--with-logdir=/var/log' 
'--with-pidfile=/var/run/squid.pid' '--enable-linux-netfilter' 
'--enable-cache-digests' '--enable-storeio=ufs,aufs,diskd,rock' 
'--enable-async-io=30' '--enable-http-violations' '--enable-zph-qos' 
'--with-netfilter-conntrack' '--with-filedescriptors=65536' 
'--with-large-files' --enable-ltdl-convenience




Re: [squid-users] Delay Pools and HTTPS on Squid 3.x

2016-02-20 Thread Eliezer Croitoru

On 18/02/2016 04:02, Hery Martin wrote:

@Eliezer
I'm using Ubuntu Server 14.04 (not a special decision; I usually
deploy different distros in a Citrix XenServer test environment).
Do you have any guide to implementing QOS + Squid? As I said, I saw in many
articles that you have to mark the traffic in Squid in order to deal with it
afterwards, but I have never tried it because I didn't have enough information about it.


I am in a similar position to you.
I have implemented QOS once or twice, but I always need to learn it again from zero.
I have seen a couple of nice scripts in FireHOL and Arch Linux tutorials, but I 
will need to re-read many things to get a hold on how it works and how it 
should be configured.


If I have enough time I will try to write about it in the Squid 
wiki somewhere in the future.
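
For what it's worth, a very rough sketch of the "mark in Squid, shape in the OS" idea (interface name, rates and the TOS value are placeholders, and per-user limits would hang additional filters off the client IPs instead):

# squid.conf: mark cache hits with a distinct TOS value
qos_flows tos local-hit=0x30

# shell: simple HTB shaping on the LAN-facing interface (eth0 assumed)
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 2mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 2mbit
tc filter add dev eth0 parent 1: protocol ip u32 match ip tos 0x30 0xff flowid 1:10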


Eliezer


Re: [squid-users] IIS error with one website

2016-02-29 Thread Eliezer Croitoru
I have investigated the issue and it seems that the specific application 
on the IIS 8.5 server cannot handle HTTP/1.0 requests with some Accept-Encoding 
headers.

Specifically, what is being sent is: Accept-Encoding: gzip, deflate
If I remove the gzip and deflate and replace them with xxx or yyy it 
works fine. If one of these values (and maybe others) is present on an HTTP/1.0 
request to this specific application, the result is a 500 Internal Server 
Error page.
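
For reference, this is roughly how the behaviour can be reproduced from the command line (the host name is a placeholder for the real site):

curl -v --http1.0 -H 'Accept-Encoding: gzip, deflate' http://app.example.com/page   # 500 from iisnode
curl -v --http1.0 -H 'Accept-Encoding: xxx' http://app.example.com/page             # works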


My suggestion, since the clients and the service require HTTP/1.1, is to 
try to upgrade the Squid service in any way possible so that it adds 
support for HTTP/1.1.


If you have a specific environment feel free to share it with me 
publicly or privately to see if there is a smooth upgrade path for your 
environment.


Eliezer

On 01/03/2016 03:13, Amos Jeffries wrote:

On 1/03/2016 12:26 p.m., Eliezer Croitoru wrote:

>Hey Ryan,
>
>I noticed that you are using a windows version of squid and ontop of
>that a 2.X version.

And on top of that it has been patched with unknown extensions. So is
formally outside our ability to assist with support of this binary.






Re: [squid-users] pages not being cached

2016-03-10 Thread Eliezer Croitoru

Hey Cindy,

I do not have too much experience with MediaWiki, but I ran some tests on 
it in the past for both caching and other things.
I am using this logformat to detect a couple of things that are related to 
caching:
logformat cache_headers %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<a %mt REQ-CC: "%{Cache-Control}>h" "%{Cache-Control}>ha" REQ-P: "%{Pragma}>h" "%{Pragma}>ha" REP-CC: "%{Cache-Control}<h" %>eui

access_log daemon:/var/log/squid/access.log cache_headers

The headers:
Cache-Control: s-maxage=18000, must-revalidate, max-age=0

clearly state that the response should not be cached by the client, but is 
supposed to be cached by a surrogate.

How do you test caching? wget? curl? script? netcat? a desktop browser?
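
For example, a quick command-line check (the hostname is a placeholder) is to fetch the same page twice and look at the X-Cache header the proxy adds; the second response should report a HIT if the object was cached:

curl -sk -o /dev/null -D - https://wiki.example.org/wiki/Main_Page | grep -i '^X-Cache'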

Eliezer

On 10/03/2016 16:22, Cindy Cicalese wrote:

I am using Squid for caching with Apache and MediaWiki over HTTPS only.
Unfortunately, no pages are being cached; each request is being sent
from Squid to Apache. I would appreciate help figuring out how to get
caching to work.

My configuration is as follows:

  * Squid is configured to listen on :443 for HTTPS
requests and forwards them to Apache on port 8080

https_port :443 cert= key=
defaultsite= vhost
cache_peer 127.0.0.1 parent 8080 no-query originserver login=PASS

  * Squid also listens on 127.0.0.1:80  for PURGE
requests from MediaWiki because I could not figure out how to
configure MediaWiki to send PURGE requests with HTTPS

http_port 127.0.0.1:80  defaultsite= vhost

  * Apache is listening on 127.0.0.1:8080  for
requests from Squid
  * Apache is also listening on :80 which is set up as a
permanent redirect to HTTPS
  * MediaWiki is configured as follows:

$wgUseSquid = true;
$wgSquidServers = array('127.0.0.1'); // this is where PURGE requests
are sent
$wgSquidServersNoPurge = array('');

GET requests are being received by MediaWiki with Cache-Control:
max-age=0 in the headers. The response sent by MediaWiki includes the
following headers:

Cache-Control: s-maxage=18000, must-revalidate, max-age=0
X-Cache: MISS from 
X-Cache-Lookup: HIT from :80

I am suspicious why the last line says port 80 rather than 8080, but I'm
not sure if that is relevant.

Please let me know if there is any more relevant information that will
help to troubleshoot this. Thank you in advance for your assistance!

Cindy




Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-14 Thread Eliezer Croitoru

Thanks,

I'm with you on this, but it's not clear to many sys/cache admins that 
caching Windows updates is only the "tiny" bit of the wider Internet.


Eliezer

On 14/03/2016 17:37, Heiler Bemerguy wrote:


My colleagues here asked me the same question but I prefer to really FIX
the caching of bigfiles/rockstoredfiles/rangeDLs instead of doing
something specific for windows updates.

To be honest, windows updates are just a simple example of ranged
downloads of big files making squid/rockstore go mad

Best Regards,

--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751




Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-15 Thread Eliezer Croitoru

Hey,

Your words describe the bug in its wildest and simplest form.
Please file a bug report so the progress can be followed.
Writing more and more here will not really help by itself.

Eliezer

On 15/03/2016 19:51, Heiler Bemerguy wrote:


Hi joe, Eliezer, Amos.. today I saw something different regarding high
bandwidth and caching of windows updates ranged requests..

A client begins a windows update, it does a:
HEAD to check size or something, which is ok..
then a ranged GET, which outputs a TCP_MISS/206,
then the next GET gives a TCP_SWAPFAIL_MISS/206.
Lots of other TCP_MISS/206, then in the end, TCP_MISS_ABORTED/000

A big amount of parallel connections are being made because of each
GET.. ok, I know squid can't do much about it.. but then, why the
content does not get cached in the end?
I mean, the way it is, it will happen every day.. are these
"TCP_MISS_ABORTED" really the client aborting the download? I doubt it...

Take a look and see if you can understand:




Re: [squid-users] HTTPS interception and filtering?

2016-03-13 Thread Eliezer Croitoru

Are you referring to:
http://thread.gmane.org/gmane.comp.web.squid.general/114384/focus=114389

Eliezer

On 12/03/2016 15:58, James Lay wrote:

On Sun, 2016-03-13 at 00:09 +1100, Tim Bates wrote:

Is it possible to do this:

* Intercept HTTPS and send it via Squid?
* Apply ACLs to the intercepted HTTPS traffic based on host/domain name?
* Not change any configuration on clients?

Should I keep researching how this peeking and bumping and splicing and
such works, or is it impossible?

TB


Search for my previous posts...I've posted full configs on how to do
exactly this.

James




Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-12 Thread Eliezer Croitoru

Hey,

Thanks for the debug!.

I do not know the exact reason, but I can say for sure that it's not the 
NetApp or any other OS-level issue, since an AUFS/UFS cache_dir works 
fine on the same system in a similar situation.

I will try to replicate it locally.
I do understand the issue and I will try my best to see what can be done 
from my side.


For now I will try to replicate the issue on a RAM-only 
system (OS, cache, disk, the whole environment).
I think the right way now is to file a bug in Bugzilla and continue 
following it here and there, since it's a major bug when comparing UFS/AUFS 
to Rock.


If you understand that a patch/fix for such an issue is not a tiny task, 
then you and I are in the same boat.


(I will try to test today but I cannot promise I will find it in such a 
hurry)


All The Bests,
Eliezer

* http://bugs.squid-cache.org/enter_bug.cgi

On 11/03/2016 18:25, Heiler Bemerguy wrote:


I managed to track down with GDB one of these swapfails...

Breakpoint 1, clientReplyContext::cacheHit (this=0x33bff58, result=...)
at client_side_reply.cc:471
471 http->logType = LOG_TCP_SWAPFAIL_MISS;
(gdb) l
466 debugs(88, 3, "clientCacheHit: request aborted");
467 return;
468 } else if (result.flags.error) {
469 /* swap in failure */
470 debugs(88, 3, "clientCacheHit: swapin failure for " <<
http->uri);
471 http->logType = LOG_TCP_SWAPFAIL_MISS;
472 removeClientStoreReference(, http);
473 processMiss();
474 return;
475 }

(gdb) p *this
$2 = { = {_vptr.Lock = 0x84e668, count_ = 1},  =
{_vptr.StoreClient = 0x84e638}, purgeStatus = Http::scNone,
   lookingforstore = 5, http = 0x1eadb398, headers_sz = 0, sc =
0x3034028, tempBuffer = {flags = {error = 0}, length = 0, offset = 0,
 data = 0x0}, old_reqsize = 0, reqsize = 210, reqofs = 0, tempbuf =
'\000' , flags = {storelogiccomplete = 0,
 complete = 0, headersSent = false}, ourNode = 0xaa32198,
holdingBuffer = {flags = {error = 0}, length = 0, offset = 0, data = 0x0},
   reply = 0x0, old_entry = 0x0, old_sc = 0x0, deleting = false, static
CBDATA_clientReplyContext = 24}
(gdb) p result
$3 = {flags = {error = 1}, length = 0, offset = 0,
   data = 0x9ef18d8 "HTTP/1.1 200 Internal marker object\r\nServer:
squid\r\nMime-Version: 1.0\r\nDate: Fri, 11 Mar 2016 15:57:32
GMT\r\nContent-Type: x-squid-internal/vary\r\nExpires: Sat, 12 Mar 2016
19:44:12 GMT\r\nVary: Accept-En"...}

It already came to client_side_reply.cc with "ERROR" set to 1...

(gdb) bt
#0  clientReplyContext::cacheHit (this=0x33bff58, result=...) at
client_side_reply.cc:471
#1  0x0064dcbd in store_client::callback (this=0x3034028,
sz=, error=) at store_client.cc:130
#2  0x0064e6c1 in store_client::startSwapin
(this=this@entry=0x3034028) at store_client.cc:382
#3  0x006501d3 in store_client::doCopy
(this=this@entry=0x3034028, anEntry=anEntry@entry=0xab07eb0) at
store_client.cc:359
#4  0x0065033c in storeClientCopy2 (e=0xab07eb0,
sc=sc@entry=0x3034028) at store_client.cc:315
#5  0x00650db9 in storeClientCopy2 (sc=0x3034028, e=) at store_client.cc:281
#6  store_client::copy (this=0x3034028, anEntry=0xab07eb0, copyRequest=...,
 callback_fn=0x55b250 , data=0x33bff58) at store_client.cc:232
#7  0x00558111 in clientReplyContext::doGetMoreData
(this=this@entry=0x33bff58) at client_side_reply.cc:1799
#8  0x00558422 in clientReplyContext::identifyFoundObject
(this=0x33bff58, newEntry=) at client_side_reply.cc:1649
#9  0x0055a8b8 in clientReplyContext::cacheHit (this=0x33bff58,
result=...) at client_side_reply.cc:525
#10 0x0064dcbd in store_client::callback (this=0x51d5b08,
sz=, error=) at store_client.cc:130
#11 0x0064e1ba in store_client::readBody
(this=this@entry=0x51d5b08, buf=, len=len@entry=210) at
store_client.cc:497
#12 0x0064f76b in store_client::readHeader (this=0x51d5b08,
buf=, len=) at store_client.cc:611
#13 0x00732790 in Rock::IoState::callReaderBack (this=,
 buf=0x9ef18d8 "HTTP/1.1 200 Internal marker object\r\nServer:
squid\r\nMime-Version: 1.0\r\nDate: Fri, 11 Mar 2016 15:57:32
GMT\r\nContent-Type: x-squid-internal/vary\r\nExpires: Sat, 12 Mar 2016
19:44:12 GMT\r\nVary: Accept-En"..., rlen=428) at rock/RockIoState.cc:143
#14 0x00726ec1 in Rock::SwapDir::readCompleted (this=, buf=, rlen=428, errflag=0, r=...)
 at rock/RockSwapDir.cc:822
#15 0x006f22c0 in IpcIoFile::readCompleted (this=, readRequest=0x5417008, response=)
 at DiskIO/IpcIo/IpcIoFile.cc:255
#16 0x006f6bcc in IpcIoFile::handleResponse (this=, ipcIo=...) at DiskIO/IpcIo/IpcIoFile.cc:462
#17 0x006f6fde in IpcIoFile::HandleResponses
(when=when@entry=0x851324 "after notification") at
DiskIO/IpcIo/IpcIoFile.cc:449

...help!





Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-12 Thread Eliezer Croitoru

OK, it's pretty simple to reproduce on any machine whatsoever with 3.5.15-2.

Open two terminals on two machines (more or less).
Then run on one of them the following command:

watch -n 0.2 "http_proxy=http://IP_OF_PROXY:3128/ curl --silent --range 20-40 http://ngtech.co.il/squid/videos/sosp2011_27.mp4 | wc -c"


And on the other one:

http_proxy=http://IP_OF_PROXY:3128/ wget http://ngtech.co.il/squid/videos/sosp2011_27.mp4


You will see that when the wget download finishes while the download of the 
partial content is still running, the SWAPFAIL_MISS happens.


If for some reason it does not reproduce for someone (and I doubt there is 
anything faster than a fully RAM-only environment: hypervisor, VMs and disks), 
lower the 0.2 interval to 0.1.


Please file a bug report with the above details,

Eliezer

On 11/03/2016 15:55, Heiler Bemerguy wrote:


Hi Eliezer,

We usually don't restart it ever. Only recently I've been restarting it
because of these issues. The shutdown_lifetime is set to 5 seconds only.

We are still getting SWAPFAIL_MISS without any apparent reason, and if
it is for a RANGE request, it multiplies into many parallel
connections. I'd like to track it down but I'm afraid I don't have
the knowledge to do it...

root@proxy:/var/log/squid# tail -f access.log |grep SWAPF
1457703376.593383 10.88.100.100 TCP_SWAPFAIL_MISS/206 533 GET
http://vdownloader.com/wp-content/uploads/multiwebsite-570x321.png -
HIER_DIRECT/104.25.245.28 image/png
1457703376.942726 10.88.100.100 TCP_SWAPFAIL_MISS/206 530 GET
http://vdownloader.com/wp-content/uploads/social-media-share-570x321.jpg
- HIER_DIRECT/104.25.244.28 image/jpeg
1457703376.964746 10.88.100.100 TCP_SWAPFAIL_MISS/206 529 GET
http://vdownloader.com/wp-content/uploads/Cnet-logo.png -
HIER_DIRECT/104.25.244.28 image/png
1457703540.685 106669 10.101.1.50 TCP_SWAPFAIL_MISS/200 3342325 GET
http://www.rarlab.com/rar/wrar531br.exe - HIER_DIRECT/5.135.104.98
application/octet-stream
1457703631.055   4088 10.101.1.130 TCP_SWAPFAIL_MISS/206 33062 GET
http://www.cetapnet.com.br/arquivos_cetap/arquivos/pma_001_2015_anexo_03.pdf
- HIER_DIRECT/200.219.214.204 application/pdf
1457703637.471   6407 10.101.1.130 TCP_SWAPFAIL_MISS/206 352287 GET
http://www.cetapnet.com.br/arquivos_cetap/arquivos/pma_001_2015_anexo_03.pdf
- HIER_DIRECT/200.219.214.204 application/pdf
1457703755.673262 10.72.0.24 TCP_SWAPFAIL_MISS/206 21736 GET
http://ciac.ufpa.br/phocadownload/EDITAL_018-2016-RET-HOM_REC_2-CH-SISU-2016.pdf
- HIER_DIRECT/200.239.64.160 application/pdf
1457703755.712 35 10.72.0.24 TCP_SWAPFAIL_MISS/206 65938 GET
http://ciac.ufpa.br/phocadownload/EDITAL_018-2016-RET-HOM_REC_2-CH-SISU-2016.pdf
- HIER_DIRECT/200.239.64.160 application/pdf

Best Regards,





Re: [squid-users] Squid with ICAP filter?

2016-03-19 Thread Eliezer Croitoru

Hey Mike,

What do you mean by black box to us? who is us?

Eliezer

On 17/03/2016 21:52, Mike Summers wrote:

Thanks Alex.

You are correct, the message bodies are compressed (gzip). For reasons
unknown the ICAP service can't or won't deal with compressed data. Also
correct, the ICAP service is a black box for us.

Much thanks for the response, it gives us a place to start.

--Mike




Re: [squid-users] PURGE ERR_TOO_BIG

2016-03-10 Thread Eliezer Croitoru

squid.conf ...

Eliezer

On 11/03/2016 01:43, joe wrote:

trying to purge url
squidclient -h192.192.192.212 -p3128 PURGE
http://www.oggi.it/global_assets/js/searchform.js


Generated Fri, 11 Mar 2016 00:08:34 GMT by proxy.netgatesss.com
(squid)



debug_options ALL,2
---
2016/03/11 02:11:20.479 kid1| 5,2| TcpAcceptor.cc(220) doAccept: New
connection on FD 25
2016/03/11 02:11:20.479 kid1| 5,2| TcpAcceptor.cc(295) acceptNext:
connection on local=192.192.192.212:3128 remote=[::] FD 25 flags=9
2016/03/11 02:11:20.479 kid1| 11,2| client_side.cc(2345) parseHttpRequest:
HTTP Client local=192.192.192.212:3128 remote=192.192.192.212:46799 FD 8
flags=1
2016/03/11 02:11:20.479 kid1| 11,2| client_side.cc(2346) parseHttpRequest:
HTTP Client REQUEST:
-
GET http://www.oggi.it/global_assets/js/searchform.js HTTP/1.0
Host: www.oggi.it
User-Agent: squidclient/3.5.15-20160229-r13997
Accept: */*
Connection: close


--
2016/03/11 02:11:20.480 kid1| 85,2| client_side_request.cc(741)
clientAccessCheckDone: The request GET
http://www.oggi.it/global_assets/js/searchform.js is ALLOWED; last ACL
checked: all
2016/03/11 02:11:20.480 kid1| 85,2| client_side_request.cc(717)
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2016/03/11 02:11:20.480 kid1| 85,2| client_side_request.cc(741)
clientAccessCheckDone: The request GET
http://www.oggi.it/global_assets/js/searchform.js is ALLOWED; last ACL
checked: all
2016/03/11 02:11:20.480 kid1| 33,2| QosConfig.cc(177) doTosLocalHit: QOS:
Setting TOS for local hit, TOS=48
2016/03/11 02:11:20.480 kid1| 88,2| client_side_reply.cc(524) cacheHit:
clientProcessHit: Vary detected!
2016/03/11 02:11:20.481 kid1| 17,2| FwdState.cc(133) FwdState: Forwarding
client request local=192.192.192.212:3128 remote=192.192.192.212:46799 FD 8
flags=1, url=http://www.oggi.it/global_assets/js/searchform.js
2016/03/11 02:11:20.481 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: http://www.oggi.it/global_assets/js/searchform.js'
via www.oggi.it
2016/03/11 02:11:20.568 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths:
Found sources for 'http://www.oggi.it/global_assets/js/searchform.js'
2016/03/11 02:11:20.568 kid1| 44,2| peer_select.cc(281) peerSelectDnsPaths:
always_direct = DENIED
2016/03/11 02:11:20.568 kid1| 44,2| peer_select.cc(282) peerSelectDnsPaths:
never_direct = DENIED
2016/03/11 02:11:20.568 kid1| 44,2| peer_select.cc(286) peerSelectDnsPaths:
DIRECT = local=0.0.0.0 remote=40.114.235.204:80 flags=1
2016/03/11 02:11:20.568 kid1| 44,2| peer_select.cc(295) peerSelectDnsPaths:
timedout = 0
2016/03/11 02:11:20.642 kid1| 11,2| http.cc() sendRequest: HTTP Server
local=192.192.192.212:29033 remote=40.114.235.204:80 FD 11 flags=1
2016/03/11 02:11:20.642 kid1| 11,2| http.cc(2223) sendRequest: HTTP Server
REQUEST:
-
GET /global_assets/js/searchform.js HTTP/1.1
Host: www.oggi.it
User-Agent: squidclient/3.5.15-20160229-r13997
Accept: */*
Cache-Control: max-age=2628000
Connection: keep-alive


--
2016/03/11 02:11:20.642 kid1| ctx: enter level  0:
'http://www.oggi.it/global_assets/js/searchform.js'
2016/03/11 02:11:20.642 kid1| HttpMsg::parse: Too large reply header (0 >
65536
2016/03/11 02:11:20.901 kid1| ctx: exit level  0
2016/03/11 02:11:20.901 kid1| 17,2| FwdState.cc(655)
handleUnregisteredServerEnd: self=0x3197708*2 err=0x2f87558
http://www.oggi.it/global_assets/js/searchform.js
2016/03/11 02:11:20.901 kid1| 4,2| errorpage.cc(1262) BuildContent: No
existing error page language negotiated for ERR_TOO_BIG. Using default error
file.
2016/03/11 02:11:20.902 kid1| 20,2| store.cc(954) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/03/11 02:11:20.902 kid1| 33,2| QosConfig.cc(145) doTosLocalMiss: QOS:
Preserving TOS on miss, TOS=0
2016/03/11 02:11:20.902 kid1| 88,2| client_side_reply.cc(2001)
processReplyAccessResult: The reply for GET
http://www.oggi.it/global_assets/js/searchform.js is ALLOWED, because it
matched all
2016/03/11 02:11:20.902 kid1| 11,2| client_side.cc(1391) sendStartOfMessage:
HTTP Client local=192.192.192.212:3128 remote=192.192.192.212:46799 FD 8
flags=1
2016/03/11 02:11:20.902 kid1| 11,2| client_side.cc(1392) sendStartOfMessage:
HTTP Client REPLY:
-
HTTP/1.1 502 Bad Gateway
Server: squid
Mime-Version: 1.0
Date: Fri, 11 Mar 2016 00:11:20 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3876
X-Squid-Error: ERR_TOO_BIG 0
X-Cache: MISS from proxy.netgatesss.com
Connection: close


--
2016/03/11 02:11:20.902 kid1| 20,2| store.cc(954) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/03/11 02:11:20.902 kid1| 33,2| client_side.cc(815) swanSong:
local=192.192.192.212:3128 remote=192.192.192.212:46799 flags=1
2016/03/11 02:11:20.902 kid1| 20,2| store.cc(954) checkCachable:
StoreEntry::checkCachable: NO: not cachable





Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-10 Thread Eliezer Croitoru

Hey,

I wanted to ask something very specific: how often do you restart the 
service, if at all? What shutdown_lifetime 
[http://www.squid-cache.org/Doc/config/shutdown_lifetime/] are you using?


Eliezer

On 09/03/2016 15:17, Heiler Bemerguy wrote:


Hi Amos,

Now you can help me track it down.. lol... can you? I don't know
what debug_options (apart from 88,3) I should enable.
I just know that disabling range_offset_limit will eliminate this issue,
because then it won't even try to cache range requests. Also, it didn't
happen when I was using AUFS.

Another examples:
2016/03/09 00:27:54.016 kid2| 88,3| client_side_reply.cc(463) cacheHit:
clientCacheHit: http://au.download.windowsupdate.com/c/msdownload/upda
te/software/secu/2016/02/ie11-windows6.1-kb3139929-x64_55bffa59079eb8da45400d6b0432262f96adb3b0.psf,
0 bytes
2016/03/09 00:27:54.016 kid2| 88,3| client_side_reply.cc(470) cacheHit:
clientCacheHit: swapin failure for http://au.download.windowsupdate.co
m/c/msdownload/update/software/secu/2016/02/ie11-windows6.1-kb3139929-x64_55bffa59079eb8da45400d6b0432262f96adb3b0.psf


There are some 0 bytes responses (giving a swapin failure) that won't
give me much trouble because files are small, like this:
2016/03/09 09:57:25.107 kid2| 88,3| client_side_reply.cc(463) cacheHit:
clientCacheHit:
http://www.mte.gov.br/images/Imagens/Noticias/2016/BRICS31.JPG, 0 bytes
2016/03/09 09:57:25.107 kid2| 88,3| client_side_reply.cc(470) cacheHit:
clientCacheHit: swapin failure for
http://www.mte.gov.br/images/Imagens/Noticias/2016/BRICS31.JPG

Looking the source code:
 debugs(88, 3, "HIT object being deleted. Ignore the HIT.");
 return;
 }

 StoreEntry *e = http->storeEntry();

 HttpRequest *r = http->request;

 debugs(88, 3, "clientCacheHit: " << http->uri << ", " <<
result.length << " bytes");

 if (http->storeEntry() == NULL) {
 debugs(88, 3, "clientCacheHit: request aborted");

I don't get this "deleted", so the object is not being deleted, and
"request aborted" is not being show too..

Best Regards,






Re: [squid-users] PURGE ERR_TOO_BIG

2016-03-10 Thread Eliezer Croitoru

Sorry I got confused with my email service issue.

+1 Alex
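
In other words, something like this (using the -m option Alex points out below; PURGE also has to be allowed in squid.conf):

squidclient -h 192.192.192.212 -p 3128 -m PURGE http://www.oggi.it/global_assets/js/searchform.js

# and in squid.conf, if not already there:
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE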

Eliezer

On 11/03/2016 03:13, Alex Rousskov wrote:

On 03/10/2016 04:43 PM, joe wrote:

trying to purge url
squidclient -h192.192.192.212 -p3128 PURGE
http://www.oggi.it/global_assets/js/searchform.js


Missing squidclient -m option to specify the PURGE _method_.

Alex.



Re: [squid-users] Squid 3.5.15-1 is available for Ubuntu 14.04 LTS (online repo ubuntu.diladele.com)

2016-03-09 Thread Eliezer Croitoru

First thanks!

Would it be possible to also provide Squid packages for other versions of 
Ubuntu?


Eliezer

On 10/03/2016 00:00, Rafael Akchurin wrote:

Hello all,

We have rebuilt the Debian (testing) package for Squid 3.5.15-1 for
Ubuntu 14.04 LTS with libecap3.

If you need to install the latest Squid 3.5 on Ubuntu 14.04 LTS please
take a look at online repository at http://ubuntu.diladele.com. To use
the repo run the following commands:

# add repo

echo "deb http://ubuntu.diladele.com/ubuntu/ trusty main" >
/etc/apt/sources.list.d/ubuntu.diladele.com.list

# update the apt cache

apt-get update

# install

apt-get install libecap3

apt-get install squid-common

apt-get install squid

apt-get install squidclient

The following tutorial shows how we rebuilt Squid 3.5.15 on Ubuntu 14.04
LTS http://docs.diladele.com/tutorials/build_squid_ubuntu14/index.html.

All questions/comments and suggestions are welcome at
supp...@diladele.com  or here in the
mailing list.

Best regards,

Rafael Akchurin

Diladele B.V.

--

Please take a look at Web Safety - our ICAP based web filter server for
Squid proxy at http://www.diladele.com.





Re: [squid-users] SSL Peek and Splice with SIP over TCP

2016-03-09 Thread Eliezer Croitoru

On 09/03/2016 21:31, Jason Haar wrote:

Or use socat. I have used it to allow ancient SSLv3-only clients to
communicate with TLS-only servers.

Jason


Would it be possible to put haproxy in front as an SSL termination proxy and 
pass the plain TCP request to Squid, which would result in a situation similar 
to socks?
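
Something like this minimal haproxy sketch is what I have in mind (names, port and certificate path are placeholders; whether Squid accepts the decrypted traffic depends on how its receiving port is configured):

frontend tls_in
    bind *:443 ssl crt /etc/haproxy/combined.pem
    mode http
    default_backend squid_backend

backend squid_backend
    mode http
    server squid1 127.0.0.1:3128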


Eliezer


Re: [squid-users] Need advice on some crazy access control requirements

2016-03-10 Thread Eliezer Croitoru

Hey Victor,

I do not think it's too crazy.
It is a very common statement in pharmacy law not to operate "heavy" 
machinery while taking a specific medicine. In most cases the rule is there 
because operating such tools (light or heavy) requires a specific amount of 
concentration and attention from the worker/operator, and since the medicine 
changes exactly that, the restriction is the right phrase.


I think it also depends, in many cases, on the target of the ACL/policy.
For example, there are many places that do allow Apple services (which include 
music, videos, books and much more) but do not allow YouTube, or in some 
places even Google or Bing. If, for example, a medical operating room 
had Internet access, it could potentially be hacked, and in 
many places the common policy is that VOIP (over the Internet) is in use 
there; it is one of the tools of the room. The staff in the 
room tend to be very trusted, but you cannot rely on specific tools to 
replace the mind that decides on the right thing to do "mid-flight" 
when there are tiny saws and a scalpel on the stand (and vice versa, the mind 
cannot replace specific tools).


The first thing you can do in such a scenario is analyze the 
network traffic using squid.
It can give a lot of output and feedback even if used only as a simple 
logging tool.
Once you have a clear view of what you are handling, you can see 
what the realistic options are for this specific group of Internet 
users. For example, if they are trying to use a proxy service on 
ports other than 443 and 80, your goal would be to apply a strict policy 
rather than simply monitoring the HTTP and HTTPS connections.


I do not have experience with psychology, but I do think that if most of 
the undesired sites are blocked it would satisfy most ACL\policy ideas.
I also think it's a really good idea to find the right tactic so that 
such a crazy ACL requirement can be discussed with, and understood by, the 
requester.


I do not remember if squid can "stop" a download after a specific amount 
of KB\MB for one file, but then again it is still possible to download 
files in chunks...
So it's not really impossible, but it is indeed not an easy task to 
implement. I also know there are a couple of products that do, in a way, 
what you just described. The issue with them in most cases is that they 
cost more than a dime, and sometimes such a requirement gets dropped 
after hearing only part of the costs.
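
To make it a bit more concrete, a hedged squid.conf sketch of that kind of 
policy (the ACL names and the IP list file are hypothetical and the numbers are 
only examples):

# the group that is permitted to download files
acl downloaders src "/etc/squid/downloader_ips.txt"
# streaming replies, matched by the returned MIME type
acl media rep_mime_type -i ^video/ ^audio/
http_reply_access allow downloaders
http_reply_access deny media
# cap reply bodies at 10 MB for everyone outside the group
reply_body_max_size 10 MB !downloaders

It is far from water-tight (chunked downloads, mislabeled MIME types and so 
on), but it shows the directions squid already provides.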


Eliezer

On 11/03/2016 05:31, Victor Sudakov wrote:

Dear Colleagues,

New Internet access rules are being introduced in our company, among
them there is a requirement to have special groups of Internet users
who are permitted to:

1. Download files from the Internet.

2. Use Web forums.

3. Use streaming audio/video.

By default users should have no access to the above facilities.

These requirements may sound stupid and vague to some, but is there a
way to accomodate them at least partially, without keeping long lists
of prohibited file extensions and domains, which is very
counterproductive?

I am perfectly aware that an advanced Internet user will be able to
circumvent those prohibitions, but still, any recipes? I have looked
in http://wiki.squid-cache.org/SquidFaq/SquidAcl but found nothing
very useful.




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] after i checked via firebug ( firefox addon) , i found waiting time is very high via monitor hit small object , how i do reduce the waiting time for hit object ??

2016-03-14 Thread Eliezer Croitoru

Hey There,

I am not sure what is causing it; it might be some network related 
issue, but it is hard to tell without more data.

Can you please share the related access.log output for these requests?
Are you testing internally or against the Internet?
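
If it is not too much trouble, a hedged logformat addition like the following 
(standard format codes; the file path is only an example) would also show how 
many milliseconds squid itself spends on each request (%tr):

logformat timing %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %Sh
access_log /var/log/squid/timing.log timing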

Eliezer

On 14/03/2016 15:57, johnzeng wrote:


Hello Dear Sir :

i hope to optimize the cache behaviour. recently, after checking via firebug (a
firefox addon),

i found that the waiting time is very high when monitoring hits on small objects

Maybe there is some error in my refresh_pattern ( reload-into-ims ) ?

how do i reduce the waiting time for a hit object ??

for example :

Dns lookup 0ms 0ms

connecting 0ms 0ms

sending 0ms 0ms

waiting 0ms 761ms

receiving 761ms 0ms


This is part config


quick_abort_min -1 KB
quick_abort_pct 50
collapsed_forwarding off
request_entities on
relaxed_header_parser on

refresh_pattern \.html$ 480 50% 22160 reload-into-ims
refresh_pattern \.htm$ 480 50% 22160 reload-into-ims
refresh_pattern \.class$ 10080 90% 43200 reload-into-ims
refresh_pattern \.zip$ 10080 90% 43200 reload-into-ims

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-14 Thread Eliezer Croitoru

Hey,

I have a question: in your scenario, if you were able to statically 
cache all these updates using nginx or another cache_peer, would that 
sound OK, or good enough?
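
Just to illustrate what I had in mind, a hedged sketch (the nginx address and 
the update domains are placeholders):

# hypothetical nginx instance holding the static update files
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest name=updates
acl win_updates dstdomain .windowsupdate.com .update.microsoft.com
cache_peer_access updates allow win_updates
cache_peer_access updates deny all
never_direct allow win_updates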


Eliezer

On 14/03/2016 16:32, Heiler Bemerguy wrote:


Hi Eliezer and Joe!!!

Thank you very much for your support.

I have done a test here too. I've replaced 3.5.15 with 3.5.14 and the
high bandwidth (associated with SWAPFAIL) is GONE.

I've checked twice the sources diffs between 14 and 15 and can't tell
what break this.. but I'm running 3.5.14 for 3 days without any
download-loop sucking all our bandwidth.

I'm still having a SWAPFAIL here and there, and a lot of MISSES for
files that should have been cached... but no high bandwidth !!



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4: Cloudflare SSL connection problem

2016-04-12 Thread Eliezer Croitoru

  
  
What "dig www.cloudflare.com" results with?
Also what OS are you using? I am using CentOS 7 up to date...

Eliezer

On 12/04/2016 21:39, Yuri Voinov wrote:

root @ cthulhu /patch # openssl s_client -cipher
  'ECDHE-ECDSA-AES128-GCM-SHA256' -connect www.cloudflare.com:443

  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 4: Cloudflare SSL connection problem

2016-04-12 Thread Eliezer Croitoru

  
  
Hey Yuri,

I will try to test it with a couple of versions of 4.0.x.
But it's weird...
The reason it's weird is that I have some level of trust in, and understanding
of, this test:
https://www.ssllabs.com/ssltest/analyze.html?d=www.cloudflare.com=198.41.214.162

I am not an SSL expert in general but I can use openssl client to
test and verify things.
I have tested this scenario with openssl like this:
# openssl s_client -cipher 'ECDHE-ECDSA-AES256-SHA' -connect
www.cloudflare.com:443
CONNECTED(0003)
139990857013152:error:14077410:SSL
routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake
failure:s23_clnt.c:744:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 119 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

It seems that openssl does something here, which might be my fault, but
if squid 3.5.16 works fine and 4.0.8 does not, it might be related to
the interaction between the openssl library and the service, with squid
only displaying the issue in the nice html page.
I do not know what service cloudflare uses or how it all works, but
if openssl states that there is an issue with what the service is
sending, or with its own analysis of it, then the issue is at the openssl
level rather than in squid.
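
One more thing that might be worth checking (an assumption on my side, since
cloudflare mentioned SNI): repeat the openssl test with the server name sent
explicitly and see if the handshake result changes:

# openssl s_client -servername www.cloudflare.com -cipher 'ECDHE-ECDSA-AES128-GCM-SHA256' -connect www.cloudflare.com:443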

I am sure that cloudflare, openssl and squid users, admins
and devs all want to resolve the issue.

Eliezer

On 12/04/2016 18:29, Yuri Voinov wrote:


  
  
UPDATE:

Every failed connect produces the following sequence in access.log:

1460474791.631  15444 192.168.100.103 NONE_ABORTED/200 0 CONNECT
198.41.215.162:443 - ORIGINAL_DST/198.41.215.162 -
1460474791.658  0 192.168.100.103 NONE/503 3951 GET https://www.cloudflare.com/*
- HIER_NONE/- text/html

Note: 198.41.215.162 is the current cloudflare.com IP.

Also: NONE_ABORTED/200 often occurs in access.log with other accessible sites
as well.

12.04.16 20:03, Yuri Voinov wrote:
> UPDATE:
>
> https://i1.someimage.com/b8w5dFz.png
>
> This is the answer from Cloudflare support.
>
> But: 3.5.16 can deal with ECDSA TLS 1.2 but 4.0.8 can not?
>
> 12.04.16 17:55, Yuri Voinov wrote:
>> Does anybody face this problem with 4.0.8:
>>
>> https://i1.someimage.com/3lD2cvV.png
>>
>> ?
>>
>> It is accompanied by this error in cache.log:
>>
>> 2016/04/12 17:39:38 kid1| Error negotiating SSL on FD 54:
>> error::lib(0):func(0):reason(0) (5/0/0)
>>
>> and "NONE/503" in access.log.
>>
>> Without the proxy it works like a charm. 3.5.16 with a similar squid.conf
>> works like a charm.
>>
>> NB: Cloudflare support said that the key features for SSL are SNI and
>> ECDSA now. AFAIK, 4.0.8 fully supports these features.
>>
>> Any advice will be helpful.
>>
>> Yes, I know this looks like DDoS protection on Cloudflare. But WTF?
>> Any workaround is required. Half the Internet is hosted on Cloudflare.
>>
>> WBR, Yuri
  
  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache.org misconfigured

2016-04-10 Thread Eliezer Croitoru
Hey Yuri,

I filed a bug report about this a couple of times, and the answer that I received, 
which is also the actual case, is:
There is a sync process to the squid-cache mirror\cache web servers.
Since the synchronization "resets" the permissions of the files, apache cannot 
access the web page files.
Because of this we see a forbidden access page once in a while, while the sync 
is running.

I do not know this specific system, and I think that with the budget and tools 
of the project it's OK to have this kind of "down" time.
Eliezer

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Saturday, April 9, 2016 1:52 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid-cache.org misconfigured

https://i1.someimage.com/Mv9LdJN.png
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache.org misconfigured

2016-04-10 Thread Eliezer Croitoru

  
  
Hey Yuri,

I will try to put up a "status" page for some of the project web
services in order to describe\explain the current status of the down
time.

How long is it "long enough" that you mean\know?

If for example the project page would be down for a whole day as a
rest day it would be acceptable if the work days and hours are
during the week.
If the case is that an upgrade\update of systems is done only on
weekends without preparations during the week then it would be very
acceptable to not have down time during the weekend.

I do not know RedHat or SUSE and others policy of
updates\upgrades\patches and other things but they do not reveal to
me their "secret" for high up-time.
My assumption is that it requires more "time" more "work hours" more
"voluntaries" and many other things which are too much for a single
human to handle by himself alone.

Eliezer

* I believe that more support for any project is one of the big
secrets of the black magic of up-time.

On 10/04/2016 14:31, Yuri Voinov wrote:


  Yep, I understand. It's just that this occurs quite often and takes a long time.


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache.org misconfigured

2016-04-10 Thread Eliezer Croitoru

  
  
Even two hours is acceptable!
If someone requires the squid sources I have it on my own web
service at:
http://ngtech.co.il/squid/src/

In general it is up to date for the stable release, but it takes me time
to be in "real time" sync with the project, since my working hours differ
from those of the other project volunteers and from the
announcements\releases.

Eliezer

On 10/04/2016 14:56, Yuri Voinov wrote:


  
  10.04.16 17:54, Eliezer Croitoru пишет:
  > Hey Yuri,

  >

  > I will try to put up a "status" page for some of the project
  web services in order to describe\explain the current status of
  the down time.
  Good idea, Eliezer. Really good.
  >

  > How long is it "long enough" that you mean\know?
  I've observed half-hour - hour. Usually less, but it depends...

  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-cache.org misconfigured

2016-04-10 Thread Eliezer Croitoru
I do not know the reason, but it seems that:

https://rsync.samba.org/ftp/rsync/src/rsync-3.1.0-NEWS

has a couple of things which could solve the issue, for example:

- Added the --usermap/--groupmap/--chown options for manipulating file
  ownership during the copy.

I will try to update the bugzilla report later with this info, in the hope that
it will be resolved.
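
For example, something along these lines on the mirror side (the paths and the
apache user here are only placeholders):

rsync -a --chown=apache:apache /sync/source/ /var/www/html/

so the files end up owned by the web server user right after the copy instead
of being "reset" to the uploader's ownership.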

 

Eliezer

 

From: Yuri Voinov [mailto:yvoi...@gmail.com] 
Sent: Sunday, April 10, 2016 2:58 PM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid-cache.org misconfigured

 


I go to the project site a few times per day - tracking new snapshots/fixed bugs.
So the periodic downtime is annoying.

10.04.16 17:54, Eliezer Croitoru wrote:
> Hey Yuri,
>
> I will try to put up a "status" page for some of the project
> web services in order to describe\explain the current status of
> the down time.
>
> How long is it "long enough" that you mean\know?
>
> If for example the project page would be down for a whole day
> as a rest day it would be acceptable if the work days and hours
> are during the week.
> If the case is that an upgrade\update of systems is done only
> on weekends without preparations during the week then it would be
> very acceptable to not have down time during the weekend.
>
> I do not know RedHat or SUSE and others policy of
> updates\upgrades\patches and other things but they do not reveal
> to me their "secret" for high up-time.
>
> My assumption is that it requires more "time" more "work
> hours" more "voluntaries" and many other things which are too much
> for a single human to handle by himself alone.
>
> Eliezer
>
> * I believe that more support for any project is one of the
> big secrets of the black magic of up-time.
>
> On 10/04/2016 14:31, Yuri Voinov wrote:
>> Yep, I understand. Simple this occurs some often and take
>> long enough time.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] FATAL: Ipc::Mem::Segment::create failed to shm_open(/squid-cf__metadata.shm): (13) Permission denied

2016-04-11 Thread Eliezer Croitoru

  
  
Hey,

There are a couple of things which are unclear about both the system you
are running and the situation.
The post mentions CentOS 6.5 and an SELinux policy for one specific
thing.
The specific policy in the post seems "sensible", but the default
policy for squid in CentOS works fine as far as I can tell.
It is mentioned that this issue appeared after installing squid 3.5.0 on
CentOS. Since I am building the unofficial CentOS RPMs, it's
pretty simple for me to understand that there are scenarios in which
you would be better off without SELinux or other restrictions or
"binding" tools imposed by the OS on the running process\software\script.
Specifically, the pid file is not related in any way to the SELinux policy
mentioned in the blog post..
If you can post the content of the "te" file from the audit2allow
result, it would help to understand the issue better.
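
If it helps, the usual way to capture it (assuming the denials land in
/var/log/audit/audit.log) is roughly:

grep squid /var/log/audit/audit.log | audit2allow -m squidlocal
# and, once the rules look sane:
grep squid /var/log/audit/audit.log | audit2allow -M squidlocal
semodule -i squidlocal.pp

The first command prints the generated "te" rules so they can be reviewed (and
posted here) before anything is installed.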

Have you tried my RPMs? If something is missing in them let me know
please.

Eliezer

On 11/04/2016 22:11, amadaan wrote:


So I actually dug deeper into this issue and found the underlying error from 
squid: ERROR: Could not read pid file
	/var/run/squid.pid: (13) Permission denied

I tried one of the responses from one of the forums, saying the issue is with
SELinux being enabled.
I disabled that, and it worked fine after that.

But that means I am removing security from my system. Now this awesome blog
tells me how to add policy rules to allow your new software to run when
SELinux is enabled.

http://sharadchhetri.com/2014/03/07/selinux-squid-service-failed-startrestart/

Quite helpful, but I am not sure that is the real solution. Can any changes be
done on the squid end to avoid the steps above? Any suggestions on this will be
of help.

Thanks


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP RDP on squid Pfsense not woking

2016-04-11 Thread Eliezer Croitoru

  
  
Did you try to enable all the traffic as I suggested in the other
email?

Eliezer

On 11/04/2016 23:54, --Ahmad-- wrote:

On Apr 11, 2016, at 9:40 AM, --Ahmad-- <> wrote:

Hi dev,

when i use socks5 client on my pc to connect to squid proxy on centos, i can
tunnel RDP traffic using squid.

recently when i changed to pfsense, I'm unable to use RDP using proxy.

MY CACHE PEER proxy is 10.12.0.32, if i use it directly i can use RDP.

but RDP from pfsense always forbidden and i already allowed rdp port in the
ports in pfsense squid config .!

i will paste my squid config below and the error i face when i try.

===
[2.2.2-RELEASE][admin@pfSense]/root: squid -k parse
2016/04/11 09:25:53| Startup: Initializing Authentication Schemes ...
2016/04/11 09:25:53| Startup: Initialized Authentication Scheme 'basic'
2016/04/11 09:25:53| Startup: Initialized Authentication Scheme 'digest'
2016/04/11 09:25:53| Startup: Initialized Authentication Scheme 'negotiate'
2016/04/11 09:25:53| Startup: Initialized Authentication Scheme 'ntlm'
2016/04/11 09:25:53| Startup: Initialized Authentication.
2016/04/11 09:25:53| Processing Configuration File: /usr/local/etc/squid/squid.conf (depth 0)
2016/04/11 09:25:53| Processing: http_port 10.12.140.254:8080
2016/04/11 09:25:53| Processing: http_port 127.0.0.1:8080
2016/04/11 09:25:53| Processing: icp_port 0
2016/04/11 09:25:53| Processing: dns_v4_first off
2016/04/11 09:25:53| Processing: pid_filename /var/run/squid/squid.pid
2016/04/11 09:25:53| Processing: cache_effective_user proxy
2016/04/11 09:25:53| Processing: cache_effective_group proxy
2016/04/11 09:25:53| Processing: error_default_language en
2016/04/11 09:25:53| Processing: icon_directory /usr/pbi/squid-amd64/local/etc/squid/icons
2016/04/11 09:25:53| Processing: visible_hostname mpwh
2016/04/11 09:25:53| Processing: cache_mgr admin@localhost
2016/04/11 09:25:53| Processing: access_log /var/squid/logs/access.log
2016/04/11 09:25:53| Processing: cache_log /var/squid/logs/cache.log
2016/04/11 09:25:53| Processing: cache_store_log none
2016/04/11 09:25:53| Processing: netdb_filename /var/squid/logs/netdb.state
2016/04/11 09:25:53| Processing: pinger_enable on
2016/04/11 09:25:53| Processing: pinger_program /usr/pbi/squid-amd64/local/libexec/squid/pinger
2016/04/11 09:25:53| Processing: logfile_rotate 0
2016/04/11 09:25:53| Processing: debug_options rotate=0
2016/04/11 09:25:53| Processing: shutdown_lifetime 3 seconds
2016/04/11 09:25:53| Processing: acl localnet src 10.12.140.0/24 127.0.0.0/8
2016/04/11 09:25:53| Processing: forwarded_for on
2016/04/11 09:25:53| Processing: uri_whitespace strip
2016/04/11 09:25:53| Processing: acl dynamic urlpath_regex cgi-bin \?
2016/04/11 09:25:53| Processing: cache deny dynamic

Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-07 Thread Eliezer Croitoru

Thanks for the interpretation.

I didn't find any bug report related to this subject.
I will try to add it to bugzilla later.

Eliezer

On 08/03/2016 04:00, Amos Jeffries wrote:

On 8/03/2016 10:00 a.m., Eliezer Croitoru wrote:


I do not know exactly what this means from the info page:
 Maximum number of file descriptors:   81920


80K FD are available to Squid.

The rest gets strange..


 Largest file desc currently in use:   6157
 Number of file desc currently in use: 8216


FD numbers 0-6157 are being used for 8216 concurrent connections.

Sounds impossible? not with SMP workers. The 6K is really 0-6157
per-worker. So 6-12K FDs in use, and ~8K fits right in there.

Its a minor bug in the report display that they are not having more
columns with separate numbers for each worker.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid with sslbump blocking Netflix

2016-03-02 Thread Eliezer Croitoru

In some places the law can prohibit the usage of pinned certificates.

Eliezer

On 02/03/2016 21:09, Yuri Voinov wrote:

Nobody can fight SSL pinning in proprietary apps.

The only way I see is to put Netflex under splice ACL and do not do SSL
bump for all Netflex CDN.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid :D

2016-03-03 Thread Eliezer Croitoru

This is where you need to share your squid.conf..
Also what was the result of the query I mentioned?

Another one to try is:
http://www.squid-cache.org/Doc/config/dns_v4_first/

try adding to the end of squid.conf
dns_v4_first on

All The Bests,
Eliezer

On 04/03/2016 00:42, Dan Charlesworth wrote:

Thanks for your input Eliezer.

I've tested against various public DNS servers at this point so I'm
ruling out any DNS-server-side problems. The only time there's any
timeouts or slowness is when the request is going through squid. Doesn't
seem to matter which HTTP server I'm requesting, whether it returns
multiple IPs or not.

Also worth noting that this company has about 30 other sites with mostly
identical network topologies and equipment where it's completely fine.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid :D

2016-03-06 Thread Eliezer Croitoru
If you want to somehow use a skype\irc session to see what can be done 
without all the hassle of emails back and forth let me know.


Eliezer

On 06/03/2016 13:55, Dan Charlesworth wrote:

For what it's worth, I've now tried disabling IPv6 via sysctl and it
didn't make any difference.

Appreciate the advice so far. More from me tomorrow.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Survey on assertions: When the impossible happens

2016-03-01 Thread Eliezer Croitoru

Hey Eray,

Indeed all of these are good options and sysadmins should be able to handle 
them, but.. in specific cases it's not easy.

The cases I know about are:
- SAT links (slow or costly)
- Sensitive acl\security systems
- Very low quality distance wireless links

In the case of an ACL\security system, bypass or bridging might not be an option 
if the stakes are high (this is where I have asked myself a couple of times why 
IT managers don't like to pay for industrial guarantees).


I have been happy with squid for a very long time and I couldn't understand why 
a friend of mine wasn't happy with it. Only when I was with him and he 
showed me what happens when he tries to install and run squid did I 
understand the blocking issue. Eventually, a specially customized 
proxy was the answer for him.


Thanks,
Eliezer

On 01/03/2016 12:55, Eray Aslan wrote:

False dichotomy.  There is always something you can do.  Re-route the
traffic, throw the bypass switch, bridge the interfaces, don't use the
cache, downgrade, take preventive measures uptream in the flow...

i.e. let the sysadmin/system architect handle the emergencies.  The case
above is not different from squid box(es) going offline for whatever
reason.  Worst case:  Live through the outage, learn from it and
hopefully design your systems accordingly in the future.

-- Eray


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid with sslbump blocking Netflix

2016-03-02 Thread Eliezer Croitoru

On 02/03/2016 21:33, Yuri Voinov wrote:


Yes, and in some places the law prohibit SSL bump completely

But AFAIK here is technical list, not lawer, is it?;)


Yuri,

You are right, but since some of us have legal obligations under certain laws 
and do not live in a desert on the moon or the sun like Google or 
other services, I tend to mention this side, since it is not obvious 
to everybody.


Also, I do understand why netflix would want to protect their profits 
and investment in any of their services. Like many 
others, they do not like their plate of food being taken away while they are 
still smelling or tasting the result of their cooking skills.
There is a saying about eating raw, uncooked food which I fully 
understand, and it applies to this scenario as well: if it was cooked for you, 
you need to at least say thank you, and in many ways the only way to do so is 
by paying a couple of bucks.
The only case in which, I think, the cook and the owner of the food would allow 
it to be taken is when doing so will not hurt him or any of 
the related parties' life\soul.
Maybe not everybody sees it this way, but the possibility of 
pinning a certificate is reserved for anyone who needs a basic 
safety-net for his basic needs. The way I see it, the only case in which I 
would live in a country that prohibits the use of certificate pinning is 
when this country provides me a basic safety-net, i.e. a way to earn 
my food (and a couple of other basic needs..).


If for example the "lets encrypt" idea\program was designed to give a 
safety-net for many organizations which are fighting to survive in this 
very wide Internet with so many predators within it, then I vote +1 for 
them but if the idea was meant to or will cripple the encryption world I 
would vote -10^100.


So it's not really a lawyer thing, but rather a simple understanding of 
this very, very beautiful and amazing world.


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid 

2016-03-02 Thread Eliezer Croitoru

Hey Dan,

What dig+nslookup queries did you tested for?

Eliezer

On 03/03/2016 07:39, Dan Charlesworth wrote:

Right now we have 1 squid box (out of a lot), running 3.5.13, which does 
something like this for every request, taking about 10 seconds:

2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup: 
idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1745) idnsALookup: 
idnsALookup: buf is 29 bytes for httpbin.org, id = 0x8528
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1683) 
idnsSendSlaveQuery: buf is 29 bytes for httpbin.org, id = 0x69c2
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 93 bytes from 192.231.203.132:53
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x733a, -3 answers
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1195) idnsGrokReply: 
idnsGrokReply: error Name Error: The domain name does not exist. (3)
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns0 QID 0x69c2: timeout
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 110 bytes from 172.16.100.4:53
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x69c2, 0 answers
2016/03/03 16:30:58.885 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 246 bytes from 172.16.100.5:53
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x8528, 1 answers

AND YET, every nslookup or dig done at the command line on the same server is 
lightning fast. I’ve tried local and ISP-level DNS servers and get the same 
result.

What could be going on here?



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid 

2016-03-02 Thread Eliezer Croitoru

Can you try the following command:
dig -x 10.100.128.1

Eliezer

On 03/03/2016 08:04, Dan Charlesworth wrote:

Like this:

# time nslookup httpbin.org
Server: 192.231.203.3
Address:192.231.203.3#53

Non-authoritative answer:
Name:   httpbin.org
Address: 54.175.222.246

real0m0.026s
user0m0.001s
sys 0m0.004s


# time dig httpbin.org

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.37.rc1.el6_7.6 <<>> httpbin.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44477
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 4

;; QUESTION SECTION:
;httpbin.org.   IN  A

;; ANSWER SECTION:
httpbin.org.577 IN  A   54.175.222.246

;; AUTHORITY SECTION:
httpbin.org.6161IN  NS  ns-769.awsdns-32.net.
httpbin.org.6161IN  NS  ns-1074.awsdns-06.org.
httpbin.org.6161IN  NS  ns-410.awsdns-51.com.
httpbin.org.6161IN  NS  ns-1756.awsdns-27.co.uk.

;; ADDITIONAL SECTION:
ns-410.awsdns-51.com.   9966IN  A   205.251.193.154
ns-769.awsdns-32.net.   13639   IN  A   205.251.195.1
ns-1074.awsdns-06.org.  11459   IN  A   205.251.196.50
ns-1756.awsdns-27.co.uk. 11489  IN  A   205.251.198.220

;; Query time: 21 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Thu Mar  3 17:03:04 2016
;; MSG SIZE  rcvd: 246

real0m0.026s
user0m0.004s
sys 0m0.001s



On 3 Mar 2016, at 4:55 PM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:

Hey Dan,

What dig+nslookup queries did you tested for?

Eliezer

On 03/03/2016 07:39, Dan Charlesworth wrote:

Right now we have 1 squid box (out of a lot), running 3.5.13, which does 
something like this for every request, taking about 10 seconds:

2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup: 
idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1745) idnsALookup: 
idnsALookup: buf is 29 bytes for httpbin.org, id = 0x8528
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1683) 
idnsSendSlaveQuery: buf is 29 bytes for httpbin.org, id = 0x69c2
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 93 bytes from 192.231.203.132:53
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x733a, -3 answers
2016/03/03 16:30:48.884 kid1| 78,3| dns_internal.cc(1195) idnsGrokReply: 
idnsGrokReply: error Name Error: The domain name does not exist. (3)
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:53.884 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns0 QID 0x69c2: timeout
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 110 bytes from 172.16.100.4:53
2016/03/03 16:30:53.885 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x69c2, 0 answers
2016/03/03 16:30:58.885 kid1| 78,3| dns_internal.cc(1384) idnsCheckQueue: 
idnsCheckQueue: ID dns8 QID 0x8528: timeout
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1277) idnsRead: idnsRead: 
starting with FD 7
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1323) idnsRead: idnsRead: 
FD 7: received 246 bytes from 172.16.100.5:53
2016/03/03 16:30:58.886 kid1| 78,3| dns_internal.cc(1130) idnsGrokReply: 
idnsGrokReply: QID 0x8528, 1 answers

AND YET, every nslookup or dig done at the command line on the same server is 
lightning fast. I’ve tried local and ISP-level DNS servers and get the same 
result.

What could be going on here?



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.x install problem

2016-03-03 Thread Eliezer Croitoru

On 03/03/2016 14:35, Jorgeley Junior wrote:

to install squid in /etc use "--prefix=/etc/squid"

The standard way is:
./configure --prefix=/usr/local/squid

and it's also normal in some systems to use the /opt such as
./configure --prefix=/opt/squid

You will need to set permissions and users manually to avoid all sorts of 
weird things.
In any of the cases above, just install the squid deb package first and then 
tweak things around to match your setup (that way the dependencies and a 
couple of nice scripts get installed for you).
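
For completeness, the usual sequence then looks roughly like this (assuming the
"proxy" user that the Debian package creates):

./configure --prefix=/usr/local/squid
make && make install
chown -R proxy:proxy /usr/local/squid/var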


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Bizarrely slow, timing out DNS only via Squid :D

2016-03-07 Thread Eliezer Croitoru
dig +trace results against the ISP and other DNS services show 65000+ ms 
response times, which means that there is something wrong outside of squid.


Eliezer

On 07/03/2016 06:50, Dan Charlesworth wrote:

Alright, we’re getting somewhere.

A plain curl is about as slow as a default squid config curl:

P.S. I sent you a Skype request

---

# time curl http://httpbin.org/ip
{
   "origin": "59.167.202.249"
}

real0m5.513s
user0m0.002s
sys 0m0.001s

# time curl http://httpbin.org/ip --proxy http://localhost:1
{
   "origin": "::1, 59.167.202.249"
}

real0m5.469s
user0m0.001s
sys 0m0.001s

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-07 Thread Eliezer Croitoru
Sorry about the confusion\misunderstanding.. my brain's cache is kind of 
tiny\short: was it you who asked a question about the big 
NETAPP cache not long ago, or was it someone else? I may be 
confusing the two because the other one had more clients but a similar issue.


I will read the info page again later and try to understand it, but I just 
wanted to clear something up about storage which might be known in some 
places but not to everybody.


Every storage system has some logical and physical layers, and each 
and every one of them does something to any IO that happens!!!
I am not saying that there is an issue with the RAID or the storage, but it 
is clear that the storage, beyond the RAID itself, needs some kind of "cache" 
or some level of buffering in order to work better. It's the same for 
DAS, SAN, NAS and any other form of storage; this is how such 
products are designed. The only storage that is almost always directly 
accessed by most IO calls is the RAM, and in the DAS area it's SSD and 
RAM+battery based products, where an extra "cache" would only slow down the RAM 
and CPU.


I am not sure I am interpreting the cache_dir lines correctly; the docs state:
http://west.squid-cache.org/Doc/config/cache_dir/
cache_dir rock Directory-Name Mbytes [options]

which means that you are using:
cache_dir rock /cache2/rock1 9 min-size=0 max-size=32768
cache_dir rock /cache/rock1 30 min-size=32769 max-size=10737418240

90K MB on the first?
300K MB on the second? right?

How much is it in GB??

Eliezer

On 07/03/2016 20:38, Heiler Bemerguy wrote:

skyrocketing = using our maximum link download bandwidth.
This machine is only proxying. Not being a firewall, not a router, nor a
gateway. It has access to the internet through our gateway/firewall
(pfsense).
Lots of LAN clients are connected to the proxy, this is their only way
to the internet. 1 interface, debian linux. EXT4 FS. CPU/MEM usage is
always stable.
Clients use it explicitly or via wpad. Never transparently. Now I'm
using 3 workers, because 1 is not enough and we have spare cores.
It's a VM machine with netapp storage. lots of raid disks.
SQUID was running perfectly without cache_dirs.

I think squid is downloading and redownloading the same files over and
over again because: 1- these are segmented downloads and
range_offset_limit is set to NONE for these files. 2- it can't store the
downloaded files on the cache but I don't know why!


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-07 Thread Eliezer Croitoru

On 07/03/2016 22:08, Yuri Voinov wrote:

90 Gb first, 300 Gb second.

Thanks but...
Wouldn't it be much simpler and cheaper to just use WSUS instead of all 
the hassle?? (if it's a closed business environment)

And when does the TCP_SWAPFAIL_MISS happen? Always?
And a little tweak for the squid.conf
read_ahead_gap 4096 KB

The above doesn't match your environment bandwidth.
You are just spending too much bandwidth on requests that might not be 
fulfilled.

Try changing the settings to:
read_ahead_gap 128 KB

and see if it helps with something.
Also, since your issue is bandwidth and users are not allowed to reach the 
Internet without the proxy, I would try to dump the pfsense pf states to 
see what happens at the network layer and which src IP is consuming all this 
bandwidth (or to get a wider picture).
Also, squid on its own is not the answer for making network usage fully 
reasonable; it only helps with a couple of specific things, and is not meant to 
mirror the whole Internet or even just MS as it is (not saying that your case 
is a trial such as this).
MS has more than one API that can be mistaken for a download, and that can 
consume more caching than is actually required.


I do not know exactly what this means from the info page:
Maximum number of file descriptors:   81920
Largest file desc currently in use:   6157
Number of file desc currently in use: 8216

If the number of FD currently in use is 8216 then the largest file desc 
currently in use doesn't match.
This specific question might be a bug or expected result but I do not 
know and Amos or Alex or others might know the answer for this specific 
info page issue.


Another question I have, which might be related (I have experienced such 
issues with GlusterFS in the past): how are the VM cache disk\s 
connected? Are they connected directly to the VM or at the hypervisor level?
How do you mount them (fstab)? Are they on the root disk, or do you have 
a couple of disks mounted?
Did you have the chance to try another FS than EXT4? ReiserFS? XFS? 
Other?


The above questions are related to the TCP_SWAPFAIL_MISS.
Since there is an issue, and you are probably only at the "buffering" 
testing stage of the cache_dir, I would try to reproduce the 
issue somehow, but it's not clear to me what the exact way to replicate it is.


Thanks,
Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-07 Thread Eliezer Croitoru

On 07/03/2016 16:29, Heiler Bemerguy wrote:

We're still getting all these SWAPFAIL and our link is
skyrocketing.. please help! I think it didn't happen on older
versions (.14 and below)


Hey,

What do you mean by skyrocketing?? Like in the graph??
Also, something about the machine is not clear to me: is this machine 
a FW\ROUTER\GW?

If so, is it for a LAN?
How many interfaces does this machine have?
Is it pfsense? If so, what version?
What FS is used for the cache directories?
Did you also measure the CPU when you see the spikes? If so, what is it?
How do clients access the proxy service? Transparently, via browser 
settings, or via WPAD with DHCP settings?
Also, I have seen you are using 2 workers; is it because one worker 
doesn't seem to do the job?

Did you tried to change the values of:
cache_swap_low 98
cache_swap_high 99

from this high to lower numbers such as:
cache_swap_low 90
cache_swap_high 95

or even lower?
cache_swap_low 80
cache_swap_high 85

I am unsure about this since in the docs at:
http://www.squid-cache.org/Doc/config/cache_swap_low/
http://www.squid-cache.org/Doc/config/cache_swap_high/

the ROCK storage is not mentioned.

Also on what hardware are you running? what disks?

All the above are important, and in your case it is possible that there 
is something wrong in how the network is planned, or that the software is 
doing something wrong.


In scenarios like this I suggest verifying two things:
- test what happens when you disable the disk cache (from the CPU, bandwidth 
and DISK aspects)
- dump the cache manager info page to see basic statistics about the 
proxy traffic using (see the squidclient example below): 
http://cache_ip_or_visible_host_name:3128/squid-internal-mgr/info
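
If typing the URL is awkward, the same page can usually be pulled with
squidclient from the proxy box itself (assuming it listens locally on 3128):

squidclient -h 127.0.0.1 -p 3128 mgr:info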


For scenarios like this I have started working on a logging\monitoring 
service\script that will run in the background of the machine and 
dump some statistics, to let a couple of squid developers' eyes 
see them and get a better understanding of the nature of the issue.
For now the script\service is not ready and will not be able to help us, 
so we need these dumps and this information..


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-07 Thread Eliezer Croitoru

On 08/03/2016 00:08, Heiler Bemerguy wrote:

I don't know how to explain these FD numbers. I'm using EXT4 and I don't
know what are vmware cache disks.

Since it's a VM, there are couple options for a DATASTORE in vmware ESXi.
A description about the different options is at:
https://www.vmware.com/products/vsphere/features/storage

For the VMWARE vSphere\ESXi DATASTORE you can use a 
NAS (NFS, ...), SAN (iSCSI, ...) or DAS.
Inside the VM settings\options you can connect a DISK device to a 
virtual IDE controller or a SCSI\SAS one.

The VM will see the disks as /dev/sd{a,b,c,d,..} on Debian.
Alternatively the VM has a NIC (E1000\VMXNET2\VMXNET3) which allows it to 
access the SAN\NAS directly over some internal network.


You should know how you configured your system.
For example, GlusterFS is a NAS.
When using VMWARE vSphere\ESXi, the most common setup at the SMB to SME 
level is a shared NFS datastore for a cluster of VMWARE hosts. There 
are a couple of other cases.
For DB servers the most common practice I know of is letting the 
DB server manage the storage as DAS (SATA\SAS\SCSI), with the exception 
of iSCSI, which mainly replaces DAS over dedicated links with some kind of 
fiber instead of a copper link(s).


If you just attached a new disk on top of the datastore it's one thing; 
if you attached an iSCSI volume directly to the Debian VM or mounted 
an NFS share, it's a different use case.


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-09 Thread Eliezer Croitoru

On 09/03/2016 10:54, L.P.H. van Belle wrote:

No,

Aufs :
cache_dir aufs /var/spool/squid 9216 16 256 max-size=100663296


Then the cases are different by nature...
you have 9GB and he uses 90+++ GB; you are using AUFS, which is FS 
based, while he is using ROCK, which is a DB-like structure.


The issues he presents are probably related to some kind of DISK access 
or DB structure integrity, and probably not directly to squid.conf, but it 
is also possible that they are related to the cache_dir settings not being 
properly tuned.


The main issues that are related to the rock cache_dir DB in general can 
be reviewed at:

http://wiki.squid-cache.org/Features/RockStore

I have tried a couple of times to compare ufs\aufs to rock, and each has its 
own limitations.
ufs\aufs relies mainly on the OS FS structure, which has lots of 
testing and tuning behind it, while rock does not have that kind of luxury yet.
I do not have the resources, funding and knowledge to test and compare 
them.
However, I do know that there are a couple of basic test cases that can help 
to identify whether it fits this specific task without deep inspection.


Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sudden but sustained high bandwidth usage

2016-03-09 Thread Eliezer Croitoru

On 09/03/2016 09:59, L.P.H. van Belle wrote:

With the settings i already told you.  Today is ms update day and hee..
its caching my windows updates ..  so go try them out.


Are you using ROCK cache_dir ??

Eliezer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] videos caching over https

2016-04-03 Thread Eliezer Croitoru
I am unsure what you want to achieve.

Do you want to cache one specific url or a set of urls?

Also are you targeting one host\url and\or also one client or more?

It will depend on the level of control that you have on the client side.

If you are in a position to intercept all the traffic, it would be pretty 
simple (in most cases) to achieve what you are describing - see the hedged 
sketch below.
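
For the interception case, a hedged squid.conf sketch (squid 3.5 with ssl-bump
compiled in; the CA file, port and CDN domain are placeholders, the CA must
also be trusted by the clients, and the firewall has to redirect port 443 to
3129):

https_port 3129 intercept ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
acl step1 at_step SslBump1
acl video_cdn ssl::server_name .amazonaws.com
ssl_bump peek step1
ssl_bump bump video_cdn
ssl_bump splice all
refresh_pattern -i \.(m4f|mp4)$ 1440 50% 10080

Only the bumped traffic can be matched by refresh_pattern and cached; everything
else is spliced through untouched.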

 

Eliezer

 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Raju M K
Sent: Friday, April 1, 2016 4:56 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] videos caching over https

 

Hi users,

i am able to cache videos through http 
by adding refresh_pattern -i .amazonaws.com  (m4f|mp4)


but i need to cache them through https as well, from a single url only

please help

-- 

Regards,
M K Raju.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] What are the chunks per request limits in squid? if at all? and Is there any client that comply with Retry-After response?

2016-05-23 Thread Eliezer Croitoru
Is there any limit to the number of chunks (ranges) per range request in squid?
I tried to read the RFCs:
https://tools.ietf.org/html/rfc7231#section-7.1.3
https://tools.ietf.org/html/rfc7233#section-3.1

But it was a bit hard for me to understand whether there is a limit.
Currently I have seen MS updates trying to fetch about 16 ranges per request
with "If-Modified-Since X".
From what I understand, squid should be able to fulfill each and every one of
these requests if it has the full file,
but it will fetch each and every chunk, or at least validate it against the
origin service.
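
For context, one of those requests looks roughly like this on the wire (the
URL, date and offsets are made up; the Range syntax is the multi-range form
from RFC 7233):

GET /msdownload/update/example.cab HTTP/1.1
Host: download.windowsupdate.com
If-Modified-Since: Mon, 16 May 2016 10:00:00 GMT
Range: bytes=0-65535,131072-196607,262144-327679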

I was also wondering about squid's compliance with the "Retry-After" part of the
RFC: is there any known client which actually implements support for that feature?

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




[squid-users] Would it be possible to run a http to https gateway using squid?

2016-05-10 Thread Eliezer Croitoru
I was wondering to myself: if I can generate certificates and bump the
connection, I can use a 302\308 to redirect all traffic from https to an
http (interceptable) connection.

Then, on the http interceptor, rewrite the request back into https (a small sketch follows).
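
A minimal sketch of what that rewriting part could look like, assuming the
squid 3.4+ url_rewrite helper response format and no concurrency (this is a
toy, not the tool itself):

#!/usr/bin/env python3
# read "URL key=value ..." lines from squid, answer with a rewritten URL
import sys

for line in sys.stdin:
    parts = line.split()
    if not parts:
        continue
    url = parts[0]
    if url.startswith("http://"):
        # illustrative only: have squid fetch the https version instead
        sys.stdout.write('OK rewrite-url="https://' + url[len("http://"):] + '"\n')
    else:
        sys.stdout.write("ERR\n")
    sys.stdout.flush()

squid would then point url_rewrite_program at a script like this one.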

I have a working setup which uses a redirection "attack" to authenticate
users over http+https.

Now the issue is that if all browsers deny a redirection from https to
http (a downgrade attack), then this part of the http world would look a bit weird.


I was thinking about trying such a downgrade attack on a couple of sites, but I
am unsure how well it would work.

A couple of years ago I saw some ISPs use a redirection attack when
youtube still used plain http; this was in order to allow a "pre-fetch" from a
tiny GET request.

Now that many have upgraded their security it's another story.

 

As an addition, I have seen that Microsoft uses an "FTP"-like transfer
protocol in their software.

They have a "secured" control channel which has certificate pinning or
something else as a safeguard,
and in more than one case they use another channel to fetch the content over
plain HTTP (when a proxy is defined).

 

Would it be reasonable to write and publish such a tool? Or is it a security
risk to publish such a tool to the public?

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL-Bump and generated certificates ...

2016-05-16 Thread Eliezer Croitoru
Hey Walter,

I am not sure if it's ssl_crtd which does such a thing, but it is my
main suspect.
If you can extract the ssl_crtd binary from 3.4.X (the newest) and test with it
before Alex responds, it would verify or remove some of the doubt.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On
Behalf Of Walter H.
Sent: Monday, May 16, 2016 7:48 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] SSL-Bump and generated certificates ...

Hello,

I updated squid 3.4.10 to 3.5.19 on my CentOS VM and noticed that the
generated certificates are now SHA2 and not SHA1. Can I influence it somewhere
to still generate SHA1 certificates?
(I have devices which use this proxy and are not able to handle SHA2)

Thanks,
Walter



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Would it be possible to run a http to https gateway using squid?

2016-05-15 Thread Eliezer Croitoru

Hey Amos,

You are right that it seems like there is no point, since you already 
decrypt the connection.
But in the real world the price of maintaining an encrypted session for 
many users for a long period is not the same as maintaining them for 
short bursts.


Since all YouTube traffic is done over HTTPS, it would be pretty simple 
with today's tools to use some kind of "https to http bridge" 
software that would 
fetch the pages for the clients (most of the pages are tiny), and it would 
help the clients by letting them handle less secured traffic.


I know that with today's hardware it's almost not needed, but inside a 
trusted network there is no point in using end-to-end HTTPS (to my 
understanding).
Some might not believe that there are trusted networks in the wild, 
but I know that these do exist, and in many of them such a GW is required.


Eliezer

On 11/05/2016 08:40, Amos Jeffries wrote:

On 11/05/2016 9:25 a.m., Eliezer Croitoru wrote:

I was wondering to myself, If I can generate certificates and bump the
connection, I can use a 302\308 to redirect all traffic from https to a
http(intercepatble) connection.

Then on the http interceptor rewrite the request into https.

What would be the point? You already had to decrypt to do the bump and
redirect.


I have a working setup which uses a redirection "attack" to authenticate
users over http+https.

Now the issue is that if all browsers will deny a redirection from https to
http(a downgrading attack) then the http world would look a bit weird.


Not that weird. It is called HTTP Strict Transport Security (HSTS).



And as an addition I have seen that Microsoft use and "FTP" like transfer
protocol in their software.

They have a "secured" control channel which has certificates pinning or
something else as a safe guard,
and in more then one case they use another channel to fetch the request over
plain HTTP( when a proxy is defined).


You will note that this is a very cache friendly way to do crypto. The
bulky part of the content is cacheable by anyone who needs to reduce
bandwith, but remains securely verifiable and integrity checked using
the off-band details.

However, it is not what you are talking about for your tool. The above
method by MS requires intentional design in the web service with
integrity checking actually performed by the endpoints.

  Under downgrade attack conditions the endpoints would not know that the
extra work was needed so one cannot assume that it is getting done. One
of the reasons browsers are so into TLS is that the transport layer does
all the verification and leaves them able to skip perceived slow
security checks at higher levels.


Would it be reasonable to write and publish such a tool? Or is it a security
risk to publish such a tool to the public?


Up to you. AIUI is illegal in most of the world to make use of it. Like
most hacking tools if used other than for permitted penetration testing
and research purposes.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New StoreID helper: squid_dedup

2016-05-15 Thread Eliezer Croitoru

Thanks for sharing!

I haven't had enough time to understand the tool's structure since I am not 
a python expert, but this is the first squid helper I have seen which is based 
on python and implements concurrency.

Thanks!!
Eliezer Croitoru

On 10/05/2016 00:56, Hans-Peter Jansen wrote:

Hi,

I'm pleased to announce the availability of squid_dedup, a helper for
deduplicating CDN accesses, implementing the squid 3 StoreID protocol.

It is a multi-threaded tool, written in python3, with no further dependencies,
hosted at: https://github.com/frispete/squid_dedup
available at: https://pypi.python.org/pypi/squid-dedup

For openSUSE users, a ready made rpm package is available here:
https://build.opensuse.org/package/show/home:frispete:python3/squid_dedup

Any feedback is greatly appreciated.

Cheers,
Pete
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Are there any distros with SSL Bump compiled by default?

2016-05-16 Thread Eliezer Croitoru
Hey Tim,

I have been working for quite some time on packages for a couple of Linux 
distributions, among them Ubuntu and Debian.
I was planning to publish them (Ubuntu + Debian) inside a tar.xz and to attach 
a tiny "update\install" script to them.
This is because I have been trying to use the deb packaging system for quite 
some time and to build with it, but compared to RPMs I keep forgetting every 
time what I did the last time.
So in the next couple of weeks I will try to publish the following tar.xz:
- Ubuntu 14.04 32+64 bit
- Ubuntu 16.04 32+64 bit
- Debian 8 32+64 bit
- Debian 7 32+64 bit

This is a part of my trial to somehow publish a binary version of squid per 
release.
I hope to have some time and to make it possible so also squid 4.X will also 
get the same attention.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Tim Bates
Sent: Saturday, May 14, 2016 12:36 PM
To: squid-us...@squid-cache.org
Subject: [squid-users] Are there any distros with SSL Bump compiled by default?

Are there any Linux distros with pre-compiled versions of Squid with SSL Bump 
support compiled in?

Alternatively, does anyone reputable do a 3rd party repo for Debian/Ubuntu that 
includes SSL Bump?

TB
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid and AD => That' s don't work !

2016-05-11 Thread Eliezer Croitoru
Hey Oliver,

 

What version of AD are you trying to authenticate against?

What is the client Operating System?

The more details you give about the system, the more likely you are to get an
answer (in general, not from me specifically..)

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Olivier CALVANO
Sent: Wednesday, May 11, 2016 11:09 AM
To: Squid Users
Subject: [squid-users] Squid and AD => That' s don't work !

 

Hi

 

Has anyone actually used squid with NTLM AD authentication?

Because it doesn't really work well and there is no one who responds to the
problems, it's a shame.

Is there commercial support for squid?

 

Regards

Olivier

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Only listening to ipv6 (bug) still present? http_port

2016-05-03 Thread Eliezer Croitoru

Hey Tory,

I am not aware of such changes from 3.5.16 to 3.5.17.
I have not tested this case yet and it seems a bit weird to me to
see such behavior from squid.
I will be able to add it to the set of tests I already have later; until
now 3.5.17 has been working well for me and without known regressions.


Eliezer

On 04/05/2016 02:12, Tory M Blue wrote:

My configs have always consisted of http_port 80 accel vhost.. With
the latest 3.5.17 (I guess) if you don't list 0.0.0.0:80 squid won't
even attempt to listen or talk on IPv4..

So adding 0.0.0.0:80 allows it to at least talk via ipv4.

This seems wrong, odd.

I understand you are removing methods to disable IPv6, however forcing
folks to use only IPv6 seems like a stretch :)

Thanks
Tory

CentOS 7
squid-3.5.17-1.el7.centos.x86_64
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users




Re: [squid-users] Squid 4: Cloudflare SSL connection problem

2016-04-17 Thread Eliezer Croitoru

  
  
For me it works.
...
The first thing to do is publish the squid.conf with a bug report
and all other related info.
*NIX doesn't mean CentOS, since on CentOS this specific issue doesn't
exist.
I assume that if it works on CentOS it will work almost the same for
Ubuntu and Debian.

Eliezer

On 16/04/2016 19:50, Yuri Voinov wrote:


3.5.16 on *NIX also has this issue.

Only 3.5.16 Win64 works like a charm.


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Routing Internally And/Or Externally?

2016-04-19 Thread Eliezer Croitoru
Hey There,

In general what you want is possible but a couple of things are not clear to me yet.
The config you mentioned has a couple of issues:
##START OF INFO
acl localnetPAC src 192.168.0.0/24            #resource within my network
acl localnetPAC src internal.resources.com    #resource within my network
acl localnetPAC src internal1.resources.com   #resource within my network
acl localnetPAC src internal2.resources.com   #resource within my network
acl localnetPAC src internal3.resources.com   #resource within my network

acl InboundNet scr 10.24.62.51    #NetScaler
acl OutboundNet scr 10.24.62.51   #NetScaler

http_access allow localnetPAC     #user will be let thru to the local resources
#InboundNet !localnetPAC allow OutboundNet
#this is what I WANT to do but isn't working 
#can anyone steer me to the right track?
##END OF INFO

In general if you want to deny with a redirection you can use the deny_info and 
a custom "shebang"  dummy acl.
One example of implementation can be found in the list archives at:
http://lists.squid-cache.org/pipermail/squid-users/2015-October/006092.html

Squid allows you to customize the "deny" action, and one of the options is a
redirection.
You have used in your example an acl like:
acl localnetPAC src internal3.resources.com

which uses a domain, but the "src" type acl cannot be used with a domain name 
and can only be an IP address.
Peek at the acl docs at: http://www.squid-cache.org/Doc/config/acl/
But you have mentioned the bottom line as:
#InboundNet !localnetPAC allow OutboundNet

Which is not clear to me but I will try to be creative with an example:
acl local_network_addresses dst 192.168.0.0/24           #Internal services
acl internal_domains dstdomain internal1.resources.com   #Internal domain names
acl internal_domains dstdomain internal2.resources.com   #Internal domain names
acl dummy_match dstdom_regex .                           #dummy match-all domain regex
acl InboundNet src 10.24.62.51                           #NetScaler source IP (the client's IP is not visible behind the NetScaler)
deny_info 302:http://www.google.com/?%H dummy_match      #Customized deny_info that will redirect to google with some addition
http_access allow InboundNet internal_domains            #rule that allows NetScaler-sourced traffic to access internal domains
http_access allow InboundNet local_network_addresses     #rule that allows NetScaler-sourced traffic to access internal IP addresses
http_access deny dummy_match                             #rule that should match all remaining traffic and redirect any request to google
##END OF example

I hope the example helps you.
Let me know if it helped you and\or if you need more help or if I didn't
understand the question.

Eliezer

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of nkingsquid
Sent: Tuesday, April 19, 2016 9:19 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Routing Internally And/Or Externally?

I should probably mention that it's important that the request NOT be denied,
just redirected, if it is not a listed internal resource...



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Routing-Internally-And-Or-Externally-tp4677152p4677153.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Squid 4: Cloudflare SSL connection problem

2016-04-20 Thread Eliezer Croitoru

  
  
Hey Yuri,

I think that solving or identifying the bug requires a full
tcpdump trace for a single request, as was mentioned on the bug
report:
http://bugs.squid-cache.org/show_bug.cgi?id=4497#c39
http://bugs.squid-cache.org/show_bug.cgi?id=4497#c40

I have opened the port to my proxy, so you will be able to run a
couple of requests to verify that your curl, wget and other clients
don't have this "handshake" issue when accessing
https://cloudflare.com using my local testing proxy.
Send me your origin IP address privately so I can add an exception
for it in my proxy.

Eliezer

On 12/04/2016 14:55, Yuri Voinov wrote:

Does anybody face this problem with 4.0.8:

https://i1.someimage.com/3lD2cvV.png

?

It is accompanied by this error in cache.log:

2016/04/12 17:39:38 kid1| Error negotiating SSL on FD 54:
error::lib(0):func(0):reason(0) (5/0/0)

and "NONE/503" in access.log.

Without a proxy it works like a charm. 3.5.16 with a similar squid.conf
works like a charm.

NB: Cloudflare support said that the key features for SSL are SNI
and ECDSA now. AFAIK, 4.0.8 fully supports these features.

Any advice will be helpful.

Yes, I know this looks like DDoS protection on Cloudflare. But
WTF? A workaround is required. Half of the Internet is hosted on
Cloudflare.

WBR, Yuri
  
  


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Peer2Peer Url categorizing, black\white lists, can it work?

2016-07-25 Thread Eliezer Croitoru
I have had it on my plate for quite some time and I was wondering about the
options and interest in the subject.

Intro:
Currently most free blacklists are distributed in the old-fashioned way of a
tar or other file.
There are benefits to these but I have not seen an option to be able to
"help" each other.
For example, many proxy servers "know" about a domain that others do not.
So even if a site exists and is known on one side of the planet, it's not on
the other.
If it could be categorized or white\black listed on one side of the planet,
why can't we help each other?
Many admins add sites to their DB and lists but not many share them
publicly.

The idea:
As an example, Google and Mozilla services advertise malware-infected sites
using their browsers.
Many filtering solutions use their clients' logs to inspect and enhance
their lists.
There are many distributed key+value DB systems such as etcd and many other
DHT-based ones.
I believe that url categorizing and black\white lists can be
advertised in a similar way.
The only limit is the "bootstrap" or the "routers" of such a network.
Since such a service should only apply to keys and values which today should
not exceed 1MB, I believe it would be pretty simple to create networks based
on that.
Once a network category or scheme is defined it would be pretty simple
to "match" or "connect" the relevant nodes.

Currently I am looking at the different options for the backend DB,
permissions and hierarchy, which should give an admin a nice starting point.
Such an "online network" can be up and running pretty fast and it can enhance
the regular categories and lists to be more up-to-date.
Other than the actual categorizing and listing, I believe that it would be
possible to share and generate a list of public domains which are known,
compared to the current state in which many parts of the web are "unknown".

If you wish to participate in any of the above ideas please contact me here
or privately.

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-14 Thread Eliezer Croitoru
Hey Omid,

The key concept is that it is possible but not always worth the effort..
I have tested it to work for Windows 10 and for a couple of other platforms but I
didn't verify how it will react to every version of Windows 7.
I have tested how things work with WSUSOFFLINE and you will need to change the
dstdomain acl into a regex one:
acl wu dstdom_regex download\.windowsupdate\.com$ download\.microsoft\.com$

Now you need to have my latest updated version in order to avoid caching of MS
AV updates, which are critical and should never be cached for more than 1 hour.

You can try to "seed" the cache using a client which will run WSUSOFFLINE but
to my understanding it's not required since you will store more than you
actually need.
If one user is downloading an ancient or special update you don't need it
stored unless you can predict it will be used\downloaded a lot.

Let me know if you need some help with it.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Thursday, July 14, 2016 2:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows 
updates store.

Hi,

Great idea. I was looking for something like this for years and I was too
lazy to start it myself ;)

I am going to test your code in a multi-thousand client ISP.

It would be even better to use the experience of http://www.wsusoffline.net/,
especially for your fetcher. It is GPL.

Also, the IP address 13.107.4.50 is mainly used by Microsoft for its download
services. With services like
https://www.virustotal.com/en-gb/ip-address/13.107.4.50/information/ we have
found that other domains are also used for update/download services. Maybe it
would not be bad to create special handling for this IP address.

Thanks in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678492.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-25 Thread Eliezer Croitoru
Hey Omid,

I will comment inline.
And there are a couple of details which we need in order to understand a couple of issues.


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Monday, July 25, 2016 12:15 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows 
updates store.

Hi,

Thanks for support .

Recently I have seen a problem with version beta 0.2. When the fetcher is working
the kernel logs a lot of the following error:
TCP: out of memory -- consider tuning tcp_mem

# To verify the actual status we need the output of:
$ free -m
$ cat /proc/sys/net/ipv4/tcp_mem
$ top -n1 -b
$ cat /proc/net/sockstat
$ cat /proc/sys/net/ipv4/tcp_max_orphans 

I think the problem is about the orphaned connections which I mentioned before.
I will try the new version to see what happens.

# If you have orphaned connections on the machine with or without the MS
updates proxy, you should consider analyzing the machine structure and load in
general.
If indeed there are orphan connections we need to verify whether they come from
squid, from my service, or from the combination of the two.


Also I have a feature request. Please provide a configuration file, for example
in /etc/foldername or even beside the binary files, to have selective options
for both the fetcher and the logger.

# With what options for the logger and fetcher?

I have seen the following change log:
beta 0.3 - 19/07/2016
+ Upgraded the fetcher to honour private and no-store cache-control  headers
when fetching objects.

From my point of view more hits are better and there is no problem storing
private and no-store objects if it helps to achieve more hits and bandwidth
savings. So it would be fine to have an option in the mentioned config file to
change it myself.

# I understand your way of looking at things but this is a very wrong way to
look at cache and store.
The problem with storing private and no-store responses is very simple.
These files are temporary and exist for one request only (in most cases).
Specifically for MS it is true and they do not use private files more than once.
I do not wish to offend you or anyone by not honoring such a request, but since
it's a public service this is the definition of it.
If you want to see the options of the fetcher and the service just add the "-h"
option to see the available options.

I have considered using some log file but have yet to get to the point where I have
a specific format that I want to work with.
I will try to see what can be done with log files and also what should be done
to handle log rotation.

Thanks again


## Resources
* http://blog.tsunanet.net/2011/03/out-of-socket-memory.html

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] cachemgr.cgi on embedded system

2016-07-24 Thread Eliezer Croitoru
Hey,

What version are you using?
Squid since version 3.X has a built-in interface which might fit your needs.
You can see an example of usage at:
http://wiki.squid-cache.org/Features/CacheManager#default

What you will need to do is to access the proxy directly using a url like:
http://mycache.example.com:3128/squid-internal-mgr/menu

and for the info page from the menu:
http://mycache.example.com:3128/squid-internal-mgr/info

So unless you have a special need for the cache manager cgi you should use the
HTTP one.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of reinerotto
Sent: Sunday, July 24, 2016 3:54 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] cachemgr.cgi on embedded system

I have a problem to use cachemgr.cgi on an embedded system: 
(Cache Server: 127.0.0.1:3128; manager name: manager: Password: maypasswd)
browser:
The following error was encountered while trying to retrieve the URL:
cache_object://127.0.0.1/
Cache Manager Access Denied.
Sorry, you are not currently allowed to request cache_object://127.0.0.1/ from 
this cache manager until you have authenticated yourself.
ACL Access Denied

cache.log:
2016/07/24 13:19:00| CacheManager: unknown@local=127.0.0.1:3128
remote=127.0.0.1:56590 FD 18 flags=1: password needed for 'menu'

squid.conf:
acl manager proto cache_object
#next just for testing
http_access allow manager all
cachemgr_passwd mypasswd all

On the embedded system, there is only a small http-server (uhttpd) running, 
_not_ apache or similar, so I suspect some special "requirement" not met on my 
system.
It could be _either_ some special .configure option for squid (I have a 
downsized one, self-compiled) _or_ some speciality regarding my http-server, 
which otherwise works well.

Any ideas ? 




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cachemgr-cgi-on-embedded-system-tp4678665.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Peer2Peer Url categorizing, black\white lists, can it work?

2016-07-26 Thread Eliezer Croitoru
Thanks Amos,

The concept is simple and easy to implement but it is not maintained anymore.
The url http://gremlin.ru/soft/drbl/en/zones.html is broken :\

I have also seen: RiskIQ -> https://en.wikipedia.org/wiki/RiskIQ
And a dnsmasq blacklist: https://github.com/britannic/blacklist
And a reverse proxy idea: https://github.com/marinhero/goxy

In any case it's not like DHT and similar ideas.
The drbl has a very solid concept but lacks a couple of concepts compared to what I
was thinking about.

Currently I have a client for public rbls such as Symantec and OpenDNS.
And this is a nice example of code that handles dns blacklist queries in golang:
https://github.com/jersten/ipchk
(for me to remember later)
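A minimal Python sketch of the same kind of lookup, just to capture the idea
(the zone name below is a placeholder, not a real service):

import socket

def dnsbl_listed(ip, zone="dnsbl.example.net"):
    # Reverse the octets and query <reversed-ip>.<zone>; an A record answer
    # means the address is listed, NXDOMAIN means it is not.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False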

I will try to calculate a couple of things and then I will move on.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Tuesday, July 26, 2016 2:45 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Peer2Peer Url categorizing, black\white lists, can 
it work?

On 26/07/2016 3:14 p.m., Eliezer Croitoru wrote:
> I have it on my plate for quite some time and I was wondering about the
> options and interest in the subject.
> 
> Intro:
> Currently most free blacklists are distributed in the old fashion way of a
> tar or other file.
> There are benefits to these but I have not seen an option to be able to
> "help" each other.
> For example many proxy servers "knows" about a domain that other do not.
> So even if the site exists and know in one side of the planet it's not in
> another.
> If it could be categorized or white\black listed in one side of the planet
> why we cannot help each other?
> Many admins adds sites to their DB and list but not many share them
> publically.
> 
> The idea:
> As an example Google and Mozilla services advertise malware infected sites
> using their browser.
> Many filtering solutions uses their clients logs to inspect and enhance
> their lists.
> There are many distributed key+value DB systems such as etcd and many others
> DHT based.
> I believe that somehow a url categorizing and black\white lists can be
> advertised in a similar way.
> The only limit is the "bootstap" or the "routers" of such a network.
> Since such a service should only apply to KEYS and values which today should
> not exceed 1MB I believe it would be pretty simple to create networks based
> on that.
> Once a network category or scheme can be defined it would be pretty simple
> to "match" or "connect" between the relevant nodes.
> 
> Currently I am looking at the different options for the backend DB,
> permissions and hierarchy which should give an admin a nice start point.
> Such "online network" can be up and running pretty fast and it can enhance
> the regular categories and lists to be more up-to-date.
> Else then the actual categorizing and listing I believe that it would be
> possible to share and generate a list of public domains which are known
> compared to the current state which many parts of the web is "unknown".


I suggest you look into how DRBL works,
<http://gremlin.ru/soft/drbl/en/faq.html>. The distributed blacklist
design was created by the anti-spam community as both a protection for
maintainers against legal threats to list administrators, and to provide
resistance against individual nodes disappearing for any other reason.
That system would allow immediate linkup with some existing public
blacklists like SURBL, which lists websites used by spammers for malware
or phish hosting.

All that's needed in Squid would be an ACL to do the lookups.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squid 3.3.8 https (Yuri Voinov)

2016-08-10 Thread Eliezer Croitoru
There are a couple of ways to do so but your speed issues are probably not from
access to the domains in the logs:
www.youtube.com

You will need to "slow" down domains such as:
r7---sn-nhpax-ua8s.googlevideo.com

You don't need squid for that, but you would be able to track the relevant IP
addresses and limit access towards them.
I can write a log "follower" script that will update an ipset iptables target.
Do you have any experience with CentOS QOS or rate limiting?

I will be able to write the script only next week if it will help you.
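
A very rough sketch of what such a follower could look like (Python here just for
illustration; the ipset name is an assumption and has to be created first, e.g.
with "ipset create limited_hosts hash:ip", and access.log is assumed to be in
the default squid format):

import subprocess, time

LOGFILE = "/var/log/squid/access.log"
IPSET = "limited_hosts"

def follow(path):
    # Follow the access.log like "tail -F" (simplified, no rotation handling).
    with open(path) as f:
        f.seek(0, 2)
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(1)

for line in follow(LOGFILE):
    fields = line.split()
    # Default log format: the URL is the 7th field, HIER_DIRECT/<server-ip> the 9th.
    if len(fields) < 9 or ".googlevideo.com" not in fields[6]:
        continue
    peer = fields[8].split("/")[-1]
    if peer not in ("", "-"):
        subprocess.run(["ipset", "-exist", "add", IPSET, peer])

An iptables rule with "-m set --match-set limited_hosts dst" could then mark or
shape the connections towards those addresses.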

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of erdosain9
Sent: Thursday, August 11, 2016 2:38 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.3.8 https (Yuri Voinov)

Yes, sorry, I broke the "thread"... I was talking about this:
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-3-8-https-td4678795.html

I just want to limit youtube, not block youtube... just limit the bandwidth.
Can I do that without https?? I tried with delay pools, but it's not working with
https... Is there another way??
Thanks!!!



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Re-squid-3-3-8-https-Yuri-Voinov-tp4678799p4678826.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] how can I coplete this squid tutorial?

2016-08-10 Thread Eliezer Croitoru
Hey james,

We can try to help you but I couldn't understand your question.
The squid.conf file by default contains a limited set of configuration lines
since the others are bound to their defaults.
You can see the full list of options in the squid.conf.documented file.
Depending on your OS version the file will be found in different
locations in the file system.
The configuration lines mentioned in the document\tutorial are specific and
some of them will not appear in the default squid.conf directly but can be
added manually.
What OS are you using?
What are you trying to achieve? Basic caching or filtering or just access 
control?

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of james82
Sent: Wednesday, August 10, 2016 9:00 AM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] how can I coplete this squid tutorial?

I found a tutorial on this website:
http://www.deckle.co.uk/squid-users-guide/squid-configuration-basics.html
I want to complete it. I don't know where these lines are:

acl localnet src 192.168.1.0/255.255.255.0
..
http_access allow  localnet 
icp_access  allow  localnet

Can somebody help me? 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-can-I-coplete-this-squid-tutorial-tp4678821.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Range header is a hit ratio killer

2016-08-10 Thread Eliezer Croitoru
Well, it will be different from system to system but one of the main points was
not about prefetching.
Indeed sometimes prefetching is not possible, but when you have a situation
in which parallel requests cause amplified downloads by your proxy it means
the proxy or the clients have some conflict of "interest".
The client will blame the proxy while you will probably blame the
software or the code.

Prefetching in the caching world, from my perspective and understanding of
how proxies work, only makes sense relative to repeated downloads.
Since a proxy can never fetch something without a client
requesting it, we can separate a couple of things in the proxy.
The request, the prefetching and the cache policy are a couple of different "things".
When you are using a rule\config which forces the proxy to utilize 500% of the
bandwidth then you have an "issue".
This specific issue can be converted from one form to another with enough admin
logic, leaving the cache policy to the internal parts of the cache.

The simplest way to understand the issue is to understand what Amos described:
it is possible that the proxy will try to download the full object 5 times if
5 clients (or the same client over a couple of connections) are asking for the
same object.
The solution to such an issue would be to consolidate these requests into one.
And since code changes in Squid take time, you could convert the issue into
another form.
The simplest way to do so is to inspect each request at a level that will identify
a 206 request and send it into one "prefetch" queue.
This external prefetch queue software\script\code will be able to resolve the
"500% amplification" which some would describe as an attack.
This way the clients' requests will be served live and without causing
amplification attacks, while the cache is filled externally\artificially
with objects.
Depending on the cache purpose you would be able to make it work.
I have implemented this idea a couple of times in the past using a set of ruby
scripts and I must admit that some objects are not worth the code invested in
them.
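
Just to make the direction concrete, a minimal sketch of such an external
prefetch queue (Python instead of the ruby scripts I mentioned; the proxy
address is an assumption, and something else is expected to call enqueue_206()
with URLs that showed up with a 206 status in access.log):

import queue, threading, urllib.request

PROXY = {"http": "http://127.0.0.1:3128"}
seen, jobs = set(), queue.Queue()

def worker():
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))
    while True:
        url = jobs.get()
        try:
            # One plain GET without a Range header pulls the whole object
            # through the cache exactly once, so later range requests can be HITs.
            opener.open(url).read()
        except Exception:
            pass
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def enqueue_206(url):
    # Called once per URL that was seen with a 206 status in access.log.
    if url not in seen:
        seen.add(url)
        jobs.put(url)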

Hope it clears the picture\words and meanings,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: k simon [mailto:chio1...@gmail.com] 
Sent: Tuesday, August 9, 2016 6:36 AM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Range header is a hit ratio killer



On 16/8/7 21:20, Eliezer Croitoru wrote:
> Hey Simon,
>
> I do not know the plans but it will depend on couple things which can fit to 
> one case but not the other.
> The assumption that we can fetch any part of the object is the first step for 
> any solution what so ever.
> However it is not guaranteed that each request will be public.
>
> The idea of static chunks exists for many years in many applications and in 
> many forms and YouTube videos player uses a similar idea. Google video 
> clients and servers uses a bytes "range" request in the url rather then in 
> the request header.
> Technically it would be possible to implement such an idea but it has it's 
> own cost.
> Eventually if the file is indeed public(what squid was designed to cache) 
> then it might not be of a big problem.
> Depends on the target sites a the solution will be different.
> Before deciding on a specific solution my preferred path is to analyze the 
> requests.
>
> By observing amplified traffic of 500% to  clients side you mean that the 
> incoming traffic to the server is 500% compared to the output towards the 
> clients?
> If so I think that there might be a "smarter" solution then 206 range offset 
> limit.
> The old method of prefetching works pretty good in many cases. From what you 
> describe it might have better luck then the plain "fetch everything on the 
> wire in real time".
>
> I cannot guarantee that prefetching is the right solution for you but I think 
> that a case like this deserves couple eyes to understand if there is a right 
> way to handle the situation.
>
I think prefetching may not fit a forward proxy, as we do not know exactly which
requests are "hot". LRU should be more efficient.

Simon

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.3.8 https (Yuri Voinov)

2016-08-10 Thread Eliezer Croitoru
To do what?

If you want to implement QOS you can do that pretty easily on the OS level.

Since you are using a regular forward proxy you can monitor the youtube traffic
using some external acl or a logging helper and add the new domains' IP addresses to
an iptables ipset which will mark these connections.

There is a risk that some of these IPs will be shared with other google
services, but most of the problematic content is on the subdomains of
.googlevideo.com and it will be pretty simple to mark them.

 

It would be better if you could upgrade to a more recent version of squid but I 
think you should consider the options first since maybe even with newer 
versions of squid you would need a combination to get a full match for your 
needs.

 

Eliezer

 



Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Erdosain9
Sent: Tuesday, August 9, 2016 12:54 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.3.8 https (Yuri Voinov)

 

But is it possible to do with this version (3.3.8)? I have CentOS 7 and
that's the official package.

thanks

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] time based range_offset_limit

2016-07-13 Thread Eliezer Croitoru
Just to add: the Microsoft BITS client uses an If-Unmodified-Since header and not
a no-cache.
The above is as far as I can tell from dumps I have for Windows 7 and up.
There are cases in which a client would want to abort the connection but I have
not seen these from Windows for a long time.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Heiler Bemerguy
Sent: Wednesday, July 13, 2016 6:35 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] time based range_offset_limit


1468423415.143 160645 10.1.4.7 TCP_MISS_ABORTED/206 510 GET 
http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/06/word-x-none_48e3c2f2bb14dd57321ae5a53cf8de2ca0fe6114.cab
 
- HIER_DIRECT/201.48.38.146 application/octet-stream
1468423415.146 160651 10.1.4.7 TCP_MISS_ABORTED/206 510 GET 
http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/06/word-x-none_48e3c2f2bb14dd57321ae5a53cf8de2ca0fe6114.cab
 
- HIER_DIRECT/201.48.38.146 application/octet-stream
1468423415.146 160509 10.1.4.7 TCP_MISS_ABORTED/206 510 GET 
http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/06/word-x-none_48e3c2f2bb14dd57321ae5a53cf8de2ca0fe6114.cab
 
- HIER_DIRECT/201.48.38.146 application/octet-stream
1468423415.147 160579 10.1.4.7 TCP_MISS_ABORTED/206 510 GET 
http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/06/word-x-none_48e3c2f2bb14dd57321ae5a53cf8de2ca0fe6114.cab
 
- HIER_DIRECT/201.48.38.146 application/octet-stream
1468423415.643 251033 10.1.4.7 TCP_MISS/206 103141 GET 
http://au.v4.download.windowsupdate.com/d/msdownload/update/software/secu/2016/06/excel-x-none_2acf846b28d580d20f1d5973c9697cb54dc1ad21.cab
 
- HIER_DIRECT/201.48.38.146 application/octet-stream


For some reason, it seems the client is aborting the range connection.. 
and squid keeps downloading it all simultaneously because it triggers 
range_offset_limit. But why would BITS (background intelligent transfer 
services - microsoft) cancel these downloads?

It really seems to use some no-cache headers:

Cache-Control: no-cache
Pragma: no-cache


How to ignore it?


-- 
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


Em 13/07/2016 10:26, Amos Jeffries escreveu:
> On 14/07/2016 12:43 a.m., Heiler Bemerguy wrote:
>> Em 12/07/2016 23:43, Alex Rousskov escreveu:
>>>> (without using Range: header).
>>> That's your squid.conf customization, I presume.
>>>
>> The squid won't send a Range: header to the server because the request
>> is matching the range_offset_limit -1 ACL. I presume. So squid will try
>> to fetch the file from the beginning, faking a full request, right?
> No faking. Is making.
>
>>>> That's why I don't understand why it does not work on a REAL
>>>> enviroment.
>>> Many things can go wrong -- the real requests may require collapsed
>>> forwarding that you do not test, the real requests may have no-cache,
>>> the real response may not be cachable, or there is some Range handling
>>> bug that your test scripts do not tickle (e.g., they request ranges that
>>> are always close to each other and are always available at the same
>>> time).
>> Well, if I turn off collapsed_forwarding and try to GET the same file on
>> the same server in a row (only changing the Range), it will create *two
>> *connections to the server instead of only *one*.
>> I use "override-expire ignore-private ignore-no-store ignore-reload
>> ignore-must-revalidate store-stale" for this particular request, won't
>> it override the no-cache or whatever?
> No. Those refresh_pattern options are overriding the response
> requirements mandated by the server.
>
> The "no-cache" Alex speaks of is a client requirement that no cached
> data be sent. Which also means that client request cannot be collapsed
> with others, since collapsing is essentially just using 'cached' data
> before it gets stored to the cache.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-20 Thread Eliezer Croitoru
Hey Omid,

After inspecting more data I have seen that there are a couple of cases which
will result in disk space consumption.
Windows Updates supports a variety of languages. When you have more than one or
two languages the amount of cache changes rapidly.
To give some numbers to the picture:
- Each Windows version has multiple editions (starter, home, professional,
enterprise..)
- Each CPU arch requires its own updates (x86, x64)
- Each Windows version can have a big update for multiple languages, depending on
the locality of the system
- Each Windows product such as Office has its own language packs and
updates (some updates are huge..)

Since I am not one of Microsoft's engineers or product\updates managers I cannot
guarantee that my understanding of the subject is rock solid.
But on the other hand, since I do have a background with HTTP and its structure, I
can give some assurance that my research can be understood by most if not
every HTTP expert.

Squid by its nature honors specific caching rules and these are very general.
To my understanding Squid was not built to satisfy every use case but it helps
many of them.
Since you also noticed that windows updates can consume lots of disk space,
what you mentioned about last accessed time seems pretty reasonable for a cache.
You have the choice of how to manage your store\cache according to whatever is
required\needed.
For example the command:
find /cache1/body/v1/  -atime +7 -type f|wc -l

should give you some details about the files which were not accessed in the last
week.
We can try to enhance the above command\idea to calculate statistics in a way 
that will help us to get an idea of what files or updates are downloaded 
periodically.
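A very rough sketch of such a statistics pass (Python here just for illustration,
using the same store path as the find example above):

import os, time

ROOT = "/cache1/body/v1"
CUTOFF = time.time() - 7 * 86400   # "not accessed in the last week"

count = total = 0
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        st = os.stat(os.path.join(dirpath, name))
        if st.st_atime < CUTOFF:
            count += 1
            total += st.st_size

print("%d stale files, %.1f MB" % (count, total / 1048576.0))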
Currently, only with the existence of the request files can we understand what
responses belong to what request.

Let me know if you want me to compose some script that will help you to decide 
what files to purge. (I will probably write it in ruby)
There is an option to "blacklist" a response from being fetched by the fetcher
or from being used by the web-service, but you will need to update to the latest
version of the fetcher and to use the right cli option (I don't remember it now) or
to run the command under a "true" pipe such as "true | /location/fetcher ..."
to avoid any "pause" which it would cause.

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Tuesday, July 19, 2016 1:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows 
updates store.

Eliezer Croitoru-2 wrote
> Hey Omid,
> 
> Indeed my preference is that if you can ask ask and I will try to give you
> couple more details on the service and the subject.

Hey Eliezer,


4. Current storage capacity is 500G and more than 50% of it is already full and
growing fast. Is there any mechanism for garbage collection in your code?
If not, is it a good idea to remove files based on last access time (ls -ltu
/cache1/body/v1/)? Should I also delete old files from the header and request
folders?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678581.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-17 Thread Eliezer Croitoru
I read your email but now I am a bit busy.
Later today or tomorrow I will respond.

All The Bests,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Sunday, July 17, 2016 1:16 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] cache peer communication about HIT/MISS between 
squid and and non-squid peer

Let's assume all of the parents' replies are hits. Now is there a way?

iptables -t mangle -A OUTPUT -t mangle -p tcp -m tcp -d
192.168.1.1,192.168.1.2 --sport 8080 -j DSCP --set-dscp 0x60

is this ok ?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678534.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-18 Thread Eliezer Croitoru
About the mismatch log output I cannot say a thing since I have not researched 
it.
And about an option to add a HIT header, you can use the following script:
https://gist.github.com/elico/ac58073812b8cad14ef154d8730e22cb

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Monday, July 18, 2016 2:39 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows 
updates store.

Dear Eliezer,

Unfortunately no success. I will describe what I did, maybe I missed something.

run the command
perl -pi -e '$/=""; s/\r\n\r\n/\r\nX-SHMSCDN: HIT\r\n\r\n/;' 
/cache1/header/v1/*

and verified that the text was injected correctly

squid config

acl mshit rep_header X-SHMSCDN HIT
clientside_tos 0x30 mshit

but got the following log message repeatedly:
2016/07/18 16:26:31.927 kid1| WARNING: mshit ACL is used in context without an 
HTTP response. Assuming mismatch.
2016/07/18 16:26:31.927 kid1| 28,3| Acl.cc(158) matches: checked: mshit = 0


One more thing: as I am not so familiar with perl, may I ask you to please
edit it to ignore the files which already have the text?

Thanks




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678557.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] rep_header not working

2016-07-18 Thread Eliezer Croitoru
Hey Omid,

The issue is that the docs are unclear to *me* and I suspect that I will need 
to quote them:
acl aclname rep_header header-name [-i] any\.regex\.here
  # regex match against any of the known reply headers. May be
  # thought of as a superset of "browser", "referer" and "mime-type"
  # ACLs [fast]

Which to me means that it works only against "any of the known reply headers" 
but not special ones.
It would be a bit weird if that is indeed the case, but it probably is.

This is the place to lower my hat and say:
I do not know what to tell you!
I can try to research and read the code but there are others who can answer
better than me on this one.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Monday, July 18, 2016 5:42 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] rep_header not working

Hello,

It seems rep_header does not work at all.

acl mshit rep_header X-SHMSCDN .
acl mshit rep_header Content-Type -i text\/html
acl html rep_header Content-Type -i ^text\/html
acl apache rep_header Server ^Apache
debug_options 28,3

Other types of acl work fine.

The log is very huge because of the thousands of clients.

Squid Object Cache: Version 3.5.19 Official Debian Package Ubuntu Linux 16.04  
4.4.0-28-generic on x86_64



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Skype+intercept+ssl_bump

2016-07-18 Thread Eliezer Croitoru
To clarify my idea,

I was thinking about an option to decide whether to bump or not based on an SSL
handshake test on the destination service.
I do not know skype traffic that well but I do know that a PTR can be "faked";
I have seen it a couple of times in the past.
I considered what to do and one of the options is to do the bump in two steps
and to identify requests that were not supposed to be bumped.
It's a bit complicated since, by the nature of the idea, there would be at least
one failure for the client's attempt to reach a destination.
I do not like the idea and I know it's not a nice one, but I think that if an
admin can identify the goal and determine that he doesn't care about traffic
destined to a specific host for both filtering and caching, then all traffic to
these hosts can be tunneled or spliced.

The methods I have in mind are:
- Using firewall\kernel level of bumping exceptions
- Using some no-bump external_acl helper

I have a specific model for doing such a thing with Linux ipset and I only need
a couple of domains to evaluate the concept.
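To give an idea of the second method, a rough sketch of such a "no-bump" helper
(Python just for illustration; the ipset name is an assumption, and squid is
assumed to pass one destination IP per line to the helper):

#!/usr/bin/env python3
import subprocess, sys

IPSET = "nobump_hosts"   # assumed to exist: ipset create nobump_hosts hash:ip

for line in sys.stdin:
    parts = line.split()
    ip = parts[0] if parts else ""
    # "ipset test" exits 0 when the address is in the set.
    r = subprocess.run(["ipset", "test", IPSET, ip],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    # OK -> the destination is a known "don't bump" host, so the ssl_bump rules can splice it.
    sys.stdout.write("OK\n" if r.returncode == 0 else "ERR\n")
    sys.stdout.flush()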

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Monday, July 18, 2016 10:27 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Skype+intercept+ssl_bump

On 15/07/2016 10:38 p.m., Evgeniy Kononov wrote:
>  Hello!
> 
> Can you help me with correct settings for squid to use skype ?
> 

FYI: there are currently no known "correct" setting for Skype when SSL-Bump is 
involved.

There are settings known to work when Squid is setup as an explicit proxy, and 
some which almost-always (but only 99.999%) working for Squid intercepting port 
80.

Intercepting port 443 and bumping the crypto has issues distinguishing 
Skype-TLS from real TLS and HTTPS.


That said, I have been giving it some thought today and suspect that since MS 
are apparently filtering Skype traffic through their own machines these days we 
could maybe use the "dst" ACL reverse-DNS behaviour to detect and splice that 
traffic.

If you want to experiment with that and have good results there are many here 
who would like some good news on this.



> With this setup I have problem with group chats, calls and attachments in 
> messages.
> Attachments are sent, but not delivered to the respondent.
> Unable to create group chats, and if one is created, the respondents do not see
> the chat or cannot make calls.
> I tried adding an IP regexp to the access list, but after that all https traffic was
> spliced.
> Skype work well when I change ssl_bump bump all to ssl_bump splice all 
> How can I exclude skype from SSL bumping ?

The problem is with identifying it in a fairly reliable way among all the other
traffic. That is where we are currently all stuck.

Yuri and Eliezer have been trying various things and talking about it on-list 
in recent weeks/months. But so far no results I'm confident about recommending.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] rep_header not working

2016-07-18 Thread Eliezer Croitoru
Well, I cannot say a thing until I study the subject.
One thing I was thinking about was:
Can you analyze the squid access.log and subtract the HIT traffic from the
account\user?
If so then I can recommend a special logformat line to give you the needed
details.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Monday, July 18, 2016 8:42 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] rep_header not working

Hey Eliezer,

I am aware of that sentence. I have read it carefully. But as you see even the
apache or html one does not work.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678565.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] how to change public IP to access website on proxy squid?

2016-07-20 Thread Eliezer Croitoru
First take a look at the documents about:
http://www.squid-cache.org/Doc/config/forwarded_for/
http://www.squid-cache.org/Doc/config/via/

Depending on your setup you would be able to "mask" your IP.
But it is better done using some kind of VPN service rather than a proxy.
Try to change\add the above squid settings, i.e.:
via off
forwarded_for delete

And see if it helps you.

All The Bests,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of james82
Sent: Wednesday, July 20, 2016 5:06 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] how to change public IP to access website on proxy 
squid?

I'm sorry, what I am using it for is my secret. I just want to know, can I use squid
as a proxy server to connect to the internet? Normally I search whatismyip and my IP
appears on that website. I want to change that IP. Is it possible? How do I do
it?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-change-public-IP-to-access-website-on-proxy-squid-tp4678593p4678598.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] how to change public IP to access website on proxy squid?

2016-07-20 Thread Eliezer Croitoru
And to just illustrate what can be extracted by a single JavaScript:
http://myip.net.il/

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Amos Jeffries
Sent: Wednesday, July 20, 2016 6:25 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] how to change public IP to access website on proxy 
squid?

On 21/07/2016 3:04 a.m., Eliezer Croitoru wrote:
> First take a look at the documents about:
> http://www.squid-cache.org/Doc/config/forwarded_for/
> http://www.squid-cache.org/Doc/config/via/
> 
> Depends on your setup you would be able to "MASK" your IP.
> But it is better done using some kind of VPN service rather then a proxy.

Maybe. What is best to do depends on the reason/thing you want kept secret and 
won't tell us. So any response we provide to that half-question would probably 
be wrong.


> Try to change\add the above squid settings ie:
> via off

Note that Via header is a required header for HTTP. Only disable if you need 
to. It reveals the fact of a proxy being used but no details about your machine.

And it does not carry your IP address, so for the purpose stated it is not 
relevant.


> forwarded_for delete
> 

"forwarded_for transparent" is better.

> 
> -Original Message-
> From: james82
> 
> i'm sorry. using for what is my secret. .i just want to know, can i 
> use squid as a proxy server to connect to internet? normal i search 
> whatmyip and my ip appear on that website. i want to change that ip. 
> is it possible? how to do it.

There are many 'whatismyip' type services, they all do things differently and 
some use tricks to identify you that no proxy or other service can prevent.
 The IP address those sites display is not always the IP seen by services you 
connect to.

For privacy protection. Simply using a proxy in normal ways with that 
"forwarded_for transparent" setting "hides" your client machine a large amount. 
But does not prevent the machine itself shouting its details to the world in 
many other ways.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-16 Thread Eliezer Croitoru
Hey Omid,

1. You should understand what you are doing and not blindly fetch downloads.
The estimation is that you will need a maximum of 100GB of storage for the whole
"store" for a period of time.
This is also due to the fact that the Microsoft Windows Update service will not
download files without a need.
The fetcher should help you to download periodical updates but I assume that
the updates have a limit... You should consider asking MS what is expected
to be in the downloads or when downloads happen.

2. If you need more than one location you should use some logical volume to do
that instead of spreading manually over more than one disk.
This is based on the basic understanding that the service is a "web-service"
which serves files and you should treat it the same way as any other.
When I am running a web-service and I need more than one disk I do not rush to
"spread" it manually but use some OS-level tools.
I do trust the OS and the logical volume management tools to do their work
properly. When I lose my trust in them I will stop using this OS, it is
as simple as that.
3. The HITs are counted, but I need to dig into the code to verify how a HIT is 
logged and how it can be counted manually.
QOS or TOS, by what? How?
The service has one way out and one way in.
If the requested file is in the store you will not see outgoing traffic for the 
file.
The right way to show a HIT in this service is to change the response headers 
file to have another header.
This could be done manually using a tiny script, but not as part of the store 
software.
An example of such an addition would be:
# perl -pi -e '$/=""; s/\r\n\r\n/\r\nX-Store-Hit: HIT\r\n\r\n/;' /var/storedata/header/v1/fff8db4723842074ab8d8cc4ad20a0f97d47f6d849149c81c4e52abc727d43b5

And it will change the response headers, and these can then be seen in squid's 
access.log using a log format.
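
For example, a sketch of such a log format (assuming the responses were stamped
with an X-Store-Hit header as above; the format name and log path are arbitrary):

logformat storehits %ts.%03tu %>a %Ss/%03>Hs %rm %ru "%{X-Store-Hit}<h"
access_log /var/log/squid/storehits.log storehits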
I can think of other ways to report this, but a question:
If it works as expected and is expected to always work, why would you want to see 
the HIT in a QOS or TOS?
QOS and TOS levels of socket manipulation would require me to find a way to hack 
the simple web service, and I probably won't go that way.
I do know that you will be able to manipulate QOS or TOS in squid if some 
header exists in the response.

I might be able to look at the subject if there is a real 
technical/functional need for it in long-term usage.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Omid Kosari
Sent: Friday, July 15, 2016 8:48 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Windows Updates a Caching Stub zone, A windows 
updates store.

Hi,

Questions:
1- What happens if the disk or partition becomes full?
2- Is there a way to use more than one location for the store?
3- Currently hits from your code cannot be counted. How can I use qos
flows/tos to mark those hits?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678524.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] NOTICE: Authentication not applicable on intercepted requests.

2016-07-05 Thread Eliezer Croitoru
If I may add: with some conditions it would be possible to use some network-level 
authentication.
Indeed browsers, clients and servers do not support authentication on intercepted 
and transparent proxy connections, but (and it is a big "but") if the network has 
clients that use a single seat per user (i.e. one IP per PC) and no central 
terminal service, then you can work the impossible into the possible.
You could then allow a user to authenticate on a web page, and from then until some 
point in time, such as a couple of seconds to minutes later, he will be authenticated.
In big WiFi networks that support RADIUS authentication it is 
possible to authenticate users against LDAP or AD, and the session will be valid 
for as long as the WiFi session is open.

Another approach which I have implemented in the past was to use some kind of 
DNS service which systems interact with as a "registration" DB.
A user logs in and the DHCP server registers that a specific user has a specific 
IP and MAC address (there are a couple of more secure ways); then, when the user 
authenticates using a web page\service, the DNS PTR record for the IP is updated.
The proxy has a helper that checks the PTR of the IP and, if it exists, it tells 
squid what the username for the request is.
If not, it returns a missing username.
The client authenticates for a specific amount of time and after that the DNS 
record is expunged.
It is similar to the squid session helpers but works with another DB: DNS.
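
As a rough sketch of the kind of helper described (everything here is illustrative:
the PTR naming scheme, paths and TTLs are assumptions, and it relies on the dig
tool being installed):

# squid.conf
external_acl_type ptr_auth ttl=60 negative_ttl=10 %SRC /usr/local/bin/ptr_user.sh
acl ptr_user external ptr_auth
http_access allow ptr_user

# /usr/local/bin/ptr_user.sh
#!/bin/sh
# Reads one client IP per line from squid and answers with the username taken
# from the first label of the IP's PTR record, e.g. "alice.auth.example.local"
# results in "OK user=alice"; no PTR record results in "ERR".
while read ip; do
    ptr=$(dig +short -x "$ip" | head -n 1)
    if [ -n "$ptr" ]; then
        echo "OK user=${ptr%%.*}"
    else
        echo "ERR"
    fi
done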

Another approach I have seen in products is to install some kind of 
authentication daemon per desktop which extends a 60-second authorization 
and registration every 15-30-45 seconds using the AD or LDAP user.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Alex Rousskov
Sent: Friday, July 1, 2016 8:45 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] NOTICE: Authentication not applicable on intercepted 
requests.

On 06/30/2016 01:19 PM, Eugene M. Zheganin wrote:
> On 30.06.2016 17:04, Amos Jeffries wrote:
>> Use a myportname ACL to prevent Squid attempting impossible things like
>> authentication on intercepted traffic.


> Sorry, but I still didn't get the idea. I have one port that squid is
> configured to intercept traffic on, and another for plain proxy
> requests. 

That is OK/normal, of course.


> How do I tell squid not to authenticate anyone on the intercept one? 

By making your authentication rules port-specific. Squid does not
authenticate by default so you are explicitly telling it to authenticate
[some] users. You need to adjust those rules to exclude intercepted
transactions.


> From what I know, squid will send the authentication
> sequence as soon as it encounters the authentication-related ACL in the
> ACL list for the request given. Do have to add myportname ACL with
> non-intercepting port for all the occurences of the auth-enabled ACLs,
> or may be there's a simplier way ?

I do not think there is. We could, in theory, [add an option to] ignore
authentication-related ACLs when dealing with intercepted transactions,
but I am not sure that doing so would actually solve more problems than
it will create.

Please note that, in many cases, your myportname ACLs can go at the very
beginning of the authentication-sensitive rules to exclude intercepted
transactions -- you may not have to prefix each auth-enabled ACL
individually (because none of them will be reached after early
myportname ACL guards).
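
For illustration, a minimal sketch of that ordering (the port numbers, the port
name and the localnet definition below are assumptions, and auth_param must
already be configured):

http_port 3128
http_port 3129 intercept name=intercepted

acl from_intercept myportname intercepted
acl localnet src 192.168.0.0/16
acl authed proxy_auth REQUIRED

# intercepted traffic is handled before any authentication ACL is evaluated
http_access allow from_intercept localnet
http_access allow authed
http_access deny all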


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-06 Thread Eliezer Croitoru
Hey Steve,

There are a couple of options for the issue, and a bad request can happen if squid 
transforms or modifies the request.
Did you try to use basic debug section output to verify whether you are able to 
"replicate" the request using a tiny script or curl?
I think that section 11 is the right one to start with
(http://wiki.squid-cache.org/KnowledgeBase/DebugSections)
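
For example, one way to raise just that section's verbosity temporarily in
squid.conf (a sketch; adjust the level to taste):

debug_options ALL,1 11,9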
There were a couple of issues with intercepted https connections in the past, but a 
400 means that something is bad, mainly in the expected input and not a 
certificate, though it is possible that other reasons are there.
I have not tried to use Skype in a transparent environment for a very long time, 
but I can try to test it later.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Steve Hill
Sent: Wednesday, July 6, 2016 5:47 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Skype, SSL bump and go.trouter.io


I've been finding some problems with Skype when combined with TProxy and 
HTTPS interception and wondered if anyone had seen this before:

Skype works so long as HTTPS interception is not performed and traffic 
to TCP and UDP ports 1024-65535 is allowed directly out to the internet. 
  Enabling SSL-bump seems to break things - When making a call, Skype 
makes an SSL connection to go.trouter.io, which Squid successfully 
bumps.  Skype then makes a GET request to 
https://go.trouter.io/v3/c?auth=true=55 over the SSL connection, 
but the HTTPS server responds with a "400 Bad Request" error and Skype 
fails to work.

The Skype client clearly isn't rejecting the intercepted connection 
since it is making HTTPS requests over it, but I can't see why the 
server would be returning an error.  Obviously I can't see what's going 
on inside the connection when it isn't being bumped, but it does work 
then.  The only thing I can think is maybe the server is examining the 
SSL handshake and returning an error because it knows it isn't talking 
directly to the Skype client - but that seems like an odd way of doing 
things, rather than rejecting the SSL handshake in the first place.

-- 
  - Steve Hill
Technical Director
Opendium Limited http://www.opendium.com

Direct contacts:
Instant messager: xmpp:st...@opendium.com
Email:st...@opendium.com
Phone:sip:st...@opendium.com

Sales / enquiries contacts:
Email:sa...@opendium.com
Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
Email:supp...@opendium.com
Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Eliezer Croitoru
If the splice doesn't solve the issue, what would you expect squid to do?

Splice equals routing…

The other solution, which ufdbguard implements, is probing the destination hosts.

If you want a solution I can try to see if it is possible, but I cannot 
guarantee that you or anyone will like it.

 

Eliezer

 



 <http://ngtech.co.il/lmgtfy/> Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: Yuri Voinov [mailto:yvoi...@gmail.com] 
Sent: Wednesday, July 6, 2016 11:49 PM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] host_verify_strict and wildcard SNI

 


I am very seriously concerned about the CDN issue, because every day I discover 
more and more problematic sites, namely in connection with CDNs and HTTPS. 
More than four Squid servers are experiencing these problems in my network. 
And I still do not see any solutions to these problems.

Moreover, the splice does not solve these problems.

The only thing left is to skip whole networks in the proxy bypass.

Which is totally unacceptable. Traffic is money. And a lot of money.

07.07.2016 2:38, Eliezer Croitoru wrote:
> Hey Yuri,
>
> I am not the "standards" guy but I do know that if something can be encoded
> it can be "decoded".
> There are special cases which needs special "spice" which sometimes is not
> present here or there on the shelves.
> To my disappointment and happiness there are very good products out there
> which are not squid with much better fines invested in them.
> I can clearly say that the Squid-Cache project is not the most "advanced"
> piece of software in the market and I know that it cannot compare to let say
> even 500 coding programmers work.
> I have seen couple products that are open source which tries to provide
> functionality which is similar to squid only in the protocol level and a
> simple proxy with great luck.
> Some of them are not as great as they might seems but I think that a young
> programmer with enough investment can learn the required subjects to
> implement a solution.
> However, here admins, users, programmers can ask questions as they please
> and I encourage to ask.
> I try to answer as much as I can and in many cases my knowledge might not
> be enough but I am trying to answer what I can with hope that it will help.
> And unlike MD Doctors SysAdmins do not need to swear on something like "do
> not harm" and I think it's a good aspect on things.
>
> I am still looking for clues about cloudflare since I have yet to see the
> person who hold the keys for them.
>
> Eliezer
>
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
> From: Yuri Voinov [mailto:yvoi...@gmail.com]
> Sent: Wednesday, July 6, 2016 11:15 PM
> To: Eliezer Croitoru; squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] host_verify_strict and wildcard SNI
>
> I know. Just asked. Since I am familiar with the standards.
>
> 07.07.2016 1:54, Eliezer Croitoru wrote:
> > Hey Yuri,
> >
> > These two subjects are not related directly to each other but
> > they might have something in common.
> > Squid expects clients connections to meet the basic RFC6066 section 3:
> > https://tools.ietf.org/html/rfc6066#section-3
> >
> > Which states that a host name should be there and the legal
> > characters of a hostname from both rfc1035 and rc6066 are very speicifc.
> > If a specific software are trying to request a wrong sni name
> > it's an issue in the client side request or software error
> > handling and enforcement.

Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Eliezer Croitoru
Hey Yuri,

These two subjects are not related directly to each other but they might have 
something in common.
Squid expects client connections to meet the basic RFC6066 section 3:
https://tools.ietf.org/html/rfc6066#section-3

which states that a host name should be there, and the legal characters of a 
hostname from both rfc1035 and rfc6066 are very specific.
If a specific piece of software is trying to request a wrong SNI name, it's an issue in 
the client-side request or in the software's error handling and enforcement.
An HTTP server would probably respond with a 4XX response code or the default 
certificate.
There are other options of course, but the first thing to check is whether the client 
is a real browser or some special creature that tries its luck with a special 
form of SSL.
To my understanding host_verify_strict tries to enforce basic security levels, 
while in a transparent proxy the rules will always change.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Wednesday, July 6, 2016 10:43 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] host_verify_strict and wildcard SNI


Sounds familiar.

Do you experience occasional problems with CloudFlare sites?


06.07.2016 20:36, Steve Hill пишет:
>
> I'm using a transparent proxy and SSL-peek and have hit a problem with
an iOS app which seems to be doing broken things with the SNI.
>
> The app is making an HTTPS connection to a server and presenting an
SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if this
behaviour is actually illegal, but it certainly doesn't seem to make a
lot of sense to me.
>
> Squid then internally generates a "CONNECT *.example.com:443" request
based on the peeked SNI, which is picked up by hostHeaderIpVerify().
Since *.example.com isn't a valid DNS name, Squid rejects the connection
on the basis that *.example.com doesn't match the IP address that the
client is connecting to.
>
> Unfortunately, I can't see any way of working around the problem -
"host_verify_strict" is disabled, but according to the docs,
> "For now suspicious intercepted CONNECT requests are always responded
to with an HTTP 409 (Conflict) error page."
>
> As I understand it, turning host_verify_strict on causes problems with
CDNs which use DNS tricks for load balancing, so I'm not sure I
understand the rationale behind preventing it from being turned off for
CONNECT requests?
>



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Eliezer Croitoru
Hey Yuri,

I am not the "standards" guy but I do know that if something can be encoded
it can be "decoded".
There are special cases which needs special "spice" which sometimes is not
present here or there on the shelves.
To my disappointment and happiness there are very good products out there
which are not squid with much better fines invested in them.
I can clearly say that the Squid-Cache project is not the most "advanced"
piece of software in the market and I know that it cannot compare to let say
even 500 coding programmers work.
I have seen couple products that are open source which tries to provide
functionality which is similar to squid only in the protocol level and a
simple proxy with great luck.
Some of them are not as great as they might seems but I think that a young
programmer with enough investment can learn the required subjects to
implement a solution.
However, here admins, users, programmers can ask questions as they please
and I encourage to ask.
I try to answer as much as I can and in many cases my knowledge might not
be enough but I am trying to answer what I can with hope that it will help.
And unlike MD Doctors SysAdmins do not need to swear on something like "do
not harm" and I think it's a good aspect on things.

I am still looking for clues about cloudflare since I have yet to see the
person who hold the keys for them.

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

From: Yuri Voinov [mailto:yvoi...@gmail.com] 
Sent: Wednesday, July 6, 2016 11:15 PM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] host_verify_strict and wildcard SNI


I know. Just asked. Since I am familiar with the standards.

07.07.2016 1:54, Eliezer Croitoru wrote:
> Hey Yuri,
>
> These two subjects are not related directly to each other but they might have
> something in common.
> Squid expects clients connections to meet the basic RFC6066 section 3:
> https://tools.ietf.org/html/rfc6066#section-3
>
> Which states that a host name should be there and the legal characters of a
> hostname from both rfc1035 and rc6066 are very speicifc.
> If a specific software are trying to request a wrong sni name it's an issue in
> the client side request or software error handling and enforcement.
> A http server would probably respond with a 4XX response code or the default
> certificate.
> There are other options of course but the first thing to check is if the client
> is a real browser or some special creature that tries it's luck with a special
> form of ssl.
> To my understanding host_verify_strict tries to enforce basic security levels
> while in a transparent proxy the rules will always change.
>
> Eliezer
>
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
>
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of Yuri Voinov
> Sent: Wednesday, July 6, 2016 10:43 PM
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] host_verify_strict and wildcard SNI
>
> Sounds familiar.
>
> Do you experience occasional problems with CloudFlare sites?
>
> 06.07.2016 20:36, Steve Hill wrote:
> > I'm using a transparent proxy and SSL-peek and have hit a problem with
> > an iOS app which seems to be doing broken things with the SNI.
> >
> > The app is making an HTTPS connection to a server and presenting an
> > SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if this
> > behaviour is actually illegal, but it certainly doesn't seem to make a
> > lot of sense to me.
> >
> > Squid then internally generates a "CONNECT *.example.com:443" request
> > based on the peeked SNI, which is picked up by hostHeaderIpVerify().
> > Since *.example.com isn't a valid DNS name, Squid rejects the connection
> > on the basis that *.example.com doesn't match the IP address that the
> > client is connecting to.
> >
> > Unfortunately, I can't see any way of working around the problem -
> > "host_verify_

Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Eliezer Croitoru
A couple of thoughts, Alex,

Currently the basic splice rules are being used with regex, which means that 
they can work with wildcards.
And I can understand the argument of a client wanting some wildcard domain, but 
I do not know of an application that actually tries to use such logic.
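
For instance, a splice rule of that kind might look like the following sketch
(the domain is purely an example):

acl step1 at_step SslBump1
acl splice_sni ssl::server_name_regex -i \.example\.com$
ssl_bump peek step1
ssl_bump splice splice_sni
ssl_bump bump all

A peeked wildcard SNI such as "*.example.com" would still match that regex, which
is what makes regex-based splice rules tolerant of wildcards.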

There are cases where the RFC does leave open minds to get wild, and I am not 
saying whether these are right or wrong, but it does state it's a hostname and 
not a certificate common name or some v3 component.

Practically, some client can try to contact some arbitrary website, and there are 
a couple of aspects to it.
If a user tries to connect to a site using a company proxy, then what will 
companies want to allow?
Would large companies allow such a connection to be spliced, or maybe they will 
want to inspect this connection more deeply?
What about ISPs? These mostly care about caching and not about ACLs, while 
there are many who use squid for content filtering.

From where anyone stands, a wildcard should never be required to be tested 
against any DNS server in the current state of the internet, since DNS servers 
have a strict policy of honoring only valid hostname characters.
Maybe the future will bring the wildcard into the DNS world; should we consider 
such an option even if the RFC tends to minimize and containerize the options?

Thanks,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Alex Rousskov
Sent: Thursday, July 7, 2016 4:07 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] host_verify_strict and wildcard SNI

On 07/06/2016 05:01 PM, Marcus Kool wrote:
> On 07/06/2016 11:36 AM, Steve Hill wrote:
>> I'm using a transparent proxy and SSL-peek and have hit a problem with
>> an iOS app which seems to be doing broken things with the SNI.
>>
>> The app is making an HTTPS connection to a server and presenting an
>> SNI with a wildcard in it - i.e. "*.example.com".  I'm not sure if
>> this behaviour is actually illegal, but it certainly doesn't seem
>> to make a lot of sense to me.


> An SNI with a wildcard indeed does not make sense.

There are three rather different questions to consider here:

1. Is wildcard SNI "legal/valid"?
2. Can wildcard SNI "make sense" in some cases?
3. What should Squid do when receiving a wildcard SNI?


Q1. Is wildcard SNI "legal/valid"?

I do not know the answer to that question. The "*.example.com" name is
certainly legal in many DNS contexts. RFC 6066 requires HostName SNI to
be a "fully qualified domain name", but I failed to find a strict-enough
RFC definition of an FQDN that would either accept or reject wildcards
as FQDNs. I would not be surprised if FQDN syntax is not defined to the
level that would allow one to reject wildcards as FQDNs based on syntax
alone.


Q2. Can wildcard SNI "make sense" in some cases?

Yes, of course. The client essentially says "I am trying to connect to
_any_ example.com subdomain at this IP:port address. If you have any
service like that, please connect me". That would work fine in
deployment contexts where several servers with different names provide
essentially the same service and the central "routing point" would pick
the "best" service to use. I am not saying it is a good idea to use
wildcard SNIs, but I can see them "making sense" in some cases.


Q3. What should Squid do when receiving a wildcard SNI?

The first two questions are not really important and each may not even
have a single "correct" answer. I am sure protocol purists can argue
about them forever. The last question is important, which brings us to:

> Since Squid tries to mimic the behavior of the server and of the client,
> it deserves a patch where instead of doing a DNS lookup and then doing a
> connect (based on the result of the DNS lookup?),
> Squid simply connects to the IP address that the client tries to connect to
> and does the TLS handshake with the SNI (that does not make sense).
> This way it mimics the client a bit better.

I believe that is what Squid does already but please correct me if I am
wrong.

When forming a fake CONNECT request, Squid uses SNI information because
that is what ACLs and adaptation services usually want to see. However,
I hope that intercepting Squid always connects to the intended
destination of the intercepted connection instead of trusting its own
fake CONNECT request.

Whether Squid should generate a fake CONNECT with a wildcard host name
is an interesting question:

1. A fake CONNECT targeting an wildcard name may break ACL-driven rules
and adaptation services (at least).

2. A fake CONNECT targeting an IP address instead of a wildcard name may
not give some ACL-driven rules and

Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Eliezer Croitoru
Can you please verify, using debug 11,9, that squid is not altering the request 
in any form?
Such as mentioned at: http://bugs.squid-cache.org/show_bug.cgi?id=4253

Have you tried adding:
request_header_access Surrogate-Capability deny all

Microsoft is at the edge of technology, contrary to what some might think, and if 
they do not reveal their cards it doesn't mean they are stupid (not directed at 
you).
If there is a security expert out there for Linux, there is more than one for 
MS.

Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Steve Hill [mailto:st...@opendium.com] 
Sent: Thursday, July 7, 2016 11:45 AM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Skype, SSL bump and go.trouter.io

On 06/07/16 20:44, Eliezer Croitoru wrote:

> There are couple options to the issue and a bad request can happen if
> squid transforms or modifies the request. Did you tried to use basic
> debug sections output to verify if you are able to "replicate" the
> request using a tiny script or curl? I think that section 11 is the
> right one to start with
> (http://wiki.squid-cache.org/KnowledgeBase/DebugSections) There were
> couple issues with intercepted https connections in the past but a
> 400 means that something is bad and mainly in the expected input and
> not a certificate but it is possible that other reasons are there. I
> have not tried to use skype in a transparent environment for a very
> long time but I can try to test it later.

I tcpdumped the icap REQMOD session to retrieve the request and tried it
manually (direct to the Skype server) with openssl s_client.  The Skype
server (not Squid) returned a 400.  But of course, the Skype request
contains various data that the server will probably (correctly) see as a
replay attack, so it isn't a very good test - all I can really say is
that the real Skype client was getting exactly the same error from the
server when the connection is bumped, but works fine when it is tunnelled.

Annoyingly, Skype doesn't include an SNI in the handshake, so peeking in
order to exclude it from being bumped isn't an option.

The odd thing is that I have had Skype working in a transparent 
environment previously (with the unprivalidged ports unfirewalled), so I 
wonder if this is something new from Microsoft.

-- 
  - Steve Hill
Technical Director
Opendium Limited http://www.opendium.com

Direct contacts:
Instant messager: xmpp:st...@opendium.com
Email:st...@opendium.com
Phone:sip:st...@opendium.com

Sales / enquiries contacts:
Email:sa...@opendium.com
Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
Email:supp...@opendium.com
Phone:+44-1792-825748 / sip:supp...@opendium.com

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] HTTPS bump doesn't work with websites that require SNI

2016-07-10 Thread Eliezer Croitoru
Hey,

 

What version of squid is provided on pfsense and what version are you using?

 

Eliezer

 



 <http://ngtech.co.il/lmgtfy/> Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yi?itcan U?UM
Sent: Sunday, July 10, 2016 3:49 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] HTTPS bump doesn't work with websites that require SNI

 

Hello there. We're using pfsense and squid-proxy to bump https connections 
between some of our machines and the web. The setup seems to work fine for most 
https sites, but it doesn't work for the others.

 

One example of these sites is "docs.docker.com". Even though we can connect to 
"docker.com", we can't connect to "docs.docker.com".

 

The error we get is:

(92) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

Handshake with SSL server failed: error:14077410:SSL 
routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

Upon further investigation we found out that this happens because some sites 
require SNI to supply correct SSL certificate.

You can test this out with:

---

openssl s_client -connect docs.docker.com:443 -> ERROR

140612823746464:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert 
handshake failure:s23_clnt.c:744:

---

openssl s_client -connect docs.docker.com:443 -servername docs.docker.com -> Works



Squid seems to make https request without the SNI. How can we configure Squid 
to use SNI? Thanks.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-10 Thread Eliezer Croitoru
Windows Updates a Caching Stub zone
<http://www1.ngtech.co.il/wpe/?page_id=301> 

I have been working for quite some time trying to see if it is possible to
cache windows updates using Squid.
I have seen that it is possible, but to test the concept I wrote a small proxy
and a helper tool.
The tools are a Proof Of Concept and an almost full implementation of the idea.
I consider it a Squid helper tool.

Feel free to use the tool and if you need any help using it just contact me
here or off list.

Eliezer


Eliezer Croitoru <http://ngtech.co.il/lmgtfy/> 
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il
 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Eliezer Croitoru
Returning to the beginning of the subject, there are a couple of other ideas on 
the table to allow these connections to exit, or to somehow either predict them or 
identify them as they come.
The first thing is that, from a caching perspective, you don't really care about 
passing authentication sessions, since these should never be cached.
Let's say we know every one of the domain's IP addresses and these are not CDN 
ones; it would be possible to identify them and splice them.

I can think about a tiny script that will identify the IP addresses of this 
service and will splice these.
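
A minimal sketch of that idea (the address range below is only a documentation
placeholder, not the service's real addresses):

acl trouter_dst dst 192.0.2.0/24
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice trouter_dst
ssl_bump bump all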
The issue is that I cannot guarantee that it will not open other doors which you 
might not want opened.
If you wish to try my concept I can try to give it some work, but my condition 
is that it is tried in binary form only for the testing period.

Let me know how it sounds,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Steve Hill
Sent: Wednesday, July 6, 2016 5:47 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Skype, SSL bump and go.trouter.io


I've been finding some problems with Skype when combined with TProxy and 
HTTPS interception and wondered if anyone had seen this before:

Skype works so long as HTTPS interception is not performed and traffic 
to TCP and UDP ports 1024-65535 is allowed directly out to the internet. 
  Enabling SSL-bump seems to break things - When making a call, Skype 
makes an SSL connection to go.trouter.io, which Squid successfully 
bumps.  Skype then makes a GET request to 
https://go.trouter.io/v3/c?auth=true=55 over the SSL connection, 
but the HTTPS server responds with a "400 Bad Request" error and Skype 
fails to work.

The Skype client clearly isn't rejecting the intercepted connection 
since it is making HTTPS requests over it, but I can't see why the 
server would be returning an error.  Obviously I can't see what's going 
on inside the connection when it isn't being bumped, but it does work 
then.  The only thing I can think is maybe the server is examining the 
SSL handshake and returning an error because it knows it isn't talking 
directly to the Skype client - but that seems like an odd way of doing 
things, rather than rejecting the SSL handshake in the first place.

-- 
  - Steve Hill
Technical Director
Opendium Limited http://www.opendium.com

Direct contacts:
Instant messager: xmpp:st...@opendium.com
Email:st...@opendium.com
Phone:sip:st...@opendium.com

Sales / enquiries contacts:
Email:sa...@opendium.com
Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
Email:supp...@opendium.com
Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Eliezer Croitoru
Thanks for clearing things up.
I suspect that in 1987 I wasn't yet able to understand English as I do now.
And also, the Internet in my area in that year was something worth almost as much 
as gold.
So it seems that this is the first time I have actually encountered a case 
in which a "hostname" was used with a wildcard in it.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, July 7, 2016 7:24 PM
To: Eliezer Croitoru; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] host_verify_strict and wildcard SNI

On 07/07/2016 01:37 AM, Eliezer Croitoru wrote:

> Maybe the future will bring the wildcard into the DNS world

FYI: Wildcards have been in DNS world since before RFC 1035 dated 1987:

>- The results of standard queries where the QNAME contains "*"
>  labels if the data might be used to construct wildcards.

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [squid-announce] Squid 3.5.20 is available

2016-07-07 Thread Eliezer Croitoru
The article was published at: http://www1.ngtech.co.il/wpe/?p=293

I am happy to publish the article for:
Squid-Cache 3.5.20 and 4.0.12 beta release.
The details about the RPMs repository are at the squid wiki 
<http://wiki.squid-cache.org/KnowledgeBase/CentOS>.
RPMs Available <http://www.ngtech.co.il/repo/>  for CentOS, Oracle Linux, 
OpenSUSE Leap
Faster is not always the answer!!
When are clients not complaining?
What I mean is: did you ever see a client complain about the speed of the 
Internet connection?
No, I do not mean that he or she complains it's too slow, but that it's too fast.
I had the pleasure of meeting a couple of clients who complained that the computer 
was moving slowly
since their Internet connection speed was upgraded. No, it wasn't a joke; it 
is reality.
The scenario needs some background and context to sound a bit more realistic:
The client is about 80 years old and the PC is 2-3 years old. When the 
Internet connection was slow,
the OS updates and the AV's P2P connections were slow. Every day the computer was 
shut down around a specific hour
and, if required, some updates were applied. Now the issue is that since the 
Internet speed got faster,
every couple of hours an update from the AV was applied and almost every couple 
of days an OS update was back on the table.
The main issue was speed, but with a twist: "when I disconnect the router 
it works faster", he states.
Actually it took me quite a while to understand that a simple desktop with 
about 4GB RAM should be enough to use
Skype, Word, Email and a couple of console-based tiny pieces of software.
So why? Why did the PC get slower?
I really do not know! It could be lots of IOPS that were dumped on a 5400 RPM 
HDD, or that the AV scanned the
2GB of updates repeatedly. I cannot answer what I never understood, and from 
what I understood, faster is not always
the good answer. However, I can try to imagine that verifying that every 
signature of a file is still the same as it should be
might not be so easy for every PC.
These days I am counting the 10th month in which my local testing Squid runs in a 
"full" http responses digest mode.
Every single response is digested using the SHA256 hashing function and it 
feels like it's not there at all.
It's not affecting my tiny 15Mbps line-rate downloads or my tiny servers farm.
Oh well, that's not the full and whole truth!!
The full truth is that the users agreed to use the service in any form since 
they care more about their mind and
soul than about their comfort. They decided that they need some filtering 
system when they insert data into
their mind through their eyes. It's as simple as it sounds. They know that 
their mind should be guarded behind
a couple of NAT systems and a couple of IDS+IPS, since there are a couple of weird 
ideas out there on the Internet.
I am asking myself a couple of times every single day questions like:
*   How do you want others to treat you when you have some need?
*   Would you want others to do everything for you?
*   How would a "Plate Of Gold" look?
And then my IDS+IPS system throws at me a big fat text exception with the 
header "We are humans, we need others!".
And indeed this is an IDS+IPS which I didn't build, and every once in a while I 
am asking myself:
how many digest functions are in there?
*   CRC32
*   MD5
*   SHA1
*   SHA256
*   SHA512
*   SHA1024
*   SHA∞ ?
Is there an AES-based one also in there?
And my answer is that I do not know what's in there, but I can see some 
"reflection" of something greater and better.
Then I start to wonder: why do all these clients want their so well-formed, 
solid and mature mind to be proxied
by any solution? Would any human-made solution ever match our genes?
I cannot give any "scientific" opinion, but I can bring to the table things from 
others which carry more weight than
mine on these matters, either from life experience or scientific research. These do 
claim that the human genes are not "perfect" and
therefore there is always a need to "spice" the human mind and soul in order to 
allow it some level of progress.
The most simple example of humans being affected is that kids try to learn 
from their parents and later, with
time, they try to learn from others. This state of the learning curve can teach us 
that genes are not "everything".
The answer will not always be "Faster" if you get to the state of 
understanding and believing that
it's a rocket to your mind that's hitting using words, pictures, tables, shapes 
and other things.
But!! Don't get paranoid!! It is enough that you have another person in the house 
next to you
and you are safe enough to not lose your mind. It is enough that there is someone 
that can
be asked directly or through a proxy, and this world already feels much better 
than it did a couple of seconds ago.
All The Bests,
Eliez

Re: [squid-users] Range header is a hit ratio killer

2016-08-07 Thread Eliezer Croitoru
Hey Yuri,

 

The issue is not money alone…

To my understanding Squid is written in C++ and is very complex; due to this it 
requires more than basic-level knowledge.

However, I can clearly say that it's not a big issue to use the current squid 
APIs/interfaces (ICAP/eCAP) to implement a solution which will act like the 
nginx "module".
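
For example, wiring such a (purely hypothetical) adaptation service into squid
would only take a few lines; the service name and ICAP URI below are assumptions:

icap_enable on
icap_service range_assembler respmod_precache bypass=1 icap://127.0.0.1:1344/ranges
adaptation_access range_assembler allow all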

I do not know how long it would take or how much it will cost since it requires 
time…

This time is required for:

- Learning\Relearning

- Identifying and predicting the different cases

- Basic testing for the different cases

- Implementing a basic structure

- Testing

- (In a loop and\or couple trees…)

 

From my point of view, compared to "ransom" or any similar idea, anyone who will 
write any piece of software to implement this specific idea should be able to take 
more than just this onto his shoulders.

And just to illustrate, imagine that some nice guy pops into Boeing or RedHat 
offices and leaves a DiskOnKey at the front desk with a note "This flash 
drive contains an idea that will bring you lots of money" (not saying the 
current idea itself is bad or wrong..).

What would these companies do? Would they put a team of engineers on it within a second?

I do believe that they are not "hot-headed" enough to act in a second.

 

I received a link a couple of years ago from Amos for an e-cap module:

https://github.com/creamy/ecap-mongo

which does a couple of very interesting things, but despite the fact that I 
learned to program in C and C++, I couldn't understand and/or implement a Store 
API which could be used for/by squid.

However I implemented this:

Windows Updates a Caching Stub zone[ http://www1.ngtech.co.il/wpe/?page_id=301 ]

And while implementing the idea, one of the main things I noticed is that trying 
to "catch" all traffic onto disk is the wrong way to define a goal.

Indeed it can be written to be done "automatically", but I will ask:
What is it worth writing everything to disk if you never read back more 
than 1% of the files' content?

 

If you have a specific targeted site it's one thing, but trying to catch them 
all is kind of like tying your feet with a rope to a door and then shoving/slamming 
the door in the other direction.
Imagine how far and fast you will fly.

 

Thanks,

Eliezer

 



 <http://ngtech.co.il/lmgtfy/> Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Yuri Voinov
Sent: Sunday, August 7, 2016 9:23 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Range header is a hit ratio killer

 


-BEGIN PGP SIGNED MESSAGE- 
Hash: SHA256 
 
So,

the overall answer is "NO".

You can use Store-ID + collapsed forwarding functionality to achieve something 
you want. Maybe together, maybe separately. Hard luck :)
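
A minimal squid.conf sketch of that combination (values are illustrative only;
Store-ID itself also needs a store_id_program rewrite helper, which is omitted
here):

collapsed_forwarding on
range_offset_limit none
quick_abort_min -1 KB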

But this is your own problem. No one will solve the problem without the 
infusion of large amounts of money to make it interesting.

:)


07.08.2016 20:12, Amos Jeffries wrote:
> On 6/08/2016 9:56 p.m., k simon wrote:
>> Hi, list,
>>   Code 206 is the most painful for our forward proxy. Squid uses
>> "range_offset_limit" to process byte-range requests. When set to "none",
>> it has 2 well-known issues:
>> 1. It boosts the traffic on the server side; we observed it is amplified
>> 500% compared to the client side on our box.
>
> To which the answer currently is to see if enabling collapsed_forwarding
> works okay for your needs.
>
>> 2. It always fails on a lossy link, and squid refetches it again and
>> again.
>>   I've noticed that nginx has supported "byte-range caching" since
>> 1.9.8 by the module ngx_http_slice_module officially.
>> (1.
>> http://nginx.org/en/docs/http/ngx_http_slice_module.html?_ga=1.140845234.106894549.1470474534
>
> So? What relevance do other software features have to Squid behaviour?
>
> <http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F>
>
> ... to be fair the storage code in Squid is a bit hairy in places. So
> paying for it to be done is unlikely to be cheap. But still, waiting
> won't fix the problem. We nearly got there in Squid-2.7, but the
> experiment there is not able to completely port across to

