[squid-users] SNMP queries to squid never go beyond 1 GB

2015-03-19 Thread Lawrence Pingree
No matter what cache_mem I set, MRTG queries via SNMP never seem to report
beyond 1 GB, even running the latest 3.5 code. Amos, is the code capable of
allocating more than one gig of memory?
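
For reference, the same gauge can be read outside MRTG with a one-off snmpget (a sketch: it assumes the snmp_port 3401 and 'public' community that appear in the squid.conf excerpts later in this archive, and the host address is illustrative; OID 1.3.6.1.4.1.3495.1.1.1 is cacheSysVMsize in the squid MIB, reported in KB):

# poll squid's memory gauge directly; the value is in KB
snmpget -v2c -c public 192.168.2.2:3401 .1.3.6.1.4.1.3495.1.1.1.0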

 

Storage Mem Size @ x

The statistics were last updated Thursday, 19 March 2015 at 18:48,
at which time 'squid 3.5.1' had been up for 10:27:16.

'Daily' Graph (5 Minute Average)

              Max             Average         Current
Mem Size      980.8 MBytes    648.1 MBytes    139.5 MBytes


Convert your dreams to achievable and realistic goals, this way the journey
is satisfying and progressive. - LP

 

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com

https://webportal.isc2.org/custom/CertificationVerificationResults.aspx?FN=Lawrence&LN=Pingree&CN=76042



[squid-users] Donate to squid!

2014-10-22 Thread Lawrence Pingree
FYI, I just donated, and I encourage all of you to donate to squid to help
our squid development out!

 

To donate: http://www.squid-cache.org/Foundation/donate.html

 

 

 

 

Convert your dreams to achievable and realistic goals, this way the journey
is satisfying and progressive. - LP

 

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com

https://webportal.isc2.org/custom/CertificationVerificationResults.aspx?FN=Lawrence&LN=Pingree&CN=76042



RE: [squid-users] squid yum install

2014-08-29 Thread Lawrence Pingree
Awesome! Thank you. Will that roll into their prod repositories?

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Friday, August 29, 2014 1:46 AM
To: squid-users@squid-cache.org
Cc: Lawrence Pingree
Subject: Re: [squid-users] squid yum install

On 08/29/2014 07:33 AM, Lawrence Pingree wrote:
 Does anyone know who builds the latest versions of squid RPMs for Opensuse? I 
 would love to upgrade but can't.

I have a build node for opensuse and I will add it to the build list for next 
week.

Eliezer



RE: [squid-users] squid yum install

2014-08-28 Thread Lawrence Pingree
Does anyone know who builds the latest versions of squid RPMs for Opensuse? I 
would love to upgrade but can't.

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Thursday, August 28, 2014 3:40 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid yum install

Hey There,

Indeed there is not yet a 3.4.7 RPM release, due to the basic fact that the
sources were released in the last 24 hours and it takes time to run a basic
test and build the RPMs.

I will probably build the 3.4.7 RPMs in the next week.
The release will be for CentOS 6 and not yet 7.

Indeed squid builds on CentOS 7, but from my point of view it is not tested
enough for production compared to Ubuntu 14.04.

I will release notes about it later.

Eliezer

On 08/28/2014 04:28 PM, Santosh Bhabal wrote:
 Hello Farooq,

 I am unable to find squid 3.4.7 rpm in the URL which you have given.

 Regards
 Santosh





RE: [squid-users] Anybody using squid on openWRT ?

2014-08-25 Thread Lawrence Pingree
Gotcha. Agreed. 

-Original Message-
From: Leonardo Rodrigues [mailto:leolis...@solutti.com.br] 
Sent: Monday, August 25, 2014 10:58 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Anybody using squid on openWRT ?


 If you're talking about caching, then you're absolutely correct. If you're
using squid just for filtering and policy enforcement, as I'm doing, then even
a small box like the routerboards I'm using (32MB RAM and 64MB flash disk) is
enough for a 30-40 station network. Squid needs a bit of tweaking to run on
those but, once you've mastered that, it works absolutely fine. I even have it
doing authentication against Windows ADs through LDAP authenticators!
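
A minimal sketch of that kind of low-memory, filtering-only setup (the values are illustrative assumptions, not Leonardo's actual settings):

# act purely as a filtering/policy proxy: cache nothing
cache deny all
cache_mem 4 MB
# keep memory structures small on a 32MB box
memory_pools off
client_db off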

On 22/08/14 15:16, Lawrence Pingree wrote:
 Plus a wifi device is severely underpowered and lacks sufficient memory and 
 storage for squid to provide any real benefit (IMHO).


-- 


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it







RE: [squid-users] FW: squid 3.3.10 always gives TCP_MISS for SSL requests

2014-08-25 Thread Lawrence Pingree
I'm not sure if this is right or not, but wouldn't your refresh patterns
need to have ignore-private to cache SSL? Amos may know better, but I
don't see that option specified in your All Files refresh_patterns.
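
For illustration, the kind of pattern being suggested (a hedged sketch only, since ignore-private is an HTTP violation and can leak per-user responses, so use with care):

# hypothetical example: treat Cache-Control: private responses as cacheable
refresh_pattern -i \.(jpg|png|gif)$ 1440 80% 10080 ignore-private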


-Original Message-
From: Ragheb Rustom [mailto:rag...@smartelecom.org] 
Sent: Monday, August 25, 2014 5:12 PM
To: squid-users@squid-cache.org
Subject: [squid-users] FW: squid 3.3.10 always gives TCP_MISS for SSL
requests

Dear All,

I have lately installed squid 3.3.11 on Centos 6.5 x86_64 system. I have
configured it as a transparent SSL_BUMP proxy. All is working well I can
browse all SSL websites successfully after I have imported my generated CA
file. The problem is that no matter how many times I request the SSL
websites I always get a TCP_MISS in the squid access log. Among other
websites I am trying to cache yahoo.com, facebook and youtube but most
websites are always being served directly from source nothing is being
served for the squid proxy. Please find below my configuration files. I
deeply value any help on this matter.

Squid setup settings:

Squid Cache: Version 3.3.11
configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-eui'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake'
'--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos,wrapper'
'--enable-external-acl-helpers=wbinfo_group,kerberos_ldap_group,AD_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock'
'--enable-wccpv2' '--enable-esi' '--enable-zph-qos' '--with-aio'
'--with-default-user=squid' '--with-filedescriptors=65535' '--with-dl'
'--with-openssl' '--with-pthreads' '--disable-arch-native'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic' 'CXXFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'

squid.conf file:

acl snmppublic snmp_community public
acl bamboe src 10.128.135.0/24
#uncomment noway url, if necessary.
#acl noway url_regex -i /etc/squid/noway
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 1935        # rtmp
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http


acl CONNECT method CONNECT
#http_access deny noway
http_access allow manager localhost
http_access allow bamboe
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
htcp_access deny all
miss_access allow all

# NETWORK OPTIONS
http_port 8080
http_port 8082 intercept
https_port 8081 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=8MB cert=/etc/squid/myconfigure.pem key=/etc/squid/myconfigure.pem
ssl_bump server-first all
always_direct allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 8MB
sslcrtd_children 5
hierarchy_stoplist cgi-bin ? .js .jsp mivo.tv 192.168.10.29 192.168.10.30 static.videoku.tv
acl QUERY urlpath_regex cgi-bin \? .js .jsp 192.168.10.29 192.168.10.30 youtube.com indowebster.com static.videoku.tv
no_cache deny QUERY

#  MEMORY CACHE OPTIONS
cache_mem 6000 MB
maximum_object_size_in_memory 16 KB
memory_replacement_policy heap GDSF

# DISK CACHE OPTIONS
cache_replacement_policy heap LFUDA
cache_dir aufs /cache1 30 64 256
store_dir_select_algorithm least-load
minimum_object_size 16 KB
maximum_object_size 2 GB
cache_swap_low 97
cache_swap_high 99

#LOGFILE 

RE: [squid-users] Anybody using squid on openWRT ?

2014-08-22 Thread Lawrence Pingree
Plus a wifi device is severely underpowered and lacks sufficient memory and 
storage for squid to provide any real benefit (IMHO).

-Original Message-
From: Cassiano Martin [mailto:cassi...@polaco.pro.br] 
Sent: Friday, August 22, 2014 5:06 AM
To: babajaga
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Anybody using squid on openWRT ?

Unfortunately the openwrt squid package is very outdated and buggy. I've tried
it, but I gave up.

I'm not sure, but they do not include software written in C++. 99% of its
package repository is C source software; maybe this is one reason they keep an
older squid version, which is not written in C++.
2014-08-22 7:48 GMT-03:00 babajaga augustus_me...@yahoo.de:
 Just trying to use offic. package for openWRT, which is based on 
 squid2.7 only.
 Having detected some DNS-issues, does anybody use squid on openWRT, 
 and which squid version ?



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Anybody-using-squid
 -on-openWRT-tp4667335.html Sent from the Squid - Users mailing list 
 archive at Nabble.com.




RE: [squid-users] https://weather.yahoo.com redirect loop

2014-08-21 Thread Lawrence Pingree
Don't kill the messenger :) I agree, but I had to remove forwarded-for and via or 
I faced blocking and weirdness with several of the services I use. I won't name 
names because I don't really want to pursue the debate. 

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 20, 2014 9:39 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https://weather.yahoo.com redirect loop

On 21/08/2014 2:23 p.m., Lawrence Pingree wrote:
 No, I mean they are intentionally blocking with a configured policy; 
 it's not a bug. :) They have signatures that match Via headers and 
 forwarded-for headers to determine that it's squid. This is because 
 many hackers are using bounces off open squid proxies to launch web 
 attacks.
 

That still sounds like a bug. Blocking on squid existence makes as much sense 
as blocking all traffic with UA header containing MSIE on grounds that 90% of 
web attacks come with that agent string.
The content inside those headers is also context specific, signature matching 
will not work beyond a simple proxy/maybe-proxy determination (which does not 
even determine non-proxy!).


A proposal came up in the IETF a few weeks ago that HTTPS traffic containing 
Via header should be blocked on sight by all servers. It got booted out on 
these grounds:

* the bad guys are not sending Via.

* what Via do exist are being sent by good guys who obey the specs but are 
otherwise literally forced (by law or previous TLS based attacks) to MITM the 
HTTPS in order to increase security checking on that traffic (ie. AV scanning).

Therefore, the existence of Via is actually a sign of *good* health in the 
traffic and a useful tool for finding culprits behind the well behaved proxies.
 Rejecting or blocking based on its existence just increases the ratio of nasty 
traffic which makes it through. While simultaneously forcing the good guys to 
become indistinguishable from bad guys. Only the bad guys get any actual 
benefit out of the situation.


Basically "via off" is a bad idea, and broken services (intentional or
otherwise) which force it to be used are worse than terrible.

Amos




RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
Ideally you should upgrade to 3.4.4 or higher. I was able to download the
file just fine through my transparent squid. A 503 error is odd; it normally
indicates a server-side issue, but I realize it is coming from squid.
Amos, any ideas?
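
One way to see exactly what the proxy returns for the failing URL (a sketch; it assumes squid is listening on 127.0.0.1:8080, so adjust host and port to your setup):

# fetch the file through the proxy, showing request/response headers
curl -v -o /dev/null -x http://127.0.0.1:8080 \
  'http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ'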

-Original Message-
From: nuhll [mailto:nu...@web.de] 
Sent: Wednesday, August 20, 2014 9:51 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: server failover/backup

I found out why.

If I go direct to
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
it works (without the proxy).

If I enable the proxy, it won't work and I get a 503.


BTW I upgraded to Squid Cache: Version 3.3.8 



--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websit
es-tp4667121p4667279.html
Sent from the Squid - Users mailing list archive at Nabble.com.




RE: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Lawrence Pingree
Personally I have found that the latest generation of Next Generation Firewalls 
block when they detect a Via header identifying squid, so I removed it; that 
way no one can detect my cache. The key thing you need to make sure of is that 
NAT and redirection don't go into a loop, so the cache isn't receiving the 
packets twice and trying to re-process the requests.
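
A hedged sketch of loop-free intercept NAT rules (the interface name, port, and user are illustrative assumptions; the exact recipe depends on your topology):

# redirect LAN web traffic into squid's intercept port...
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 8082
# ...but exempt squid's own outbound requests so they are not redirected back in
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT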

-Original Message-
From: Amm [mailto:ammdispose-sq...@yahoo.com] 
Sent: Tuesday, August 19, 2014 11:16 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https://weather.yahoo.com redirect loop


On 08/20/2014 10:52 AM, Jatin Bhasin wrote:
 And when I browse to https://weather.yahoo.com then it goes in 
 redirect loop. I am using Chrome browser and I get a message at the 
 end saying 'This webpage has a redirect loop'.

Happens in 3.4 series too.

I added these in squid.conf as a solution:

via off
forwarded_for delete

Amm




RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Squid is an ICAP server, not a client. 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked 


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1





RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
In transparent mode things are working for me just fine, including access to
battle.net and using the battle client. Does battle.net support proxy
configuration? I.e., are you putting the squid IP and port as a proxy for
the client app to use?

-Original Message-
From: nuhll [mailto:nu...@web.de] 
Sent: Wednesday, August 20, 2014 9:51 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: server failover/backup

I found out why.

If I go direct to
http://llnw.blizzard.com/hs-pod/beta/EU/4944.direct/Updates/hs-6187-6284-Win_deDE-final.MPQ
it works (without the proxy).

If I enable the proxy, it won't work and I get a 503.


BTW I upgraded to Squid Cache: Version 3.3.8 



--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/ONLY-Cache-certain-Websit
es-tp4667121p4667279.html
Sent from the Squid - Users mailing list archive at Nabble.com.




RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Sorry, got that backwards: squid is a client, so I guess it should be listed. 

-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:09 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Squid is an ICAP server, not a client. 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked 


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1







RE: [squid-users] what AV products have ICAP support?

2014-08-20 Thread Lawrence Pingree
Squid is listed as a client 
http://www.icap-forum.org/icap?do=products&isClient=checked
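
For context, squid acts as the ICAP client; wiring it to an external ICAP server (e.g. an AV scanner) looks roughly like this in squid.conf. A sketch only: the service name and URL here are illustrative assumptions.

icap_enable on
# send responses to a local ICAP-capable scanner before caching
icap_service av_resp respmod_precache icap://127.0.0.1:1344/avscan
adaptation_access av_resp allow all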


-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:17 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Sorry, got that backwards, squid is a client, so I guess it should be listed. 

-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Wednesday, August 20, 2014 10:09 AM
To: 'Jason Haar'; squid-users@squid-cache.org
Subject: RE: [squid-users] what AV products have ICAP support?

Squid is an ICAP server, not a client. 

-Original Message-
From: Jason Haar [mailto:jason_h...@trimble.com] 
Sent: Tuesday, August 19, 2014 4:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] what AV products have ICAP support?

Thanks for that, shouldn't squid be listed there as an ICAP client?

On 19/08/14 17:56, Amos Jeffries wrote:
 http://www.icap-forum.org/icap?do=products&isServer=checked 


-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1









RE: [squid-users] https://weather.yahoo.com redirect loop

2014-08-20 Thread Lawrence Pingree
No, I mean they are intentionally blocking with a configured policy; it's not a 
bug. :) They have signatures that match Via headers and forwarded-for headers 
to determine that it's squid. This is because many hackers are using bounces 
off open squid proxies to launch web attacks. 

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, August 20, 2014 4:10 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https://weather.yahoo.com redirect loop

On 21/08/2014 5:08 a.m., Lawrence Pingree wrote:
 Personally I have found that the latest generation of Next Generation 
 Firewalls have been doing blocking when they detect a via with a squid 
 header,

Have you been making bug reports to these vendors?
 Adding Via header is mandatory in HTTP/1.1 specification, and HTTP proxy is a 
designed part of the protocol. So any blocking based on the simple existence of 
a proxy is non-compliance with HTTP itself. That goes for ports 80, 443, 3128, 
3130, and 8080 which are all registered for HTTP use.

However, if your proxy is emitting "Via: 1.1 localhost" or "Via: 1.1 
localhost.localdomain" it is broken, and may not be blocked so much as rejected 
for a forwarding loop, because the NG firewall has a proxy itself on localhost. 
The Via header is generated from visible_hostname (or the OS hostname lookup) 
and is supposed to contain the visible public FQDN of each server the message 
relayed through.

Amos
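
In squid.conf terms the fix Amos describes is a one-liner (a sketch; the FQDN is an illustrative assumption):

# give squid a real public FQDN so its Via header identifies this hop properly
visible_hostname proxy.example.net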




RE: [squid-users] Re: server failover/backup

2014-08-20 Thread Lawrence Pingree
Nuhll,
Just use the following config and point your clients to port 8080 on the
squid IP. The ONLY things you really should change in this configuration are
the IP addresses and the hostname, or to add file extensions to the
refresh_patterns. It should work!


#
#Recommended minimum configuration:
#
always_direct allow all

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl Safe_ports port 1-65535
acl CONNECT method GET POST HEAD CONNECT PUT DELETE
#acl block-fnes urlpath_regex -i .*/fnes/echo
acl noscan dstdomain .symantecliveupdate.com liveupdate.symantec.com psi3.secunia.com update.immunet.com

acl video urlpath_regex -i \.(m2a|avi|mov|mp(e?g|a|e|1|2|3|4)|m1s|mp2v|m2v|m2s|wmx|rm|rmvb|3pg|3gpp|omg|ogm|asf|asx|wmv|m3u8|flv|ts)

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost

no_cache deny noscan
always_direct allow noscan
always_direct allow video

# Deny requests to certain unsafe ports

# Deny CONNECT to other than secure SSL ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on .localhost. is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#cache_peer 192.168.1.1 parent 8080 0 default no-query no-digest no-netdb-exchange
#never_direct allow all

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed

http_access allow all

# allow localhost always proxy functionality

# And finally deny all other access to this proxy

# Squid normally listens to port 3128
http_port 192.168.2.2:8080

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
maximum_object_size 5000 MB
#store_dir_select_algorithm round-robin
cache_dir aufs /daten/squid 10 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

# Add any of your own refresh_pattern entries above these.
# General Rules
refresh_pattern -i \.(jpg|gif|png|webp|jpeg|ico|bmp|tiff|bif|ver|pict|pixel|bs)$ 22 90% 30 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(js|css|class|swf|wav|dat|zsci|do|ver|advcs|woff|eps|ttf|svg|svgz|ps|acsm|wma)$ 22 90% 30 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(html|htm|crl)$ 22 90% 259200 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(xml|flow)$ 0 90% 10
refresh_pattern -i \.(json)$ 1440 90% 5760
refresh_pattern -i ^http:\/\/liveupdate.symantecliveupdate.com.*\.(zip)$ 0 0% 0
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f]|asf|wma|dat|zip)$ 22 80% 259200
refresh_pattern -i \.(bin|deb|rpm|drpm|exe|zip|tar|tgz|bz2|ipa|bz|ram|rar|bin|uxx|gz|crl|msi|dll|hz|cab|psf|vidt|apk|wtex|hz|ipsw)$ 22 90% 50 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i \.(ppt|pptx|doc|docx|pdf|xls|xlsx|csv|txt)$ 22 90% 259200 override-expire ignore-no-store ignore-private ignore-auth refresh-ims
refresh_pattern -i ^ftp: 66000 90% 259200
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i . 0 90% 259200
log_icp_queries off
icp_port 0
htcp_port 0
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic all
minimum_object_size 0 KB
buffered_logs on
cache_effective_user squid
#header_replace User-Agent Mozilla/5.0 (X11; U;) Gecko/20080221 Firefox/2.0.0.9
vary_ignore_expire on
cache_swap_low 90
cache_swap_high 95
visible_hostname shadow
unique_hostname shadow-DHS
shutdown_lifetime 0 second
request_header_max_size 256 KB
half_closed_clients off
max_filedesc 65535
connect_timeout 10 second
cache_effective_group squid
#access_log /var/log/squid/access.log squid
access_log daemon:/var/log/squid/access.log buffer-size=1MB
client_db off
dns_nameservers 127.0.0.1
#pipeline_prefetch 20
ipcache_size 8192
fqdncache_size 8192
#positive_dns_ttl 72 hours
#negative_dns_ttl 5 minutes
tcp_outgoing_address 192.168.2.2
dns_v4_first on
check_hostnames off
forwarded_for delete
via off
pinger_enable off
cache_mem 2048 MB
maximum_object_size_in_memory 256 KB
memory_cache_mode disk
cache_store_log none

RE: [squid-users] Quick question

2014-08-06 Thread Lawrence Pingree
Interesting, so on ext4 (which is what I am using) there are no performance
differences between using different numbers?


Convert your dreams to achievable and realistic goals, this way the journey
is satisfying and progressive. - LP

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, August 5, 2014 6:28 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Quick question

 -Original Message-
 From: Lawrence Pingree

 I have a 175 gigabyte cache file system. What would be the optimal L1
and L2
 cache dirs allocated for this cache size to perform well?


On 6/08/2014 11:52 a.m., Lawrence Pingree wrote:
 Anyone?

That depends on the OS filesystem underlying the cache, and the size of
objects in it.

The L1/L2 settings matter on FS which have a per-directory limit on inode
entries, or need to scan the full list on each file open/stat event (I think
that was FAT32, NTFS, maybe ext2, maybe old unix FS). On FS which do not do
those two things they are just an admin convenience.

Amos
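
Applied to the 175 GB cache in question: on ext4 the stock L1/L2 values are fine, e.g. (a sketch; the path is an illustrative assumption):

# 175000 MB cache with the default 16 first-level / 256 second-level dirs
cache_dir aufs /var/squid/cache 175000 16 256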




RE: [squid-users] Quick question

2014-08-05 Thread Lawrence Pingree
Anyone?

Convert your dreams to achievable and realistic goals, this way the journey
is satisfying and progressive. - LP

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Lawrence Pingree [mailto:geek...@geek-guy.com] 
Sent: Monday, August 4, 2014 10:38 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Quick question

I have a 175 gigabyte cache file system. What would be the optimal L1 and L2
cache dirs allocated for this cache size to perform well?



Best regards,
Lawrence Pingree
Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com

- sent from my mobile BYOD please excuse any typos




[squid-users] Quick question

2014-08-04 Thread Lawrence Pingree
I have a 175 gigabyte cache file system. What would be the optimal L1 and L2 
cache dirs allocated for this cache size to perform well?



Best regards,
Lawrence Pingree
Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com

- sent from my mobile BYOD please excuse any typos

RE: [squid-users] problem streaming video

2014-07-15 Thread Lawrence Pingree
I have found that although RFC's state that you should have VIA and forwarded 
for headers, firewalls and intrusion detection devices are now blocking (based 
on their configuration of the organization) proxies that are detected using 
these headers as the method for detection.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com



-Original Message-
From: ama...@tin.it [mailto:ama...@tin.it]
Sent: Tuesday, July 15, 2014 1:46 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] problem streaming video

Resolved.
Setting these options:

via off
forwarded_for delete

Best regards,

Maurizio




Re: [squid-users] feature request for sslbump

2014-07-14 Thread Lawrence Pingree
Several ssl inspecting firewalls also provide this capability.

Sent from my iPhone

 On Jul 14, 2014, at 6:10 PM, Brendan Kearney bpk...@gmail.com wrote:
 
 On Mon, 2014-07-14 at 15:57 +1200, Jason Haar wrote:
 Hi there
 
 I've started testing sslbump with ssl_bump server-first and have
 noticed something (squid-3.4.5)
 
 If your clients have the Proxy CA cert installed and go to legitimate
 https websites, then everything works perfectly (excluding Chrome with
 its pinning, but there's no way around that). However, if someone goes
 to a https website with either a self-signed cert or a server cert
 signed by an unknown CA, then squid generates a legitimate SSL cert
 for the site, but shows the squid error page to the browser - telling
 them the error
 
 The problem with that model is that it means no-one can get to websites
 using self-signed certs. Using sslproxy_cert_adapt to allow such
 self-signed certs is not a good idea - as then squid is effectively
 legitimizing the server - which may be a Very Bad Thing
 
 So I was thinking, how about if squid (upon noticing the external site
 isn't trustworthy) generates a deliberate self-signed server cert itself
 (ie not signed by the Proxy CA)? Then the browser would see the
 untrusted cert, the user would get the popup asking if they want to
 ignore cert errors, and can then choose whether to trust it or not. That
 way the user can still get to sites using self-signed certs, and the
 proxy gets to see into the content, potentially running AVs over
 content/etc.
 
 ...or haven't I looked hard enough and this is already an option? :-)
 
 Thanks
 
 an unnamed enterprise vendor provides the Preserve Untrusted Issuer
 functionality, very much like you describe.  it leaves the decision to
 the user whether or not to accept the untrusted cert.  the cert needs to
 be valid (within its dates), and match the URL exactly or via
 wildcarding or SAN to be allowed, too.  since i have not started
 intercepting ssl with squid yet, i have not run into this scenario or
 contemplated what i would look to do in it.
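
Worth noting: recent squid already has machinery close to this in sslproxy_cert_sign. A hedged sketch based on the squid.conf documentation for 3.3+ (verify against your version); signUntrusted deliberately produces a cert the browser will flag, so the user decides:

sslproxy_cert_sign signUntrusted ssl::certUntrusted
sslproxy_cert_sign signSelf ssl::certSelfSigned
sslproxy_cert_sign signTrusted all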
 


RE: Fwd: Re: [squid-users] google picking up squid as

2014-06-27 Thread Lawrence Pingree
That's very odd. I'd try calling them... There are quite a few folks blocking 
proxies these days. What I do is remove the Via and X-Forwarded-For headers 
with the following directives:

check_hostnames off
forwarded_for delete
via off

I realize this breaks the RFC, but better that than being blocked when 
detected as a squid proxy. It sux.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: sq...@proxyplayer.co.uk [mailto:sq...@proxyplayer.co.uk] 
Sent: Friday, June 27, 2014 4:43 AM
To: squid-users@squid-cache.org
Subject: Re: Fwd: Re: [squid-users] google picking up squid as

 How about contacting google for advise?
 They are the one that forces you to the issue.
 They don't like it that you have a 1k clients behind your IP address.
 They should tell you what to do.
 You can tell them that you are using squid as a forward proxy to 
 enforce usage acls on users inside the network.
 It's not a shame to use squid...
 It's a shame that you cannot get a reasonable explanation for the 
 reason you are blocked...

There is only 1 client behind the IP address as it is a test server so 
something is going wrong with either routing or requests to google.
Google will not answer any emails.
I suppose one alternative is to use unbound in conjunction with squid and not 
redirect any requests to google?





[squid-users] My latest squid.conf with an average byte hit rate of 37%

2014-06-17 Thread Lawrence Pingree
http://www.lawrencepingree.com/2014/06/10/optimized-squid-conf-for-3-4-4-2/




Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 






RE: [squid-users] Re: squid with qlproxy on fedora 20 not working for https traffic

2014-06-14 Thread Lawrence Pingree
I have to remark, this is one of the significant downsides of people going
all-out SSL: in order to properly inspect attacks, many security technologies
must also do similar ssl-bumping. Sigh.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Saturday, June 14, 2014 10:12 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: squid with qlproxy on fedora 20 not working
for https traffic

On 06/13/2014 09:10 PM, Amos Jeffries wrote:
 On 14/06/2014 1:23 p.m., MrErr wrote:
 Does this mean that dstdomain does not work with ssl-bump?

 Yes and no. It works with CONNECT bumping in regular proxy traffic. 

... unless the browser uses IP addresses in CONNECT requests (some do) or
the user types in (or clicks on a link with) an IP address instead of a
domain name (rare and does not work well for the user even without SslBump,
but does happen in reality so be ready for it).
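
For the regular forward-proxy (CONNECT) case, targeted bumping by domain looks roughly like this (a sketch in 3.4-era syntax; the domain is an illustrative assumption):

acl bump_domains dstdomain .example.com
ssl_bump server-first bump_domains
ssl_bump none all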


 It does not work on intercepted port 443 traffic reliably.

In summary, bumping SSL does not and cannot work reliably in most
environments. There will always be broken cases despite our continuing
efforts to minimize SslBump invasiveness. If user happiness is important, be
prepared to babysit your Squid and add low-level
(TCP/IP-based) exceptions.


 My other reason for not using ssl-bump server-first all is that the 
 kindle fire stops working. I read that it was because of something 
 called ssl pinning. So i do need to get some kind of targeted bumping to
happen.

 
 HSTS probably. And yes those sites bumping does not work for.

There is also bug 3966 that affects some sites, including Google-affiliated
sites, in some environments:
http://bugs.squid-cache.org/show_bug.cgi?id=3966


Cheers,

Alex.





RE: [squid-users] google picking up squid as

2014-06-07 Thread Lawrence Pingree
I use the following but you need to make sure you have no looping occurring in 
your nat rules if you are using Transparent mode.

forwarded_for delete
via off



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: sq...@proxyplayer.co.uk [mailto:sq...@proxyplayer.co.uk] 
Sent: Saturday, June 7, 2014 8:17 AM
To: squid-users@squid-cache.org
Subject: [squid-users] google picking up squid as

I get the following notice from google's site when connected to the proxy:
Our systems have detected unusual traffic from your computer network.

Any ideas how I can prevent this? I presume it might be the forwarded for 
argument?
The following is my conf:

auth_param basic realm proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl CONNECT method CONNECT
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access allow ncsa_users
http_access deny all
icp_access allow all
http_port 8080
http_port 123.123.123.123:80
cache deny all
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
half_closed_clients off
visible_hostname ProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 40/40
forwarded_for off
via off







RE: [squid-users] Re: BUG 3279: HTTP reply without Date

2014-06-04 Thread Lawrence Pingree
Hmm. I've not yet seen this in my version 3.4.4.2 (how can I test to see it?) I 
am running AUFS but no SMP mode.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Niki Gorchilov [mailto:n...@gorchilov.com] 
Sent: Tuesday, June 3, 2014 11:52 PM
To: Mike Mitchell
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: BUG 3279: HTTP reply without Date

I can also confirm, asserts are disappearing with diskd. I still get all the 
warnings, but there's no crash.

What e relief.

Niki

On Tue, Jun 3, 2014 at 10:00 PM, Mike Mitchell mike.mitch...@sas.com wrote:
 I followed the advice found here:
   
 http://www.mail-archive.com/squid-users@squid-cache.org/msg95078.html

 Switching to diskd from aufs fixed the crashes for me.
 I still get
   "WARNING: swapfile header inconsistent with available data" messages 
 in the log.  They appear within an hour of starting with a clean cache.
 When I clean the cache I stop squid, rename the cache directory, 
 create a new cache directory, start removing the old cache directory, then 
 run squid -z before starting squid.

 I run the following commands:

 /etc/init.d/squid stop
 sleep 5
 rm -f /var/squid/core*
 rm -f /var/squid/swap.state*
 rm -rf /var/squid/cache.hold
 mv /var/squid/cache /var/squid/cache.hold
 rm -rf /var/squid/cache.hold &
 squid -z
 /etc/init.d/squid start

 I'm running on a Red Hat Linux VM.
 Here is the output of 'uname -rv':
2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010

 Squid Cache: Version 3.4.5-20140514-r13135 configure options: 
 '--with-maxfd=16384' '--enable-storeio=diskd' 
 '--enable-removal-policies=heap' '--enable-delay-pools' 
 '--enable-wccpv2' '--enable-auth-basic=DB NCSA NIS POP3 RADIUS fake 
 getpwnam' '--enable-auth-digest=file' '--with-krb5-config=no' 
 '--disable-auth-ntlm'  '--disable-auth-negotiate' 
 '--disable-external-acl-helpers' '--disable-ipv6' 
 --enable-ltdl-convenience

 Mike Mitchell







RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

2014-05-24 Thread Lawrence Pingree
Here's my filesystem:
Filesystem  1K-blocks Used Available Use% Mounted on
/dev/mapper/system-root  92755960  7827884  80193308   9% /
devtmpfs  8177376   60   8177316   1% /dev
tmpfs 8190700 5840   8184860   1% /dev/shm
tmpfs 819070017812   8172888   1% /run
tmpfs 81907000   8190700   0% /sys/fs/cgroup
/dev/sda1  387456   189953172979  53% /boot
/dev/mapper/system-ssd  103589120 13909464  84394588  15% /ssd
/dev/mapper/system-var   41153856   964864  38075456   3% /var
tmpfs 819070017812   8172888   1% /var/run
tmpfs 819070017812   8172888   1% /var/lock


And permissions:

# ls -al /dev/shm
total 5840
drwxrwxrwt  2 root  root   140 May 22 17:26 .
drwxr-xr-x 19 root  root  4080 May 22 17:15 ..
-rwx--  1 root  root  67108904 May 22 16:45 pulse-shm-1091930706
-rwx--  1 root  root  67108904 May 22 16:45 pulse-shm-2870669293
-rwx--  1 root  root  67108904 May 22 16:45 pulse-shm-4040921779
-rw---  1 squid squid  7864392 May 22 17:29 squid-cache_mem.shm
-rw---  1 squid squid0 May 22 17:29 squid-squid-page-pool.shm

Notice that there are already squid files in the directory... so permissions 
should be good. Maybe the size is the problem?


Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: ferna...@lozano.eti.br [mailto:ferna...@lozano.eti.br] 
Sent: Thursday, May 22, 2014 8:48 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

Hi Lawrence,

 Here's the error I am getting and my squid.conf
 
 FATAL: Ipc::Mem::Segment::create failed to
 ftruncate(/squid-squid-page-pool.shm): (22) Invalid argument

If squid can't create the shm file, you should check your OS configuration. 
It's not squid's fault; the server OS has to be configured to provide enough 
shared memory to squid.

Most of the time it's enough to have tmpfs mounted as /dev/shm, but I'm not 
familiar with SuSE installation defaults.
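
For example, checking and growing that mount (a sketch; the 2G size is an illustrative assumption):

# verify tmpfs is mounted on /dev/shm and has room for squid's shared segments
df -h /dev/shm
# enlarge it on the fly if it is too small
mount -o remount,size=2G /dev/shm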


 cache_dir aufs /ssd/squid/cache0 45000 64 1024
 cache_dir aufs /ssd/squid/cache1 45000 64 1024

This won't work in smp mode. Each worker needs a dedicated cache_dir, 
except for rock store. But your squid never reached that point.

The way you did it, all workers will try to use both cache_dirs. I guess 
you want one for each worker. So replace those two lines with:

cache_dir aufs /ssd/squid/cache${process_number} 45000 64 1024

That way each worker will use only its own, exclusive, cache dir.


 workers 2

Ok, you are trying to use 2 workers.


[]s, Fernando Lozano





RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

2014-05-23 Thread Lawrence Pingree
Created the directory and put in the cache dir as you specified, still got an 
error FATAL: Ipc::Mem::Segment::create failed to 
ftruncate(/squid-squid-page-pool.shm): (22) Invalid argument. 






Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: ferna...@lozano.eti.br [mailto:ferna...@lozano.eti.br] 
Sent: Thursday, May 22, 2014 8:48 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

Hi Lawrence,

 Here's the error I am getting and my squid.conf
 
 FATAL: Ipc::Mem::Segment::create failed to
 ftruncate(/squid-squid-page-pool.shm): (22) Invalid argument

If squid can't create the shm file, you should check your OS configuration. 
It's not squid fault, the server Os has to be configured to provide enough 
shared memory to squid.

Most of the time, it's enough to mount tmpfs mounted as /dev/shm, but I'm not 
familiar with SuSE installation defaults.


 cache_dir aufs /ssd/squid/cache0 45000 64 1024
 cache_dir aufs /ssd/squid/cache1 45000 64 1024

This won't work in smp mode. Each worker needs a dedicated cache_dir, 
except for rock store. But your squid never reached that point.

The way you did it, all workers will try to use both cache_dirs. I guess 
you want one for each worker. So replace those two lines with:

cache_dir aufs /ssd/squid/cache${process_number} 45000 64 1024

That way each worker will use only its own, exclusive, cache dir.


 workers 2

Ok, you are trying to use 2 workers.


[]s, Fernando Lozano




RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

2014-05-22 Thread Lawrence Pingree
/access.log buffer-size=512KB
client_db off
dns_nameservers 127.0.0.1
tcp_outgoing_address 192.168.2.2
dns_v4_first on
ipcache_low 95
ipcache_high 98
positive_dns_ttl 24 hours
negative_dns_ttl 30 seconds
fqdncache_size 2048
check_hostnames off
forwarded_for delete
via off
pinger_enable off
memory_replacement_policy heap LRU
cache_replacement_policy heap LRU
memory_pools on
reload_into_ims on
cache_store_log none
read_ahead_gap 50 MB
client_persistent_connections on
server_persistent_connections on
workers 2

-=-=-=- End Squid.conf -=-=-=-=-=

Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: ferna...@lozano.eti.br [mailto:ferna...@lozano.eti.br] 
Sent: Thursday, May 22, 2014 7:24 AM
To: Lawrence Pingree
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

Hi Lawrence,

Write permissions for user/group squid.

Instead of just trying what worked for me, we should find what's happening with 
you:
- what happens when you try to start squid in smp mode?
- How is your squid.conf?
- Are there errors in squid's cache.log?


[]s, Fernando Lozano


 What permissions are needed?
 
 Best regards,
 
 The Geek Guy
 
 Lawrence Pingree




RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

2014-05-21 Thread Lawrence Pingree
Hi Fernando,
I don't believe so, because I disabled apparmor and I am not running SELinux, 
which as far as I know is not enabled by default on OpenSuse. 



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Fernando Lozano [mailto:ferna...@lozano.eti.br] 
Sent: Monday, May 19, 2014 8:50 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

Hi Lawrence,
 Personally I use RPM's on Opensuse and I'm running 3.4.4.2 which is pretty 
 recent to be honest.  (..) However, the down side is that I cannot run SMP 
 mode (for some reason I think the RPM is not compiled with it) as well as I 
 am stuck running this version until someone builds a new RPM.
This thread started when I offered a contribution for allowing SMP mode from 
the rpm packages. :-)

Are your issues related to SELinux or AppArmor? If you post them on the list 
someone may help you with them, and the fix may be incorporated into a later 
rpm package.


[]s, Fernando Lozano





RE: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP

2014-05-17 Thread Lawrence Pingree
Personally I use RPMs on Opensuse and I'm running 3.4.4.2, which is pretty 
recent to be honest. Anyhow, I think running your own compiled version does 
have its plusses, as described before. But if you get a platform that has RPMs 
for the more recent versions like I do, and you really only use the basic 
squid functionality, then you'll be fine. However, the downside is that I 
cannot run SMP mode (for some reason I think the RPM is not compiled with it), 
and I am stuck running this version until someone builds a new RPM. Whichever 
you choose, there are positives and negatives. 



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com] 
Sent: Saturday, May 17, 2014 1:33 PM
To: Fernando Lozano; csn233
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] configuring Eliezer RPMs for CentOS 6 for SMP



On 05/16/2014 06:47 PM, Fernando Lozano wrote:
 Hi,

 I don't quite agree with you. Let me expose my views so each member of 
 the list can weight pros and cons:

 Not answering this thread, but would like to ask some related points 
 for anyone who may be listening in:
 
 1. RPMs.
 
 For practically everything else, I use RPMs for installation. For 
 Squid, I've moved away from this approach. Standard RPMs still 
 provide only 3.1.10. Non-standard RPMs, you have no idea where the 
 next one is coming from, or whether it suits your needs. If you 
 compile-your-own, you get the version you want, anytime you want
 In my experience using unofficial rpms from the community is way 
 better than compile-your-own. More people try, test and fix 
 unofficial rpms than your own build. When you get someone providing 
 those RPMs for many releases, like Eliezer, you can trust them almost 
 like the official community packages from your distro.
 
 Besides, on the rare occasions you really need a custom build you can 
 start from the SRPM and still get dependency management, integrity 
 verification and other RPM/yum features that you lose when you 
 compile-your-own.

 Better to help improve the RPM packages for the benefit of all the 
 community than selfishly wasting your time on a build only for yourself.

+1.  administrators that run production proxies usually want
stability and the fact that numerous others use it is a reason to trust the 
stability.

The statement that RPMs add an unnecessary component that may need debugging is 
utter nonsense.

Marcus




[squid-users] Best Refresh Rules Contest - No broken sites

2014-05-11 Thread Lawrence Pingree

Hey Amos,

Do you or does anyone else on here have an optimized set of refresh patterns
that they feel caches the most without breaking sites?



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 






RE: [squid-users] how to use refresh_pattern correct

2014-05-01 Thread Lawrence Pingree
Thanks Dan,
I get about a 40% cache hit rate with no real issues with websites, and my web
surfing performance is sub-3-second response times in most cases.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Dan Charlesworth [mailto:d...@getbusi.com] 
Sent: Monday, April 28, 2014 9:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] how to use refresh_pattern correct

Hi Lawrence

I think that's the most extensive list of refresh patterns I've seen in one
place, for a forward proxy. Props.

Is anyone else using a collection like this and care to comment on its
performance / viability?

Dan

On 29 Apr 2014, at 2:05 pm, Lawrence Pingree geek...@geek-guy.com wrote:

 Try using my refresh patterns:
 http://www.lawrencepingree.com/2014/01/01/optimal-squid-config-conf-fo
 r-3-3-
 9/
 
 
 
 
 Best regards,
 The Geek Guy
 
 Lawrence Pingree
 http://www.lawrencepingree.com/resume/
 
 Author of The Manager's Guide to Becoming Great
 http://www.Management-Book.com
  
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, April 28, 2014 10:15 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] how to use refresh_pattern correct
 
 On 29/04/2014 2:02 a.m., tile1893 wrote:
  Hi,
 
  I'm running squid on openwrt and I want squid to cache all requests
  which are made.
  I think this is done by defining refresh_pattern in the squid config.
  But no matter how I configure them, they always seem to be ignored by
  squid and never used.
 
  for example:
  refresh_pattern www 5000 100% 1 override-expire override-lastmod
  ignore-reload ignore-no-store ignore-must-revalidate ignore-private
  ignore-auth store-stale
 
  or:
  refresh_pattern www 1200 100% 6000 override-expire
 
  But they both don't work.
  Any idea how to configure squid so that it caches every request?! Do I
  have to enable those refresh_patterns somehow?!
 
 FYI: Caching everything is not possible. HTTP protocol requires at 
 least some non-cached traffic just to operate.
 
 Now that your expectations have been lowered ...
 
 *correct* usage is not to have any of the override-* or ignore-* 
 options at all. But correct and practical are not always the same. Use 
 the options if you are required to, but only then.
 
 There is also a very large diference betwen HTTP/1.0 caching and
 HTTP/1.1 caching you need to be aware of. In HTTP/1.0 there was 
 HIT/MISS and very little else. In HTTP/1.1 there is also revalidation 
 (304, REFRESH, IMS,
 INM) which is caching the [large] bodies of objects while still 
 sending the [small] headers back and forth - giving the best of both
worlds.
 
 
 So tile1893...
 what version of Squid do you have?
 how are you testing it?
 what makes you think its not caching?
 how much cache space do you have?
 what are your maximum object limits?
 what order is your cache, store and object related config options?
 what traffic rate (requests per second/minute) are you serving?
 what does redbot.org say about the URLs you are trying to cache?
 
 (Maybe more later but that should do for starters.)
 
 Amos
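
For reference, the violation-free baseline Amos alludes to is squid's stock refresh_pattern set, as shipped in the default squid.conf:

refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320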
 
 
 





RE: [squid-users] feature requests

2014-05-01 Thread Lawrence Pingree
Hi Amos,
Thanks for your help in understanding my request. I have attempted to create
a rock store but was unsuccessful. There doesn't seem to be very good
guidance on the proper step-by-step process of creating a rock store, and I
ran into crashes the last time I attempted it. Also, I am using an x86
platform (32-bit) with multiple cores; when I attempted to use SMP mode with
multiple workers, my intercept mode instantly stopped functioning. I
couldn't figure out what was wrong, so I'd love to get better guidance on
this as well. 
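
For anyone in the same spot, a minimal SMP rock setup looks roughly like this (a sketch for 3.4-era squid; the path and sizes are illustrative assumptions, and note that 3.4 rock stores only hold objects up to 32 KB):

# rock cache_dirs are shared by all workers, so no ${process_number} is needed
workers 2
cache_dir rock /var/squid/rock 4096 max-size=32768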



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, April 29, 2014 1:20 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] feature requests

On 29/04/2014 4:17 p.m., Lawrence Pingree wrote:
 
 I would like to request two features that could potentially help with 
 performance.
 

See item #1 Wait ...

http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feat
ure.2C_enhance.2C_of_fix_something.3F

Some comments to think about before you make the formal feature request bug.
Don't let these hold you back, but they are the bigger details that will
need to be overcome for these features to be accepted and useful.

I will also suggest you test out the 3.HEAD Squid code to see what we have
done recently with collapsed_forwarding, SMP support and large rock caches.
Perhapse the startup issues that make you want these are now resolved.


 1. I would like to specify a max age for memory-stored hot objects 
 different than those specified in the generic cache refresh patterns.

refresh_patterns are not generic. They are as targeted as the regex pattern
you write.

The only difference between memory, disk or network sources for a cache is
access latency. Objects are promoted from disk to memory when used, and
pushed from memory to disk when more memory space is needed.


I suspect this feature will result in disk objects maximum age stabilizing
at the same value as the memory cache is set to.
 - a memory age limit higher than disk objects needing to push to disk will
get erased as they are too old.
 - a memory age limit lower than disk objects promoted from disk will get
erased or revalidated to be within the memory limit (erasing the obsoleted
disk copy).
So either way anything not meeting the memory limit is erased. Disk will
only be used for the objects younger than the memory limit which need to
overspill into the slower storage area where they can age a bit before next
use ... which is effectively how it works today.


Additionally, there is the fact that objects *are* cached past their max-age
values. All that happens in HTTP/1.1 when an old object is requested is a
revalidation check over the network (used to be a re-fetch in HTTP/1.0). The
revalidation MAY supply a whole new object, or just a few new headers.
 - a memory age limit higher than disk causes the disk (already slow) to
have additional network lag for revalidation which is not applied to the in
memory objects.
 - a memory age limit lower than disk places the extra network lag on memory
objects.

... what benefit is gained from adding latency to one of the storage areas
which is not applicable to the same object when it is stored to the other
area?


The overarching limit on all this is the *size* of the storage areas, not
the object age. If you are in the habit of setting very large max-age values
on refresh_pattern to increase caching, take a look at your storage LRU/LFU
age statistics sometime. You might be in for a bit of a surprise.
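
If you want to check, something like this against the cache manager report
should show them (the exact label wording varies between versions):

  squidclient mgr:info | grep -i 'LRU'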


 
 2. I would like to pre-load hot disk objects during startup so that 
 squid is automatically re-launched with the memory cache populated. 
 I'd limit this to the maximum memory cache size amount.
 

This one is not as helpful as it seems when done by a cache. Loading on
demand solves several performance problems which pre-loading encounters in
full.

 1) Loading the objects takes time, resulting in a slower time to first
request.

Loading on-demand, we can guarantee that the first client starts receiving
its response as fast as possible. There is no waiting for GB of other
objects to fully load first, or even for the end of the current object to
finish loading.


 2) Loading based on previous experience is at best an educated guess. It
can still load the wrong things, wasting the time spent.

Loading on-demand guarantees that only the currently hot objects are loaded,
regardless of what was hot a few seconds, minutes or days ago when the proxy
shut down, freeing up CPU cycles and disk waiting time for servicing more
relevant requests.


 3) A large portion of traffic in HTTP/1.1 needs to be revalidated over the
network, using the new client's request header details, before use.

This comes back to (1): as soon as the headers are loaded, the network
revalidation can begin.

RE: [squid-users] how to use refresh_pattern correct

2014-05-01 Thread Lawrence Pingree
I actually have no real issues. It works very well to be quite honest. :)



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, April 29, 2014 6:45 AM
To: Dan Charlesworth; Lawrence Pingree
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] how to use refresh_pattern correct

On 29/04/2014 4:16 p.m., Dan Charlesworth wrote:
 Hi Lawrence
 
 I think that's the most extensive list of refresh patterns I've seen 
 in one place, for a forward proxy. Props.
 
 Is anyone else using a collection like this and care to comment on its 
 performance / viability?

You can find many similar lists in the archives of this mailing list if you
pick a really weird file extension type and search for it plus
refresh_pattern. E.g.
https://www.google.co.nz/search?q=tgz+refresh_pattern+site%3Awww.squid-cache.org


As for viability: it uses most of the HTTP-violation refresh_pattern options
based on nothing more than file-extension regex matches. I have no doubt
they cause many things to be stored and logged as HITs, but the reliability
of the responses coming out of that cache is suspect.
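
A quick way to sanity-check what such a cache is actually serving (proxy
address and URL are placeholders):

  curl -s -D - -o /dev/null -x http://127.0.0.1:3128 \
       http://example.com/file.tgz | grep -i '^X-Cache'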

Amos





RE: [squid-users] Access denied when in intercept mode

2014-05-01 Thread Lawrence Pingree
If you are getting access denied it is most likely a squid ACL. The default
squid.conf denies most traffic through its http_access rules.
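
A minimal sketch of the usual pattern (the subnet is a placeholder --
adjust to your network):

  acl localnet src 192.168.0.0/16
  http_access allow localnet
  http_access deny all

Anything not matched by an allow rule before that final deny gets the
access denied page.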



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Wednesday, April 30, 2014 8:34 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Access denied when in intercept mode

Hey there,

it depends on the topology and your current iptables and squid.conf state.
Can you share:
squid -v
iptables-save
cat squid.conf
(remove any confidential data and spaces + comments from the squid.conf)
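
For example (the grep line is just one way to strip comments and blank
lines; adjust the path to your install):

  squid -v
  iptables-save
  grep -v '^[[:space:]]*#' /etc/squid/squid.conf | grep -v '^[[:space:]]*$'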

Eliezer

On 05/01/2014 03:51 AM, nettrino wrote:
 Hello all, I am trying to set up a squid proxy to filter HTTPS 
 traffic. In particular, I want to connect an Android device to the 
 proxy and examine the requests issued by various applications.

 When 'intercept' mode is on, I get 'Access Denied'. I would be most 
 grateful if you gave me some hint on what is wrong with my 
 configuration.

 I have been following this tutorial:
 http://pen-testing-lab.blogspot.com/2013/11/squid-3310-transparent-proxy-for-http.html

 My squid version is 3.4.4.2, running on Ubuntu with kernel 3.5.0-48-generic.



 Currently I am testing by connecting from my laptop (squidCA.pem is added
 to my browser).
 On the machine where squid is running I have flushed all my iptables
 rules and only have


 Any help is greatly appreciated.

 Thank you



 --
 View this message in context:
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Access-denied-when-in-intercept-mode-tp4665775.html
 Sent from the Squid - Users mailing list archive at Nabble.com.






RE: [squid-users] how to use refresh_pattern correct

2014-04-28 Thread Lawrence Pingree
Try using my refresh patterns:
http://www.lawrencepingree.com/2014/01/01/optimal-squid-config-conf-for-3-3-9/




Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, April 28, 2014 10:15 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] how to use refresh_pattern correct

On 29/04/2014 2:02 a.m., tile1893 wrote:
 Hi,
 
 I'm running squid on openwrt and I want squid to cache all requests
 which are made.
 I think that this is done by defining refresh_pattern in the squid config.
 But in my opinion, no matter how I configure them, they are always
 ignored by squid and never used.
 
 for example:
 refresh_pattern www 5000 100% 1 override-expire override-lastmod
 ignore-reload ignore-no-store ignore-must-revalidate ignore-private
 ignore-auth store-stale
 
 or:
 refresh_pattern www 1200 100% 6000 override-expire
 
 But they both don't work.
 Any idea how to configure squid so that it caches every request?! Do I
 have to enable those refresh_patterns somehow?!

FYI: Caching everything is not possible. HTTP protocol requires at least
some non-cached traffic just to operate.

Now that your expectations have been lowered ...

 *correct* usage is not to have any of the override-* or ignore-* options at
all. But correct and practical are not always the same. Use the options if
you are required to, but only then.
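
For reference, the violation-free defaults shipped in squid.conf look
roughly like this:

  refresh_pattern ^ftp:             1440    20%     10080
  refresh_pattern ^gopher:          1440    0%      1440
  refresh_pattern -i (/cgi-bin/|\?) 0       0%      0
  refresh_pattern .                 0       20%     4320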

 There is also a very large difference between HTTP/1.0 caching and HTTP/1.1
caching you need to be aware of. In HTTP/1.0 there was HIT/MISS and very
little else. In HTTP/1.1 there is also revalidation (304, REFRESH, IMS,
INM), which caches the [large] bodies of objects while still sending the
[small] headers back and forth - giving the best of both worlds.
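
A rough illustration with curl (proxy address, URL and validator value are
placeholders):

  # first fetch: note the validators (ETag / Last-Modified) in the reply
  curl -s -D - -o /dev/null -x http://127.0.0.1:3128 http://example.com/logo.png

  # conditional re-request: a cache holding a valid copy can answer
  # "304 Not Modified" -- headers only, no body transfer
  curl -s -D - -o /dev/null -x http://127.0.0.1:3128 \
       -H 'If-None-Match: "abc123"' http://example.com/logo.png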


So tile1893...
 what version of Squid do you have?
 how are you testing it?
 what makes you think its not caching?
 how much cache space do you have?
 what are your maximum object limits?
 in what order are your cache, store and object related config options?
 what traffic rate (requests per second/minute) are you serving?
 what does redbot.org say about the URLs you are trying to cache?

(Maybe more later but that should do for starters.)

Amos





RE: [squid-users] Cache-Control: public doesn't cache

2014-04-28 Thread Lawrence Pingree
I believe some caching bugs were fixed in 3.4.4; I noticed better cache hits
after upgrading to that version. I am now running 3.4.4.2, since a connect
bug was also fixed in that release.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, April 28, 2014 10:14 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Cache-Control: public doesn't cache

On 29/04/2014 4:19 a.m., Rob van der Putten wrote:
 Hi
 
 
 AFAIK 'Cache-Control: public, no-transform' is cacheable. But Squid
 (3.3.8) doesn't cache.
 

That one header does not make it cacheable...
 * no-transform has no meaning at all with regard to cacheability.
 * public is only meaningful when authentication is being used.


... in HTTP/1.1 everything is considered cacheable unless the protocol defines 
a reason not to.

Squid may choose not to cache for many other reasons related to the request 
headers, response headers, server trustworthiness, age of the object on 
arrival, and available storage space.

Some reasons in the protocol are broad, like a response to an unknown method 
is not cacheable. **

Others are very specific, like a cached object with Cache-Control no-cache or 
must-revalidate may only be used as a response after a successful revalidation 
(304 not modified) from the origin server. **

** very rough paraphrasing by me.
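
For illustration (values invented), compare:

  # typically reusable straight from cache while younger than max-age:
  Cache-Control: public, max-age=86400

  # storable, but each reuse first needs a successful 304 revalidation:
  Cache-Control: no-cache
  ETag: "v1"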

Amos





[squid-users] feature requests

2014-04-28 Thread Lawrence Pingree

I would like to request two features that could potentially help with
performance.

1. I would like to specify a max age for memory-stored hot objects different
than those specified in the generic cache refresh patterns.

2. I would like to pre-load hot disk objects during startup so that squid is
automatically re-launched with the memory cache populated. I'd limit this to
the maximum memory cache size amount.




Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com