[squid-users] Squid + Squidguard ACL Problem

2011-12-05 Thread Claudio Prono
Hello all,

Today I discovered a limitation in SquidGuard with userlists. Let me
give you an example:

src user1 {
userlist /etc/user1.txt
}

src user2 {
userlist /etc/user2.txt
}

dest user1web {
domainlist user1web/domains
expressionlist user1web/expressions
log user1web
}

dest user2web {
domainlist user2web/domains
expressionlist user2web/expressions
log user2web
}

acl {
    user1 {
        pass user1web white !blacklist all
    }

    user2 {
        pass user2web white !blacklist all
    }
}

If a user is present in both user1.txt and user2.txt, the user will be
associated with the first matching rule (in this case user1) and not
with the second one. So the user can visit the web sites of user1web,
but not the ones of user2web.

My question is: is there any way to tell SquidGuard to look at all the
user associations and evaluate both acls? Or maybe there is some
workaround for this?

Best regards,

Claudio Prono.

-- 

Claudio Prono OPST
System Developer   
  Gsm: +39-349-54.33.258
@PSS Srl  Tel: +39-011-32.72.100
Via Santorelli, 15Fax: +39-011-32.46.497
10095 Grugliasco (TO) ITALY   http://atpss.net/disclaimer

PGP Key - http://keys.atpss.net/c_prono.asc






Re: [squid-users] Squid losing connectivity for 30 seconds

2011-12-05 Thread Amos Jeffries

On 5/12/2011 7:14 p.m., Elie Merhej wrote:





 Hi,

I am currently facing a problem that I wasn't able to find a
solution for in the mailing list or on the internet.
My squid is dying for 30 seconds every hour, at exactly the same
time; the squid process will still be running,
I lose my WCCP connectivity, the cache peers detect the
squid as a dead sibling, and squid cannot serve any
requests.
The network connectivity of the server is not affected (a
ping to the squid's IP doesn't time out).



Hi,

here is the strace result
- 

<snip> looks perfectly normal traffic: file opening and closing, data 
reading, DNS lookups and other network reads/writes

read(165, "!", 256) = 1

<snip> bunch of other normal traffic


read(165, "!", 256) = 1
 


Squid is freezing at this point


The 1-byte read on FD #165 seems odd. Particularly suspicious being 
just before a pause and only having a constant 256 byte buffer space 
available. No idea what it is yet, though.





wccp2_router x.x.x.x
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_service dynamic x
wccp2_service_info x protocol=tcp flags=src_ip_hash priority=240 
ports=80

wccp2_service dynamic x
wccp2_service_info x protocol=tcp flags=dst_ip_hash,ports_source 
priority=240 ports=80

wccp2_assignment_method mask


#icp configuration
maximum_icp_query_timeout 30
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
cache_peer x.x.x.x sibling 3128 3130 proxy-only no-tproxy
log_icp_queries off
miss_access allow squidFarm
miss_access deny all


So if I understand this right: you have a layer of proxies defined 
as squidFarm which client traffic MUST pass through *first* before 
they are allowed to fetch MISS requests from this proxy.  Yet you 
are receiving WCCP traffic directly at this proxy with both NAT and 
TPROXY?


This miss_access policy seems decidedly odd. Perhaps you can 
enlighten me?

Hi,

Let me explain what I am trying to do (I was hoping that this is the 
right setup): the squids are siblings, so my clients pass through one 
squid only (this squid uses ICP to check if the object is in my 
network; if not, the squid fetches the object from the internet)


                  if miss                  if miss
clients --WCCP--> squid ----> Internet <---- squid <--WCCP-- clients
       (the two squids check each other via ICP before going direct)



I have over 400Mbps of bandwidth, but one squid (3.1) cannot 
withstand this kind of bandwidth (number of clients); this is why I 
have created a squidFarm.
I have the following hardware: i7 Xeon, 8 CPUs - 16GB RAM - 2 HDDs 
(450GB and 600GB), no RAID.
Software: Debian squeeze 6.0.3 with kernel 2.6.32-5-amd64 and 
iptables 1.4.8.
Please note that when I only use one cache_dir (the small one: 
cache_dir aufs /cache1/squid 32 480 256 ) I don't face this problem.

The problem starts when the cache dir size is bigger than 320 GB.
Please advise

Thank you for the advice on the refresh patterns.
Regards
Elie


Hi Amos,

Thank you for your help, the problem was solved when I replaced the 
refresh patterns with what you recommended,

I replaced more than 20 lines of refresh patterns with 4 lines,


Wow. Not an effect I was expecting there. But great news either way :)



One more question: do you recommend a specific replacement policy? I 
have read that when the size of the cache directory is large, you 
advise leaving the default replacement policy.


I don't really have a specific opinion about the available policies. 
There have been quite a few comments that the heap algorithms are faster 
than the classical linked-list LRU. But each is suited for different 
caching needs. So whichever you think best matches what data you want to 
keep in cache.
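
For reference, the policies are selected in squid.conf like this (an 
illustrative sketch; the heap policies are only available if Squid was 
built with --enable-removal-policies=heap,lru):

  # favour keeping many small popular objects in memory,
  # and frequently-used objects on disk
  memory_replacement_policy heap GDSF
  cache_replacement_policy heap LFUDA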


Amos


Re: [squid-users] Make Squid in interception mode completely

2011-12-05 Thread Amos Jeffries

On 5/12/2011 7:34 p.m., Nguyen Hai Nam wrote:

Hi,


Last time I had a squid box working in interception mode:
traffic was redirected from the default gateway to the squid box, then
IPFilter would NAT it to the intercepting squid. It looked like this:

INTERNET ---- Router
   |
   |
 Switch ---- Default gateway
   |  \
   |   \
   |    +---- Squid box
   |
   |
  LAN


But now I don't have access to the default gateway router to
redirect HTTP traffic to squid, so I added one more NIC to the squid
box and changed the topology to this:

INTERNET ---- Router
   |
   | eth1
 Squid
   | eth0
   |
 Switch ---- Default gateway
   |
   |
  LAN

I've just tried to do so, but the traffic passed straight through and
didn't come to Squid. So the box acts like a switch only. What can I do
to make sure HTTP traffic always comes to Squid?


Like a switch? Or did you really mean like a bridge?

* switch ... no solution. Switches do not perform the NAT operations 
required for interception. They also don't run software like Squid, so I 
think this is a bad choice of word in your description.


* bridge ... requires dropping packets out of the bridge into the 
routing functionality. See the bridge section at 
http://wiki.squid-cache.org/Features/Tproxy4#ebtables_on_a_Bridging_device
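
For reference, the rule on that page is along these lines (a sketch; 
verify against the wiki, and note that in the broute table the DROP 
target means divert the frame to the routing layer, not discard it):

  # pull port-80 traffic out of the bridge so the routing layer sees it
  ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp \
    --ip-destination-port 80 -j redirect --redirect-target DROP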


Amos


Re: [squid-users] Squid + Squidguard ACL Problem

2011-12-05 Thread jeffrey j donovan

On Dec 5, 2011, at 3:54 AM, Claudio Prono wrote:

 Hello all,
 
 Today I discovered a limitation in SquidGuard with userlists. Let me
 give you an example:
 
 src user1 {
 userlist /etc/user1.txt
 }
 
 src user2 {
 userlist /etc/user2.txt
 }
 
 dest user1web {
domainlist user1web/domains
expressionlist user1web/expressions
log user1web
 }
 
 dest user2web {
domainlist user2web/domains
expressionlist user2web/expressions
log user2web
 }
 
 acl {
    user1 {
        pass user1web white !blacklist all
    }

    user2 {
        pass user2web white !blacklist all
    }
 }



Try adding each of the other user's dest list to the ACL with a NOT (!):

acl {
   user1  {
   pass !user2web user1web white !blacklist all
   }

   user2  {
   pass !user1web user2web white !blacklist all
   }
}
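
Another possible workaround, since squidGuard assigns a user to the
first src group that matches: maintain a third list for users who
appear in both files, declare it *before* the other two src groups,
and give it both destinations. A sketch, untested; the file name
/etc/userboth.txt is made up:

src userboth {
    userlist /etc/userboth.txt   # users present in both user1.txt and user2.txt
}

acl {
    userboth {
        pass user1web user2web white !blacklist all
    }
    ...
}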


 
 If a user is present in both user1.txt and user2.txt, the user will be
 associated with the first matching rule (in this case user1) and not
 with the second one. So the user can visit the web sites of user1web,
 but not the ones of user2web.
 
 My question is: is there any way to tell SquidGuard to look at all the
 user associations and evaluate both acls? Or maybe there is some
 workaround for this?
 
 Best regards,
 
 Claudio Prono.
 



[squid-users] Re: not getting persistent connections to an ssl backend

2011-12-05 Thread rob yates
Sorry for the bump; could someone let me know if this is supported? If
it's not supported I'll need to look at something other than squid, and
I am far enough along that I would rather not.

Thanks,

Rob

On Fri, Dec 2, 2011 at 11:48 AM, rob yates robertya...@gmail.com wrote:
 Hello,

 we are trying to set squid up as an SSL reverse proxy in front of SSL.
  The flow is browser --SSL--> squid --SSL--> application.

 When we do this we're not seeing persistent connections being used for
 the backend connection.  It appears that squid is starting a new SSL
 connection for every request vs. keeping one open and using it for
 other browser requests.

 Is there a way of getting squid configured to maintain and reuse the
 persistent connection for different browser requests, we'd ideally
 like it to maintain the connection for 5 mins.  We're running on squid
 2.6 and the pertinent bit of squid.conf is below, we're using the
 defaults for everything else.

 We're using tcpdump to see that the connection keeps getting
 terminated and reopened with every request.

 I am happy to upgrade if that is what is needed.

 We have changed the pconn_timeout setting but it has no effect.

 Certainly appreciate any help,

 Thanks,

 Rob

 https_port 9.32.153.229:443 cert=/etc/pki/tls/certs/www.daily2.crt
 key=/etc/pki/tls/private/daily2.key accel defaultsite=www.daily2.com vhost
 https_port 9.32.153.230:443 cert=/etc/pki/tls/certs/apps.daily2.crt
 key=/etc/pki/tls/private/daily2.key accel defaultsite=apps.daily2.com vhost

 cache_peer 9.32.154.106 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=f5www login=PASS
 cache_peer 9.32.154.93 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=f5apps login=PASS

 acl engage_sites dstdomain www.daily2.com
 http_access allow engage_sites
 cache_peer_access f5www allow engage_sites

 acl engage_sites dstdomain apps.daily2.com
 http_access allow engage_sites
 cache_peer_access f5apps allow engage_sites


Re: [squid-users] Make Squid in interception mode completely

2011-12-05 Thread Nguyen Hai Nam
Hi Amos,

You're right, switch is not really the right word.

But I still can't find, on a Solaris-like system, the equivalent of something like /proc/sys/net/bridge.


On Mon, Dec 5, 2011 at 7:25 PM, Amos Jeffries squ...@treenet.co.nz wrote:


 Like a switch? or or did you really mean like a bridge?

 * switch ... no solution. Switches do not perform the NAT operations
 required for interception. They also don't run software like Squid, so I
 think this is a bad choice of word in your description.

 * bridge ... requires dropping packets out of the bridge into the routing
 functionality. See the bridge section at
 http://wiki.squid-cache.org/Features/Tproxy4#ebtables_on_a_Bridging_device

 Amos



-- 
Best regards,
Hai Nam, Nguyen


[squid-users] limiting connection not working 3.1.4

2011-12-05 Thread J. Webster

I have squid 3.1.4, but with this conf the rate limiting to 1Mbps does not 
seem to work.
What can I change in the conf / delay parameters?

auth_param basic realm Myname proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access deny manager
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny maxuser
http_access allow localhost
http_access deny all
icp_access allow all
http_port 8080
http_port xx.xx.xx.xx:80
hierarchy_stoplist cgi-bin ?
cache_mem 100 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
#cache_dir aufs /var/spool/squid 4 16 256
#cache_dir null /null
maximum_object_size 50 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log none
buffered_logs on
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern .   0    20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
#acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
half_closed_clients off
visible_hostname MyNameProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
#coredump_dir /var/spool/squid
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 125000/125000
forwarded_for off
via off   

Re: AW: [squid-users] block TOR

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I want to block the Tor traffic because my clients use it to bypass my
rules about blocked sites. In my firewall it is a little more
difficult to keep refreshing the nodes that I want to block.

Jenny said he/she can't establish a connection to the TOR net
through squid, but I can't see the problem: using CONNECT and port 443
is all the client needs!!!

I'm waiting for you guys !!!

On Sun, Dec 4, 2011 at 1:50 AM, Jenny Lee bodycar...@live.com wrote:

 Judging from the dst acl, ultrasurf traffic and all in this thread, this is 
 talking about outgoing traffic to Tor via squid.

 Why would anyone want to block Tor traffic to his/her webserver (if this is 
 not an ecommerce site)? If it was an ecommerce site, they would know what to 
 do already and not ask this question here. Tor exit lists are made available 
 daily and the firewall is the place to drop them.

 I still want to hear what OP would say.

 Jenny




 From: amuel...@gmx.de
 To: squid-users@squid-cache.org
 Date: Sun, 4 Dec 2011 00:39:01 +0100
 Subject: AW: [squid-users] block TOR

 The question is which Tor traffic should be blocked: outgoing client
 traffic to the Tor network, or incoming HTTP requests from Tor exit nodes?

 Andreas

 -----Original Message-----
 From: Jenny Lee [mailto:bodycar...@live.com]
 Sent: Sunday, 4 December 2011 00:09
 To: charlie@gmail.com; leolis...@solutti.com.br
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] block TOR


 I don't understand how you are managing to have anything to do with Tor to
 start with.

 Tor is speaking SOCKS5. You need Polipo to speak HTTP on the client side and
 SOCKS on the server side.

 I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via
 my Squid and I could not succeed. I have even tried Amos' custom squid with
 SOCKS support and still failed.

 Can someone explain to me as to how you are connecting to Tor with squid
 (and consequently having a need to block it)?

 Jenny


  Date: Sat, 3 Dec 2011 16:37:05 -0500
  Subject: Re: [squid-users] block TOR
  From: charlie@gmail.com
  To: leolis...@solutti.com.br
  CC: bodycar...@live.com; squid-users@squid-cache.org
 
  Sorry for reopening an old post, but a few days ago I tried this
  solution and, like magic, all traffic to the Tor net is
  blocked, just by typing this:
  acl tor dst "/etc/squid3/tor"
  http_access deny tor
  where /etc/squid3/tor is the file that I downloaded from the page you
  people recommended me!!!
 
  Thanks a lot, this is something that a lot of admins I know have been
  searching for; you should put it somewhere easy to find!!! Thanks
  again!!
 
  Sorry for my english
 
  On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
  charlie@gmail.com wrote:
  Thanks a lot, I am going to make a script to refresh the list. You've
  been very helpful.
  
   On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
   leolis...@solutti.com.br wrote:
  
  I don't know if this is valid for Tor... but at least Ultrasurf,
  which I have analyzed a bit further, encapsulates traffic over
  squid always using the CONNECT method and connecting to an IP address.
  It's basically different from normal HTTPS traffic, which also uses the
  CONNECT method but almost always (I have found 2-3 exceptions in some
 years) connects to a FQDN.
 
  So, at least with Ultrasurf, I could handle it over squid simply by
  blocking CONNECT requests which try to connect to an IP
  address instead of a FQDN (see the sketch below).
  
  Of course, Ultrasurf (and I suppose Tor) tries to encapsulate
  traffic to the browser-configured proxy as a last resort. If it finds
  a NAT-open network, it will always try to go direct instead of
  through the proxy. So, it's mandatory that you do NOT have a
  NAT-open network, especially on ports
  TCP/80 and TCP/443. If you have those ports opened in your NAT
  rules, then I really think you'll never get rid of those services,
  like Tor and Ultrasurf.
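
A minimal squid.conf sketch of that approach (the acl name is made up,
and the regex only covers IPv4 literals):

  # deny CONNECT requests whose destination is a raw IP rather than a FQDN
  acl rawip dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
  http_access deny CONNECT rawip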
  
  
  
  
   Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
  
  So, as I see it, we (the admins) have no way to block it!!
  
   On Thu, Sep 29, 2011 at 3:30 PM, Jenny Leebodycar...@live.com wrote:
  
   Date: Thu, 29 Sep 2011 11:24:55 -0400
   From: charlie@gmail.com
   To: squid-users@squid-cache.org
   Subject: [squid-users] block TOR
  
  Is there any way to block TOR with my Squid?
  
   How do you get it working with tor in the first place?
  
   I really tried for one of our users. Even used Amos's custom
   squid with SOCKS option but no go.
  
   Jenny
  
  
   --
  
  
   Atenciosamente / Sincerily,
   Leonardo Rodrigues
   Solutti Tecnologia
   http://www.solutti.com.br
  
   Minha armadilha de SPAM, NÃO mandem email gertru...@solutti.com.br
   My SPAMTRAP, do not email it
  
  
  
  
  




[squid-users] about rewrite an URL

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I have on my FTP server a mirror of the Kaspersky bases (all versions); I
use KLUpdater (from the Kaspersky site) to build it. Now I want to
redirect everyone who looks up Kaspersky's update domain to my
FTP. How can I do that?

Thanks a lot


Re: [squid-users] limiting connection not working 3.1.4

2011-12-05 Thread Amos Jeffries

On Mon, 5 Dec 2011 14:18:51 +, J. Webster wrote:

I have squid 3.1.4 but using this conf, the rate limiting to 1Mbps
does not seem to work.


Please consider an upgrade to 3.1.18. There are a lot of important bugs 
resolved since 3.1.4.



What can I change in the conf / delay parameters?



The default in delay pools is not to limit. You must have an explicit 
delay_access allow line defining what gets collected into each pool.


i.e.:


delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 125000/125000


Add:
  delay_access 1 allow all
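
Putting it together, the whole pool definition would look something 
like this (note delay_access takes the pool number as its first 
argument; check squid.conf.documented for the exact syntax):

  delay_pools 1
  delay_class 1 2
  # -1/-1 = no aggregate limit; 125000 bytes/sec = ~1Mbps per client IP
  delay_parameters 1 -1/-1 125000/125000
  delay_access 1 allow all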



auth_param basic realm Myname proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth 
/etc/squid/squid_passwd

authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src 0.0.0.0/0.0.0.0


Erase the acl all line in squid-3. It is defined by default to a 
different value. This will silence several warnings.


snip

http_access deny manager
http_access allow ncsa_users


So all logged in users have unlimited access?



http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny maxuser


These deny rules are placed below the allow rule letting ALL logged-in 
users through.
This means that, for any machine on the Internet which can supply one 
of your users' insecure plain-text logins:
 * the Safe_ports rule preventing viral and P2P abuse relaying through 
Squid has no effect
 * the CONNECT rule preventing blind binary tunneling of data to any 
protocol port through Squid has no effect.

 * your maxuser policy has no effect.


http_access allow localhost
http_access deny all
icp_access allow all
http_port 8080
http_port xx.xx.xx.xx:80


And what are you expecting to arrive over port 80?
That port is reserved for reverse-proxy and origin server traffic.

It seems like you intended reverse-proxy or interception but have a 
wrong config for it.



snip

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY


Drop this QUERY stuff.


refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%    1440


Add:
  refresh_pattern -i (/cgi-bin/|\?)   0 0% 0


refresh_pattern .   0    20% 4320
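
The resulting minimal set, in order (the cgi-bin/query line must come 
before the catch-all '.' line):

  refresh_pattern ^ftp:             1440  20%  10080
  refresh_pattern ^gopher:          1440   0%   1440
  refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
  refresh_pattern .                    0  20%   4320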

snip


visible_hostname MyNameProxyServer


Funny domain name. I hope that is obfuscated for the post, not in the 
config.
This is the domain name used in URLs your clients get told to use for 
Squid error and FTP page icons. If it does not resolve back to this or 
another Squid your clients will be facing page load problems on those 
generated responses.



HTH
Amos


Re: [squid-users] How to set the IP of the real originator in HTTP requests (instead of Squid's IP)?

2011-12-05 Thread Amos Jeffries

On Mon, 5 Dec 2011 17:31:45 +0100, Leonardo wrote:

On Thu, Dec 1, 2011 at 1:18 PM, Amos Jeffries wrote:
 Squid supports transparent proxy (not the NAT interception people call 
 the same).
 http://wiki.squid-cache.org/Features/Tproxy4



My Squid is already compiled to function as transparent:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience

Is Tproxy4 a kind of super-transparent feature (i.e. does it allow the 
next-hop to see the client IP instead of the Squid IP)?


The 'T' in TPROXY means 'transparent'. It is transparent down to the IP 
layer. Like glass, transparent both ways. Neither end aware the proxy is 
present unless they explicitly do some active tests to identify it.


Whereas that thing properly called NAT interception, which a lot of 
people wrongly call transparent, is not transparent at all. It is HTTP 
*translation* (the 'T' in NAT). Like a one-way mirror, with the server 
facing the mirror and trivially able to see that something is in the 
way.


Amos



Re: [squid-users] Re: not getting persistent connections to an ssl backend

2011-12-05 Thread Amos Jeffries

On Mon, 5 Dec 2011 09:14:40 -0500, rob yates wrote:
Sorry for the bump; could someone let me know if this is supported? If 
it's not supported I'll need to look at something other than squid, and 
I am far enough along that I would rather not.


It is supported. Ensure that you have server_persistent_connections ON 
(default setting) in squid.conf.
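
A minimal sketch of the relevant squid.conf directives (the first is 
already the default; the 5-minute goal mentioned earlier would be the 
pconn_timeout value, so verify both against squid.conf.documented for 
your version):

  server_persistent_connections on
  # how long an idle server-side connection is kept open for reuse
  pconn_timeout 300 seconds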



The word for today is upgrade anyway.

There are some behaviour and traffic conditions required to make 
persistent connections actually work. It appears that one of these 
conditions is not met in your system. Probably the server is emitting 
unknown-length headers or explicitly requesting connection closure.


The amount of persistence you can get depends on the HTTP/1.1 
compliance of both Squid and the backend server. So upgrading Squid to 
the latest stable release you can will mean a better chance of 
persistence happening.


Amos



Re: [squid-users] about rewrite an URL

2011-12-05 Thread Amos Jeffries

On Mon, 5 Dec 2011 15:22:38 -0500, Carlos Manuel Trepeu Pupo wrote:

I have on my FTP server a mirror of the Kaspersky bases (all versions); I
use KLUpdater (from the Kaspersky site) to build it. Now I want to
redirect everyone who looks up Kaspersky's update domain to my
FTP. How can I do that?


With a redirector.

http://wiki.squid-cache.org/Features/Redirectors#Using_an_HTTP_redirector
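
As a rough illustration of the helper protocol (a sketch only: the 
domain match, FTP host and path are made up, and reply formats differ 
slightly between squid versions, so check the wiki page):

  #!/bin/sh
  # squid feeds one request per line: "URL client/fqdn ident method"
  while read url rest; do
    case "$url" in
      *kaspersky*) echo "301:ftp://ftp.example.local/kav/" ;;  # hypothetical mirror
      *)           echo "$url" ;;                # anything else: unchanged
    esac
  done

hooked in with something like:

  url_rewrite_program /usr/local/bin/kav-redirect.sh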


Amos


[squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2011-12-05 Thread Paul Freeman
Hi,
I have come across a problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS (gcc 
4.4.3, latest updates from Ubuntu).  The problem occurs in store.cc and has 
been reported in an earlier post (3 Dec 2011) related to compiling 3.1.17.

Another user has also reported this issue on the squid-dev mailing list on 5 
Dec 2011 but I have not seen a reply yet.

The error is as follows:
store.cc: In member function 'void StoreEntry::deferProducer(const 
RefCount<AsyncCall>&)':
store.cc:376: error: no match for 'operator<<' in 'std::operator<< [with 
_Traits = ...

My knowledge of C++ is limited so I am not sure how to resolve the problem.

Someone has reported successfully compiling 3.1.18 on Solaris, so perhaps the 
Solaris C++ libraries are a little different from those in Ubuntu 10.04 LTS.

I am happy to assist with any testing that might be required.

Thanks

Paul


Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2011-12-05 Thread Amos Jeffries

On Tue, 6 Dec 2011 03:01:40 +, Paul Freeman wrote:

Hi,
I have come across a problem compiling Squid 3.1.18 on Ubuntu 10.04
LTS (gcc 4.4.3, latest updates from Ubuntu).  The problem occurs in
store.cc and has been reported in an earlier post (3 Dec 2011) related 
to compiling 3.1.17.

Another user has also reported this issue on the squid-dev mailing
list on 5 Dec 2011 but I have not seen a reply yet.

The error is as follows:
store.cc: In member function 'void StoreEntry::deferProducer(const
RefCount<AsyncCall>&)':
store.cc:376: error: no match for 'operator<<' in 'std::operator<<
[with _Traits = ...

My knowledge of C++ is limited so I am not sure how to resolve the 
problem.


Don't worry. This nasty trace is stressing the eyes of us familiar with 
C++ as well.




Someone has reported successfully compiling 3.1.18 on Solaris so
perhaps the Solaris C++ libraries are a little different than in
Ubuntu 10.04 LTS.

I am happy to assist with any testing that might be required.



It is only affecting adaptation (ICAP/eCAP) builds, so if you can run 
happily without those features use --disable, or comment out line 376 of 
src/store.cc.
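
For example, something along these lines (a sketch; verify the exact 
option name against ./configure --help for your 3.1 source tree):

  ./configure --disable-icap-client [...your other options...]
  make && make install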



Thank you for the testing offer. We can replicate it already so the 
only help needed is C++ familiar eyes to find which of this nested set 
of templates is missing a required print() operator.


Amos


RE: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04 LTS - store.cc

2011-12-05 Thread Paul Freeman
Amos
Thank you for the very prompt reply.

Unfortunately I need ICAP, so I will need to wait until the problem is 
resolved, although I guess in the interim I can do as you mention and simply 
comment out this line and forgo the debugging output.

Good luck trying to find the root cause.

Regards

Paul

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Tuesday, 6 December 2011 2:10 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Problem compiling Squid 3.1.18 on Ubuntu 10.04
 LTS - store.cc
 
  On Tue, 6 Dec 2011 03:01:40 +, Paul Freeman wrote:
  Hi,
  I have come across a problem compiling Squid 3.1.18 on Ubuntu 10.04
  LTS (gcc 4.4.3, latest updates from Ubuntu).  The problem occurs in
  store.cc and has been reported in an earlier post (3 Dec 2011)
  related
  to compiling 3.1.17.
 
  Another user has also reported this issue on the squid-dev mailing
  list on 5 Dec 2011 but I have not seen a reply yet.
 
  The error is as follows:
  store.cc: In member function 'void StoreEntry::deferProducer(const
  RefCount<AsyncCall>&)':
  store.cc:376: error: no match for 'operator<<' in 'std::operator<<
  [with _Traits = ...
 
  My knowledge of C++ is limited so I am not sure how to resolve the
  problem.
 
  Don't worry. This nasty trace is stressing the eyes of us familiar with
  C++ as well.
 
 
  Someone has reported successfully compiling 3.1.18 on Solaris so
  perhaps the Solaris C++ libraries are a little different than in
  Ubuntu 10.04 LTS.
 
  I am happy to assist with any testing that might be required.
 
 
  It is only affecting adaptation (ICAP/eCAP) builds, so if you can run
  happily without those features use --disable, or comment out line 376 of
  src/store.cc.
 
 
  Thank you for the testing offer. We can replicate it already so the
  only help needed is C++ familiar eyes to find which of this nested set
  of templates is missing a required print() operator.
 
  Amos


Re: [squid-users] Re: not getting persistent connections to an ssl backend

2011-12-05 Thread Rob Yates
Amos,

Many thanks. I upgraded to 3.1.18 and that seems to have done the
trick. Weird, as no other parameters were changed. Not sure why 2.6
(the default for the version of RHEL that we have) did not work, but no
need to investigate now.

Thanks again,

Rob

On Dec 5, 2011, at 9:42 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 On Mon, 5 Dec 2011 09:14:40 -0500, rob yates wrote:
 Sorry for the bump; could someone let me know if this is supported? If
 it's not supported I'll need to look at something other than squid, and
 I am far enough along that I would rather not.

 It is supported. Ensure that you have server_persistent_connections ON 
 (default setting) in squid.conf.


 The word for today is upgrade anyway.

 There are some behaviour and traffic conditions required to make persistent 
 connections actually work. It appears that one of these conditions is not met 
 in your system. Probably the server is emitting unknown-length headers or 
 explicitly requesting connection closure.

 The amount of persistence you can get depends on the HTTP/1.1 compliance 
 of both Squid and the backend server. So upgrading Squid to the latest 
 stable release you can will mean a better chance of persistence happening.

 Amos



[squid-users] Cache has stopped using memory?

2011-12-05 Thread Sean SPALDING
Hi all,

I recently noticed one of my squid servers reverse its normal usage stats for:

Request Memory Hit Ratios:  5min: 0.6%, 60min: 0.5%
Request Disk Hit Ratios:5min: 41.7%, 60min: 48.3%

Normal usage looks more like:

Request Memory Hit Ratios:  5min: 48.9%, 60min: 48.9%
Request Disk Hit Ratios:5min: 2.9%, 60min: 6.2%


Is squid now satisfying requests mainly from disk? Am I interpreting this 
correctly?

There is adequate free memory on this machine. None of the other (near 
identical) servers in the pool are exhibiting this behaviour. All servers are 
configured as an accelerator for the same application so the usage should be 
similar.


Regards,

Sean.




Re: [squid-users] Make Squid in interception mode completely

2011-12-05 Thread Edmonds Namasenda
Hai,
It seems your network set-up might be ruining your connection
expectations, or the default gateway needs a rule (possibly using a
firewall) to direct all HTTP traffic to the squid box rather than to
the internet.

Otherwise, think of the set-up below (with the Squid box the same as
the Gateway)

Internet Router ---- Eth0 |= Squid box / Default Gateway =| Eth1 ---- Switch ---- LAN

# Edz.

On Mon, Dec 5, 2011 at 5:14 PM, Nguyen Hai Nam nam...@nd24.net wrote:

 Hi Amos,

 You're right, switch is not really the right word.

 But I still can't find, on a Solaris-like system, the equivalent of 
 something like /proc/sys/net/bridge.


 On Mon, Dec 5, 2011 at 7:25 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 
 
  Like a switch? or or did you really mean like a bridge?
 
  * switch ... no solution. Switches do not perform the NAT operations
  required for interception. They also don't run software like Squid, so I
  think this is a bad choice of word in your description.
 
  * bridge ... requires dropping packets out of the bridge into the routing
  functionality. See the bridge section at
  http://wiki.squid-cache.org/Features/Tproxy4#ebtables_on_a_Bridging_device
 
  Amos


[squid-users] compile error on Squid 3.1.18

2011-12-05 Thread kzl
There's error thrown while compiling Squid 3.1.18 in Solaris Sparc which never 
experience in earlier version like 3.1.17, 3.1.16, 3.0.15
Anyone having any idea what's the problem? 

Making all in compat
/bin/bash ../libtool --tag=CXX    --mode=link g++ -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -pthreads -g -O2   -g 
-olibcompat.la  assert.lo compat.lo GnuRegex.lo
libtool: link: false cru .libs/libcompat.a .libs/assert.o .libs/compat.o 
.libs/GnuRegex.o
*** Error code 1
make: Fatal error: Command failed for target `libcompat.la'
Current working directory /home/squid-3.1.18/compat
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
    *=* | --[!k]*);; \
    *k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='compat lib snmplib libltdl scripts src icons  errors doc helpers 
test-suite tools'; for subdir in $list; do \
  echo Making $target in $subdir; \
  if test $subdir = .; then \
    dot_seen=yes; \
    local_target=$target-am; \
  else \
    local_target=$target; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make $local_target) \
  || eval $failcom; \
done; \
if test $dot_seen = no; then \
  make  $target-am || exit 1; \
fi; test -z $fail
make: Fatal error: Command failed for target `all-recursive'


[squid-users] increasing file descriptors in Ubuntu 10.04/2.7.STABLE9

2011-12-05 Thread Sean Boran
Hi,

On a squid proxy using the stock Ubuntu squid packages, the file
descriptors need to be increased.

I found two suggestions:
http://chrischan.blog-city.com/ubuntu_804_lts_increasing_squid_file_descriptors.htm
but ulimit -n was still 1024 after rebooting
(and it also talks about recompiling squid with
--with-filedescriptors=8192, but I'd prefer to keep the stock Ubuntu
package if possible).

This link:
http://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/
suggests alternative settings in /etc/security/limits.conf
but ulimit -a | grep 'open files' still says 1024

There was also a suggestion found to set a value in
/proc/sys/fs/file-max, but the current value was already 392877

Finally, the second article suggests (for Red Hat) just setting
max_filedesc 4096
in squid.conf
and this actually works, i.e.
squidclient -p 80  mgr:info | grep 'file descri'
reports 4096

So my question: is the squid.conf setting sufficient? How is the squid 
setting related to ulimit, if at all?
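
For what it's worth, the stock Debian/Ubuntu squid packaging is 
commonly reported to honour a SQUID_MAXFD variable in 
/etc/default/squid, which the init script uses to raise ulimit -n 
before starting squid (an assumption; check /etc/init.d/squid on your 
box). squid.conf's max_filedesc can then only use descriptors up to 
whatever limit the process was actually started with:

  # /etc/default/squid
  SQUID_MAXFD=4096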

Thanks in advance,

Sean