[squid-users] Re: Behavior multiple reverse proxies when origin server down

2013-11-29 Thread davidheijkamp
Eliezer Croitoru-2 wrote
 There are a couple of solutions that can be considered, but it depends
 on the LoadBalancer size and functionality.
 It also depends on what level of access there is from the LoadBalancer
 to the squid instances and source pages/instances.
 
 You can do something like squid2 to forward the requests from one 
 instance to the other in a case of a failure in one of the web-servers.
 
 If all that the LoadBalancer does is failover, it will be much simpler
 to implement; but if it also balances traffic over two services, that
 is one more complication from the LoadBalancer angle.

Providing failover is the primary function of the Load Balancer; balancing
the load between multiple servers is secondary.

Based on Amos' reply I think it should be possible to do both. If one web
server goes down, the Squid server in front knows, based on (the lack of?)
HTTP response status codes, that that server is indeed down, and traffic will
be directed to the remaining web server in the other location. In normal
operation, load is balanced between the two servers.


Eliezer Croitoru-2 wrote
 I think that there is an option in a LoadBalancer to use a specific
 user-designed target-testing script which can answer the basic question
 to the LoadBalancer of whether one route of traffic is down or up.

Do you mean the Monitor URL feature:
http://wiki.squid-cache.org/Features/MonitorUrl ?
It's not available in Squid 3 yet, and I guess it's not necessary because of
the above.
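For reference, a minimal squid.conf sketch of that failover-plus-balancing
setup (peer names and addresses are hypothetical, and this is untested):

```
# Two origin servers, load balanced round-robin. A peer that fails
# 10 consecutive connection attempts is marked DEAD and traffic
# shifts entirely to the surviving peer until it recovers.
cache_peer 192.0.2.10 parent 80 0 no-query originserver round-robin connect-fail-limit=10 name=web1
cache_peer 192.0.2.11 parent 80 0 no-query originserver round-robin connect-fail-limit=10 name=web2
```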



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Behavior-multiple-reverse-proxies-when-origin-server-down-tp4663464p4663583.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid proxy lag

2013-11-29 Thread alamb200
Hi,
I have just installed a CentOS 6.4 server as a Hyper-V guest on my
Hyper-V server.
I have given it 2 GB of RAM and a Xeon 2.4 GHz processor to run Squid
on a 30-user network; is this enough?
Is there anything else I should be looking at to speed it up?
Thanks,
alamb200



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-proxy-lag-tp4663584.html


[squid-users] Cannot select peer when request is IP address

2013-11-29 Thread Stephen Borrill
I've found a problem with selecting a parent cache if the request is an
IP address. Tested with various versions including 3.3.10. Example
config fragment is below:

cache_peer 192.168.1.143 parent 3128 0 no-query no-digest default name=prox1
cache_peer 192.168.1.144 parent 3128 0 no-query no-digest name=prox2
never_direct allow all
acl domlist dstdomain -n .bbc.co.uk

cache_peer_access prox1 deny domlist
cache_peer_access prox2 allow domlist
cache_peer_access prox1 allow all
cache_peer_access prox2 deny all

The idea is that the parent cache will be selected on the basis of
dstdomain. However, if the request is for, say, http://123.234.123.234/,
HIER_NONE is returned. Selection works as expected if a domain is given.
Edited access log entries below:

TCP_MISS/500 1716 GET http://123.234.123.234/ - HIER_NONE/- text/html
TCP_MISS/200 56303 GET http://www.cam.ac.uk/ -
FIRSTUP_PARENT/192.168.1.143 text/html
TCP_MISS/200 126090 GET http://www.bbc.co.uk/ -
FIRSTUP_PARENT/192.168.1.144 text/html

The problem appears to be that IP-address URLs bypass the 'all' ACL checks.

Non-working parent selection:
Acl.cc(336) matches: ACLList::matches: checking domlist
Acl.cc(319) checklistMatches: ACL::checklistMatches: checking 'domlist'
DomainData.cc(131) match: aclMatchDomainList: checking '123.234.123.234'
DomainData.cc(135) match: aclMatchDomainList: '123.234.123.234' NOT found
ipcache.cc(960) ipcacheCheckNumeric: ipcacheCheckNumeric: HIT_BYPASS for
'123.234.123.234' == 123.234.123.234
fqdncache.cc(540) fqdncache_nbgethostbyaddr: fqdncache_nbgethostbyaddr:
Name '123.234.123.234'.
fqdncache.cc(578) fqdncache_nbgethostbyaddr: fqdncache_nbgethostbyaddr:
MISS for '123.234.123.234'
dns_internal.cc(1769) idnsPTRLookup: idnsPTRLookup: buf is 46 bytes for
123.234.123.234, id = 0x6f78
comm.cc(1197) comm_udp_sendto: comm_udp_sendto: Attempt to send UDP
packet to 127.0.0.1:53 using FD 7 using Port 64333
DestinationDomain.cc(109) match: aclMatchAcl: Can't yet compare
'domlist' ACL for '123.234.123.234'
Acl.cc(346) matches: domlist needs async lookup
Acl.cc(354) matches: domlist result is false
neighbors.cc(299) getFirstUpParent: getFirstUpParent: returning NULL

Working parent selection:
Acl.cc(336) matches: ACLList::matches: checking domlist
Acl.cc(319) checklistMatches: ACL::checklistMatches: checking 'domlist'
DomainData.cc(131) match: aclMatchDomainList: checking 'www.cam.ac.uk'
DomainData.cc(135) match: aclMatchDomainList: 'www.cam.ac.uk' NOT found
Acl.cc(349) matches: domlist mismatched.
Acl.cc(354) matches: domlist result is false
Acl.cc(336) matches: ACLList::matches: checking all
Acl.cc(319) checklistMatches: ACL::checklistMatches: checking 'all'
Ip.cc(134) aclIpAddrNetworkCompare: aclIpAddrNetworkCompare: compare:
127.0.0.1:57674/[::] ([::]:57674)  vs [::]-[::]/[::]
Ip.cc(560) match: aclIpMatchIp: '127.0.0.1:57674' found
Acl.cc(340) matches: all matched.
Acl.cc(354) matches: all result is true
neighbors.cc(1143) neighborUp: neighborUp: UP (no-query): 192.168.1.143
(192.168.1.143)
neighbors.cc(299) getFirstUpParent: getFirstUpParent: returning
192.168.1.143
peer_select.cc(702) peerGetSomeParent: peerSelect:
FIRSTUP_PARENT/192.168.1.143
peer_select.cc(935) peerAddFwdServer: peerAddFwdServer: adding
192.168.1.143 FIRSTUP_PARENT
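One possible stopgap, sketched here untested, is to match raw-IP request
URLs with a regex ACL and pin them to one peer ahead of the dstdomain
rules; the -n flag (in versions that support it) suppresses the DNS
lookup that sends the check async:

```
# Untested sketch: catch literal-IPv4 URLs without any DNS lookup
# and route them to prox1 before the domlist rules are consulted.
acl ip_literal dstdom_regex -n ^[0-9]+(\.[0-9]+){3}$
cache_peer_access prox1 allow ip_literal
cache_peer_access prox2 deny ip_literal
```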


-- 
Stephen


Re: [squid-users] Squid proxy lag

2013-11-29 Thread Kinkie
On Fri, Nov 29, 2013 at 3:42 PM, alamb200 alamb...@hotmail.com wrote:
 Hi,
 I have just installed a CentOS 6.4 server as a Hyper-V guest on my
 Hyper-V server.
 I have given it 2 GB of RAM and a Xeon 2.4 GHz processor to run Squid
 on a 30-user network; is this enough?
 Is there anything else I should be looking at to speed it up?

Hi alamb200,
  the answer is: it depends, generally speaking, on what your users
are doing.
For normal browsing habits, in the absence of any bottlenecks such as
networking issues or other VMs competing for resources on the same
server, that sizing could even be enough for 3000 active users.
  Why would you need to speed it up? Do you have any evidence of bad
performance?

-- 
/kinkie


[squid-users] Transparent proxy

2013-11-29 Thread Monah Baki
Hi all,


I'm trying to setup a transparent proxy squid 3.3.9 using the following URL:


http://www.broexperts.com/2013/03/squid-as-transparent-proxy-on-centos-6-4/

What's the difference between

http_port 3128 transparent
and
http_port 3128


If I were to configure http_port 3128 transparent and restart
squid, I get in my access.log file:
 ERROR: No forward-proxy ports configured.

If I were to then browse, nothing happens.

I am not running iptables by the way.


Thanks
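A minimal sketch of an interception setup (untested; 3.3-era syntax,
where 'transparent' is the old name for 'intercept'). An intercept port
must be paired with a plain forward-proxy port, which is what that
ERROR is complaining about, and something must actually redirect the
traffic:

```
# At least one plain forward-proxy port is required, hence the
# "No forward-proxy ports configured" error when every port intercepts.
http_port 3128            # normal forward-proxy port
http_port 3129 intercept  # receives redirected traffic

# Port-80 traffic must be redirected to the intercept port, e.g.:
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
```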


Re: [squid-users] Squid accel only after logon

2013-11-29 Thread P K
Hi Amos,

Thanks a lot for your reply. It gave me clues on how to go about
finding a solution. I wasn't confused as such between proxy mode and
reverse proxy mode. Basically, I had access to squid and an apache
server to build an authentication mechanism for my target websites.

I used the splash page mechanism but wrote my own external acl type in
PHP. I used deny_info to present a PHP logon page which stored the
session info (user, last accessed, etc.) in a database table and
stored a cookie on the client. I then configured squid to pass the
Cookie header field (%{Cookie:;PHPSESSID}) to my external acl PHP
script, which validated the session, refreshing or destroying it if it
was no longer valid.

It works great.
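For anyone building something similar, here is a rough Python sketch of
such a session helper (the author's real helper is in PHP; the
in-memory sessions dict here is a placeholder for the actual database
lookup, and the helper config line is assumed, not quoted from the
original):

```python
#!/usr/bin/env python3
"""Sketch of an external_acl_type session helper.

Assumes a config roughly like:
  external_acl_type session ttl=60 %{Cookie:;PHPSESSID} /path/helper.py
Squid sends one "channel-ID value" line per lookup and expects a
"channel-ID OK" or "channel-ID ERR" reply on stdout.
"""
import sys
import time

SESSION_TTL = 3 * 60 * 60  # expire sessions after 3 hours

# Placeholder store: session-id -> last-seen timestamp. A real helper
# would query the database table the PHP logon page writes to.
sessions = {}

def check(line, now=None):
    """Return the reply line for one Squid request line."""
    now = time.time() if now is None else now
    channel, _, sid = line.strip().partition(" ")
    last_seen = sessions.get(sid)
    if last_seen is not None and now - last_seen < SESSION_TTL:
        sessions[sid] = now          # refresh the session timestamp
        return f"{channel} OK"
    sessions.pop(sid, None)          # expired or unknown: destroy it
    return f"{channel} ERR"

if __name__ == "__main__":
    for request in sys.stdin:        # one lookup per line, reply flushed
        print(check(request), flush=True)
```

The channel-ID prefix is only present when the helper is started with
a concurrency= setting, as in Amos's example config.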



On 27 November 2013 11:19, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/11/2013 8:58 p.m., P K wrote:
 Hi,

 I want to use Squid as a reverse proxy (accel) to my main website but
 only if they've authenticated - something like a captive portal (not
 sure if that's the right phrase). By authenticated, I don't mean
 basic or digest etc. I want to provide my own logon page (say php) - I
 can host another authentication website to host that.

 How do I go about achieving that? Splash page functionality is
 something that looks promising in squid but I can't get my head around
 how to force squid to reverse proxy my site only after users have
 authenticated on my php splash page. Also I need to terminate their
 session after 3 hours.


 Okay. I think you misunderstand what a reverse proxy does and how it
 operates in relation to the main web server.

 A reverse proxy is simply a gateway to the main server which is used to
 offload serving of static files, do server-side caching, routing between
 different backends, certain types of access control and reduce impact
 from DoS attacks.



 It is better to simply pass all traffic through the proxy on its way to
 the main web server.

 The type of authentication you are describing is called
 application-layer authentication and exists outside of HTTP and thus
 outside of the normal capabilities of an HTTP reverse proxy. It can be
 done but with great complexity and difficulty.


 Once again, it is better to leave the authentication/non-authentication
 decisions to the main web server and have it send back appropriate HTTP
 headers to inform the proxy how to handle the different responses.



 http://wiki.squid-cache.org/ConfigExamples/Portal/Splash


 No your requirements do not match with the limits or capabilities of a
 captive portal. Captive portal uses an *implicit* session. Your system
 uses an *explicit* session.

 Please also note that a captive portal *does not* do authentication in
 any reliable way. The splash page can have application-layer
 authentication built in, BUT what the HTTP layer is doing is assuming /
 guessing that any request with a similar fingerprint as the
 authenticated one is authorized to access the resource.
  Being an assumption, this authorization has a relatively high rate of
 failure and vulnerability to a large number of attacks.

 For example; the captive portal works mostly okay in situations where
 the portal device is itself allocating the IP address or has access to
 the clients MAC address information.
  Doing it on a reverse proxy will immediately have trouble from NAT,
 relay routers, and ISP-based proxies - all of which obfuscate the IP
 address details.


 I can do something like this:

 #Show auth.php
 external_acl_type splash_page ttl=60 concurrency=100 %SRC
 /usr/local/sbin/squid/ext_session_acl -t 7200 -b
 /var/lib/squid/session.db

 acl existing_users external splash_page

 http_access deny !existing_users

 # Deny page to display
 deny_info 511:https://myauthserver/auth.php?url=%s existing_users
 #end authphp

 #reverse proxy

 https_port 443 cert=/path/to/x_domain_com.pem
 key=/path/to/x_domain_com.pem accel

 cache_peer 1.1.1.1 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=x_domain_com
 acl sites_server_x_domain_com dstdomain x.domain.com
 cache_peer_access x_domain_com allow sites_server_x_domain_com
 http_access allow sites_server_x_domain_com
 # end reverse proxy


 But how is this going to work? I can present a username/password on my
 auth.php and present a submit button to validate. But how do I tell
 squid that it is OK to serve x.domain.com?

 The external_acl_type helper is recording past visits and needs to
 determine its response based on whatever database records your login
 page did to record the login.


 Also is there a better way of achieving my purpose?

 Yes. Setup the proxy as a basic reverse proxy and leave the
 application-layer authentication decisions to the main web server.

 Application layer auth is usually done with session Cookies on the main
 server. You can check for Cookie header in the proxy and bounce with
 that same deny_info redirect if you like, to help reduce the main server
 load. It won't be perfect due to other uses of Cookie, but it will 

[squid-users] Re: Kerberos / Authentication / squid

2013-11-29 Thread Markus Moeller

You may need to increase the following:

src/auth/UserRequest.h:#define MAX_AUTHTOKEN_LEN   32768

Regards
Markus

Amos Jeffries  wrote in message news:52971e79.9030...@treenet.co.nz...

On 28/11/2013 10:42 p.m., Berthold Zettler wrote:

Hi Madhav,



all relevant systems (AD controllers and the Windows 7 clients) have a
MaxTokenSize value of 65535.


Therefore I don't think this failure was caused by AD or client settings.


The token size (27332) reported by the MS tokensz.exe tool is far below
this value.
Our other kerberized systems (Apaches) are working fine with this large
token size.


So I think it's a Squid buffer or Kerberos-helper related issue.



That MAX_AUTHTOKEN_LEN (64KB) is what is used directly to allocate the
Squid buffer and helper buffer and the base-64 encoded version of the
token needs to fit inside it along with the 3-5 helper protocol bytes.

A bigger problem is the Squid network I/O parsing. The buffer holding
HTTP headers also has a default maximum length of 64KB ... for the
entire HTTP header block.
 http://www.squid-cache.org/Doc/config/request_header_max_size/
 http://www.squid-cache.org/Doc/config/reply_header_max_size/

If you need to you can bump those up to around 256KB before you start to
hit other limits in the primary I/O buffer itself.
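For example (values illustrative, staying under the ~256KB ceiling
mentioned above):

```
# Allow larger Negotiate tokens by raising the header-block limits.
request_header_max_size 128 KB
reply_header_max_size 128 KB
```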


PS. you should also look to the library Squid is using. It may have
limits or problems of its own separate from the Apache systems library.

PPS. The IETF HTTPbis WG did an analysis of many software packages a
while back and concluded that the maximum generally acceptable HTTP
header length was 4KB. Squid with its 64KB limit is one of the more
accepting out there. So be careful of *any* other software involved
with that traffic.

Amos