Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread Amos Jeffries

louis gonzales wrote:

Dist,
Squid 2.7.Stable6
I'm setting up a reverse proxy, such that the Squid system will be
viewed as the originserver to the clients contacting it.

Does the defaultsite= attribute get the name of the actual web
server or the proxy server?


defaultsite=  is the public domain name visitors are expected to visit.
It's used to fix broken HTTP Host: headers some clients still send.
In a virtual-host site broken clients will be passed to that domain.
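
For example, a minimal accelerator setup might look like this (the domain
www.example.com and IP 192.168.1.10 are invented placeholders, not a
tested configuration):

  # public site name goes in defaultsite=; the origin server is the peer
  http_port 80 accel defaultsite=www.example.com vhost
  cache_peer 192.168.1.10 parent 80 0 no-query originserver name=web
  acl our_sites dstdomain www.example.com
  http_access allow our_sites
  cache_peer_access web allow our_sites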

seen these?
  http://wiki.squid-cache.org/ConfigExamples

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] cache_peer over openvpn

2009-04-07 Thread Amos Jeffries

jonnytabpni wrote:

Hi folks,

I have an openvpn server which also runs squid. I wish this squid server to
use a squid server running on a openvpn client as it's parent cache.

It's not working. The connection to the remote openvpn client times out.


Access to the openvpn client is OK everywhere else (e.g. ping from
server/squid machine, ping from other hosts on the LAN), etc...

It's as if squid can't tunnel stuff down the openvpn link

Help is appreciated cheers


Smells like mtu failing.

 * check that ICMP errors etc are allowed on your network.
 * check that mtu size on the openvpn tunnel entrances is adjusted to 
account for the tunnel packet headers.
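
For instance, something along these lines on both OpenVPN endpoints is the
usual fix (the values are typical guesses, not tuned for any particular
link):

  # let OpenVPN fragment and clamp TCP MSS to fit inside the tunnel
  tun-mtu 1500
  fragment 1400
  mssfix 1400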


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread louis gonzales
Amos,
Yes, I have seen these.  My specific question results from the fact
that the information provided for defaultsite=mysite.domain.com (not
origin server) is ambiguous.

When one is configuring a reverse proxy server, it's usually because
one wants the reverse proxy server to appear as the origin
server.


On Tue, Apr 7, 2009 at 3:30 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 louis gonzales wrote:

 Dist,
 Squid 2.7.Stable6
 I'm setting up a reverse proxy, such that the Squid system will be
 viewed as the originserver to the clients contacting it.

 Does the defaultsite= attribute get the name of the actual web
 server or the proxy server?

 defaultsite=  is the public domain name visitors are expected to visit.
 It's used to fix broken HTTP Host: headers some clients still send.
 In a virtual-host site broken clients will be passed to that domain.

 seen these?
  http://wiki.squid-cache.org/ConfigExamples

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6




-- 
Louis Gonzales
BSCS EMU 2003
HP Certified Professional
louis.gonza...@linuxlouis.net


Re: [squid-users] Squid Reverse proxy cache Storage for Vhosts

2009-04-07 Thread Amos Jeffries

Prabhakar, Ramprasad (GE, Corporate, consultant) wrote:

For a squid reverse proxy cache, is there a way to set squid to use a
single cache storage for all the virtual hosts it hosts ?

For example, 
 abc.domain.com, cde.domain.com, fgh.domain.com all point to the same
site. 
 Will Squid store abc.domain.com/image.png and cde.domain.com/image.png
separately in the cache (even though they are the same files?) or will it
use the same cache storage? If not, can we configure squid to do so?



You should reduce the domains down and only provide one namespace domain 
for the duplicate content. This will solve the caching issue for both 
yourself and for the rest of the Internet which is currently struggling 
with your multiple domains passing the problem on to every other cache 
operator.
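
If collapsing the domains is not possible, Squid 2.7 also has a store-URL
rewrite hook that can normalise the duplicates into one cache key. A rough,
untested sketch (dedup.pl is a hypothetical helper that reads URLs on stdin
and echoes them back with the host rewritten to one canonical domain):

  storeurl_rewrite_program /usr/local/bin/dedup.pl
  acl dup_sites dstdomain abc.domain.com cde.domain.com fgh.domain.com
  storeurl_access allow dup_sites
  storeurl_access deny all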


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] cache-peer problem - query string requests

2009-04-07 Thread Amos Jeffries

Vivek wrote:

Hi All,



I am using squid 2.7 and configured Polipo server as a parent of squid..



cache_peer 172.16.1.40 parent 8123 3130 no-query default



But all the requests go via Polipo except URLs with a query string ('?').
How do we force squid to send all requests to the parent?


I'm sure I answered this question for you yesterday and had you say it 
was working. :(
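
For the archives: the usual way to force every request through the parent,
regardless of query strings, is never_direct (a sketch, assuming the
cache_peer line from the question):

  cache_peer 172.16.1.40 parent 8123 3130 no-query default
  never_direct allow all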


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread Amos Jeffries

louis gonzales wrote:

Amos,
Yes, I have seen these.  My specific question results from the fact
that the information provided for defaultsite=mysite.domain.com (not
origin server) is ambiguous.

When one is configuring a reverse proxy server, it's usually because
one wants the reverse proxy server to appear as the origin
server.


Aha, hopefully my new additions there will reduce that ambiguity a little.

Amos




On Tue, Apr 7, 2009 at 3:30 AM, Amos Jeffries squ...@treenet.co.nz wrote:

louis gonzales wrote:

Dist,
Squid 2.7.Stable6
I'm setting up a reverse proxy, such that the Squid system will be
viewed as the originserver to the clients contacting it.

Does the defaultsite= attribute get the name of the actual web
server or the proxy server?

defaultsite=  is the public domain name visitors are expected to visit.
It's used to fix broken HTTP Host: headers some clients still send.
In a virtual-host site broken clients will be passed to that domain.

seen these?
 http://wiki.squid-cache.org/ConfigExamples



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Squid-tproxy patch for squid 3.0

2009-04-07 Thread Vivek
Thanks Amos. We want Tproxy v4 support (2.6.28 kernel support) for
squid 2.7. If we could get the squid-3.0-tproxy patch from any archives it
would be very helpful for us to develop a patch for 2.7.




Thanks in advance.

-Vivek

-Original Message-
From: Amos Jeffries squ...@treenet.co.nz
To: Vivek vivek...@aol.in
Cc: squid-users@squid-cache.org
Sent: Tue, 7 Apr 2009 12:17 pm
Subject: Re: [squid-users] Squid-tproxy patch for squid 3.0

Vivek wrote:

 Hi All,

 I need the squid tproxy patch for squid 3.0. I know squid 3.1 has the
 built-in code for tproxy support, but I need the patch file.

 Where can I download the squid-tproxy patch (not the kernel patch)?

 If anybody knows, please give the link.

The patch I and others were initially providing was found to be broken
and was dropped when the support in 3.1 required a major kernel overhaul
to fix the problem.

Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.6




Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread louis gonzales
:) - I'll take a peek when I get a chance.

Thanks for your insights, as always.

On Tue, Apr 7, 2009 at 4:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 louis gonzales wrote:

 Amos,
 Yes, I have seen these.  My specific question results from the fact
 that the information provided for defaultsite=mysite.domain.com (not
 origin server) is ambiguous.

 When one is configuring a reverse proxy server, it's usually because
 one wants the reverse proxy server to appear as the origin
 server.

 Aha, hopefully my new additions there will reduce that ambiguity a little.

 Amos



 On Tue, Apr 7, 2009 at 3:30 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 louis gonzales wrote:

 Dist,
 Squid 2.7.Stable6
 I'm setting up a reverse proxy, such that the Squid system will be
 viewed as the originserver to the clients contacting it.

 Does the defaultsite= attribute get the name of the actual web
 server or the proxy server?

 defaultsite=  is the public domain name visitors are expected to visit.
 It's used to fix broken HTTP Host: headers some clients still send.
 In a virtual-host site broken clients will be passed to that domain.

 seen these?
  http://wiki.squid-cache.org/ConfigExamples


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6




-- 
Louis Gonzales
BSCS EMU 2003
HP Certified Professional
louis.gonza...@linuxlouis.net


Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread louis gonzales
Amos,
Regarding "defaultsite=X tells Squid to assume the domain X was wanted when
broken clients fail to send a Host: header properly":

is the domain an FQDN or just the domain name (viz. not host.domain.com, nor
cname.domain.com)? E.g. FQDN=www.domain.com, whereas
domainname=domain.com without the host/cname prefix.

The words themselves are ambiguous; I appreciate your insight.


On Tue, Apr 7, 2009 at 4:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 louis gonzales wrote:

 Amos,
 Yes, I have seen these.  My specific question results from the fact
 that the information provided for defaultsite=mysite.domain.com (not
 origin server) is ambiguous.

 When one is configuring a reverse proxy server, it's usually because
 one wants the reverse proxy server to appear as the origin
 server.

 Aha, hopefully my new additions there will reduce that ambiguity a little.

 Amos



 On Tue, Apr 7, 2009 at 3:30 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 louis gonzales wrote:

 Dist,
 Squid 2.7.Stable6
 I'm setting up a reverse proxy, such that the Squid system will be
 viewed as the originserver to the clients contacting it.

 Does the defaultsite= attribute get the name of the actual web
 server or the proxy server?

 defaultsite=  is the public domain name visitors are expected to visit.
 It's used to fix broken HTTP Host: headers some clients still send.
 In a virtual-host site broken clients will be passed to that domain.

 seen these?
  http://wiki.squid-cache.org/ConfigExamples


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6




-- 
Louis Gonzales
BSCS EMU 2003
HP Certified Professional
louis.gonza...@linuxlouis.net


Re: [squid-users] acl dstdomains does not block!

2009-04-07 Thread Amos Jeffries

Leslie Jensen wrote:





2009/4/6 Leslie Jensen les...@eskk.nu



Leslie Jensen wrote:

Hello

My Proxy, Squid-3.0.13 on FreeBSD 7.1-RELEASE-p4, is running fine but I
can't get the following to work.

# acl blocked_sites dstdomain .aftonbladet.se.
   acl blocked_sites dstdomain /usr/local/etc/squid/dstdomain

deny_info ERR_ACCESS_DENIED blocked_sites

 http_access deny blocked_sites

I've tried both listing the domain in squid.conf and in the file
/usr/local/etc/squid/dstdomain
None of the options seems to work; no blocking occurs. If I put in the
complete path to the ERR_ACCESS_DENIED, I get an error when I do squid
-NCd1

I suspect that maybe the order of the acl's has an effect, but I need some
help to diagnose the problem.

Yes, order is important. Squid processes http_access top-down and the first
match wins.

ERR_ACCESS_DENIED is the default page displayed for http_access deny;
you don't have to specify its use.

Amos
--

Do I dare ask if someone will take a look at my conf file?

I think I'm going blind looking at my rules! I believe I've done it right,
but obviously I have not. I need the acl dstdomain to work and I can't see
where I'm wrong.

I've tried to define only one domain and I've tried with a file of domain
names; none of them seems to work.

I've also considered the order of my rules but I can't get it to work.

Please help! Thanks

/Leslie


- snip -




- snip -



Bharath Raghavendran skrev:
  Were you testing it with a non-localhost client? The only line I can
  see that can affect it is
  http_access allow localhost .. which means localhost gets access
  irrespective of the http_access directives that come after this one.
 
  Btw, although this is not related to the problem, you have
  http_access deny all after http_access deny blockedlist ... which
  means even if a request isn't in blockedlist, it will still be denied,
  which kind of makes the blockedlist acl useless. You probably didn't
  intend that.
 
  -Bharath
 


I'm testing with a host on localnet.

No, you are right I did not intend that. How do you suggest I go about 
configuring so that the localnet is affected by the acl blockedlist?


/Leslie


Um, the config you showed simplifies down to:

 allow localhost access anywhere.
 deny anything else. Period.

I think you want:

#
# If we want to block certain sites.
#
# acl blockedsites dstdomain .aftonbladet.se.
 acl blockedsites dstdomain .squid-cache.org
# acl blockedsites dstdomain /usr/local/etc/squid/dstdomain
#
# Show message when blocked
# deny_info ERR_ACCESS_DENIED blocked_sites
#
 http_access deny blockedsites

# allow local network to other sites.

  http_access allow localhost
  http_access allow localnet

#
# And deny all other access to this proxy
#
 http_access deny all


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Squid-tproxy patch for squid 3.0

2009-04-07 Thread Amos Jeffries

Vivek wrote:
Thanks Amos. We want Tproxy v4 support (2.6.28 kernel support) for
squid 2.7. If we could get the squid-3.0-tproxy patch from any archives it
would be very helpful for us to develop a patch for 2.7.




There is no single patch, just a large collection of incremental changes.

The 2.7 code base is also a lot different to the 3.x codebase in these 
areas.


What's missing from 3.1 that you need from 2.7? It would be more
future-proof if the porting work followed the developer roadmap.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] cache_peer over openvpn

2009-04-07 Thread jonnytabpni



Amos Jeffries-2 wrote:
 
 jonnytabpni wrote:
 Hi folks,
 
 I have an openvpn server which also runs squid. I wish this squid server
 to
 use a squid server running on a openvpn client as it's parent cache.
 
 It's not working. The connection to the remote openvpn client times out.
 
 Access to the openvpn client is OK everywhere else (e.g. ping from
 server/squid machine, ping from other hosts on the LAN), etc...
 
 It's as if squid can't tunnel stuff down the openvpn link
 
 Help is appreciated cheers
 
 Smells like mtu failing.
 
   * check that ICMP errors etc are allowed on your network.
   * check that mtu size on the openvpn tunnel entrances is adjusted to 
 account for the tunnel packet headers.
 
 Amos
 -- 
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6
 
 

Sorry folks, it was my fault. I hadn't allowed the OpenVPN subnet in the
host's firewall!

Sorry!

Cheers
-- 
View this message in context: 
http://www.nabble.com/cache_peer-over-openvpn-tp22918982p22924088.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread louis gonzales
Amos,
Here's a challenge: I can't find the reason why the Header rewrite is
not happening. My configuration has only one web
server (unified1.abstract.net, IP 192.168.0.10) behind the reverse
proxy server (proxy1.abstract.net, IP 192.168.0.20).

What I want the reverse proxy server to do is:
a) not cache anything, essentially just stream the web server content
straight to the client
b) *suppose* the Header needs to be rewritten, but I am assuming this
gets done by virtue of using the following lines in squid.conf:

###
cache_peer unified1.abstract.net parent 80 0 no-query originserver
forceddomain=unified1.abstract.net name=myAccel

###
acl FMS dstdomain unified1.abstract.net
http_access allow FMS
cache_peer_access myAccel allow FMS

--  Any ideas???  Below is some logging.
-- Thanks

The request GET http://proxy1:80/configMultiSiteConfigRequest is
ALLOWED, because it matched 'PROXY1'
2009/04/07 04:39:04| clientStoreURLRewriteDone:
'http://proxy1.abstract.net/configMultiSiteConfigRequest' result=NULL
2009/04/07 04:39:04| WARNING: Forwarding loop detected for:
Client: 192.168.0.20 http_port: 192.168.0.20:80
GET http://proxy1.abstract.net/configMultiSiteConfigRequest HTTP/1.0

User-Agent: FMS-FCC/1.3 (bd:20081014) Java/1.5.0_11

Host: proxy1.abstract.net

Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2

Via: 1.1 proxy1:80 (squid/2.7.STABLE6), 1.0 proxy1:80 (squid/2.7.STABLE6)

X-Forwarded-For: 192.168.0.20, 192.168.0.20

Cache-Control: max-age=259200

Connection: keep-alive
2009/04/07 04:39:04| clientProcessRequest2: storeGet() MISS
2009/04/07 04:39:04| storeCreateEntry:
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| new_MemObject: returning 00F818D8
2009/04/07 04:39:04| new_StoreEntry: returning 00F81890
2009/04/07 04:39:04| storeKeyPrivate: GET
http://proxy1.abstract.net/configMultiSiteConfigRequest
2009/04/07 04:39:04| storeHashInsert: Inserting Entry 00F81890 key
'3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeReleaseRequest: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeLockObject:
(C:\work\SNT-2.7\src\store_client.c:122): key
'3D8B98FA9C7FBCC0711DCF9AB7173967' count=2
2009/04/07 04:39:04| storeClientCopy:
3D8B98FA9C7FBCC0711DCF9AB7173967, seen 0, want 0, size 4096, cb
0041D44A, cbdata 00F80E80
2009/04/07 04:39:04| cbdataLock: 00F80E80
2009/04/07 04:39:04| cbdataLock: 00F81A78
2009/04/07 04:39:04| storeClientCopy2: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| storeClientCopy3: Waiting for more
2009/04/07 04:39:04| cbdataUnlock: 00F81A78
2009/04/07 04:39:04| storeLockObject:
(C:\work\SNT-2.7\src\errorpage.c:316): key
'3D8B98FA9C7FBCC0711DCF9AB7173967' count=3
2009/04/07 04:39:04| errorConvert: %U --
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| errorConvert: %U --
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| errorConvert: %w -- 'webmaster'
2009/04/07 04:39:04| errorConvert: %w -- 'webmaster'
2009/04/07 04:39:04| errorConvert: %T -- 'Tue, 07 Apr 2009 08:39:04 GMT'
2009/04/07 04:39:04| errorConvert: %h -- 'proxy1'
2009/04/07 04:39:04| errorConvert: %s -- 'squid/2.7.STABLE6'
2009/04/07 04:39:04| errorConvert: %S -- '
<BR clear="all">
<HR noshade size="1px">
<ADDRESS>
Generated Tue, 07 Apr 2009 08:39:04 GMT by proxy1 (squid/2.7.STABLE6)
</ADDRESS>
</BODY></HTML>
'
2009/04/07 04:39:04| storeExpireNow: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| InvokeHandlers: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| InvokeHandlers: checking client #0
2009/04/07 04:39:04| cbdataLock: 00F81A78
2009/04/07 04:39:04| storeClientCopy2: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| storeClientCopy3: Copying from memory
2009/04/07 04:39:04| cbdataValid: 00F80E80
2009/04/07 04:39:04| clientBuildReplyHeader: Error, don't keep-alive
2009/04/07 04:39:04| clientSendHeaders: 181 bytes of headers
2009/04/07 04:39:04| The reply for GET
http://proxy1.abstract.net/configMultiSiteConfigRequest is ALLOWED,
because it matched 'PROXY1'
2009/04/07 04:39:04| cbdataLock: 00F80E80
2009/04/07 04:39:04| cbdataUnlock: 00F80E80
2009/04/07 04:39:04| cbdataUnlock: 00F81A78
2009/04/07 04:39:04| storeComplete: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeEntryValidLength: Checking
'3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| InvokeHandlers: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| InvokeHandlers: checking client #0
2009/04/07 04:39:04| storeUnlockObject:
(C:\work\SNT-2.7\src\errorpage.c:331): key
'3D8B98FA9C7FBCC0711DCF9AB7173967' count=2

On Tue, Apr 7, 2009 at 5:03 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 louis gonzales wrote:

 Amos,
 Regarding "defaultsite=X tells Squid to assume the domain X was wanted when
 broken clients fail to send a Host: header properly":

 is the domain an FQDN or just the domain name (viz. not host.domain.com, nor
 cname.domain.com)? E.g. FQDN=www.domain.com, whereas
 domainname=domain.com without 

Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread louis gonzales
Amos,
I was giving the incorrect IP address for the webserver.

After lots of debugging and verbose DEBUG ALL,3 - OMG :)

my issue was lack of sleep!

If anyone needs help getting a reverse proxy set up, let me know.

internalWebServer <--- Squid-Reverse-Proxy <--- [clients]

Thanks for your earlier insights Amos!

Regards,


Re: [squid-users] Squid-tproxy patch for squid 3.0

2009-04-07 Thread Vivek

Thanks Amos,

As per the benchmark results, 2.7 performs better than 3.1. But the Tproxy
v2 patch for 2.7 is obsolete, so I need a Tproxy v4 patch for squid 2.7.
Does anybody have one?

--Vivek

-Original Message-
From: Amos Jeffries squ...@treenet.co.nz
To: Vivek vivek...@aol.in
Cc: squid-users@squid-cache.org
Sent: Tue, 7 Apr 2009 2:23 pm
Subject: Re: [squid-users] Squid-tproxy patch for squid 3.0

Vivek wrote:

 Thanks Amos. We want Tproxy v4 support (2.6.28 kernel support) for
 squid 2.7. If we could get the squid-3.0-tproxy patch from any archives
 it would be very helpful for us to develop a patch for 2.7.

There is no single patch, just a large collection of incremental changes.

The 2.7 code base is also a lot different to the 3.x codebase in these
areas.

What's missing from 3.1 that you need from 2.7? It would be more
future-proof if the porting work followed the developer roadmap.

Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.6




RE: [squid-users] squid and proxy.pac file query

2009-04-07 Thread Tim Duncan


var proxy_yes = "PROXY proxy.baladia.gov.kw:3128";

function FindProxyForURL(url, host)
{
    // variable strings to return

    if (
        shExpMatch(url, "http://www.baladia.gov.kw*") ||
        shExpMatch(url, "http://host.kmun.gov.kw*") ||
        shExpMatch(url, "http://km_online*")) {
        return "DIRECT";

    } else {

        return proxy_yes; // Proxy anything else
    }
}


-Tim


-Original Message-
From: Benedict simon [mailto:si...@kmun.gov.kw] 
Sent: Monday, April 06, 2009 2:45 PM
To: squid
Subject: [squid-users] squid and proxy.pac file query


Dear All,


I am sorry if I am posting this to the wrong group.
I have a CentOS 5.2 server with squid-2.6.STABLE6-5.el5_1.3 that has been
running perfectly for quite some time.

We have a couple of local intranet web sites which work with or
without "bypass proxy server for local addresses" set in the browsers.

Now our intranet sites are going to grow by about 10 more servers, so we
would like to implement a proxy.pac file.

After googling and trying out a couple of options I am still not able to
get it working successfully.

Here below are my details.

proxy.pac file in /var/www/html .. the apache root
-
#
function FindProxyForURL(url, host)
{
// variable strings to return
var proxy_yes = "PROXY proxy.baladia.gov.kw:3128";
var proxy_no = "DIRECT";
if (shExpMatch(url, "http://www.baladia.gov.kw*")) { return proxy_no; }
if (shExpMatch(url, "http://host.kmun.gov.kw*")) { return proxy_no; }
if (shExpMatch(url, "http://km_online*")) { return proxy_no; }
// Proxy anything else
return proxy_yes;
}
---

Apache is working fine, because if I open
http://proxy.baladia.gov.kw/proxy.pac in the browser address bar I do get
a prompt to save or open the proxy.pac file.


I have added the following in /etc/mimetypes:

application/x-ns-proxy-autoconfig pac

Also, in my /etc/http/conf/httpd.conf file I have:

AddType application/x-ns-proxy-autoconfig .pac

Now in my client browser (IE 6) I have, in the LAN settings, "Use automatic
configuration script" selected, pointing at
http://proxy.baladia.gov.kw/proxy.pac

Now when I start the browser on the client I am not able to browse.

I checked the Apache access log; every time I start the browser I get one
line of log:

172.16.2.21 - - [06/Apr/2009:18:24:16 +0300] GET /proxy.pac HTTP/1.1
200 414 - Mozilla/4.0 (compatible; MSIE 6.0; Win32)

Also, in my squid access logs I don't see anything.

I can't figure out what I could be missing or where I could be going wrong.
I would highly appreciate it if someone could help me out.

regards

simon




-- 
Network ADMIN
-
KUWAIT MUNICIPALITY:





Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread Amos Jeffries

louis gonzales wrote:

Amos,
Here's a challenge: I can't find the reason why the Header rewrite is
not happening. My configuration has only one web
server (unified1.abstract.net, IP 192.168.0.10) behind the reverse
proxy server (proxy1.abstract.net, IP 192.168.0.20).

What I want the reverse proxy server to do is:
a) not cache anything, essentially just stream the web server content
straight to the client


 == cache deny all


b) *suppose* the Header needs to be rewritten, but I am assuming this
gets done by virtue of using the following lines in squid.conf:

###
cache_peer unified1.abstract.net parent 80 0 no-query originserver
forceddomain=unified1.abstract.net name=myAccel



Passes requests to unified1.abstract.net:80 and forces the Host: header
to Host: unified1.abstract.net, so the machine at unified1.abstract.net
can run under the belief that it is serving that domain.




###
acl FMS dstdomain unified1.abstract.net


NP: this is the problem...
 *client browsers* will never ask Squid for this private domain 
unified1.abstract.net.
  They will only ask Squid for the public domain(s) which resolve to 
192.168.0.20 (squid IP).
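
A sketch of what the accelerator side usually looks like in this situation
(www.public.example is an invented stand-in for the public domain, which
the thread never states):

  # public name that resolves to 192.168.0.20 (the squid box)
  http_port 80 accel defaultsite=www.public.example
  cache_peer unified1.abstract.net parent 80 0 no-query originserver forceddomain=unified1.abstract.net name=myAccel
  acl FMS dstdomain www.public.example
  http_access allow FMS
  cache_peer_access myAccel allow FMS
  cache deny all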



Amos


http_access allow FMS
cache_peer_access myAccel allow FMS

--  Any ideas???  Below is some logging.
-- Thanks

The request GET http://proxy1:80/configMultiSiteConfigRequest is
ALLOWED, because it matched 'PROXY1'
2009/04/07 04:39:04| clientStoreURLRewriteDone:
'http://proxy1.abstract.net/configMultiSiteConfigRequest' result=NULL
2009/04/07 04:39:04| WARNING: Forwarding loop detected for:
Client: 192.168.0.20 http_port: 192.168.0.20:80
GET http://proxy1.abstract.net/configMultiSiteConfigRequest HTTP/1.0

User-Agent: FMS-FCC/1.3 (bd:20081014) Java/1.5.0_11

Host: proxy1.abstract.net

Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2

Via: 1.1 proxy1:80 (squid/2.7.STABLE6), 1.0 proxy1:80 (squid/2.7.STABLE6)

X-Forwarded-For: 192.168.0.20, 192.168.0.20

Cache-Control: max-age=259200

Connection: keep-alive
2009/04/07 04:39:04| clientProcessRequest2: storeGet() MISS
2009/04/07 04:39:04| storeCreateEntry:
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| new_MemObject: returning 00F818D8
2009/04/07 04:39:04| new_StoreEntry: returning 00F81890
2009/04/07 04:39:04| storeKeyPrivate: GET
http://proxy1.abstract.net/configMultiSiteConfigRequest
2009/04/07 04:39:04| storeHashInsert: Inserting Entry 00F81890 key
'3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeReleaseRequest: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeLockObject:
(C:\work\SNT-2.7\src\store_client.c:122): key
'3D8B98FA9C7FBCC0711DCF9AB7173967' count=2
2009/04/07 04:39:04| storeClientCopy:
3D8B98FA9C7FBCC0711DCF9AB7173967, seen 0, want 0, size 4096, cb
0041D44A, cbdata 00F80E80
2009/04/07 04:39:04| cbdataLock: 00F80E80
2009/04/07 04:39:04| cbdataLock: 00F81A78
2009/04/07 04:39:04| storeClientCopy2: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| storeClientCopy3: Waiting for more
2009/04/07 04:39:04| cbdataUnlock: 00F81A78
2009/04/07 04:39:04| storeLockObject:
(C:\work\SNT-2.7\src\errorpage.c:316): key
'3D8B98FA9C7FBCC0711DCF9AB7173967' count=3
2009/04/07 04:39:04| errorConvert: %U --
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| errorConvert: %U --
'http://proxy1.abstract.net/configMultiSiteConfigRequest'
2009/04/07 04:39:04| errorConvert: %w -- 'webmaster'
2009/04/07 04:39:04| errorConvert: %w -- 'webmaster'
2009/04/07 04:39:04| errorConvert: %T -- 'Tue, 07 Apr 2009 08:39:04 GMT'
2009/04/07 04:39:04| errorConvert: %h -- 'proxy1'
2009/04/07 04:39:04| errorConvert: %s -- 'squid/2.7.STABLE6'
2009/04/07 04:39:04| errorConvert: %S -- '
<BR clear="all">
<HR noshade size="1px">
<ADDRESS>
Generated Tue, 07 Apr 2009 08:39:04 GMT by proxy1 (squid/2.7.STABLE6)
</ADDRESS>
</BODY></HTML>
'
2009/04/07 04:39:04| storeExpireNow: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| InvokeHandlers: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| InvokeHandlers: checking client #0
2009/04/07 04:39:04| cbdataLock: 00F81A78
2009/04/07 04:39:04| storeClientCopy2: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| storeClientCopy3: Copying from memory
2009/04/07 04:39:04| cbdataValid: 00F80E80
2009/04/07 04:39:04| clientBuildReplyHeader: Error, don't keep-alive
2009/04/07 04:39:04| clientSendHeaders: 181 bytes of headers
2009/04/07 04:39:04| The reply for GET
http://proxy1.abstract.net/configMultiSiteConfigRequest is ALLOWED,
because it matched 'PROXY1'
2009/04/07 04:39:04| cbdataLock: 00F80E80
2009/04/07 04:39:04| cbdataUnlock: 00F80E80
2009/04/07 04:39:04| cbdataUnlock: 00F81A78
2009/04/07 04:39:04| storeComplete: '3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| storeEntryValidLength: Checking
'3D8B98FA9C7FBCC0711DCF9AB7173967'
2009/04/07 04:39:04| InvokeHandlers: 3D8B98FA9C7FBCC0711DCF9AB7173967
2009/04/07 04:39:04| InvokeHandlers: checking client #0
2009/04/07 04:39:04| 

Re: [squid-users] ...Memory-only Squid questions

2009-04-07 Thread Amos Jeffries

Gregori Parker wrote:

Glad to help David, please let us know how it progresses.
 
Don't know if you saw this in the archives: http://www.mail-archive.com/squid-users@squid-cache.org/msg19824.html but it might help guide you on your SO_FAIL issue.  It might be worth moving to LRU and establishing a baseline of performance (using either SNMP+cacti or cachemgr) before moving to fancier replacement policies.  Personally, I would go 'store_log none' and not worry about it unless you see something in cache.log.
 
The best all around advice I can give on Squid is to start simple!  Once everything works the way you expect, then start tweaking your way into complexity with a means to track the (in)effectiveness of each change you make (and a known good configuration that you can always go back to when you inevitably fubar the thing!).
 
- Gregori




Thank you for that last, Gregori, I couldn't put it better myself.
(and I've tried a few times).

That has made its way to #1 item on the configuring squid FAQ.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] ACLs

2009-04-07 Thread Matus UHLAR - fantomas
On 02.04.09 03:00, Merdouille wrote:
 I use a transparent squid proxy and I want:
 - access as manager with squidclient from localhost only
 - allow only computer from localhost to go every where
 
 My ACLs :
 #== ACL
 #   nom type
 acl all       src    all
 acl port      port   82
 acl localnet  src    192.168.100.0/192.168.100.255
 acl manager   proto  cache_object
 acl PROTO     proto  http
 acl METHOD    method GET
 acl localhost src    127.0.0.1
 
 I try :
 
 http_access allow   localhost manager
 http_access allow   localnet port !manager
 http_access  denyall !port !PROTO !METHOD

Why did you define the port, PROTO and METHOD acls?
Did you read the default config file? You are making things more
complicated than they need to be:

http_access allow manager localhost
http_access deny manager

http_access allow localhost
http_access deny all


Btw, by "allow only computer from localhost to go every where", don't you
really mean "allow access only from localnet"? If so, replace localhost
with localnet in the latter http_access. But change the localnet acl, as
the netmask isn't valid.
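
That is, something like (untested):

  acl localnet src 192.168.100.0/24

or the equivalent dotted form, 192.168.100.0/255.255.255.0.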

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Enter any 12-digit prime number to continue.


Re: [squid-users] defaultsite=domainname?

2009-04-07 Thread Amos Jeffries

louis gonzales wrote:

Amos,
on defaultsite=X tells Squid to assume the domain X was wanted when
broken clients fail to send a Host: header properly.

Is the value a FQDN or just the bare domain name (i.e. not host.domain.com,
nor cname.domain.com)? E.g. FQDN = www.domain.com, whereas
domainname = domain.com without the host/cname prefix.

The words themselves are ambiguous; I appreciate your insight.


The situation is slightly ambiguous also. It depends on what your DNS 
requirements are.


From the official docs:


defaultsite=domainname

What to use for the Host: header if it is not present
in a request. Determines what site (not origin server)
accelerators should consider the default.
Implies accel.


What the Host: header should contain depends entirely on whether you are 
virtual hosting or not.


All we can say is that it's a domain.  It _should_ be a FQDN, but _may_ be 
anything the Squid box can resolve.
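To make the distinction concrete, a minimal accelerator sketch (the domain
name is a placeholder, not a recommendation):

```
# Clients that send no Host: header are treated as if they had asked
# for www.example.com; with vhost, well-behaved clients can still
# reach other virtual-hosted domains.
http_port 80 accel defaultsite=www.example.com vhost
```

Whether you put www.example.com or just example.com there depends on which
name your origin server expects to see in the Host: header.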



Amos




On Tue, Apr 7, 2009 at 4:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:

louis gonzales wrote:

Amos,
Yes, I did see those.  My specific question results from the fact
that the information provided for defaultsite=mysite.domain.com (not
origin server) is ambiguous.

When one configures a reverse proxy, it is usually because one wants
the reverse proxy to appear as the origin server.

Aha, hopefully my new additions there will reduce that ambiguity a little.

Amos



On Tue, Apr 7, 2009 at 3:30 AM, Amos Jeffries squ...@treenet.co.nz
wrote:

louis gonzales wrote:

Dist,
Squid 2.7.Stable6
I'm setting up a reverse proxy, such that the Squid system will be
viewed as the originserver to the clients contacting it.

Does the defaultsite= attribute get the name of the actual web
server or the proxy server?

defaultsite=  is the public domain name visitors are expected to visit.
It's used to fix broken HTTP Host: headers some clients still send.
In a virtual-host site broken clients will be passed to that domain.

seen these?
 http://wiki.squid-cache.org/ConfigExamples


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.6








--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] Question on Strange network setup with 2 Squid servers.

2009-04-07 Thread Michael D. Setzer II
My College has two 10Mb connections to two ISPs.
The campus has 4 Class C networks from the one ISP.
202.128.71.x
202.128.72.x
202.128.73.x
202.128.79.x

The Router has the .1 on all 4 networks.

The second ISP connects to the same router, but links via the IP address
202.151.91.113. 

All the routing goes to the router, and through some systems; some IP 
addresses take the normal route, and others seem to be NATed through the 
other ISP.

I have no idea how they have it configured. 

Now, for what I have running: I had a Squid server running on 202.128.73.28 
that uses the main ISP, and that has worked fine. I then set up a second 
Squid on 202.128.71.129 that is routed through the other ISP, so the computers 
in my classroom can use either of the two ISPs depending on which Squid server 
they are set to use. 

This works OK, and if one checks the load on the networks, one can use the 
one least used. 

Is there a way to set up the Squid servers so they can use both paths most 
efficiently? 


+--+
  Michael D. Setzer II -  Computer Science Instructor  
  Guam Community College  Computer Center  
  mailto:mi...@kuentos.guam.net
  mailto:msetze...@gmail.com
  http://www.guam.net/home/mikes
  Guam - Where America's Day Begins
+--+

http://setiathome.berkeley.edu (Original)
Number of Seti Units Returned:  19,471
Processing time:  32 years, 290 days, 12 hours, 58 minutes
(Total Hours: 287,489)

bo...@home CREDITS
SETI 7,619,743.5373 | EINSTEIN 2,437,972.5409 | ROSETTA 
855,715.2563



Re: [squid-users] disable error page in squid

2009-04-07 Thread Matus UHLAR - fantomas
Hello,

please configure your mailer to wrap lines below 80 characters per line.
72 to 75 is usually OK.

Thank you.

On 06.04.09 20:54, Ryan Raymond wrote:
 When accessing an unreachable server or a DNS name that is not found, Squid
 generates an error page. How can I disable this page generated by Squid and
 display the browser's original error page instead?

Impossible - it is the browser that decides to ask the proxy for the content
instead of checking itself whether the server is reachable.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
2B|!2B, that's a question!


Re: [squid-users] Question on Strange network setup with 2 Squid servers.

2009-04-07 Thread Amos Jeffries

Michael D. Setzer II wrote:

My College has two 10Mb connections to two ISPs.
The campus has 4 Class C networks from the one ISP.
202.128.71.x
202.128.72.x
202.128.73.x
202.128.79.x

The Router has the .1 on all 4 networks.

The second ISP connects to the same router, but links via the IP address
202.151.91.113. 

All the routing goes to the router, and through some systems; some IP 
addresses take the normal route, and others seem to be NATed through the 
other ISP.


I have no idea how they have it configured. 

Now, for what I have running: I had a Squid server running on 202.128.73.28 
that uses the main ISP, and that has worked fine. I then set up a second 
Squid on 202.128.71.129 that is routed through the other ISP, so the computers 
in my classroom can use either of the two ISPs depending on which Squid server 
they are set to use. 

This works OK, and if one checks the load on the networks, one can use the 
one least used. 

Is there a way to set up the Squid servers so they can use both paths most 
efficiently? 



Squid built with --enable-icmp and related pinger helper install will 
test network load on all paths to destinations, selecting the fastest or 
shortest link.


Building both Squids this way, and not disabling netdb exchange between 
them, makes them share their knowledge of the network topology and 
destination speeds on a regular basis.


NP: Be prepared for some false security complaints from people who don't 
understand what ICMP is designed for or how it works.
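For anyone following along, a source build with this option might look roughly
like this (a sketch; the install prefix is the default and may differ on your
system):

```
./configure --enable-icmp [plus your other options]
make && make install
# the pinger helper must be setuid root to open raw ICMP sockets:
chown root /usr/local/squid/libexec/pinger
chmod 4711 /usr/local/squid/libexec/pinger
```

Check `squid -v` afterwards to confirm --enable-icmp appears in the configure
options.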


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] Strange problem accessing http://Bloomberg.com

2009-04-07 Thread Jason Taylor

Hello,

I am having a very bizarre problem and I am wondering if anyone here can 
shed some light on it.
Our internal users are accessing the Internet via a squid v2.6-STABLE9 
proxy using a proxy.pac file.
Their browsers (corporate dictates Internet Explorer) are configured to 
Automatically detect proxy settings.


When I open the page http://bloomberg.com using the above settings, the 
page mostly loads but the browser locks up and needs to be killed.


If I configure the browser to use a statically configured proxy and 
port, then the page loads fine.


The


Client with Proxy.pac file accessing http://Bloomberg.com
--
1239113210.058 67 xxx.yyy.zzz.aaa TCP_REFRESH_MISS/200 10581 GET 
http://bloomberg.com/ - DIRECT/204.179.240.180 text/html
1239113210.216  4 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 28172 GET 
http://bloomberg.com/styles/main2.css - NONE/- text/css
1239113210.261  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 2603 GET 
http://bloomberg.com/jscommon/ctype.js - NONE/- application/x-javascript
1239113210.330  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1565 GET 
http://bloomberg.com/jscommon/analysis.js - NONE/- application/x-javascript
1239113210.407  2 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 16279 GET 
http://bloomberg.com/jscommon/banner.js - NONE/- application/x-javascript
1239113210.687 20 xxx.yyy.zzz.aaa TCP_MISS/200 541 GET 
http://js.revsci.net/common/pcx.js? - DIRECT/168.75.68.97 text/javascript
1239113210.702  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 752 GET 
http://bloomberg.com/jscommon/dm_extract.js - NONE/- 
application/x-javascript
1239113210.777  5 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 3701 GET 
http://bloomberg.com/jscommon/bringupPlayer.js - NONE/- 
application/x-javascript
1239113210.815    175 xxx.yyy.zzz.aaa TCP_MISS/200 1434 GET 
http://pix01.revsci.net/K05539/a3/0/3/480/1/0/12080E719A6/0/0//254130177.gif? 
- DIRECT/206.191.161.8 image/gif
1239113210.859  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 9274 GET 
http://bloomberg.com/jscommon/dropmenu.js - NONE/- application/x-javascript
1239113210.969  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 996 GET 
http://images.bloomberg.com/r06/navigation/home-over.gif - NONE/- image/gif
1239113210.998  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1010 GET 
http://images.bloomberg.com/r06/navigation/news-over.gif - NONE/- image/gif
1239113211.028  2 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1235 GET 
http://images.bloomberg.com/r06/navigation/market-data-over.gif - NONE/- 
image/gif
1239113211.057  5 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1458 GET 
http://images.bloomberg.com/r06/navigation/pf-over.gif - NONE/- image/gif
1239113211.086  0 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1191 GET 
http://images.bloomberg.com/r06/navigation/tv-radio-over.gif - NONE/- 
image/gif
1239113211.122  3 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 3654 GET 
http://bloomberg.com/jscommon/header_v1.js - NONE/- application/x-javascript
1239113211.182  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 2285 GET 
http://bloomberg.com/jscommon/flsh_charts.js - NONE/- 
application/x-javascript
1239113211.240  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 5997 GET 
http://bloomberg.com/jscommon/flsh_nav.js - NONE/- application/x-javascript
1239113211.380  3 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1521 GET 
http://bloomberg.com/jscommon/RBB.js - NONE/- application/x-javascript
1239113211.510 31 xxx.yyy.zzz.aaa TCP_REFRESH_HIT/200 9350 GET 
http://bloomberg.com/intro3.html - DIRECT/204.179.240.224 text/html
1239113211.665  4 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 702 GET 
http://bloomberg.com/jscommon/validateticker.js - NONE/- 
application/x-javascript
1239113211.801    162 xxx.yyy.zzz.aaa TCP_MISS/200 1723 GET 
http://pix01.revsci.net/K05539/a3/0/3/480/1/0/12080E71E0B/0/0//713105765.gif? 
- DIRECT/206.191.161.8 image/gif
1239113211.838  4 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 3108 GET 
http://images.bloomberg.com/r06/navigation/logo.gif - NONE/- image/gif
1239113211.867  2 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 4101 GET 
http://images.bloomberg.com/r06/homepage/bbganywhere4.gif - NONE/- image/gif
1239113211.896  6 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1633 GET 
http://images.bloomberg.com/r06/homepage/searchnews.gif - NONE/- image/gif
1239113211.926  0 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 660 GET 
http://images.bloomberg.com/r06/navigation/btn-go.gif - NONE/- image/gif
1239113212.070  5 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1020 GET 
http://images.bloomberg.com/r06/navigation/helpquestion.gif - NONE/- 
image/gif
1239113212.099  1 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1693 GET 
http://images.bloomberg.com/r06/navigation/quote_quote_btn.gif - NONE/- 
image/gif
1239113212.128  0 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1661 GET 
http://images.bloomberg.com/r06/navigation/quote_chart_btn.gif - NONE/- 
image/gif
1239113212.157  9 xxx.yyy.zzz.aaa TCP_MEM_HIT/200 1649 GET 
http://images.bloomberg.com/r06/navigation/quote_news_btn.gif - NONE/- 
image/gif
1239113212.187  7 xxx.yyy.zzz.aaa 

Re: [squid-users] Question on Strange network setup with 2 Squid servers.

2009-04-07 Thread Michael D. Setzer II
On 8 Apr 2009 at 2:11, Amos Jeffries wrote:

Date sent:  Wed, 08 Apr 2009 02:11:04 +1200
From:       Amos Jeffries squ...@treenet.co.nz
To:         Michael D. Setzer II mi...@kuentos.guam.net
Copies to:  squid-users@squid-cache.org
Subject:    Re: [squid-users] Question on Strange network setup with 2 Squid servers.

 Michael D. Setzer II wrote:
  My College has two 10Mb connections to two ISPs.
  The campus has 4 Class C networks from the one ISP.
  202.128.71.x
  202.128.72.x
  202.128.73.x
  202.128.79.x
  
  The Router has the .1 on all 4 networks.
  
  The second ISP connects to the same router, but links via the IP address
  202.151.91.113. 
  
  All the routing goes to the router, and through some systems; some IP 
  addresses take the normal route, and others seem to be NATed through the 
  other ISP.
  
  I have no idea how they have it configured. 
  
  Now, for what I have running: I had a Squid server running on 202.128.73.28 
  that uses the main ISP, and that has worked fine. I then set up a second 
  Squid on 202.128.71.129 that is routed through the other ISP, so the 
  computers in my classroom can use either of the two ISPs depending on 
  which Squid server they are set to use. 
  
  This works OK, and if one checks the load on the networks, one can use the 
  one least used. 
  
  Is there a way to set up the Squid servers so they can use both paths most 
  efficiently? 
  
 
 Squid built with --enable-icmp and related pinger helper install will 
 test network load on all paths to destinations, selecting the fastest or 
 shortest link.
 
 Building both Squid this way and not disabling netdb-exchange between 
 them makes them share their knowledge of the network topology and 
 destination speeds on a regular basis.
 
 NP: Be prepared for some false security complaints from people who don't 
 understand what ICMP is designed for or how it works.
 

Thanks for the info. I looked at squid -v and neither build has the ICMP 
option set, so I'll have to download the source and try building Squid with 
that option. The older machine has an earlier version, and it is a production 
machine running lots of other things, so I'll have to set it up on another 
machine.

Thanks again.


 Amos
 -- 
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6


+--+
  Michael D. Setzer II -  Computer Science Instructor  
  Guam Community College  Computer Center  
  mailto:mi...@kuentos.guam.net
  mailto:msetze...@gmail.com
  http://www.guam.net/home/mikes
  Guam - Where America's Day Begins
+--+

http://setiathome.berkeley.edu (Original)
Number of Seti Units Returned:  19,471
Processing time:  32 years, 290 days, 12 hours, 58 minutes
(Total Hours: 287,489)

bo...@home CREDITS
SETI 7,619,743.5373 | EINSTEIN 2,437,972.5409 | ROSETTA 
855,715.2563



[squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread Henrique M.

I'm trying to run squid but I'm getting a few error msgs:

 * Starting Squid HTTP proxy squid  
 
2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
'wais_relay_port'
2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
'incoming_icp_average'
2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
'incoming_http_average'
2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
'incoming_dns_average'
2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
'min_icp_poll_cnt'
2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
'min_dns_poll_cnt'
2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
'min_http_poll_cnt'

Could you guys help me solve this?
-- 
View this message in context: 
http://www.nabble.com/Getting-error-msgs-when-trying-to-start-squid-tp22933693p22933693.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread ROBIN
Do you actually require these functions? If not, comment them out.

Rob


On Tue, 2009-04-07 at 10:02 -0700, Henrique M. wrote:
 I'm trying to run squid but I'm getting a few error msgs:
 
  * Starting Squid HTTP proxy squid

 2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
 'wais_relay_port'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
 'incoming_icp_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
 'incoming_http_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
 'incoming_dns_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
 'min_icp_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
 'min_dns_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
 'min_http_poll_cnt'
 
 Could you guys help me solve this?
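Commenting those directives out by hand works fine; as a sketch, it can also
be scripted. The directive list below is taken from the error messages above;
the file name is an example copy, not your live config:

```shell
#!/bin/sh
# Prefix each obsolete directive with '#' in a test copy of squid.conf.
conf=squid.conf.test
printf 'wais_relay_port 0\nhttp_port 3128\nincoming_icp_average 6\n' > "$conf"
for d in wais_relay_port incoming_icp_average incoming_http_average \
         incoming_dns_average min_icp_poll_cnt min_dns_poll_cnt min_http_poll_cnt
do
  # '&' re-inserts the matched directive name after the '#'
  sed -i "s/^${d}/#&/" "$conf"
done
cat "$conf"
```

Keep a backup of a known-good config before running anything like this
against the real file.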



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread ROBIN
Also, what version are you running? Is this a hand-crafted config or one
borrowed from somewhere else?

Post the config from lines 66 to 106.

Rob


On Tue, 2009-04-07 at 10:02 -0700, Henrique M. wrote:
 I'm trying to run squid but I'm getting a few error msgs:
 
  * Starting Squid HTTP proxy squid

 2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
 'wais_relay_port'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
 'incoming_icp_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
 'incoming_http_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
 'incoming_dns_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
 'min_icp_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
 'min_dns_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
 'min_http_poll_cnt'
 
 Could you guys help me solve this?



Re: [squid-users] Strange problem accessing http://Bloomberg.com

2009-04-07 Thread Jason Taylor
So I think the client's proxy.pac script might be having trouble 
digesting the malformed URL below:


1239113823.055  0 xxx.yyy.zzz.aaa TCP_DENIED/400 1614 GET 
http://'wbetest2.bloomberg.com/jscommon/0/s_code.js' - NONE/- text/html


The single quote makes the proxy.pac freeze, which in turn makes the 
browser window freeze.

So at least now I know this is a problem at Bloomberg's end.
However, in the mean time, I need to make this site work for my users 
since brokers are not known for their patience and understanding.


I know this isn't the ideal forum for this, but does anyone have an idea 
how I can make the proxy.pac properly parse a URL with a quoted string in it?


Cheers,

/Jason
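Since a PAC file is just JavaScript, one hedged workaround is to sanitize the
inputs at the top of FindProxyForURL before any matching logic runs. The host
patterns and proxy address below are placeholders, not Jason's real config:

```javascript
// Minimal PAC sketch: strip stray quote characters from url/host first,
// so a malformed URL like http://'host/path' cannot derail later matching.
function FindProxyForURL(url, host) {
  url = String(url).replace(/['"]/g, "");
  host = String(host).replace(/['"]/g, "");

  // Example rule: internal hosts go direct (placeholder pattern).
  if (host === "localhost" || host.indexOf("intranet.") === 0) {
    return "DIRECT";
  }
  // Everything else through the proxy (placeholder address).
  return "PROXY proxy.example.com:3128; DIRECT";
}
```

This does not fix Bloomberg's broken page, but it should stop the quote from
hanging the PAC evaluation in the browser.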


Re: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Arthur Titeica

Karol Maginnis wrote:

Hello,

I am new to squid but not new to reverse proxies.  I am trying to 
implement a proxy that would work like this:


www.example.com - server 1
example.com - server 1
dev.example.com - server 2

I have read the wiki here:
wiki.squid-cache.org/SquidFaq/ReverseProxy

But I can't get it to work, and I am about to pull my hair out.

My squid.conf looks like:

http_port 80 accel defaultsite=example.com
cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 dev.example.com
cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 example.com


This gives me a big fat: Access Denied

So I added this to my squid.conf:
---
acl our_sites dstdomain example.com dev.example.com
http_access allow our_sites
---

This clears the Access Denied; however, now all traffic goes to 
server_1 (the .115 address).


I have tried all sorts of ACL variations, including declaring ACLs for 
server_1 and server_2 respectively and allowing access to server_1 for 
server_1's sites while denying server_2's sites, and vice versa. However, 
that just gives me an Access Denied for all sites.


I have also tried every example found on this issue in the Wiki.  I feel 
like the Wiki is leaving out a key config line that is causing this not 
to work, but I could be wrong.


I am running squid:
Squid Cache: Version 2.7.STABLE6
configure options:  '--disable-internal-dns'

I hate sending such a simple question to a mailing list but I have read 
the squid wiki so much that I almost have it memorized as far as the 
ReverseProxy pages are concerned.




I'm too new to Squid to help others, but I have to say that I spent two 
weeks on this very same issue. Squid 2.6 has options that differ from the 
2.7 series, and the big differences come with the 3.x series.


If it helps, I solved my issue with the config below (Squid 3.0.STABLE7), 
but I'm pretty sure this won't work in 2.7 ;)


-- cut --
cache_peer 192.168.1.115 parent 80 0 no-query no-digest originserver 
name=iis

acl sites_iis dstdomain example.net
cache_peer_access iis allow sites_iis
http_access allow sites_iis

--- end cut --

Maybe it's just me, but something doesn't feel right in the above config. 
It works, though, and for the moment I'm too tired of this Squid thing to 
dig further...


Re: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Mehmet ÇELIK

Hi.

Try the following:

cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
acl Server2-Domain dstdomain dev.example.com
cache_peer_access server_2 allow Server2-Domain

cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
acl Server1-Domain dstdomain  .example.com
cache_peer_access server_1 allow Server1-Domain

I defined dev.example.com for server_2, and everything else under 
.example.com for server_1. You should look at the ACL howto.

Regards.


- Original Message - 
From: Karol Maginnis nullo...@sdf.lonestar.org

To: squid-users@squid-cache.org
Sent: Tuesday, April 07, 2009 9:30 PM
Subject: [squid-users] Reverse Proxy + Multiple Webservers woes















[squid-users] Re: Re: why RELEASE?

2009-04-07 Thread Brian J. Murrell
On Thu, 2009-04-02 at 11:35 +1200, Amos Jeffries wrote:
 
 IIRC, non-cachable objects larger than max_object_size_in_memory get a
 disk object saved for the transition buffer then released when completed
 whether they need it or not. One of the inefficiencies we are working
 towards killing.

OK.  But that doesn't explain these ones does it?

1239123733.599 RELEASE 00 2503 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239123131 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239123733.600 SWAPOUT 00 2548 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239123736 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239124338.647 RELEASE 00 2548 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239123736 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239124338.649 SWAPOUT 00 25D5 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239124341 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239124944.128 RELEASE 00 25D5 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239124341 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239124944.130 SWAPOUT 00 261E C11C22024C6BFA7AB028CE37E26CADBD  200 
1239124946 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239125550.777 RELEASE 00 261E C11C22024C6BFA7AB028CE37E26CADBD  200 
1239124946 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239125550.777 SWAPOUT 00 2660 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239125553 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239126156.245 RELEASE 00 2660 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239125553 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239126156.247 SWAPOUT 00 26D1 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239126158 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239126762.193 RELEASE 00 26D1 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239126158 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239126762.193 SWAPOUT 00 273B C11C22024C6BFA7AB028CE37E26CADBD  200 
1239126764 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239127368.419 RELEASE 00 273B C11C22024C6BFA7AB028CE37E26CADBD  200 
1239126764 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239127368.419 SWAPOUT 00 27BF C11C22024C6BFA7AB028CE37E26CADBD  200 
1239127370 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239127976.513 RELEASE 00 27BF C11C22024C6BFA7AB028CE37E26CADBD  200 
1239127370 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239127976.515 SWAPOUT 00 2827 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239127978 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239128584.002 RELEASE 00 2827 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239127978 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239128584.002 SWAPOUT 00 2A59 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239128586 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239129189.138 RELEASE 00 2A59 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239128586 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239129189.138 SWAPOUT 00 2A97 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239129191 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239129793.280 RELEASE 00 2A97 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239129191 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239129793.282 SWAPOUT 00 2AD4 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239129795 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif
1239130397.185 RELEASE 00 2AD4 C11C22024C6BFA7AB028CE37E26CADBD  200 
1239129795 1234617780-1 image/gif 1366/1366 GET 
http://www.example.org/forum/images/styles/soness/style/footer_top.gif

[squid-users] Problem with acl helper

2009-04-07 Thread Rodrigo Gliksberg

Hello, I have a transparent proxy server running on

OpenBSD 4.4
with MySQL
and a captive portal


I created an external ACL helper that looks up a MySQL database. For example,
when Squid passes it 192.168.35.121, the helper returns:
OK user=$usuerofdb\n

This works fine, but my problem is when a different user needs to log in from
192.168.35.121: in access.log the user name is still the old user.
What is happening? Does Squid cache the username per IP?


Help!!!
Regards!
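Squid does cache external ACL helper results per lookup key, by default for
quite a while. A hedged sketch of how the TTLs might be shortened so a
re-login from the same IP is noticed sooner (the helper name and path here
are hypothetical, not Rodrigo's actual setup):

```
# Re-ask the helper after 60s for positive answers, 10s for negative ones.
external_acl_type db_user ttl=60 negative_ttl=10 %SRC /usr/local/libexec/check_db_user
acl db_auth external db_user
http_access allow db_auth
```

Lower TTLs mean more database lookups per client, so balance freshness
against database load.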


RE: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Gregori Parker
You need to add the vhost option to http_port so that Squid selects the
parent based on the request's Host header,

i.e.

http_port 80 accel defaultsite=example.com vhost
cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 dev.example.com
cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 example.com

*** NOTE: if you have DNS for example.com resolving to Squid, then make
sure you override that in /etc/hosts on the squid boxes, pointing those
records to your origins so that you don't run into a loop.
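As a sketch of that override, assuming the origin IPs from the config above:

```
# /etc/hosts on the Squid box - resolve the public names to the
# origin servers, not back to Squid itself
192.168.1.115   example.com www.example.com
192.168.1.114   dev.example.com
```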

For ACLs, I would recommend the following:

acl your_site1 dstdomain example.com
acl your_site2 dstdomain dev.example.com
acl origin1 dst 192.168.1.114
acl origin2 dst 192.168.1.115
acl acceleratedPort port 80

cache allow your_site1
cache allow your_site2
http_access allow origin1 acceleratedPort
http_access allow origin2 acceleratedPort
http_access deny all


GL, HTH

- Gregori


-Original Message-
From: Karol Maginnis [mailto:nullo...@sdf.lonestar.org] 
Sent: Tuesday, April 07, 2009 11:30 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Reverse Proxy + Multiple Webservers woes



Re: [squid-users] Squid 3.1.6, zph, shorewall, and tc on debian 5.0 (lenny)

2009-04-07 Thread Jason

Amos,

Thanks for answering.

Amos Jeffries wrote:

Jason wrote:

Everyone,

   I have compiled squid 3.1.6 from source on amd64 Debian 5.0 with


NP: please use the correct version numbering: 3.1.0.6.
there will probably be a 3.1.6 at some point in the future and 
hopefully this problem will not apply to those users, best not to add 
confusion.

My mistake.  This is for 3.1.0.6.  My apologies to the squid community.



zph options enabled.  I don't peer with any other caches, so all peering
stuff is disabled in my build.  I did not compile a kernel with the zph
patches, because, as I understand, that is only necessary if I want to
preserve zph marks between caches.  Plus, there is no zph patch for
the kernel version I am running.


Right.



With shorewall redirect rules, squid is operating as a transparent
intercepting proxy just fine.  I do not use tproxy - this is a NAT 
setup.


I can not get the zph functions to work.

Here are my config options:

squid.conf
...
qos_flows local-hit=0x30
...

shorewall tcstart:
#root htb
tc qdisc add dev eth1 root handle 1: htb default 1

#default htb
tc class add dev eth1 parent 1: classid 1:1 htb rate 64kbps \
ceil 64kbps

#squid htb
tc class add dev eth1 parent 1: classid 1:7 htb rate 1Mbit

tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match \
ip protocol 0x6 0xff match ip tos 0x30 0xff flowid 1:7

#I tried this for squid too
#tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match \
# ip protocol 0x6 0xff match u32 0x880430 0x at 20 flowid 1:7

The shorewall tcrules are all commented out right now, so it is not 
applying any filtering.

I have about one week to finish off this server for production...  Help?


Jason Wallace



So what are the packet traces showing you about events?

Also, it's much easier for most of us to read the real firewall rules. 
What does iptables -L && iptables -t nat -L show happening?


Amos


iptables -L && iptables -t nat -L yields the following.  I will try to 
packet trace this afternoon.


iptables -L && iptables -t nat -L
Chain INPUT (policy DROP)
target prot opt source   destination
eth0_in   all  --  anywhere anywhere
eth1_in   all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

Drop   all  --  anywhere anywhere
LOGall  --  anywhere anywhereLOG level 
warning prefix `Shorewall:INPUT:DROP:'

DROP   all  --  anywhere anywhere

Chain FORWARD (policy DROP)
target prot opt source   destination
eth0_fwd   all  --  anywhere anywhere
eth1_fwd   all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

Drop   all  --  anywhere anywhere
LOGall  --  anywhere anywhereLOG level 
warning prefix `Shorewall:FORWARD:DROP:'

DROP   all  --  anywhere anywhere

Chain OUTPUT (policy DROP)
target prot opt source   destination
eth0_out   all  --  anywhere anywhere
eth1_out   all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

ACCEPT all  --  anywhere anywhere

Chain Drop (7 references)
target prot opt source   destination
reject tcp  --  anywhere anywheretcp dpt:auth
dropBcast  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp 
fragmentation-needed
ACCEPT icmp --  anywhere anywhereicmp 
time-exceeded

dropInvalid  all  --  anywhere anywhere
DROP   udp  --  anywhere anywheremultiport 
dports loc-srv,microsoft-ds
DROP   udp  --  anywhere anywhereudp 
dpts:netbios-ns:netbios-ssn
DROP   udp  --  anywhere anywhereudp 
spt:netbios-ns dpts:1024:65535
DROP   tcp  --  anywhere anywheremultiport 
dports loc-srv,netbios-ssn,microsoft-ds

DROP   udp  --  anywhere anywhereudp dpt:1900
dropNotSyn  tcp  --  anywhere anywhere
DROP   udp  --  anywhere anywhereudp spt:domain

Chain Reject (0 references)
target prot opt source   destination
reject tcp  --  anywhere anywheretcp dpt:auth
dropBcast  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp 
fragmentation-needed
ACCEPT icmp --  anywhere anywhereicmp 
time-exceeded

dropInvalid  all  --  anywhere anywhere
reject udp  --  anywhere anywheremultiport 
dports loc-srv,microsoft-ds

[squid-users] Problem with acl helper

2009-04-07 Thread Rodrigo Gliksberg

Hello, I have a transparent proxy server running on

OpenBSD 4.4
with MySQL
captive portal


I created an ACL helper to look up a MySQL database. For example, Squid
sends 192.168.35.121 and the helper returns: OK user=$usuerofdb\n

This works fine, but my problem is when another user needs to log in
from 192.168.35.121: in access.log, the user name is still the old user.
What is happening? Does Squid cache the username for the IP?

When I restart Squid, it works fine with the new user, but the cached 
user appears when another one logs in.



help!!!
regards!



[squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Karol Maginnis

Hello,

I am new to squid but not new to reverse proxies.  I am trying to 
implement a proxy that would work like this:


www.example.com -> server 1
example.com -> server 1
dev.example.com -> server 2

I have read the wiki here:
wiki.squid-cache.org/SquidFaq/ReverseProxy

But I can't get it to work and I am about to pull my hair out.

My squid.conf looks like:

http_port 80 accel defaultsite=example.com
cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
cache_peer_domain server_2 dev.example.com
cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
cache_peer_domain server_1 example.com


This gives me a big fat: Access Denied

So I added this to my squid.conf:
---
acl our_sites dstdomain example.com dev.example.com
http_access allow our_sites
---

This clears the Access Denied however now all traffic goes to server_1 
(the .115 addy).


I have tried all sorts of cute ACLs, including but not limited to declaring 
ACLs for server_1 and server_2 respectively and allowing access to 
server_1 from server_1 sites while denying server_2 sites, and vice versa. 
However this just gives me an Access Denied for all sites.


I have also tried every example found on this issue in the Wiki.  I feel 
like the Wiki is leaving out a key config line that is causing this not to 
work, but I could be wrong.


I am running squid:
Squid Cache: Version 2.7.STABLE6
configure options:  '--disable-internal-dns'

I hate sending such a simple question to a mailing list but I have read 
the squid wiki so much that I almost have it memorized as far as the 
ReverseProxy pages are concerned.


Thanks,
-KJ

nullo...@sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org


Re: [squid-users] Strange problem accessing http://Bloomberg.com

2009-04-07 Thread Chris Robertson

Jason Taylor wrote:
So I think the client's proxy.pac script might be having trouble 
digesting the malformed URL below:


1239113823.055  0 xxx.yyy.zzz.aaa TCP_DENIED/400 1614 GET 
http://'wbetest2.bloomberg.com/jscommon/0/s_code.js' - NONE/- text/html


The single quote is making the proxy.pac freeze which in turn makes 
the browser window freeze.

So at least now I know this is a problem at Bloomberg's end.
However, in the mean time, I need to make this site work for my users 
since brokers are not known for their patience and understanding.


I know this isn't the ideal forum for this, but does anyone have an 
idea how I can let the proxy.pac properly parse a URL with a quoted 
string in it?


Just fix the URL itself using a url_rewrite_program.  Use 
url_rewrite_access to limit the program to the bloomberg.com domain.
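As a hedged illustration of Chris's suggestion (this is not an existing helper — the function name and the quote-stripping rule are assumptions for this sketch), the core of a url_rewrite_program is a transform from the broken URL to a clean one; a real helper would read one request per line on stdin and print the rewritten URL back:

```javascript
// Sketch of the rewrite step for a url_rewrite_program helper:
// drop the stray single quotes that the broken page embeds in URLs.
// stripQuotes is a hypothetical name; a complete helper would wrap
// this function in a stdin/stdout line loop.
function stripQuotes(url) {
  return url.split("'").join("");
}
```

In squid.conf this would be wired up with url_rewrite_program pointing at the helper script, plus a url_rewrite_access rule restricted to a dstdomain acl for bloomberg.com, as Chris describes.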




Cheers,

/Jason


Chris


Re: [squid-users] ...Memory-only Squid questions

2009-04-07 Thread Chris Robertson

David Tosoff wrote:

Thanks Chris.

I had already read both of the wiki post and the thread you directed me to 
before I posted this to the group.
  


Excellent.


I already had compiled heap into my squid before this issue happened. I am using heap 
GDSF. And, I wasn't able to find --enable-heap-replacement as a compile 
option in './configure --help' ... perhaps it's deprecated??


Seems to be.  Section 7.2 of the release notes says:

*--enable-heap-replacement*

   Please use --enable-removal-policies directive instead.



 Is it still a valid compile option for 3.0 STABLE13?

In any event, a gentleman named Gregori Parker responded and helped me with 
some suggestions and I've managed to stabilize the squid at ~20480 MB cache_mem
  


Nice.

The only thing I seem to be missing now is the SO_FAIL issue. 
Correct me if I'm wrong, but I assume 'SO' stands for 'Swap Out'... But how does this affect a system where there is nowhere for the squid to swap out to (cache_dir null /tmp)...?
  


Well, two things (not mentioned in the other replies) come to mind.  
First, you did specify that you've compiled in support for a null type 
store_dir.  Assuming you have, the cache_dir null type is still a bit 
weird (in 3.0) in that it requires a valid directory, even though it's 
not supposed to write anything to it.  Does /tmp exist, and does the 
Squid effective user have access to it?



Thanks for all your help so far.
  


For what help I can offer, you are most welcome.


Cheers,

David
  


Chris



Re: [squid-users] Problem with acl helper

2009-04-07 Thread Chris Robertson

Rodrigo Gliksberg wrote:

Hello, I have a transparent proxy server running on

OpenBSD 4.4
with MySQL
captive portal


I created an ACL helper to look up a MySQL database. For example, Squid
sends 192.168.35.121 and the helper returns: OK user=$usuerofdb\n

This works fine, but my problem is when another user needs to log in
from 192.168.35.121: in access.log, the user name is still the old user.

What is happening? Does Squid cache the username for the IP?


Yes.  For the length of the TTL on the helper (which defaults to 3600 
seconds).
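For reference, that TTL is tunable where the helper is declared in squid.conf. A sketch only — the helper name, path, and values below are examples, not from this thread:

```
# external_acl_type caches helper answers; ttl= controls how long a
# positive answer (OK) is kept, negative_ttl= how long an ERR is kept.
# Short values keep the IP-to-user mapping fresh at the cost of more
# database lookups. Name and path are placeholders.
external_acl_type ip_user ttl=60 negative_ttl=10 %SRC /usr/local/bin/ip_user_helper
acl identified external ip_user
http_access allow identified
```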



help!!!
regards!



Chris


Re: [squid-users] Strange problem accessing http://Bloomberg.com

2009-04-07 Thread Amos Jeffries
 Hello,

 I am having a very bizarre problem and I am wondering if anyone here can
 shed some light on it.
 Our internal users are accessing the Internet via a squid v2.6-STABLE9
 proxy using a proxy.pac file.
 Their browsers (corporate dictates Internet Explorer) are configured to
 Automatically detect proxy settings.

 When I open the page http://bloomberg.com using the above settings, the
 page mostly loads but the browser locks up and needs to be killed.

 If I configure the browser to use a statically configured proxy and
 port, then the page loads fine.

 The


elided long trace for brevity


 At this point, the page has loaded and is working fine.
 Note the TCP_DENIED 4 lines from the bottom of the second test.  This
 looks like a bad URL due to some bad copy-pasting of code by the
 webmasters of Bloomberg.
 As you can see, the additional lines in the session that works are
 nothing special (except for that DENIED entry).

 Any idea as to what could be going on here?
 My gut tells me that the fix lies in the IE configuration but I also
 think there should be some kind of work-around possible in squid.

I'm minded to suspect there is something in the .PAC file breaking under
that fubar URL.

Being JavaScript, it's susceptible to a $url with quote ' and " characters in
its strings if the browser is broken enough to pass them unencoded ('
seems not to encode easily).

I've also seen Squid helpers which barf on mismatched quoting in similar
ways (usually SQL-injection holes).

Amos




Re: [squid-users] Problem with acl helper

2009-04-07 Thread Amos Jeffries
 Hello, I have a transparent proxy server running on

 OpenBSD 4.4
 with MySQL
 captive portal


 I created an ACL helper to look up a MySQL database. For example, Squid
 sends 192.168.35.121 and the helper returns: OK user=$usuerofdb\n

 This works fine, but my problem is when another user needs to log in
 from 192.168.35.121: in access.log, the user name is still the old user.
 What is happening? Does Squid cache the username for the IP?

Yes. When determining user name solely from IP there can only be one
logged in at a time. You will have to determine some other criteria to add
to the IP to tell the difference.
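For context, the helper side of this exchange is a simple line protocol: Squid writes one lookup key (here the client IP) per line, and the helper answers OK (optionally with user=...) or ERR. A sketch of the per-line decision, with a plain lookup function standing in for the MySQL query (names are illustrative, not from the thread):

```javascript
// Sketch of an external ACL helper's per-line logic. `lookup` stands
// in for the real MySQL query; Squid sends the client IP and expects
// "OK user=<name>" when a user is logged in from it, "ERR" otherwise.
function handleLine(ip, lookup) {
  var user = lookup(ip);
  return user ? "OK user=" + user : "ERR";
}
```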

Amos




Re: [squid-users] Strange problem accessing http://Bloomberg.com

2009-04-07 Thread Amos Jeffries
 So I think the client's proxy.pac script might be having trouble
 digesting the malformed URL below:

 1239113823.055  0 xxx.yyy.zzz.aaa TCP_DENIED/400 1614 GET
 http://'wbetest2.bloomberg.com/jscommon/0/s_code.js' - NONE/- text/html

 The single quote is making the proxy.pac freeze which in turn makes the
 browser window freeze.
 So at least now I know this is a problem at Bloomberg's end.
 However, in the mean time, I need to make this site work for my users
 since brokers are not known for their patience and understanding.

 I know this isn't the ideal forum for this, but does anyone have an idea
 how I can let the proxy.pac properly parse a URL with a quoted string in
 it?

Hmmm:

 ...
  if (url.indexOf("'") != -1) return "DIRECT";

should do the trick.


Of course I would never suggest passing them to PROXY
http://127.0.0.1:80/; ;)
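Putting that guard into context, a minimal sketch of a complete PAC function follows. PAC files are plain JavaScript, so a substring test is enough; the proxy host and port below are placeholders, not values from this thread:

```javascript
// Sketch of a PAC file that sends URLs containing an unencoded single
// quote DIRECT, bypassing whatever part of the PAC logic chokes on them.
// "proxy.example.com:3128" is a placeholder proxy, not a real setting.
function FindProxyForURL(url, host) {
  if (url.indexOf("'") !== -1) {
    return "DIRECT"; // broken URL: skip the proxy logic entirely
  }
  return "PROXY proxy.example.com:3128";
}
```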


Amos




Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread Amos Jeffries

 I'm trying to run squid but I'm getting a few error msgs:

  * Starting Squid HTTP proxy squid
 2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
 'wais_relay_port'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
 'incoming_icp_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
 'incoming_http_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
 'incoming_dns_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
 'min_icp_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
 'min_dns_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
 'min_http_poll_cnt'

 Could you guys help me solve this?

You are using an ancient squid.conf in a newer squid.

If you don't actually need the options, remove them. If you do, please check
how they are done in your current version.

Also if you are doing an upgrade check the release notes for each version
of squid between the one you were using and the new one. We write a
section on upgrading changes every release that covers the dead or changed
options and what to do with them in the upgrade process.

One day we will have an automatic upgrader, but it's not yet in sight.
(volunteers to do http://wiki.squid-cache.org/Features/ConfigUpdater ??)

Amos




Re: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Amos Jeffries
 Hello,

 I am new to squid but not new to reverse proxies.  I am trying to
 implement a proxy that would work like this:

 www.example.com -> server 1
 example.com -> server 1
 dev.example.com -> server 2

 I have read the wiki here:
 wiki.squid-cache.org/SquidFaq/ReverseProxy

 But I can't get it to work and I am about to pull my hair out.

 My squid.conf looks like:
 
 http_port 80 accel defaultsite=example.com
 cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
 cache_peer_domain server_2 dev.example.com
 cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
 cache_peer_domain server_1 example.com
 

 This gives me a big fat: Access Denied

 So I added this to my squid.conf:
 ---
 acl our_sites dstdomain example.com dev.example.com
 http_access allow our_sites
 ---


Correct.

 This clears the Access Denied however now all traffic goes to server_1
 (the .115 addy).

This is because cache_peer_domain lists a set of domain suffixes, i.e. it
has an implicit wildcard built into the domain patterns *.example.com /
*.dev.example.com.


 I have tried all sorts of cute ACLs, including but not limited to declaring
 ACLs for server_1 and server_2 respectively and allowing access to
 server_1 from server_1 sites while denying server_2 sites, and vice versa.
 However this just gives me an Access Denied for all sites.

 I have also tried every example found on this issue in the Wiki.  I feel
 like the Wiki is leaving out a key config line that is causing this not to
 work, but I could be wrong.

You can't cleanly mix cache_peer_domain and cache_peer_access.
Perhaps you were doing that.

I think you want this:

  http_port 80 accel defaultsite=example.com

  cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2

  acl dev dstdomain dev.example.com
  cache_peer_access server_2 allow dev
  cache_peer_access server_2 deny all
  http_access allow dev

  cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1

  acl www dstdomain example.com www.example.com
  cache_peer_access server_1 allow www
  cache_peer_access server_1 deny all
  http_access allow www

  http_access deny all


If you are still having problems with the above, then I think the error is
elsewhere than the peering config.


 I am running squid:
 Squid Cache: Version 2.7.STABLE6
 configure options:  '--disable-internal-dns'

Good idea to re-enable that.

Amos




Re: [squid-users] Reverse Proxy + Multiple Webservers woes

2009-04-07 Thread Amos Jeffries
 Karol Maginnis wrote:
 Hello,

 I am new to squid but not new to reverse proxies.  I am trying to
 implement a proxy that would work like this:

 www.example.com -> server 1
 example.com -> server 1
 dev.example.com -> server 2

 I have read the wiki here:
 wiki.squid-cache.org/SquidFaq/ReverseProxy

 But I can't get it to work and I am about to pull my hair out.

 My squid.conf looks like:
 
 http_port 80 accel defaultsite=example.com
 cache_peer 192.168.1.114 parent 80 0 no-query originserver name=server_2
 cache_peer_domain server_2 dev.example.com
 cache_peer 192.168.1.115 parent 80 0 no-query originserver name=server_1
 cache_peer_domain server_1 example.com
 

 This gives me a big fat: Access Denied

 So I added this to my squid.conf:
 ---
 acl our_sites dstdomain example.com dev.example.com
 http_access allow our_sites
 ---

 This clears the Access Denied however now all traffic goes to
 server_1 (the .115 addy).

 I have tried all sorts of cute ACLs, including but not limited to
 declaring ACLs for server_1 and server_2 respectively and allowing
 access to server_1 from server_1 sites while denying server_2 sites, and
 vice versa. However this just gives me an Access Denied for all sites.

 I have also tried every example found on this issue in the Wiki.  I feel
 like the Wiki is leaving out a key config line that is causing this not
 to work, but I could be wrong.

 I am running squid:
 Squid Cache: Version 2.7.STABLE6
 configure options:  '--disable-internal-dns'

 I hate sending such a simple question to a mailing list but I have read
 the squid wiki so much that I almost have it memorized as far as the
 ReverseProxy pages are concerned.


 I'm too new with squid to help others but I have to say that I spent 2
 weeks on the very same issue. Squid 2.6 has its options which are
 different from the 2.7 series and the big difference comes with the 3.x
 series.

 If it helps I solved my issue with the code below (Squid 3.0.STABLE7)
 but I'm pretty sure this won't work in 2.7 ;)

That (below) should work in all squid 2.6 or later.


 -- cut --
 cache_peer 192.168.1.115 parent 80 0 no-query no-digest originserver
 name=iis
 acl sites_iis dstdomain example.net
 cache_peer_access iis allow sites_iis
 http_access allow sites_iis

 --- end cut --

 Maybe it's just me but something doesn't feel right in the above code
 but it works and for the moment I'm all too tired with this squid
 thingie...


Amos



Re: [squid-users] Re: Re: why RELEASE?

2009-04-07 Thread Amos Jeffries
 On Thu, 2009-04-02 at 11:35 +1200, Amos Jeffries wrote:

 IIRC, non-cacheable objects larger than maximum_object_size_in_memory get a
 disk object saved for the transition buffer, then released when completed
 whether they need it or not. One of the inefficiencies we are working
 towards killing.

 OK.  But that doesn't explain these ones does it?

 1239123733.599 RELEASE 00 2503 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239123131 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239123733.600 SWAPOUT 00 2548 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239123736 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239124338.647 RELEASE 00 2548 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239123736 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239124338.649 SWAPOUT 00 25D5 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239124341 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239124944.128 RELEASE 00 25D5 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239124341 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239124944.130 SWAPOUT 00 261E C11C22024C6BFA7AB028CE37E26CADBD  200
 1239124946 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239125550.777 RELEASE 00 261E C11C22024C6BFA7AB028CE37E26CADBD  200
 1239124946 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239125550.777 SWAPOUT 00 2660 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239125553 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239126156.245 RELEASE 00 2660 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239125553 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239126156.247 SWAPOUT 00 26D1 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239126158 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239126762.193 RELEASE 00 26D1 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239126158 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239126762.193 SWAPOUT 00 273B C11C22024C6BFA7AB028CE37E26CADBD  200
 1239126764 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239127368.419 RELEASE 00 273B C11C22024C6BFA7AB028CE37E26CADBD  200
 1239126764 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239127368.419 SWAPOUT 00 27BF C11C22024C6BFA7AB028CE37E26CADBD  200
 1239127370 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239127976.513 RELEASE 00 27BF C11C22024C6BFA7AB028CE37E26CADBD  200
 1239127370 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239127976.515 SWAPOUT 00 2827 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239127978 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239128584.002 RELEASE 00 2827 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239127978 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239128584.002 SWAPOUT 00 2A59 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239128586 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239129189.138 RELEASE 00 2A59 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239128586 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239129189.138 SWAPOUT 00 2A97 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239129191 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239129793.280 RELEASE 00 2A97 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239129191 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239129793.282 SWAPOUT 00 2AD4 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239129795 1234617780-1 image/gif 1366/1366 GET
 http://www.example.org/forum/images/styles/soness/style/footer_top.gif
 1239130397.185 RELEASE 00 2AD4 C11C22024C6BFA7AB028CE37E26CADBD  200
 1239129795 1234617780-1 image/gif 1366/1366 GET
 

Re: [squid-users] ...Memory-only Squid questions

2009-04-07 Thread Amos Jeffries
 David Tosoff wrote:
 Thanks Chris.

 I had already read both of the wiki post and the thread you directed me
 to before I posted this to the group.


 Excellent.

 I already had compiled heap into my squid before this issue happened. I
 am using heap GDSF. And, I wasn't able to find
 --enable-heap-replacement as a compile option in './configure --help'
 ... perhaps it's deprecated??

 Seems to be.  Section 7.2 of the release notes says:

 *--enable-heap-replacement*

 Please use --enable-removal-policies directive instead.


Correct. wiki now updated, thank you for finding this.


  Is it a still a valid compile option for 3.0 stable 13?

 In any event, a gentleman named Gregori Parker responded and helped me
 with some suggestions and I've managed to stabilize the squid at ~20480
 MB cache_mem


 Nice.

 The only thing I seem to be missing now is the SO_FAIL issue.
 Correct me if I'm wrong, but I assume 'SO' stands for 'Swap Out'... But
 how does this affect a system where there is nowhere for the squid to
 swap out to (cache_dir null /tmp)...?


 Well, two things (not mentioned in the other replies) come to mind.
 First, you did specify that you've compiled in support for a null type
 store_dir.  Assuming you have, the cache_dir null type is still a bit
 weird (in 3.0) in that it requires a valid directory, even though it's
 not supposed to write anything to it.  Does /tmp exist, and does the
 Squid effective user have access to it?

NP: /tmp has some weird behavior when used for a null store-dir. The files
squid saves there don't all interact nicely with /tmp's automatic cleanup
rules.

You might be better with a dedicated temp directory.
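A sketch of that suggestion, assuming a 3.0-style null cache_dir (the path is an example; the directory must exist and be writable by the Squid effective user):

```
# squid.conf: point the null store type at a dedicated directory
# instead of /tmp, so periodic /tmp cleaners cannot touch its files.
cache_dir null /var/spool/squid-null
```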

Amos




Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread Henrique M.


twinturbo-2 wrote:
 
 Also what version are you running? is this a hand crafted config or one
 borrowed from somwhere else?
 
 Post up the confg from lines 66 to 106
 
 Rob
 

I was running the default squid for Ubuntu Server 8.10, which is version
2.7 STABLE. I'm using the default squid.conf that was installed together
with squid 2.7 and I don't really know if I need these config lines or not,
so for now I can comment them out to see if I can get squid to work.

In the meantime, since squid 2.7 wasn't working I installed squid3 and tried
to run it, which also didn't work, but this time it only gave me a fail
message; it doesn't describe what is wrong.

I would like to keep the newer version of squid installed instead of moving
back to the old one; could you guys tell me where squid3 keeps its error
messages?

Thanks
-- 
View this message in context: 
http://www.nabble.com/Getting-error-msgs-when-trying-to-start-squid-tp22933693p22941895.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] transparent proxy for CONNECT method

2009-04-07 Thread nyoman karna

dear squid-users,
I long ago accepted the fact
that a transparent proxy will not work for the CONNECT method
because of security issues (it is considered a man-in-the-middle attack).

But perhaps there's a way to get around this problem?
Otherwise everyone will be stuck when using GMail, YahooMail, etc.,
since they all use HTTPS for signing in.

-- 
  Nyoman Bogi Aditya Karna 
  IM Telkom 
  http://www.imtelkom.ac.id 
--


  


Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Pandu E Poluan

Aha! Thanks a lot, Amos  :-)

I have been suspicious all along that the solution uses miss_access and 
never_direct ... but I never saw an example anywhere.


Again, much thanks!

** rushes to his proxies to configure them **


Rgds.


[p]


Amos Jeffries wrote:

Pandu E Poluan wrote:
The URL is allowed to be accessed by everyone, ProxyA-users, and 
ProxyB/C-users alike.


I just want the URL to be retrieved by ProxyA, because accessing that 
certain URL through ProxyB/C is too damn slow (pardon the language).



Rgds.



Okay. Thought it might be something like that, just wanted to be sure 
before fuzzing the issue.


You will need to create an ACL just for this URL (and any others you want 
to treat the same).

 acl objectX ...


proxyA needs to allow peers past the miss_access block.

proxyA:
 miss_access allow objectX
 miss_access deny siblings
 miss_access allow all


siblings must never go direct to the object (always use their parent 
peer)


proxyB/proxyC:
  never_direct allow objectX

Amos



Amos Jeffries wrote:

Pandu E Poluan wrote:

Anyone care to comment on my email?

And another question: Is it possible to use miss_access with a 
dstdomain acl?



Rgds.


Pandu E Poluan wrote:

Hi,

I want to know is there a way to force a URL to be retrieved by 
only a certain proxy, while ensuring that meshing works.


Here's the scenario:

I have a ProxyA == connects to Internet via a fast connection 
InetFast
This proxy is used by a group of users that really need fast 
connection.


I have other proxies ProxyB & ProxyC == connects to Internet via 
a slower connection InetSlow

These proxies are used by the rest of the staff.

I configured them all as siblings, with miss_access blocking MISS 
requests between them, e.g.


# Configuration snippet of ProxyA
cache_peer ProxyB sibling 3128 4827 htcp
cache_peer ProxyC sibling 3128 4827 htcp
acl siblings src ProxyB
acl siblings src ProxyC
miss_access deny siblings
miss_access allow all

ProxyB & ProxyC both have a similar config.

( The aim is to 'assist' other staffers using InetSlow so that 
whatever has been retrieved by the InetFast users will be made 
available to the rest of the staff )


Now, let's say there's this URL http://www.need-fast-inet.com/ 
that I want to be retrieved exclusively by ProxyA.


How would I configure the peering relationships?


If you can state the problem and the desired setup clearly in 
single-sentence steps you have usually described the individual 
config settings needed.


Is the URL allowed to be fetched by the slow users through proxyB 
into proxy A and then internet?





Amos


--
*Pandu E Poluan*
*Panin Sekuritas*
IT Manager / Operations & Audit
Phone : +62-21-515-3055 ext 135
Fax :   +62-21-515-3061
Mobile :+62-856-8400-426
e-mail : 	pandu_pol...@paninsekuritas.co.id 
mailto:pandu_pol...@paninsekuritas.co.id






Y!M :   hands0me_irc
MSN :   si-gant...@live.com
GTalk : pandu.ca...@gmail.com



Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Pandu E Poluan

Hmmm... strange...

Now, instead of accessing the site objectX, ProxyB and ProxyC users 
can't access the site at all...


But no SQUID error page shows up... the browser simply times out... 
Accessing URLs other than objectX still works...


objectX is accessible via ProxyA, though.

The changes I made currently:

On ProxyA:

acl objectX dstdomain ...
miss_access allow objectX
always_direct allow objectX

On ProxyB/C:

acl objectX dstdomain ...
never_direct allow objectX

I'll experiment with the settings... maybe also miss_access allow 
objectX on ProxyB and ProxyC?



Rgds.



Pandu E Poluan wrote:

Aha! Thanks a lot, Amos  :-)

I have been suspicious all along that the solution uses miss_access 
and never_direct ... but I never saw an example anywhere.


Again, much thanks!

** rushes to his proxies to configure them **


Rgds.


[p]


Amos Jeffries wrote:

Pandu E Poluan wrote:
The URL is allowed to be accessed by everyone, ProxyA-users, and 
ProxyB/C-users alike.


I just want the URL to be retrieved by ProxyA, because accessing 
that certain URL through ProxyB/C is too damn slow (pardon the 
language).



Rgds.



Okay. Thought it might be something like that, just wanted to be sure 
before fuzzing the issue.


You will need to create an ACL just for this URL (and any others you want 
to treat the same).

 acl objectX ...


proxyA needs to allow peers past the miss_access block.

proxyA:
 miss_access allow objectX
 miss_access deny siblings
 miss_access allow all


siblings must never go direct to the object (always use their parent 
peer)


proxyB/proxyC:
  never_direct allow objectX

Amos



Amos Jeffries wrote:

Pandu E Poluan wrote:

Anyone care to comment on my email?

And another question: Is it possible to use miss_access with a 
dstdomain acl?



Rgds.


Pandu E Poluan wrote:

Hi,

I want to know is there a way to force a URL to be retrieved by 
only a certain proxy, while ensuring that meshing works.


Here's the scenario:

I have a ProxyA ==> connects to Internet via a fast connection, 
InetFast.
This proxy is used by a group of users that really need a fast 
connection.


I have other proxies ProxyB & ProxyC ==> connect to Internet via 
a slower connection, InetSlow.

These proxies are used by the rest of the staff.

I configured them all as siblings, with miss_access blocking MISS 
requests between them, e.g.


# Configuration snippet of ProxyA
cache_peer ProxyB sibling 3128 4827 htcp
cache_peer ProxyC sibling 3128 4827 htcp
acl siblings src ProxyB
acl siblings src ProxyC
miss_access deny siblings
miss_access allow all

ProxyB & ProxyC both have similar configs.

( The aim is to 'assist' other staffers using InetSlow so that 
whatever has been retrieved by the InetFast users will be made 
available to the rest of the staff )


Now, let's say there's this URL http://www.need-fast-inet.com/ 
that I want to be retrieved exclusively by ProxyA.


How would I configure the peering relationships?


If you can state the problem and the desired setup clearly in 
single-sentence steps you have usually described the individual 
config settings needed.


Is the URL allowed to be fetched by the slow users through proxyB 
into proxy A and then internet?





Amos




--
Pandu E Poluan
Panin Sekuritas
IT Manager / Operations & Audit
Phone : +62-21-515-3055 ext 135
Fax :   +62-21-515-3061
Mobile :+62-856-8400-426
e-mail : pandu_pol...@paninsekuritas.co.id






Y!M :   hands0me_irc
MSN :   si-gant...@live.com
GTalk : pandu.ca...@gmail.com



Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Amos Jeffries

Pandu E Poluan wrote:

Hmmm... strange...

Now, instead of accessing the site objectX, ProxyB and ProxyC users 
can't access the site at all...


But no SQUID error page shows up... the browser simply times out... 
Accessing URLs other than objectX still works...


objectX is accessible via ProxyA, though.

The changes I made currently:

On ProxyA:

acl objectX dstdomain ...


dstdomain covers a whole site or more. You perhaps want to use 
dstdomain + urlpath_regex then:


  acl siteX dstdomain example.com
  acl objectX urlpath_regex ^/fubar

  miss_access allow siteX objectX

etc..



miss_access allow objectX
always_direct allow objectX

On ProxyB/C:

acl objectX dstdomain ...
never_direct allow objectX

I'll experiment with the settings... maybe also miss_access allow 
objectX on ProxyB and ProxyC?



Rgds.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Amos Jeffries

Pandu E Poluan wrote:

Hmmm... strange...

Now, instead of accessing the site objectX, ProxyB and ProxyC users 
can't access the site at all...


But no SQUID error page shows up... the browser simply times out... 
Accessing URLs other than objectX still works...


objectX is accessible via ProxyA, though.

The changes I made currently:

On ProxyA:

acl objectX dstdomain ...
miss_access allow objectX
always_direct allow objectX

On ProxyB/C:

acl objectX dstdomain ...
never_direct allow objectX

I'll experiment with the settings... maybe also miss_access allow 
objectX on ProxyB and ProxyC?


Try with proxyA as a parent peer in the config for proxyB/C.

Amos
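
A minimal sketch of that suggestion, reusing the port numbers and the 
objectX ACL from the earlier snippets (hostnames are placeholders, 
adjust to your setup):

# on ProxyB and ProxyC: make ProxyA a parent instead of a sibling
cache_peer ProxyA parent 3128 4827 htcp
# keep forcing objectX through the peer rather than going direct
never_direct allow objectX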





--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] transparent proxy for CONNECT method

2009-04-07 Thread Amos Jeffries

nyoman karna wrote:

dear squid-users,
it has been a long time since I accepted the fact
that transparent proxy will not work for the CONNECT method
because of security issues (it is considered a man-in-the-middle attack).

but perhaps there's a way to get around this problem?
because everyone will be stuck when using GMail, YahooMail, etc.,
since they all use HTTPS for signing in.



WPAD and PAC configuration were created as a way around the transparent 
proxy limitations. When understood, they can work very effectively in a 
LAN environment.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Pandu E Poluan

Okay, some experimentations I made:

I added the following lines on ProxyB:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
never_direct allow fastsites

Changes on ProxyA:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
# also from Amos' tip
miss_access allow fastsites
miss_access deny siblings
miss_access allow all
# and this one from Amos' tip
always_direct allow fastsites

My browser can't access .need-fast-inet.com

I further changed the following lines to ProxyB:

# added weight=2 allow-miss
cache_peer   ProxyA   sibling   3128   4827   htcp weight=2 allow-miss
# added the following line
neighbor_type_domain ProxyA parent .need-fast-inet.com 
.another-need-fast-inet.com


Now, I can access .need-fast-inet.com through ProxyB.

But, isn't that allow-miss dangerous?

Any comments?


Rgds.


[p]





Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread Amos Jeffries

Henrique M. wrote:


twinturbo-2 wrote:

Also, what version are you running? Is this a hand-crafted config or one
borrowed from somewhere else?

Post up the config from lines 66 to 106.

Rob



I was running the default squid for Ubuntu Server 8.10, which is version
2.7 STABLE. I'm using the default squid.conf that was installed together
with squid 2.7 and I don't really know if I need these directives or not,
so for now I can comment them out to see if I can get squid to work.


Okay. Since they were obsoleted by 2.6 you don't. The bigger issue is 
how you got a config like that out of a 2.7 bundle!




In the meantime, since squid 2.7 wasn't working I installed squid3 and tried
to run it, which also didn't work; but this time it only gave me a fail
message that doesn't describe what is wrong.

I would like to keep the newer version of squid installed instead of moving
back to the old one. Could you guys tell me where squid3 keeps its error
messages?


'Error messages' in web terminology means something completely different, 
and those can be 'kept'.


I assume you mean: where does it send the startup error output? That is 
usually sent to syslog by Debian/Ubuntu during the init process, and then 
once squid is running, to /var/log/squid3/cache.log


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


[squid-users] reverse proxy setup, to http driven application that serves file segments

2009-04-07 Thread louis gonzales
Dist,
Version: Squid 2.7.STABLE6
OS(unfortunately :) ): Windows Server 2003

Issue:
I'm using the reverse proxy as the origin server to interface with a
server behind it that communicates file segments over the http
protocol.  I don't want Squid to cache anything, so I added 'cache
deny all' at the very end of the squid.conf file.  I know that Squid
is working as the reverse proxy, because my application client is able
to start up, and to do that it goes through the reverse proxy
setup... however, it appears that Squid (in my current configuration)
is having difficulties actually retrieving the file segments from
the same server, so I've enclosed snippets from the cache.log file to
see if anyone could point me to a configuration issue in squid.conf.

Noteworthy from cache.log are: 1) the request POST, 2) the reply POST,
and 3) the clientReadRequest on the very last line.

Any insight would be greatly appreciated.  Thanks in advance!


--- Reverse Proxy Setup in Squid.conf:
---
cache_peer 192.168.0.1 parent 80 0 no-query originserver name=myAccel

#
acl FMS dstdomain unified1.abstract.net
http_access allow FMS
cache_peer_access myAccel allow FMS

http_port proxy1.abstract.net:80 accel defaultsite=unified1.abstract.net
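
(For reference, the 'cache deny all' mentioned above is not part of this 
snippet; as a sketch, it would simply be appended at the end of 
squid.conf:)

# stop Squid from caching any replies in this accelerator setup
cache deny all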




---  From access.log:

1239166876.508 50 proxy1.abstract.net TCP_MISS/200 651 POST
http://unified1.abstract.net/tc/fms/513791787/mygroup/FSC_unified1_Administrator/
- FIRST_UP_PARENT/myAccel multipart/form-data,

---


---  From cache.log:

2009/04/08 01:01:16| Parser: retval 1: from 0-96: method 0-3; url
5-85; version 87-95 (1/1)
2009/04/08 01:01:16| The request POST
http://unified1.abstract.net:80/tc/fms/513791787/mygroup/FSC_unified1_Administrator/
is ALLOWED, because it matched 'FMS'
2009/04/08 01:01:16| peerSourceHashSelectParent: Calculating hash for
192.168.0.20
2009/04/08 01:01:16| clientReadBody: start fd=12 body_size=159922
in.offset=4095 cb=00450A7C req=00F53B28
2009/04/08 01:01:16| clientProcessBody: start fd=12 body_size=159922
in.offset=4095 cb=00450A7C req=00F53B28
2009/04/08 01:01:16| clientProcessBody: end fd=12 size=4095
body_size=155827 in.offset=0 cb=00450A7C req=00F53B28
...
   output truncated for readability 
...
2009/04/08 01:01:16| clientReadBody: start fd=12 body_size=217
in.offset=217 cb=00450A7C req=00F53B28
2009/04/08 01:01:16| clientProcessBody: start fd=12 body_size=217
in.offset=217 cb=00450A7C req=00F53B28
2009/04/08 01:01:16| clientProcessBody: end fd=12 size=217 body_size=0
in.offset=0 cb=00450A7C req=00F53B28
2009/04/08 01:01:16| The reply for POST
http://unified1.abstract.net/tc/fms/513791787/mygroup/FSC_unified1_Administrator/
is ALLOWED, because it matched 'all'
2009/04/08 01:01:16| clientReadRequest: FD 12: no data to process
((10035) WSAEWOULDBLOCK, Resource temporarily unavailable.)
---

-- 
Louis Gonzales
BSCS EMU 2003
HP Certified Professional
louis.gonza...@linuxlouis.net


Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-07 Thread Amos Jeffries

Pandu E Poluan wrote:

Okay, some experimentations I made:

I added the following lines on ProxyB:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
never_direct allow fastsites

Changes on ProxyA:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
# also from Amos' tip
miss_access allow fastsites
miss_access deny siblings
miss_access allow all
# and this one from Amos' tip
always_direct allow fastsites

My browser can't access .need-fast-inet.com

I further changed the following lines to ProxyB:

# added weight=2 allow-miss
cache_peer   ProxyA   sibling   3128   4827   htcp weight=2 allow-miss
# added the following line
neighbor_type_domain ProxyA parent .need-fast-inet.com 
.another-need-fast-inet.com


Now, I can access .need-fast-inet.com through ProxyB.

But, isn't that allow-miss dangerous?

Any comments?



It's dangerous to use it widely, and particularly on both ends of the 
peering link (i.e. DON'T place it in the proxyA config for proxyB/C).


It's safe to do on a one-way link. The miss_access controls you have in 
place at each of your Squids explicitly perform the same actions, so 
AFAIK you should not hit any of the loop cases that can occur.


Test without the 'allow-miss' option though.  I believe the 
neighbor_type_domain setting makes it unnecessary for the objectX 
requests, via the change to a parent link.


Amos
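
Put another way, the ProxyB/ProxyC side could be tested with something 
like this (a sketch only; domains and ports reused from the snippets 
above):

# sibling link, without allow-miss
cache_peer ProxyA sibling 3128 4827 htcp weight=2
# treat ProxyA as a parent for just the fast sites
neighbor_type_domain ProxyA parent .need-fast-inet.com .another-need-fast-inet.com
never_direct allow fastsites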






--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6