Oops, I hit send by mistake before I finished. Here is the rest...

On Tue, 29 Sep 2009 13:22:58 +1300, Todd Nine <t...@spidertracks.co.nz>
wrote:
> Hi Amos,
>   Here is my squid.conf.  I've just used the defaults and added a single 
> rule.  We're pushing a lot of throughput (several gigs a day).  I've 
> disabled writing to disk as we actually run from a USB appliance, and 
> set the cache size to 1 GB (1024M) of RAM.  My main use of squid is not 
> caching, but rather http redirection to save us money on our usage fees 
> from our ISPs.
> 
> Thanks again for the help!
> 
> File:
<snip, see earlier email>
> 
> # Setup some default acls
> acl all src 0.0.0.0/0.0.0.0
> acl localhost src 127.0.0.1/255.255.255.255
> acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 1111 3128 1025-65535
> acl sslports port 443 563 1111
> acl manager proto cache_object
> acl purge method PURGE
> acl connect method CONNECT
> acl dynamic urlpath_regex cgi-bin \?
> cache deny dynamic

You'll get a bit of a speed boost by dropping the above two lines (the
'dynamic' ACL and its cache deny) and adding:

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

directly above the . (dot) refresh_pattern.
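For reference, with the stock defaults the refresh_pattern section would then read something like this (a sketch assuming the standard Squid default patterns; adjust if yours differ):

```
refresh_pattern ^ftp:             1440   20%   10080
refresh_pattern ^gopher:          1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?)    0    0%       0
refresh_pattern .                    0   20%    4320
```

The cgi-bin/query line must come before the catch-all '.' line, since Squid uses the first pattern that matches.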

> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !safeports
> http_access deny CONNECT !sslports
> 
> # Always allow localhost connections
> http_access allow localhost
> 
> request_body_max_size 0 KB
> reply_body_max_size 0 allow all
> delay_pools 1
> delay_class 1 2
> delay_parameters 1 -1/-1 -1/-1
> delay_initial_bucket_level 100
> delay_access 1 allow all

Huh? This is adding a lot of useless work to Squid.

-1/-1 is an 'unlimited' pool. The above configuration is identical in
effect to having no delay pools at all.
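For comparison, a class-2 pool that actually throttles would look something like this (the numbers are purely illustrative, in bytes per second):

```
# Illustrative only: cap aggregate traffic at 256 KB/s
# and each individual client IP at 64 KB/s.
delay_pools 1
delay_class 1 2
delay_parameters 1 262144/262144 65536/65536
delay_access 1 allow all
```

In a class-2 pool the first pair is the aggregate bucket and the second is the per-client bucket. If you don't need throttling, drop the delay_* lines entirely.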

> 
> # Allow local network(s) on interface(s)
> http_access allow localnet
> # Custom options
> #Set up our ACL for high throughput sites
> acl high_throughput dstdomain .amazonaws.com .rapidshare.com
> #Bind high throughput to the wireless interface
> tcp_outgoing_address 116.90.140.xx high_throughput
> 
> # Default block all to be sure
> http_access deny all
> 

the end.

Amos

> 
> 
> Amos Jeffries wrote:
>> On Tue, 29 Sep 2009 09:32:49 +1300, Todd Nine <t...@spidertracks.co.nz>
>> wrote:
>>   
>>> Thanks for the help!  I read over the rules and it was quite easy to set
>>> up what I needed once I had the right directive.  I simply set up the 
>>> following.
>>>
>>> #Set up our ACL for high throughput sites
>>> acl high_throughput dstdomain .amazonaws.com
>>>
>>> #Bind high throughput to the wireless interface
>>> tcp_outgoing_address 116.90.140.xx high_throughput
>>>
>>> However we're having a side effect issue.  Our router box is a bit old 
>>> (an old P4), and we can't keep up with the squid demands due to the 
>>> number of users with 2 GB of RAM.  Is there a directive that I can tell
>>> squid not to proxy connections unless they meet the "high_throughput"
>>> acl?  I looked and couldn't find any bypass directives that met what I 
>>> needed.
>>>
>>> Thanks,
>>> Todd
>>>     
>>
>> Once connections have already entered Squid it's too late to stop them
>> going through Squid.
>>
>> I have run Squid on P4 routers with 256MB of RAM for hundreds of domains
>> and dozens of clients without having the box break much of a sweat. What
>> is your load like (both box CPU load, and visitor rates, bandwidth)?
>> Also check that your other configuration and access controls are using
>> efficient methods. If you don't know what those are, I'm happy to give
>> your configs an audit and point out anything that needs adjusting.
>>
>> Amos
>>
>>   
>>> Amos Jeffries wrote:
>>>     
>>>> On Mon, 28 Sep 2009 16:21:16 +1300, Todd Nine <t...@spidertracks.co.nz>
>>>> wrote:
>>>>   
>>>>       
>>>>> Hi all,
>>>>>   I'm using squid on a pfSense router we've built.  We have 2 
>>>>> connections, one we pay for usage (DSL) and one we do not (Wireless).
>>>>>
>>>>> We use Amazon S3 extensively at work.  We've been attempting to route
>>>>> all traffic over the wireless via an IP range, but as S3 can change IPs,
>>>>> this doesn't work and we end up with a large bill for our DSL.  Is it
>>>>> possible to have squid route connections via a specific interface if a
>>>>> hostname such as "amazonaws.com" is in the HTTP request header?
>>>>>
>>>>> Thanks,
>>>>> Todd
>>>>>     
>>>>>         
>>>> Yes you can.
>>>>
>>>> Find an IP assigned to the interface you want traffic to go out. Use the
>>>> tcp_outgoing_address directive and ACLs that match the requests to make
>>>> sure all the requests to that domain are assigned that outgoing address.
>>>> Then make sure the OS sends traffic from that IP out the right interface.
>>>>
>>>> Amos
>>>>
>>>>
